Security and Privacy Threats in AI-Driven Education Systems: A Narrative Literature Review
Tandirerung, Veronika Asri
Journal of Embedded Systems, Security and Intelligent Systems Vol 6, No 4 (2025): December 2025
Publisher: Program Studi Teknik Komputer

DOI: 10.59562/jessi.v6i4.10902

Abstract

The increasing use of Artificial Intelligence (AI) in education systems, including learning analytics, intelligent tutoring systems, automated assessment, and biometric analysis, poses significant security and privacy risks. This study examines the cybersecurity and privacy risks associated with the integration of generative AI technologies in adolescent education. Through a narrative literature review, the analysis identifies dominant threat categories, institutional vulnerabilities, and mitigation strategies relevant to K–12 learning environments. The selection framework required that each study (1) addressed generative AI or machine learning used within educational systems, (2) discussed cybersecurity, privacy, or data-protection implications, and (3) focused on adolescents or school-age learners. The findings reveal several major risk clusters, including exposure of minors’ personal and biometric data, model manipulation and prompt-injection attacks, algorithmic and behavioral profiling risks, the dissemination of misinformation, and persistent governance gaps within educational institutions. These risks highlight the urgent need for robust privacy-by-design implementation, stronger cybersecurity infrastructures, clear institutional AI governance policies, and capacity building among educators. While the narrative nature of this review limits quantitative comparison across studies and may restrict generalizability due to variability in methods and contexts, the synthesis provides important insights to guide safer AI adoption. Future work should explore empirical evaluations of generative-AI security controls, the application of differential privacy in school settings, and the development of standardized AI security frameworks for K–12 institutions. Overall, this review contributes a consolidated understanding of the security challenges emerging from the use of generative AI in adolescent education and offers evidence-based directions for technical, policy, and institutional safeguards.