The rapid adoption of generative artificial intelligence in higher education has introduced significant pedagogical opportunities while simultaneously raising critical concerns about academic ethics and data security. This study analyzes the ethical risks and data vulnerabilities associated with the use of generative AI by university students and lecturers, and assesses institutional readiness to establish responsible AI governance. Using an analytical literature review with a descriptive qualitative approach, the research synthesizes empirical and conceptual findings from reputable international publications between 2015 and 2024. The findings indicate that generative AI threatens academic integrity through machine-generated plagiarism, reduced critical thinking, and algorithmic bias in learning processes. From a data-security perspective, the major risks include opaque data-storage policies, potential memorization of sensitive information by models, and weak cybersecurity infrastructure in universities. Institutional readiness remains limited, marked by the absence of AI ethics guidelines, low AI literacy among academic communities, and inadequate monitoring mechanisms. The study recommends developing ethical guidelines for generative AI, strengthening digital literacy, improving data-protection standards, and establishing AI governance committees within universities.