
Found 2 Documents

Digital Balance in the AI Era: A Life-Course Perspective on AI Interaction, Digital Well-Being, and Academic Performance among Engineering Students
Fauziyah Alfathyah; Nur Aisyah Fadliyah Faizal; Andi Dio Nurul Awalia; Andi Baso Kaswar; M. Miftach Fakhri
Artificial Intelligence in Lifelong and Life-Course Education Vol 1 No 1 (2026): Artificial Intelligence in Lifelong and Life-Course Education
Publisher : PT. Academic Bright Collaboration

DOI: 10.66053/aillce.v1i1.1

Abstract

Purpose – The increasing integration of artificial intelligence (AI) in higher education offers substantial benefits for learning efficiency and personalization, yet it also raises concerns regarding digital ethics, learner autonomy, and digital well-being. From a life-course education perspective, early adulthood represents a critical transitional stage in which patterns of AI interaction may shape long-term learning habits and readiness for lifelong learning. However, empirical evidence examining how multidimensional AI interactions influence academic outcomes through psychological mechanisms remains limited, particularly in developing-country contexts. This study investigates the effects of cognitive, affective, and social-ethical interactions with AI on academic performance among Indonesian engineering students, with digital well-being positioned as a mediating mechanism.

Design/methods/approach – A quantitative cross-sectional survey was conducted with 103 engineering students from multiple universities, and the data were analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM).

Findings – The findings indicate that cognitive interaction with AI significantly enhances academic performance, while affective interaction primarily contributes to digital well-being. Notably, higher levels of digital well-being are associated with reduced academic performance, suggesting a paradox in which the increased comfort and convenience afforded by AI may weaken sustained cognitive engagement. Digital well-being significantly mediates the relationship between affective interaction and academic performance, revealing potential risks of emotional overreliance on AI.

Research implications/limitations – These results highlight the importance of balanced and self-regulated AI use in higher education and underscore the need to design AI-supported learning environments that foster cognitive engagement while sustaining digital well-being. From a life-course perspective, the findings suggest that AI interaction patterns formed during early adulthood may have implications for lifelong learning autonomy and educational sustainability.

Originality/value – This study provides empirical evidence on multidimensional AI interaction in higher education from a life-course perspective and emphasizes the importance of ethical and responsible AI integration to safeguard academic performance and student well-being.
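The mediated path this abstract describes (affective interaction → digital well-being → academic performance) can be illustrated in code. The study itself uses PLS-SEM; the sketch below shows only the underlying logic of a mediation test, estimating the indirect effect a·b with a percentile bootstrap on synthetic data. All coefficients, the noise model, and the variable roles are assumptions for illustration, not the paper's estimates.

```python
# Hedged sketch: bootstrapped indirect effect for a simple mediation model
# X (affective interaction) -> M (digital well-being) -> Y (academic
# performance). Synthetic data only; coefficients are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n = 103  # sample size reported in the abstract

X = rng.normal(size=n)                       # affective interaction with AI
M = 0.5 * X + rng.normal(size=n)             # digital well-being
Y = -0.4 * M + 0.1 * X + rng.normal(size=n)  # academic performance

def path_coefs(X, M, Y):
    """OLS estimates of a (X -> M) and b (M -> Y, controlling for X)."""
    a = np.polyfit(X, M, 1)[0]
    Z = np.column_stack([np.ones_like(X), M, X])
    beta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return a, beta[1]

a, b = path_coefs(X, M, Y)

# Percentile bootstrap of the indirect effect a*b
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    ab = np.prod(path_coefs(X[idx], M[idx], Y[idx]))
    boot.append(ab)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {a*b:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A confidence interval excluding zero would indicate mediation; PLS-SEM additionally handles latent constructs measured by multiple indicators, which this single-indicator sketch omits.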
Comparison of Dataset Proportions in SVM and Random Forest Algorithms in Detecting Student Dependence on AI in Learning
Sardar Faroq Ahmd Khan; Pramudya Asoka Syukur; Andi Baso Kaswar; Marwan Ramdhany Edy
Artificial Intelligence in Educational Decision Sciences Vol 1 No 1 (2026): Artificial Intelligence in Educational Decision Sciences
Publisher : PT. Academic Bright Collaboration

DOI: 10.66053/aieds.v1i1.6

Abstract

Purpose – The rapid integration of artificial intelligence (AI) in education has raised concerns about excessive student dependence, potentially undermining critical thinking and learning autonomy. This study aims to identify the most effective machine learning algorithm for detecting AI dependency in learning activities and to examine the impact of the training–testing data proportion on predictive performance.

Methods – This study employs the CRISP-DM framework and applies two supervised classification algorithms, Random Forest and Support Vector Machine (SVM), to a synthetic dataset of 10,000 AI-assisted learning sessions. The target variable, perceived AI assistance level, was discretised into three categories (low, medium, and high). Model performance was evaluated under four dataset split scenarios (60:40, 70:30, 80:20, and 90:10) using accuracy, AUC, precision, recall, and F1-score.

Findings – The results show that Random Forest consistently outperforms SVM across all dataset proportions and evaluation metrics. The highest performance was achieved by Random Forest with a 60:40 split, yielding an accuracy of 67.6% and an AUC of 80.8%. Although SVM demonstrated stable performance, it required larger training datasets and remained inferior to Random Forest.

Research limitations – The use of synthetic data and limited behavioural features restricts the generalisability of the findings. The moderate accuracy indicates that AI dependency is a complex construct not fully captured by the current model.

Originality – This study provides empirical evidence on the combined influence of algorithm selection and dataset proportion in detecting AI dependency, offering practical guidance for developing early-warning systems to support responsible AI use in education.
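The evaluation design described in this abstract (two classifiers compared across four train:test proportions) can be sketched as follows. This is a hedged illustration only: `make_classification` stands in for the study's synthetic dataset of learning sessions, default hyperparameters are assumed, and only accuracy is computed (the paper also reports AUC, precision, recall, and F1), so the output will not reproduce the reported 67.6% accuracy or 80.8% AUC.

```python
# Hedged sketch: Random Forest vs SVM under four train:test split
# scenarios (60:40, 70:30, 80:20, 90:10), as in the study's design.
# Placeholder three-class data; results are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)

results = {}
for test_size in (0.40, 0.30, 0.20, 0.10):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_size, stratify=y, random_state=0)
    for name, model in (("RF", RandomForestClassifier(random_state=0)),
                        ("SVM", SVC(random_state=0))):
        model.fit(X_tr, y_tr)
        acc = accuracy_score(y_te, model.predict(X_te))
        results[(name, test_size)] = acc
        split = f"{round((1 - test_size) * 100)}:{round(test_size * 100)}"
        print(f"{name} @ {split} accuracy = {acc:.3f}")
```

Sweeping the split proportions this way makes the bias–variance trade-off visible: a larger test share (60:40) gives a more stable performance estimate, while a larger training share (90:10) can favour data-hungry models.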