Artificial Intelligence in Lifelong and Life-Course Education
Artificial Intelligence in Lifelong and Life-Course Education (AILLCE) focuses on advancing scholarly understanding of how artificial intelligence (AI) is designed, implemented, and evaluated within educational contexts across the entire lifespan. The journal emphasizes lifelong and life-course perspectives, addressing learning as a continuous process that spans early childhood, formal schooling, higher education, vocational education and training, adult learning, professional development, and later-life education. Its primary focus is the pedagogical, psychological, technological, and ethical dimensions of AI-supported education in formal, non-formal, and informal learning environments.

The journal publishes original research articles, theoretical analyses, methodological studies, and systematic reviews that address, but are not limited to, the following areas:

Artificial Intelligence Across the Life-Course: AI applications in early childhood education, school education, higher education, vocational and professional education, adult education, and education for ageing populations; life-course transitions and longitudinal perspectives in AI-supported learning.

AI-Enhanced Lifelong Learning Systems: Adaptive and personalized learning systems, intelligent tutoring systems, learning recommender systems, AI-driven assessment, learning analytics, educational data mining, and lifelong learning pathways supported by AI technologies.

Pedagogical, Psychological, and Developmental Perspectives: The impact of AI on learning outcomes, motivation, self-regulated learning, academic emotions, cognitive processes, well-being, and learner agency across different developmental stages and educational contexts.

AI Literacy, Ethics, and Governance in Education: AI literacy and digital competence across the lifespan; ethical, transparent, and trustworthy AI in education; issues of algorithmic bias, fairness, explainability, data privacy, and governance frameworks for AI-enabled educational systems.

Emerging Technologies and Innovative Learning Environments: Integration of AI with immersive and interactive technologies, including virtual and augmented reality, game-based learning, workplace learning systems, open and community-based education, and informal learning environments.

Methodological and Design-Oriented Research: Design-based research, design and development research, mixed-methods approaches, longitudinal studies, learning analytics methodologies, and the validation of AI-supported educational models, frameworks, and instruments.
Articles
12 Documents
Digital Balance in the AI Era: A Life-Course Perspective on AI Interaction, Digital Well-Being, and Academic Performance among Engineering Students
Fauziyah Alfathyah;
Nur Aisyah Fadliyah Faizal;
Andi Dio Nurul Awalia;
Andi Baso Kaswar;
M. Miftach Fakhri
Artificial Intelligence in Lifelong and Life-Course Education, Vol 1 No 1 (2026)
Publisher: PT. Academic Bright Collaboration
DOI: 10.66053/aillce.v1i1.1
Purpose – The increasing integration of artificial intelligence (AI) in higher education offers substantial benefits for learning efficiency and personalization, yet it also raises concerns regarding digital ethics, learner autonomy, and digital well-being. From a life-course education perspective, early adulthood represents a critical transitional stage in which patterns of AI interaction may shape long-term learning habits and readiness for lifelong learning. However, empirical evidence examining how multidimensional AI interactions influence academic outcomes through psychological mechanisms remains limited, particularly in developing country contexts. This study investigates the effects of cognitive, affective, and social-ethical interactions with AI on academic performance among Indonesian engineering students, with digital well-being positioned as a mediating mechanism.
Design/methods/approach – A quantitative cross-sectional survey was conducted with 103 engineering students from multiple universities, and the data were analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM).
Findings – The findings indicate that cognitive interaction with AI significantly enhances academic performance, while affective interaction primarily contributes to digital well-being. Notably, higher levels of digital well-being are associated with reduced academic performance, suggesting a paradox in which increased comfort and convenience from AI may weaken sustained cognitive engagement. Digital well-being significantly mediates the relationship between affective interaction and academic performance, revealing potential risks of emotional overreliance on AI.
Research implications/limitations – These results highlight the importance of balanced and self-regulated AI use in higher education and underscore the need to design AI-supported learning environments that foster cognitive engagement while sustaining digital well-being. From a life-course perspective, the findings suggest that AI interaction patterns formed during early adulthood may have implications for lifelong learning autonomy and educational sustainability.
Originality/value – This study provides empirical evidence on multidimensional AI interaction in higher education from a life-course perspective and emphasizes the importance of ethical and responsible AI integration to safeguard academic performance and student well-being.
Artificial Intelligence Interaction in Higher Education: A Life-Course Perspective on Digital Well-Being, Learning Outcomes, Motivation, and Ethical Awareness
Ikrananda;
Indah Amaliah;
Annajmi Rauf;
Muh. Yusril Anam;
Irwansyah Suwahyu
Artificial Intelligence in Lifelong and Life-Course Education, Vol 1 No 1 (2026)
DOI: 10.66053/aillce.v1i1.2
Purpose – The increasing integration of artificial intelligence (AI) in higher education offers significant opportunities to enhance learning effectiveness, yet it also raises concerns related to digital well-being, learner motivation, and ethical awareness. From a life-course education perspective, early adulthood represents a critical transitional phase in which patterns of interaction with AI may shape long-term learning habits and readiness for lifelong learning. However, empirical evidence examining how AI interaction influences learning outcomes through psychological and instructional mechanisms remains limited. This study examines the effects of student interaction with AI on learning outcomes, learning motivation, and ethical awareness, with digital well-being and instructional design quality positioned as mediating variables.
Design/methods/approach – A quantitative cross-sectional survey was conducted with 145 undergraduate students at a public university in Indonesia. Data were analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM) to examine direct and mediating relationships among the proposed constructs.
Findings – The results indicate that student interaction with AI has a significant positive effect on digital well-being, instructional design quality, learning motivation, and learning outcomes. Digital well-being and instructional design quality serve as important mediating mechanisms through which AI interaction enhances motivation and academic achievement. However, interaction with AI does not directly improve students’ ethical awareness, suggesting that ethical sensitivity does not emerge automatically through AI use without explicit pedagogical intervention.
Research implications/limitations – These findings underscore the importance of designing AI-supported learning environments that promote cognitive engagement, digital well-being, and pedagogical quality while deliberately integrating ethical instruction. The study is limited by its cross-sectional design, single-institution context, and reliance on self-reported data.
Originality/value – This study contributes to the literature on artificial intelligence in education by integrating digital well-being and instructional design quality as mediating mechanisms within a life-course framework, offering insights into how AI interaction during early adulthood may influence sustainable and responsible lifelong learning.
AI Dependency and Critical Thinking in Higher Education: A Life-Course Perspective on Ethical Awareness and Algorithmic Bias
Jabal Nur Popalia;
Muh. Al-Habsy;
Muh. Akbar;
Akhmad Affandi
Artificial Intelligence in Lifelong and Life-Course Education, Vol 1 No 1 (2026)
DOI: 10.66053/aillce.v1i1.3
Purpose – The rapid adoption of artificial intelligence (AI) in higher education has transformed how students engage with learning tasks, raising concerns about dependency, ethical awareness, and algorithmic bias. From a life-course education perspective, early adulthood represents a critical developmental stage in which patterns of AI use may shape long-term critical thinking and lifelong learning dispositions. However, empirical studies integrating AI dependency, ethical awareness, and algorithmic bias awareness in relation to students’ critical thinking remain limited. This study examines the effects of AI dependency, ethical awareness, and algorithmic bias awareness on university students’ critical thinking skills in the context of Indonesian higher education.
Design/methods/approach – A quantitative cross-sectional design was employed. Data were collected from 110 undergraduate students across four universities in South Sulawesi, Indonesia, using purposive sampling. A validated questionnaire measured AI dependency, ethical awareness, algorithmic bias awareness, and critical thinking skills. Data were analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM) with SmartPLS.
Findings – The results indicate that all three variables significantly and positively influence students’ critical thinking skills. Algorithmic bias awareness exhibits the strongest effect, followed by AI dependency and ethical awareness. These findings suggest that critical awareness of AI limitations contributes more substantially to critical thinking development than the intensity of AI use alone.
Research implications/limitations – The cross-sectional design limits causal interpretation, and the dominance of early-semester STEM students constrains generalizability. Potential moderating factors were not examined.
Originality/value – This study contributes to the literature on artificial intelligence in education by integrating ethical awareness and algorithmic bias awareness within a life-course framework, highlighting the central role of critical AI literacy in supporting sustainable critical thinking development in higher education.
AI Chatbot Use in Higher Education: A Life-Course Perspective on Student Engagement and Cognitive Learning Outcomes
Muh. Nurfajri Syam;
Muh Nurul Ainal Hakim;
Della Fadhilatunisa;
Saipul Abbas
Artificial Intelligence in Lifelong and Life-Course Education, Vol 1 No 1 (2026)
DOI: 10.66053/aillce.v1i1.4
Purpose – The increasing use of artificial intelligence (AI) chatbots in higher education has reshaped how students engage with learning activities and develop cognitive skills. From a life-course education perspective, higher education represents a critical stage in early adulthood where learning experiences may influence long-term learning habits and readiness for lifelong learning. However, empirical studies integrating chatbot usage intensity, AI effectiveness, and student engagement within a single explanatory model remain limited, particularly in developing country contexts. This study examines the effects of AI chatbot usage intensity and perceived AI effectiveness on students’ cognitive learning outcomes, with student engagement positioned as a mediating mechanism.
Design/methods/approach – A quantitative cross-sectional survey was conducted involving 88 undergraduate students who had experience using AI chatbots for academic purposes. Data were collected using a validated questionnaire and analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM) to test direct and indirect relationships among the constructs.
Findings – The results indicate that both chatbot usage intensity and AI effectiveness have significant positive effects on cognitive learning outcomes. These variables also significantly enhance student engagement, which in turn positively influences cognitive learning outcomes. Mediation analysis reveals that student engagement significantly mediates the relationship between AI effectiveness and cognitive learning outcomes, but not between chatbot usage intensity and cognitive learning outcomes, highlighting the dominant role of interaction quality over frequency of use.
Research implications/limitations – The findings underscore the importance of designing AI-supported learning environments that prioritize pedagogical effectiveness and meaningful engagement rather than mere intensity of use. The cross-sectional design and reliance on self-reported data limit causal inference and generalizability.
Originality/value – This study contributes to artificial intelligence in education research by integrating engagement as a mediating mechanism within a life-course framework, offering insights into how AI chatbot use during early adulthood may support sustainable cognitive development and lifelong learning readiness.
Artificial Intelligence Use and Emotional Well-Being in Higher Education: A Life-Course Perspective on Technology Acceptance and Trust
Nailha Dinda Aprilia;
Kartika Ratna Sari;
Putri Nirmala;
Rosidah;
Shera Afidatunisa
Artificial Intelligence in Lifelong and Life-Course Education, Vol 1 No 1 (2026)
DOI: 10.66053/aillce.v1i1.5
Purpose – The growing integration of artificial intelligence (AI) in higher education has reshaped students’ cognitive and emotional learning experiences. From a life-course education perspective, higher education represents a critical phase of early adulthood in which interactions with AI may influence emotional regulation and readiness for lifelong learning. However, empirical studies examining the affective consequences of AI use through technology acceptance and trust mechanisms remain limited. This study investigates how AI usage frequency, perceived usefulness, perceived ease of use, and trust in AI influence university students’ emotional well-being.
Design/methods/approach – A quantitative cross-sectional survey was administered to university students who actively used AI to support their learning activities. Data were analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM) to examine the direct effects of technology acceptance factors and trust in AI on emotional well-being.
Findings – The results indicate that AI usage frequency and trust in AI have significant positive effects on students’ emotional well-being. In contrast, perceived usefulness and perceived ease of use do not directly influence emotional well-being. These findings suggest that the affective benefits of AI-supported learning are shaped more by familiarity and psychological trust than by technical efficiency alone.
Research implications/limitations – The cross-sectional design, reliance on self-reported measures, and single-institution sample limit causal interpretation and generalizability. Future studies are encouraged to adopt longitudinal or mixed-method approaches to capture emotional dynamics across educational stages.
Originality/value – This study extends the Technology Acceptance Model by positioning emotional well-being as a key outcome within a life-course framework, offering insights into how AI interaction during early adulthood may support psychological sustainability and lifelong learning readiness.
Learning Autonomy and Effectiveness in AI-Supported Engineering Education Integrating Technology Acceptance and Motivation
Haeril Anwar;
Ismawati;
Nurrahmah Agusnaya;
Andi Akram Nur Risal;
Dary Mochammad Rifqie
Artificial Intelligence in Lifelong and Life-Course Education, Vol 1 No 2 (2026)
DOI: 10.66053/aillce.v1i2.14
Purpose – This study examines the influence of learning autonomy on learning effectiveness in artificial intelligence-supported learning among engineering students by extending the Technology Acceptance Model with motivational and psychological factors.
Design/methods/approach – A quantitative cross-sectional survey was conducted involving 90 engineering students from a public university in Indonesia who had experience using artificial intelligence tools for academic learning. Data were analyzed using partial least squares structural equation modeling to examine the relationships among perceived usefulness, self-efficacy, willingness for autonomous learning, and learning effectiveness and autonomy.
Findings – The results indicate that perceived usefulness, self-efficacy, and willingness for autonomous learning all have significant positive effects on learning effectiveness and autonomy. Willingness for autonomous learning emerged as the strongest predictor, highlighting the central role of students’ internal motivation and readiness to manage their own learning processes in AI-supported environments.
Research implications/limitations – The study is limited by its cross-sectional design, reliance on self-reported data, and a sample restricted to engineering students from a single institution, which may limit generalizability.
Originality/value – This study extends the Technology Acceptance Model by integrating learning autonomy and motivational factors within an artificial intelligence-supported learning context, offering empirical evidence to inform the design of balanced and student-centered AI-enhanced learning in higher education.
Benefits, Convenience, Ethics, and Anxiety Shaping Indonesian Students’ Intentions to Adopt Generative Artificial Intelligence
Intan Ramadhani Hasbullah;
Andi Imam Ardiansyah;
Elma Nurjannah;
Stephen Amukune
Artificial Intelligence in Lifelong and Life-Course Education, Vol 1 No 2 (2026)
DOI: 10.66053/aillce.v1i2.15
Purpose – This study examines Indonesian university students’ behavioral intention to adopt generative artificial intelligence by extending the technology acceptance model with ethical concern and artificial intelligence anxiety. It evaluates how perceived usefulness, perceived ease of use, ethical concern, and artificial intelligence anxiety jointly shape adoption intention in higher education.
Design/methods/approach – A quantitative cross-sectional survey was administered to 96 active undergraduate students at a public university in Indonesia. The extended model was analyzed using partial least squares structural equation modeling to estimate the predictive power and the significance of structural relationships among constructs.
Findings – The structural model explained 64.5% of the variance in behavioral intention. Perceived usefulness was the strongest predictor, followed by ethical concern and perceived ease of use. Artificial intelligence anxiety did not significantly influence behavioral intention, suggesting that functional value and ethical awareness outweighed affective apprehension among experienced users.
Research implications/limitations – Institutions should prioritize practical integration and clear ethical guidance for generative artificial intelligence use rather than focusing primarily on reducing anxiety. Generalizability is limited by the cross-sectional design, small sample size, and a sample dominated by science and technology disciplines.
Originality/value – This study provides empirical evidence that ethical concern functions as a regulatory facilitator rather than a barrier in generative artificial intelligence acceptance, offering a refined lens for responsible adoption policies in Indonesian higher education.
Ethical Awareness, Perceived Usefulness, and AI Literacy Predict University Students’ Intentions to Use AI Tools
Muhammad Ghazi Saputra;
Elsa Wulandari Tambunan;
Andi Nurhalisa Dwiani;
Devi Miftahul Jannah;
Saif Mohammed
Artificial Intelligence in Lifelong and Life-Course Education, Vol 1 No 2 (2026)
DOI: 10.66053/aillce.v1i2.16
Purpose – This study examines how ethical awareness and perceived usefulness shape university students’ intentions to use artificial intelligence tools, and whether artificial intelligence literacy mediates these relationships in higher education.
Design/methods/approach – A quantitative cross-sectional survey was administered to 85 diploma and undergraduate students with prior experience using artificial intelligence for academic activities. The research model included perceived usefulness, ethical awareness, artificial intelligence literacy, and behavioral intention to use. Data were analyzed using partial least squares structural equation modeling with 5,000 bootstrapping resamples to evaluate measurement quality, test direct effects, and assess mediation.
Findings – Perceived usefulness significantly predicts behavioral intention to use artificial intelligence tools and also strengthens artificial intelligence literacy. Ethical awareness significantly increases artificial intelligence literacy but does not directly predict behavioral intention. Artificial intelligence literacy significantly predicts behavioral intention and mediates the effects of both perceived usefulness and ethical awareness on intention. These findings suggest that ethical awareness alone may increase caution unless supported by sufficient literacy that enables students to evaluate benefits, limitations, and risks of artificial intelligence tools.
Research implications/limitations – The cross-sectional design, purposive sampling, and a single-institution sample limit causal inference and generalizability. Future studies should use larger and more diverse samples and longitudinal designs.
Originality/value – This study provides empirical evidence that artificial intelligence literacy functions as a key mediating mechanism linking ethical awareness and perceived usefulness to artificial intelligence usage intention, informing responsible adoption strategies in higher education.
AI Awareness, Literacy, and Social Influence Predict Ethical Reasoning and Responsible Use in Higher Education
Nurul Febrianti;
Aristia Anastasya Diandra;
Andi Dio Nurul Awalia;
Della Fadhilatunnisa;
M. Miftach Fakhri
Artificial Intelligence in Lifelong and Life-Course Education, Vol 1 No 2 (2026)
DOI: 10.66053/aillce.v1i2.17
Purpose – This study investigates how AI awareness, AI literacy, and social influence shape students’ AI ethics and, consequently, responsible AI use in higher education.
Design/methods/approach – A quantitative cross-sectional survey was conducted with 101 university students in South Sulawesi, Indonesia, who had experience using AI-based learning tools. Data were analyzed using partial least squares structural equation modeling to assess measurement validity and test structural relationships, including the mediating role of AI ethics.
Findings – AI awareness and AI literacy have significant positive effects on AI ethics, with AI literacy emerging as the strongest predictor. Social influence shows a significant negative association with AI ethics, indicating that unregulated peer and environmental pressure may encourage AI adoption while weakening ethical sensitivity. AI ethics significantly predicts responsible AI use and mediates the effects of AI awareness, AI literacy, and social influence on responsible use. These results highlight that responsible AI engagement depends not only on cognitive readiness but also on the ethical norms governing how AI is used in academic contexts.
Research implications/limitations – The study is limited by its cross-sectional design, self-reported data, and a sample restricted to one region, which may limit causal inference and generalizability.
Originality/value – This study provides empirical evidence that AI ethics is a central mechanism linking cognitive and social factors to responsible AI use, informing institutional AI governance, literacy programs, and ethical policy development in higher education.
Academic Dependency, AI Literacy, and Cognitive Offloading Predict Students’ Cognitive Ability in Generative AI Learning
Andini Noviyanti Fitriani;
Rezky Risaldy;
Annajmi Rauf;
Shera Afidatunisa
Artificial Intelligence in Lifelong and Life-Course Education, Vol 1 No 2 (2026)
DOI: 10.66053/aillce.v1i2.18
Purpose – This study examines the cognitive effects of generative artificial intelligence use in higher education by testing whether academic dependency, AI literacy, and cognitive offloading predict students’ cognitive ability.
Design/methods/approach – A quantitative cross-sectional survey was conducted with 93 undergraduate students at Universitas Negeri Makassar who actively use generative AI tools for academic purposes. Data were collected through a structured online questionnaire and analyzed using partial least squares structural equation modeling to evaluate measurement reliability and validity and to test structural relationships among academic dependency, AI literacy, cognitive offloading, and student cognitive ability.
Findings – The structural model shows that academic dependency, AI literacy, and cognitive offloading positively and significantly predict student cognitive ability. AI literacy is the strongest predictor, indicating that students’ capacity to understand, evaluate, and use AI outputs critically is central to cognitive development. The findings also suggest that adaptive dependency can function as productive scaffolding, while strategic cognitive offloading may support higher-order thinking by reallocating limited cognitive resources.
Research implications/limitations – The cross-sectional design limits causal inference, self-reported measures may introduce bias, and a single-institution context limits generalizability.
Originality/value – This study provides integrated empirical evidence on the cognitive impact of generative AI use by jointly modeling academic dependency, AI literacy, and cognitive offloading, informing balanced AI literacy interventions and responsible AI governance in higher education.