Contact Name
M. Miftach Fakhri
Contact Email
fakhri.miftach@gmail.com
Phone
+6281774932845
Journal Mail Official
jaaie@abcollab.id
Editorial Address
Jalan Cempaka Mekar Raya No. 10 Bandung, Jawa Barat, Indonesia
Location
Kota Bandung,
Jawa Barat,
Indonesia
Journal of Applied Artificial Intelligence in Education
ISSN: -     EISSN: 3109-7081     DOI: https://doi.org/10.66053/jaaie
Core Subject: Science, Education
Applied AI in Classroom Practice: exploring practical classroom implementations such as smart content delivery, AI-powered virtual assistants, and automated learning support tools.
Intelligent Tutoring Systems: focusing on adaptive AI-driven systems that personalize instruction based on individual learner characteristics and performance.
AI-Based Assessment and Feedback: examining automated grading, formative assessment mechanisms, and intelligent feedback systems.
Learning Analytics and Educational Data Mining: investigating AI-driven analysis of student behaviors, prediction of learning outcomes, and optimization of pedagogical strategies.
Adaptive and Personalized Learning Environments: designing systems that dynamically adjust learning pathways based on real-time interaction and learner progress.
Natural Language Processing in Education: including automated writing evaluation, language learning applications, and conversational agents for instructional support.
AI for Inclusive and Accessible Education: leveraging AI technologies to assist diverse learners, including individuals with disabilities and those in underserved communities.
Ethics and Governance of AI in Education: addressing fairness, transparency, accountability, data security, and responsible AI deployment within educational settings.
Articles: 10 documents
Exploring Factors Influencing MOOCs Usage Behavior and Technology Acceptance in Higher Education: An Analysis Using the UTAUT Model Kiki Awaliyah; Arum Putri Rahayu; Putri Olivia; Muh Ma’ruf Asya Perdana
Journal of Applied Artificial Intelligence in Education Vol 1, No 1 (2025): July 2025
Publisher : Academic Bright Collaboration

DOI: 10.66053/jaaie.v1i1.1

Abstract

Indonesian higher education still faces uneven MOOC adoption and low completion rates, which may be driven by students’ technology acceptance and available support. This study investigated key determinants of Universitas Negeri Makassar students’ MOOCs acceptance and usage behavior using the Unified Theory of Acceptance and Use of Technology (UTAUT). A descriptive quantitative, cross-sectional survey was administered via Google Forms to 33 undergraduate students. The instrument comprised 34 Likert-scale items (1–5) measuring eight UTAUT-related dimensions: performance expectancy, effort expectancy, social influence, facilitating conditions, computer self-efficacy, attitude toward technology, behavioral intention, and actual use; data were analyzed using descriptive statistics. Overall perceptions were fairly positive, with most indicator means in the moderate-to-agree range. Performance expectancy (e.g., perceived usefulness and learning improvement) was moderate (means ≈3.52–3.55) and effort expectancy suggested MOOCs were relatively easy to learn (means ≈3.48–3.51). Social influence was weaker (means ≈3.24–3.30), while facilitating conditions were strongest, including system compatibility (mean ≈3.58). Behavioral intention was moderate (e.g., plan to use MOOCs; mean ≈3.55), yet actual use was comparatively lower (means ≈3.21–3.33), indicating an intention–use gap. Strengthening institutional support (infrastructure, guidance, integration with campus systems) and targeted interventions to convert intention into sustained participation may improve MOOC uptake and completion; overall, the findings support UTAUT’s usefulness for diagnosing adoption barriers in Indonesian university contexts.
Analyzing the Continuance Intention to Use AI News Anchors for Daily Information Needs: An Expectation Confirmation Theory Approach Alyah Rahayu; Andika Isma; Fitra Ramadani
Journal of Applied Artificial Intelligence in Education Vol 1, No 1 (2025): July 2025
Publisher : Academic Bright Collaboration

DOI: 10.66053/jaaie.v1i1.2

Abstract

Artificial intelligence (AI) has begun reshaping news broadcasting through AI-based news anchors that can deliver information efficiently and consistently, yet public acceptance, emotional connection, and accountability for potential errors remain open concerns. This study aimed to analyze users’ continuance intention to use AI news anchors for daily information needs through an Expectation Confirmation Theory lens, focusing on trust/acceptance, news-delivery quality, and perceived innovation. A quantitative cross-sectional survey was conducted among students aged 18–24 as digitally active users; data were collected via an online Likert-scale questionnaire (15 items across three aspects) and analyzed descriptively to summarize response patterns. The results indicate generally moderate-to-positive evaluations across all aspects: trust and acceptance showed an overall mean of 2.65, news-delivery quality 2.62, and innovation/technology 2.45. At the item level, respondents reported moderate comfort with AI-delivered news (M = 2.51) and moderate belief in accuracy/reliability (M = 2.54); delivery clarity was rated similarly (M = 2.54), while visual appeal showed a relatively stronger influence on viewing interest (M = 2.73). Respondents also expressed interest in AI-related technological advances (M = 2.52) and generally viewed AI news delivery as a positive media direction, while still noting that improvements are needed before AI can fully replace human presenters. These findings imply that media organizations and developers should prioritize more natural and emotionally engaging delivery, strengthen audio-visual realism, and address ethical/regulatory safeguards, concluding that AI news anchors are broadly acceptable to younger audiences but should be positioned as a complement to human presenters rather than a complete substitute.
Student Perceptions of AI in Learning: The Role of Credibility and Emotional Well-Being in Supporting Critical Thinking Skills Ummul Khaeri Masna; Arum Putri Rahayu; Sakinah Mawaddah; Nurrahmah Agusnaya; Muh. Yusril Anam
Journal of Applied Artificial Intelligence in Education Vol 1, No 1 (2025): July 2025
Publisher : Academic Bright Collaboration

DOI: 10.66053/jaaie.v1i1.3

Abstract

The growing use of artificial intelligence (AI) tools (e.g., ChatGPT, Grammarly) in higher education is often claimed to enhance students’ critical thinking, yet perceived benefits remain inconsistent and may depend more on trust and affective experience than on technical features alone. This study aimed to examine students’ perceptions of AI for supporting critical thinking by testing five predictors—perceived AI credibility, AI quality, cognitive absorption, emotional well-being, and satisfaction—and their effects on overall AI perception. A quantitative cross-sectional survey was administered to 90 Indonesian university students (purposive sampling; ages 18–25) using 26 closed-ended Likert items (5-point scale) and three open-ended questions; data were analyzed in Jamovi using descriptive statistics, Pearson correlations, and multiple linear regression. The results indicated generally moderate perceptions of AI (item means ≈2.2–2.8), significant positive correlations among all variables (p < .001), and strong explanatory power of the regression model (R² = 0.737; adjusted R² = 0.720). In the multivariate model, emotional well-being (β_std = 0.267, p = 0.016) and AI credibility (β_std = 0.196, p = 0.043) were the only significant predictors, whereas AI quality, cognitive absorption, and satisfaction showed positive but non-significant effects. These findings imply that AI-supported learning interventions should prioritize credible, trustworthy AI outputs and pedagogical designs that promote positive emotional experiences (e.g., comfort, reduced stress, motivation) to strengthen perceived critical-thinking benefits; overall, affective and trust-related factors appear to be central drivers of students’ positive AI perceptions, warranting validation in larger and longitudinal studies.
Effects of Artificial Intelligence Integration on Design Mindset, Creativity, and Reflection Khaerul Amri; Intan Novita Kowaas; Andro Ruben Runtu; Saif Mohammed; Rifky Muhajji
Journal of Applied Artificial Intelligence in Education Vol 1, No 1 (2025): July 2025
Publisher : Academic Bright Collaboration

DOI: 10.66053/jaaie.v1i1.4

Abstract

Artificial intelligence (AI) is increasingly embedded in design-based learning because it can accelerate ideation, support rapid iteration, and enable human–AI collaboration; however, a persistent challenge is maintaining an appropriate balance between AI-driven automation and human agency while ensuring that students’ design mindset, creativity, and reflective thinking are genuinely strengthened. This study aimed to examine the perceived effects of AI integration on students’ design mindset, creativity, and critical reflection in higher education. A quantitative cross-sectional design was employed with purposive sampling of 96 university students (predominantly female; mean age ≈20 years) who had used AI tools in learning and design activities; data were collected via an online Likert-scale questionnaire distributed from October to November 2024 and analyzed using descriptive statistics (means and sums). The results indicate that students reported generally moderate-to-positive perceptions of AI’s contribution across all constructs, with overall mean scores suggesting beneficial support for design mindset (M≈2.59) and creativity (M≈2.59), and relatively stronger support for reflection (M≈2.68), particularly in helping students understand their learning/creative processes and learn from mistakes. These findings imply that higher education institutions such as Makassar State University should integrate AI more strategically as a co-creative learning partner, complemented by structured training for both instructors and students to maximize creative and reflective gains while safeguarding human control; overall, AI shows strong potential to enhance design-oriented learning, but deeper implementation and longitudinal evaluation are recommended.
Enhancing Educator Quality and National Education Success: The Roles of Competence, Innovation, and Sustainable Support Indal Awalaikal; Andro Ruben Runtu; Surahmadani; Stephen Amukune
Journal of Applied Artificial Intelligence in Education Vol 1, No 1 (2025): July 2025
Publisher : Academic Bright Collaboration

DOI: 10.66053/jaaie.v1i1.5

Abstract

Persistent disparities in education quality in Indonesia, shaped by uneven teacher capacity, limited innovation in technology-enabled pedagogy, and inconsistent long-term support, continue to hinder the achievement of national education goals. This study aimed to examine how educator competence, pedagogical innovation, and sustainable support are perceived as key contributors to revitalizing educator quality as a foundation for national education success. A quantitative cross-sectional approach was used, collecting data from 106 undergraduate students across Indonesia through an online questionnaire (Google Forms) using convenience sampling. The instrument consisted of 25 Likert-scale items, covering educator competence (8 items), pedagogical innovation (9 items), and sustainable support (8 items); responses were analyzed descriptively using mean scores, dispersion, and categorical interpretation. The results indicate that participants perceived educator competence as Very Good (M = 1.78; SD = 0.564) and sustainable support as Very Good (M = 1.78; SD = 0.519), while pedagogical innovation was rated Good (M = 1.81; SD = 0.505), suggesting strong perceived readiness in competence and support but relatively slower progress in innovation practices. Respondents were predominantly female (62.3%) and mainly aged 21–23 (56.4%), with more than half in higher semesters (52.8%), reflecting perspectives from students with substantial academic exposure. These findings imply that national education improvement requires sustaining competence development and strengthening durable institutional and policy support while accelerating equitable pedagogical innovation—especially effective technology integration in underserved areas. Overall, the study concludes that synergy among competence, innovation, and sustained support is essential for improving educator quality and advancing more inclusive outcomes.
Affective Drivers and Ethical Concerns Shaping AI Use Among University Students Nabilah Auliah Rahman; Melda Auliyah Zakina; Aprilianti Nirmala S; Saipul Abbas
Journal of Applied Artificial Intelligence in Education Vol 1, No 2 (2026): January 2026
Publisher : Academic Bright Collaboration

DOI: 10.66053/jaaie.v1i2.6

Abstract

The rapid growth of artificial intelligence (AI) use in higher education raises concerns about how students’ emotional states and the quality of their interactions with AI shape both affective engagement and ethical awareness in academic contexts. This study aims to examine the effects of emotional well-being, AI credibility, and AI interaction quality on students’ ethical awareness, with affective engagement positioned as a mediating mechanism. A quantitative cross-sectional survey was administered to higher education students who use AI tools for academic activities, and the proposed relationships were tested using PLS-based structural modeling with bootstrapping procedures. The findings indicate that emotional well-being (β = 0.549, p < 0.001) and AI interaction quality (β = 0.420, p < 0.001) significantly enhance affective engagement, whereas AI credibility shows no significant effect (β = –0.045, p = 0.342). Affective engagement has a significant positive influence on ethical awareness (β = 0.597, p < 0.001) and significantly mediates the effects of emotional well-being and interaction quality on ethical awareness, while no indirect effect is observed for AI credibility. Overall, these results imply that ethical awareness in student AI use is fostered more strongly through emotionally supportive experiences and high-quality human–AI interactions than through credibility perceptions alone, underscoring the need for human-centered AI integration and ethics-oriented guidance in higher education.
Redefining Social Responsibility Through AI Literacy: The Roles of Digital Literacy and Ethical Awareness in Digital Citizenship Misbahuljannah; Riqqah Dhian Shefira; Devi Miftahul Jannah; Muh. Yusril Anam; Rosidah
Journal of Applied Artificial Intelligence in Education Vol 1, No 2 (2026): January 2026
Publisher : Academic Bright Collaboration

DOI: 10.66053/jaaie.v1i2.7

Abstract

The rapid integration of artificial intelligence (AI) into digital learning environments requires higher education students to develop not only technical competence, but also critical, ethical, and socially responsible capacities as digital citizens. This study aims to examine how AI literacy, digital literacy, and ethical awareness influence students’ social responsibility as a key foundation for responsible digital citizenship. A quantitative cross-sectional survey was conducted with 100 undergraduate students in Informatics and Computer Engineering Education, and the hypothesized relationships were tested using Partial Least Squares Structural Equation Modeling (PLS-SEM). The results show that digital literacy has a positive and significant effect on social responsibility (β = 0.397, p = 0.001) and ethical awareness emerges as the strongest positive predictor (β = 0.615, p < 0.001), while AI literacy exhibits a negative but significant effect (β = −0.151, p = 0.022), suggesting that higher AI literacy may foster more critical or cautious orientations that could reduce socially responsible engagement when not accompanied by strong ethical grounding and citizenship-oriented competencies. The findings imply that higher education curricula should integrate digital literacy, AI literacy, and ethics education in a balanced manner, moving beyond purely technical training, so that AI literacy translates into constructive social responsibility and strengthened digital citizenship; future studies should extend the sample and adopt longitudinal designs to capture behavioral changes over time.
How AI Personalization and Feedback Shape Student Engagement: The Mediating Role of Technology Engagement Ahmad Abdullah Aswad; Tegar Angbirah Parerungan; Elma Nurjannah; Muh. Akbar
Journal of Applied Artificial Intelligence in Education Vol 1, No 2 (2026): January 2026
Publisher : Academic Bright Collaboration

DOI: 10.66053/jaaie.v1i2.8

Abstract

Higher education is rapidly adopting AI-supported learning systems, yet the effectiveness of these tools depends on how students engage with them psychologically, not merely on their availability. Mere access to AI tools does not automatically translate into meaningful student engagement, indicating a psychological “adoption gap” between technology availability and learners’ active involvement. This study aims to test how key AI features (AI usage, personalization/adaptivity, and feedback/analytics) relate to student engagement, while examining technology engagement as a mediating mechanism that explains how AI features become educationally effective. Using a quantitative, non-experimental cross-sectional survey of 71 undergraduate students in Eastern Indonesia, the proposed model was analyzed using PLS-SEM (SmartPLS 4) to estimate direct and indirect effects. The model demonstrated strong predictive power, explaining 74.4% of the variance in technology engagement (R² = 0.744) and 66.4% in student engagement (R² = 0.664). AI personalization/adaptivity emerged as the strongest driver, significantly predicting technology engagement (β = 0.516, p < 0.001) and also exerting a significant direct effect on student engagement (β = 0.310, p = 0.010), whereas AI usage and feedback did not show significant direct effects on student engagement but exhibited significant indirect effects through full mediation by technology engagement. These findings imply that technology engagement functions as a “gatekeeper”: institutions should prioritize adaptive personalization and deliberately cultivate students’ sense of control, competence, and psychological involvement with AI systems, rather than relying on high usage intensity or automated feedback alone to drive engagement.
AI Hallucinations in AI-Assisted Educational Decision-Making and Academic Honesty Intentions Among Undergraduates Desitha Cahya; Putri Ramdani; Annajmi Rauf; Andi Baso Kaswar; M Miftach Fakhri
Journal of Applied Artificial Intelligence in Education Vol 1, No 2 (2026): January 2026
Publisher : Academic Bright Collaboration

DOI: 10.66053/jaaie.v1i2.9

Abstract

Artificial Intelligence in Education (AIED) is increasingly used to improve learning efficiency, personalization, and academic productivity; however, persistent risks such as AI hallucinations, algorithmic bias, and limited transparency can undermine the reliability of AI outputs and create ethical vulnerabilities that threaten academic integrity. This study aims to examine how students’ perceptions of algorithmic bias, perceived transparency of AI systems, and digital literacy influence their intentions to behave honestly when using AI for academic purposes. A quantitative cross-sectional survey was administered to 97 undergraduate students with experience using generative AI tools, and the proposed relationships were tested using Partial Least Squares Structural Equation Modeling (PLS-SEM) with SmartPLS 4. The results indicate that algorithmic bias (β = 0.248; t = 2.420; p = 0.008), transparency (β = 0.188; t = 1.920; p < 0.001), and digital literacy (β = 0.499; t = 5.457; p = 0.027) each have positive and significant effects on honest behavior intentions, with digital literacy emerging as the strongest predictor. These findings imply that strengthening students’ digital literacy together with institutional efforts to promote transparent and fairness-aware AI use can reduce unethical practices and foster a more integrity-centered academic environment in AI-assisted learning, while also informing ethical behavior frameworks for AIED implementation in higher education.
Explaining AI Anxiety Among University Students: The Roles of Career Anxiety, Dehumanization, and Algorithmic Fairness Mustamin; Ahmad Syarif Hidayatullah; Putri Nirmala; Akhmad Affandi; Della Fadhilatunisa
Journal of Applied Artificial Intelligence in Education Vol 1, No 2 (2026): January 2026
Publisher : Academic Bright Collaboration

DOI: 10.66053/jaaie.v1i2.10

Abstract

Beyond its instructional benefits, AI in higher education can evoke anxiety when students perceive AI as diminishing human uniqueness, disrupting career trajectories, or operating in ways that feel difficult to evaluate or contest. This study aims to examine the effects of career anxiety, dehumanization, and perceived algorithmic fairness on students’ AI anxiety in the context of AI-supported learning. Using an explanatory quantitative survey design, data were collected from 70 university students who actively used AI-based learning tools, and the proposed relationships were tested using PLS-SEM. The results indicate that career anxiety positively predicts AI anxiety (β = 0.234, t = 1.691, p = 0.045) and dehumanization is the strongest predictor (β = 0.415, t = 2.958, p = 0.002), whereas perceived algorithmic fairness is not significant (β = 0.103, t = 0.740, p = 0.230), with the model explaining 48.2% of the variance in AI anxiety (R² = 0.482). These findings imply that AI anxiety is driven more by emotional and identity-related threats than by fairness evaluations, suggesting that institutions should adopt human-centered AI integration, strengthen AI literacy, and provide career-focused and psychological support to reduce student anxiety in AI-supported learning environments.
