Contact Name
M. Miftach Fakhri
Contact Email
fakhri.miftach@gmail.com
Phone
+6281774932845
Journal Mail Official
jaaie@abcollab.id
Editorial Address
Jalan Cempaka Mekar Raya No. 10 Bandung, Jawa Barat, Indonesia
Location
Kota Bandung,
Jawa Barat,
INDONESIA
Journal of Applied Artificial Intelligence in Education
ISSN: -     EISSN: 3109-7081     DOI: https://doi.org/10.66053/jaaie
Core Subject: Science, Education
- Applied AI in Classroom Practice: practical classroom implementations such as smart content delivery, AI-powered virtual assistants, and automated learning support tools.
- Intelligent Tutoring Systems: adaptive AI-driven systems that personalize instruction based on individual learner characteristics and performance.
- AI-Based Assessment and Feedback: automated grading, formative assessment mechanisms, and intelligent feedback systems.
- Learning Analytics and Educational Data Mining: AI-driven analysis of student behaviors, prediction of learning outcomes, and optimization of pedagogical strategies.
- Adaptive and Personalized Learning Environments: systems that dynamically adjust learning pathways based on real-time interaction and learner progress.
- Natural Language Processing in Education: automated writing evaluation, language learning applications, and conversational agents for instructional support.
- AI for Inclusive and Accessible Education: AI technologies that assist diverse learners, including individuals with disabilities and those in underserved communities.
- Ethics and Governance of AI in Education: fairness, transparency, accountability, data security, and responsible AI deployment within educational settings.
Articles: 12 Documents
Auditable Automated Essay Scoring and Formative Feedback: A Rubric-Grounded Pipeline for Secondary and Higher Education Qi Xin
Journal of Applied Artificial Intelligence in Education Vol 2, No 1 (2026): July 2026
Publisher : Academic Bright Collaboration

DOI: 10.66053/jaaie.v2i1.348

Abstract

Automated essay scoring in education is increasingly expected to do more than reproduce human holistic scores; classroom use also demands rubric-aligned feedback, transparent evidence, and a way to route uncertain cases to teachers. In this study, “LLM-ready” refers to a system that outputs structured score evidence, weak-trait signals, and document-level anchors that can later be verbalized by a language model without changing the underlying decision trace. This study aimed to evaluate whether a rubric-grounded, LLM-ready pipeline can achieve competitive scoring accuracy while also generating auditable formative feedback and a teacher-controllable review signal. The evaluation used the public ASAP corpus of 12,976 essays across eight prompts and prompt-wise five-fold cross-validation. Four holistic scorers were compared: length-only, rubric forest, prompt-adaptive centroid regressor (PACR), and the final RG-Score ensemble with trait grounding, isotonic calibration, and audit control. Auxiliary analytic scoring was examined on Prompts 2 and 7–8, and feedback experiments were conducted on all 2,292 essays from Prompts 7 and 8. PACR obtained the highest macro QWK of 0.739, while RG-Score reached 0.738 and provided a calibrated, auditable path to feedback. The prompt-level QWK for RG-Score ranged from 0.66 to 0.82, with particularly strong gains on Prompts 6 and 7. Auxiliary analytic scoring yielded QWK values of 0.623 for Prompt 2 domain2, 0.604 on average for Prompt 7 traits, and 0.506 on average for Prompt 8 traits. The rubric-grounded evidence feedback template achieved a Trait Recall@2 of 0.829, a valid evidence rate of 0.912, and an auditability index of 0.893 on Prompts 7 and 8. 
These findings support rubric-grounded AES as a practical assessment-support approach for secondary-school writing and as a structured foundation for higher-education formative feedback workflows, while also indicating that weaker trait models should be treated as advisory rather than fully autonomous.
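The scoring comparisons above are all reported in quadratic weighted kappa (QWK), the standard agreement metric for the ASAP benchmark. As an illustrative sketch (not code from the paper), a minimal NumPy implementation of QWK could look like:

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, min_rating, max_rating):
    """Quadratic weighted kappa between two integer rating sequences."""
    y_true = np.asarray(y_true, dtype=int)
    y_pred = np.asarray(y_pred, dtype=int)
    n = max_rating - min_rating + 1
    # Observed rating co-occurrence matrix
    O = np.zeros((n, n))
    for t, p in zip(y_true, y_pred):
        O[t - min_rating, p - min_rating] += 1
    # Quadratic disagreement weights: 0 on the diagonal, 1 at max distance
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    W = (i - j) ** 2 / (n - 1) ** 2
    # Expected co-occurrence under chance agreement (outer product of margins)
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (W * O).sum() / (W * E).sum()
```

Perfect agreement yields 1.0, chance-level agreement 0.0, and systematic disagreement negative values, which is why the reported macro QWK of roughly 0.74 indicates substantial but imperfect human-machine agreement.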
Effects of Artificial Intelligence on Academic Achievement Among Nigerian University Students: A Meta-Analysis (2022–2025) Kayode Sunday John Dada
Journal of Applied Artificial Intelligence in Education Vol 2, No 1 (2026): July 2026
Publisher : Academic Bright Collaboration

DOI: 10.66053/jaaie.v2i1.359

Abstract

Nigeria’s higher education sector faces persistent challenges, and although artificial intelligence shows growing potential to improve learning outcomes, prior findings in the Nigerian university context remain fragmented and methodologically inconsistent. This study aimed to quantitatively synthesize empirical evidence on AI’s impact on academic achievement among Nigerian university students, identify moderating variables explaining effect heterogeneity, and document implementation challenges constraining AI adoption in the educational sector. Following PRISMA 2020 guidelines, a systematic search of eight bibliographic databases identified 47 eligible studies published between 2022 and 2025, covering a combined sample of 8,234 undergraduate and postgraduate students from federal and state universities in Nigeria. Random-effects models with restricted maximum likelihood estimation were conducted in R using the metafor package, with Hedges’ g as the primary effect size. Moderator analyses applied mixed-effects models and meta-regression across seven variables, while publication bias was examined using Egger’s regression test and trim-and-fill analysis. The pooled effect was moderate to large (g = 0.68, 95% CI [0.54, 0.82], p < .001), with substantial heterogeneity (I² = 86.5%) indicating important moderator effects. The strongest outcomes were associated with intelligent tutoring systems (g = 0.91), individualized learning strategies (g = 0.79), STEM disciplines (g = 0.84), and interventions lasting more than eight weeks (g = 0.81). Key implementation barriers included poor internet connectivity (91.5%), unreliable electricity supply (87.2%), limited faculty AI competence (89.4%), and financial constraints (85.1%). These findings support evidence-based AI integration policies in Nigerian higher education, particularly in infrastructure development, faculty training, and equitable implementation strategies.
