This study proposes a novel, cognitively informed Explainable Artificial Intelligence (XAI) framework aimed at enhancing student engagement and promoting transparent, equitable assessment in digital education. By embedding cognitive science principles into interpretable AI models, the framework accommodates diverse cognitive styles and learning trajectories, offering a human-centered approach to educational data analytics. Through secondary analysis of large-scale datasets, specifically the Open University Learning Analytics Dataset (OULAD), the model combines machine learning with cognitive modeling, incorporating attention mechanisms and interpretable neural networks for real-time feedback and decision transparency. Embedded within an adaptive learning environment, the system achieved 85.6% alignment with historical engagement labels and a 12% improvement in early detection of at-risk learners. Beyond predictive accuracy, the model offers actionable insights by revealing decision pathways, empowering educators to implement fairer grading practices and targeted interventions. This work contributes a scalable, ethical, and domain-independent AI solution that can be adapted across STEM and non-STEM curricula, laying the groundwork for next-generation intelligent tutoring systems, learning management platforms, and education policy frameworks centered on explainability and fairness. Future research will extend the model to integrate multimodal inputs (EEG, eye tracking) and investigate long-term learning retention, reinforcing the role of cognitively informed XAI in advancing inclusive, data-driven education systems.
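To make the architecture the abstract alludes to concrete, the sketch below shows one plausible form of an attention-based, interpretable engagement classifier over weekly OULAD-style activity features: the attention weights over weeks serve as the inspectable "decision pathway". This is a minimal sketch under stated assumptions, not the paper's actual implementation; the class name `EngagementAttentionNet`, the feature set, and all dimensions are hypothetical.

```python
# Hypothetical sketch of an interpretable attention model for engagement
# prediction. Feature choices and dimensions are illustrative assumptions,
# not taken from the paper.
import torch
import torch.nn as nn

WEEKS, N_FEATURES = 10, 4  # e.g. weekly VLE clicks, forum posts, quiz score, logins


class EngagementAttentionNet(nn.Module):
    def __init__(self, n_features: int, hidden: int = 16):
        super().__init__()
        self.encode = nn.Linear(n_features, hidden)  # per-week feature encoder
        self.attn_score = nn.Linear(hidden, 1)       # scalar relevance per week
        self.classify = nn.Linear(hidden, 1)         # engagement / at-risk logit

    def forward(self, x: torch.Tensor):
        # x: (batch, weeks, features)
        h = torch.tanh(self.encode(x))                   # (batch, weeks, hidden)
        attn = torch.softmax(self.attn_score(h), dim=1)  # weights over weeks, sum to 1
        context = (attn * h).sum(dim=1)                  # attention-weighted summary
        prob = torch.sigmoid(self.classify(context))     # P(engaged)
        # Returning the attention weights exposes which weeks drove the
        # prediction -- the kind of decision pathway an educator can inspect.
        return prob.squeeze(-1), attn.squeeze(-1)


model = EngagementAttentionNet(N_FEATURES)
batch = torch.randn(2, WEEKS, N_FEATURES)  # stand-in for preprocessed OULAD features
prob, week_weights = model(batch)
print(prob.shape, week_weights.shape)      # torch.Size([2]) torch.Size([2, 10])
```

Under this design, a spike in attention on early low-activity weeks would flag a learner as at risk and simultaneously explain why, which is the property that supports the targeted interventions described above.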