Adaptive learning systems (ALSs) aim to personalize education by adjusting content and learning pathways in response to learner performance and behavior. This study conducts a comparative evaluation of four widely adopted adaptive learning models: Item Response Theory (IRT), Bayesian Networks (BN), Collaborative Filtering (CF), and Reinforcement Learning (RL). The evaluation integrates conceptual analysis and empirical simulation, using the large-scale EdNet dataset comprising over 131 million learner interactions. Each model was implemented in Python and assessed with standard metrics, including accuracy, precision, recall, and F1-score, with class imbalance addressed through SMOTE. Results show that RL consistently achieves the strongest performance across personalization accuracy, adaptability, and responsiveness to learner feedback, particularly under balanced conditions. BN follows closely, offering robust predictive accuracy alongside interpretability and cognitive modeling. CF shows moderate effectiveness, with improvements under SMOTE but limited adaptability in sparse or dynamic environments. IRT performs weakest across all metrics, retaining value primarily in assessment contexts. Based on these findings, the study proposes a hybrid RL–BN framework, combining RL's dynamic personalization with BN's interpretability to create transparent, scalable, and pedagogically grounded ALSs. The results contribute evidence-based guidance for educators and developers in selecting and integrating adaptive learning models to meet diverse learner and institutional needs.
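To make the class-imbalance step concrete, the following is a minimal NumPy sketch of the SMOTE idea the abstract refers to: each synthetic minority sample is produced by interpolating between a real minority sample and one of its k nearest minority-class neighbors. This is an illustrative re-implementation under stated assumptions, not the study's actual pipeline, which presumably used a library implementation such as imbalanced-learn's `SMOTE`.

```python
import numpy as np

def smote_oversample(X_min, k=5, n_new=100, seed=None):
    """Generate n_new synthetic minority-class samples (SMOTE-style).

    X_min : (n, d) array of minority-class feature vectors.
    For each synthetic point: pick a random minority sample, pick one of
    its k nearest minority neighbors, and interpolate between the two by
    a random factor in [0, 1].
    """
    rng = np.random.default_rng(seed)
    # Pairwise Euclidean distances within the minority class.
    diffs = X_min[:, None, :] - X_min[None, :, :]
    dist = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dist, np.inf)          # exclude self as a neighbor
    neighbors = np.argsort(dist, axis=1)[:, :k]

    base = rng.integers(0, len(X_min), n_new)          # anchor samples
    nb = neighbors[base, rng.integers(0, k, n_new)]    # one of k neighbors
    gap = rng.random((n_new, 1))                       # interpolation factor
    return X_min[base] + gap * (X_min[nb] - X_min[base])
```

Because every synthetic point lies on a segment between two real minority samples, oversampled data stays inside the minority class's feature range; the balanced training set is then evaluated with the usual accuracy, precision, recall, and F1 metrics (e.g. via `sklearn.metrics`).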
Copyright © 2025