Mathematics learning often faces challenges in adjusting question difficulty to match individual students' abilities. Traditional methods, which apply a uniform difficulty to all students, are less effective because they fail to account for differences in comprehension and learning speed. This study introduces an adaptive learning system that combines Decision Tree and Reinforcement Learning approaches to dynamically adjust the difficulty of mathematics questions based on real-time student performance. The Decision Tree model classifies questions into easy, moderate, and difficult categories by analyzing the distribution of correct and incorrect student answers, achieving a classification accuracy of 71.33% and an F1-score of 80.02%. Reinforcement Learning, specifically the Q-Learning algorithm, adjusts the difficulty of subsequent questions based on continuous student performance feedback, achieving a success rate of 65.96% and a total reward of 626,885. This dual approach enhances the learning process by providing a personalized, adaptive experience that keeps each student challenged at an appropriate level. Implemented as a web-based system, it supports real-time adjustment and continuous adaptation to student needs. By continuously analyzing student responses, the system maintains engagement and supports effective mastery of mathematical concepts, fostering a dynamic, interactive learning environment that improves both student engagement and conceptual understanding.
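The difficulty-adjustment loop described above can be sketched with a minimal Q-Learning agent. This is an illustrative toy, not the authors' implementation: the state encoding (a three-level performance bucket), the reward values, the simulated student model, and all hyperparameters are assumptions introduced for the example.

```python
import random

random.seed(0)  # for reproducibility of this sketch

DIFFICULTIES = ["easy", "moderate", "difficult"]  # actions: next question's level
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2             # assumed hyperparameters

# State: recent-performance bucket (0 = struggling, 1 = steady, 2 = mastering)
Q = {(s, a): 0.0 for s in range(3) for a in DIFFICULTIES}

def choose_difficulty(state):
    """Epsilon-greedy selection of the next question's difficulty."""
    if random.random() < EPSILON:
        return random.choice(DIFFICULTIES)
    return max(DIFFICULTIES, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-Learning update rule."""
    best_next = max(Q[(next_state, a)] for a in DIFFICULTIES)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def simulate_answer(state, difficulty):
    """Toy student model: stronger states answer harder questions more often."""
    p_correct = {"easy": 0.9, "moderate": 0.6, "difficult": 0.3}[difficulty]
    return random.random() < min(p_correct + 0.1 * state, 0.95)

state = 1
for _ in range(1000):
    action = choose_difficulty(state)
    correct = simulate_answer(state, action)
    # Reward correct answers on harder questions more, to favor challenge
    reward = (DIFFICULTIES.index(action) + 1) if correct else -1
    next_state = min(2, state + 1) if correct else max(0, state - 1)
    update(state, action, reward, next_state)
    state = next_state
```

In a deployed system the simulated student would be replaced by real answer events streamed from the web interface, and the learned Q-table would drive which question the student sees next.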
Copyright © 2025