Student engagement in online learning is an important factor affecting learning outcomes, and facial expression is one indicator of engagement. However, research on facial expression detection in online learning environments remains limited, especially with the YOLOv8 algorithm. This study compares the performance of five YOLOv8 variants (YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, and YOLOv8x) in recognizing six facial expressions: happy, sad, angry, surprised, fearful, and neutral. Student facial expression data were collected through the Moodle platform every 15 seconds during the learning process, and all models were trained on 640×640-pixel images for 100 epochs. The main contribution of this study is a comprehensive analysis of the effectiveness of YOLOv8 in detecting student facial expressions, which can be used to improve the online learning experience. The evaluation results show that YOLOv8s performed best, with the highest mAP (0.840) and the fastest inference speed (2.4 ms per image). YOLOv8m and YOLOv8x also performed well, with mAP values of 0.816 and 0.815, respectively. Although YOLOv8x had the slowest inference speed, it was superior in detecting the fearful, happy, and sad expressions, with mAP above 0.9 for each. YOLOv8n achieved an mAP of 0.636, while YOLOv8l achieved an mAP of 0.813 at an inference speed of 9.1 ms per image. These results show that the YOLOv8 algorithm, and YOLOv8s in particular, can be an effective solution for analyzing student engagement from facial expressions during online learning.
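The overall mAP figures reported above can be tabulated and ranked programmatically; the following is a minimal sketch using only the variant names and mAP values quoted in this abstract (no other metrics are assumed):

```python
# Overall mAP per YOLOv8 variant, as reported in this study's evaluation.
reported_map = {
    "YOLOv8n": 0.636,
    "YOLOv8s": 0.840,
    "YOLOv8m": 0.816,
    "YOLOv8l": 0.813,
    "YOLOv8x": 0.815,
}

# Select the variant with the highest reported mAP.
best_variant = max(reported_map, key=reported_map.get)
print(best_variant, reported_map[best_variant])  # → YOLOv8s 0.84
```

This ranking matches the study's conclusion that YOLOv8s offers the best accuracy, before inference speed (2.4 ms per image for YOLOv8s) is even taken into account.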
Copyright © 2025