This research addresses the challenge of detecting emotional expressions in children with special needs, who often rely on nonverbal communication due to difficulties with verbal expression. Traditional emotion detection methods struggle to recognize subtle emotions in these children accurately, which can create communication barriers in educational and therapeutic settings. This study proposes the YOLOv4-Tiny model, a lightweight and efficient object detection architecture, to detect four key facial expressions: Angry, Happy, Smile, and Afraid. The dataset consists of 1500 images, evenly distributed across the four expression classes and captured under controlled conditions. The model was evaluated using several metrics, including Confidence, Precision, Recall, F1-Score, and Mean Average Precision (mAP), across different training-to-testing data splits. The results show that the YOLOv4-Tiny model achieved high accuracy, with a perfect mAP of 100% for balanced and slightly imbalanced splits, and a minimum mAP of 93.1% for more imbalanced splits. This level of performance highlights the model's robustness and its potential for use in educational and therapeutic environments, where understanding emotional expressions is critical for providing tailored support to children with special needs. The proposed system offers a significant improvement over traditional methods, enhancing communication and emotional support for this vulnerable population.
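The evaluation metrics named above (Precision, Recall, F1-Score) can be sketched as follows. This is a minimal illustration, not the paper's code: the per-class detection counts are hypothetical, and the `detection_metrics` helper is an assumed name introduced only for this example.

```python
# Illustrative sketch: per-class precision, recall, and F1 from
# detection counts, as typically used to evaluate an object detector
# such as YOLOv4-Tiny. All counts below are hypothetical.

def detection_metrics(tp, fp, fn):
    """Return (precision, recall, f1) from true-positive,
    false-positive, and false-negative counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

# Hypothetical counts for the four expression classes
counts = {
    "Angry":  (95, 3, 5),
    "Happy":  (98, 1, 2),
    "Smile":  (92, 4, 8),
    "Afraid": (90, 6, 10),
}

for cls, (tp, fp, fn) in counts.items():
    p, r, f1 = detection_metrics(tp, fp, fn)
    print(f"{cls}: precision={p:.3f} recall={r:.3f} f1={f1:.3f}")
```

mAP extends this idea by averaging the area under each class's precision-recall curve over all classes; reporting it per data split, as the study does, reveals how sensitive the model is to the training/testing ratio.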
Copyright © 2024