Automatic facial expression recognition is a significant challenge in human-computer interaction, with broad relevance to mental health, security, and behavioral analysis. This study proposes a deep-learning approach based on a custom Convolutional Neural Network (CNN) architecture to classify seven basic emotion categories: angry, disgust, fear, happy, sad, surprise, and neutral. Key challenges in the FER2013 dataset, such as lighting variation and visual feature ambiguity, are addressed through image pre-processing, data augmentation, and the use of Batch Normalization and Dropout layers to mitigate overfitting. The research methodology follows a systematic architectural design with three main convolution blocks optimized for computational efficiency. Experimental results show that the proposed model achieves a validation accuracy of 68.2%. Per-class F1-score analysis reveals that "Happy" is the easiest emotion to detect (0.85), owing to its distinctive facial geometry, while "Fear" is the most difficult class to identify (0.41). The study concludes that an optimized standalone CNN delivers competitive and efficient performance compared with heavier transfer-learning models, making it feasible to deploy on devices with mid-range hardware specifications.
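The per-class F1 analysis mentioned above can be sketched as follows. This is a minimal illustration of how per-class precision, recall, and F1-score are derived from a confusion matrix; the matrix values and the three-class subset used here are hypothetical examples, not the confusion matrix reported in this study.

```python
def per_class_f1(cm, labels):
    """Compute per-class F1 from a confusion matrix.

    cm[i][j] = number of samples whose true class is i and predicted class is j.
    Returns {label: F1 rounded to 2 decimals}.
    """
    n = len(cm)
    scores = {}
    for c in range(n):
        tp = cm[c][c]
        fp = sum(cm[r][c] for r in range(n)) - tp   # predicted c, true class differs
        fn = sum(cm[c]) - tp                        # true class c, predicted differently
        # F1 = 2PR/(P+R), which simplifies to 2*tp / (2*tp + fp + fn)
        denom = 2 * tp + fp + fn
        scores[labels[c]] = round(2 * tp / denom, 2) if denom else 0.0
    return scores

# Hypothetical 3-class confusion matrix (illustrative counts only)
cm = [
    [50,  5,  5],   # true "happy"
    [10, 40, 10],   # true "fear"
    [ 5,  5, 50],   # true "neutral"
]
print(per_class_f1(cm, ["happy", "fear", "neutral"]))
# → {'happy': 0.8, 'fear': 0.73, 'neutral': 0.8}
```

In this toy example "fear" scores lowest, mirroring the qualitative finding in the abstract that ambiguous classes drag down per-class F1 even when overall accuracy is reasonable.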
Copyright © 2026