Facial emotion recognition is an important research area in computer vision and artificial intelligence, with applications in human–computer interaction, affective computing, and intelligent systems. This study evaluates the performance of a Convolutional Neural Network (CNN) for facial emotion recognition on the FER2013 dataset, which consists of 48×48-pixel grayscale facial images spanning seven emotion classes: angry, disgust, fear, happy, neutral, sad, and surprise. Its low image resolution and imbalanced class distribution make FER2013 a challenging benchmark for emotion classification. An experimental approach was employed, implementing a baseline CNN architecture composed of convolutional, pooling, and fully connected layers. Preprocessing consisted of image normalization and batch-based data generation. The model was trained with the Adam optimizer and categorical cross-entropy loss, and early stopping was used to prevent overfitting. Performance was evaluated using accuracy, precision, recall, F1-score, and confusion matrix analysis. The proposed CNN achieved an overall test accuracy of 55.50%. Emotions with distinctive facial features, such as happy and surprise, obtained higher F1-scores, while minority and visually subtle classes, particularly disgust and fear, performed worse. These findings indicate that a simple CNN architecture can provide reasonable performance on a challenging facial emotion dataset while highlighting the impact of class imbalance and limited image resolution. The model can serve as a baseline for further improvements in facial emotion recognition systems.
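The per-class evaluation described above (accuracy, precision, recall, and F1-score derived from a confusion matrix) can be sketched as follows. This is an illustrative sketch only: the confusion matrix values and the three example class labels are hypothetical and are not results reported in this study.

```python
# Illustrative sketch: deriving accuracy and per-class precision, recall,
# and F1-score from a confusion matrix, the evaluation scheme described above.
# The matrix values below are hypothetical, not results from the experiments.

def metrics_from_confusion(cm):
    """cm[i][j] = number of samples with true class i predicted as class j."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    accuracy = sum(cm[i][i] for i in range(n)) / total
    per_class = []
    for j in range(n):
        tp = cm[j][j]
        pred_j = sum(cm[i][j] for i in range(n))  # column sum: predicted as j
        true_j = sum(cm[j])                       # row sum: truly class j
        precision = tp / pred_j if pred_j else 0.0
        recall = tp / true_j if true_j else 0.0
        denom = precision + recall
        f1 = 2 * precision * recall / denom if denom else 0.0
        per_class.append({"precision": precision, "recall": recall, "f1": f1})
    return accuracy, per_class

# Hypothetical 3-class confusion matrix (e.g. happy / sad / fear):
cm = [
    [80, 10, 10],  # true "happy"
    [15, 60, 25],  # true "sad"
    [20, 30, 50],  # true "fear"
]
acc, per_class = metrics_from_confusion(cm)
print(f"accuracy = {acc:.3f}")  # (80+60+50)/300 ≈ 0.633
for stats in per_class:
    print(stats)
```

In this formulation, a minority class such as disgust would contribute a short row to the matrix, so even a few misclassifications sharply depress its recall and hence its F1-score, which is consistent with the class-imbalance effect noted in the results.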
Copyright © 2026