Alexithymia is a condition characterized by difficulty identifying and verbally expressing emotions, which can hinder an individual's ability to understand and manage what they feel. This study develops and implements an emotion-detection model based on the MobileNetV2 architecture to support therapy for individuals with alexithymia. The model is trained on the FER-2013 dataset, which consists of 35,887 grayscale facial images across seven emotion categories: anger, disgust, fear, happiness, neutral, sadness, and surprise. Following a deep-learning workflow structured by CRISP-DM, the work begins with normalization and data augmentation to improve the model's robustness to image variations. The resulting model achieved a training accuracy of 67.7% and a validation accuracy of 65.3%, demonstrating a meaningful ability to recognize and classify emotions from facial images. Evaluation with a confusion matrix yielded a precision of 64.9%, a recall of 65.4%, and an F1-score of 63.7% averaged over the emotion classes. These results point to the potential of emotion-sensitive systems that support psychological therapy, in particular by helping individuals with alexithymia understand and manage their emotions through facial-expression analysis.
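As a rough illustration of the pipeline described above, the following Keras sketch adapts MobileNetV2 to FER-2013-style 48x48 grayscale inputs with normalization and augmentation. The input resolution, augmentation transforms, and training hyperparameters here are illustrative assumptions, not the study's reported configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # anger, disgust, fear, happiness, neutral, sadness, surprise

# FER-2013 images are 48x48 grayscale; MobileNetV2 expects 3-channel input,
# so we upsample and repeat the channel. Sizes and transforms below are
# assumptions for illustration only.
inputs = layers.Input(shape=(48, 48, 1))
x = layers.Resizing(96, 96)(inputs)                # upscale to a MobileNetV2-friendly size
x = layers.Concatenate()([x, x, x])                # grayscale -> pseudo-RGB
x = layers.Rescaling(1.0 / 127.5, offset=-1.0)(x)  # normalize pixels to [-1, 1]

# Augmentation for robustness to image variations (active during training only)
x = layers.RandomFlip("horizontal")(x)
x = layers.RandomRotation(0.1)(x)
x = layers.RandomZoom(0.1)(x)

# Pretrained MobileNetV2 backbone with a small classification head on top
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
x = base(x)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",  # assumes integer labels 0..6
              metrics=["accuracy"])
```

From a fitted model of this kind, macro-averaged precision, recall, and F1 scores such as those reported above can be computed from the validation predictions, for example with `sklearn.metrics.classification_report`.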