Music has a profound impact on human emotions, making it a natural medium for personalized experiences in entertainment and therapeutic settings. This study introduces a music recommendation system that uses facial expression analysis to tailor music suggestions to the user's current emotional state. Our approach integrates a Haar cascade classifier for real-time face detection with a Convolutional Neural Network (CNN) that classifies emotions into seven categories: happiness, sadness, anger, fear, disgust, surprise, and neutrality. The system recommends music tracks corresponding to the detected emotion to support mood regulation and improve overall listener satisfaction. In evaluation, the CNN model achieved an overall accuracy of 84.44% in recognizing facial expressions, with precision, recall, and F1 scores consistently above 84%, indicating robust performance across the emotional classes. These results underscore the system's ability to interpret complex emotional cues and respond with tailored music suggestions. Integrating deep learning techniques for face and emotion recognition allows the recommendation system to adapt dynamically to the user's emotional fluctuations, yielding a personalized listening experience that reflects the user's feelings and can enhance their emotional well-being. By bridging the gap between static user profiles and the dynamic nature of human emotions, our system sets a new standard for personalized music recommendation, promising significant improvements in user engagement and satisfaction.
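The following is a minimal sketch of the detection-and-classification pipeline described above, assuming OpenCV's bundled frontal-face Haar cascade, a pre-trained Keras CNN saved as `emotion_cnn.h5` that takes 48x48 grayscale inputs, and an illustrative emotion-to-playlist mapping; these specifics are assumptions for illustration, not artifacts of the study.

```python
# Sketch: Haar cascade face detection -> CNN emotion classification -> playlist lookup.
# The model file, input size, class ordering, and playlist names are assumed.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "neutrality"]  # assumed class ordering

# Hypothetical emotion-to-playlist mapping; real track selection is system-specific.
PLAYLISTS = {e: f"playlist_{e}" for e in EMOTIONS}

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
emotion_model = load_model("emotion_cnn.h5")  # assumed pre-trained CNN

def recommend_from_frame(frame_bgr):
    """Detect the largest face, classify its emotion, and return a playlist."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # keep the largest detection
    roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0  # assumed 48x48 input
    probs = emotion_model.predict(roi.reshape(1, 48, 48, 1), verbose=0)[0]
    emotion = EMOTIONS[int(np.argmax(probs))]
    return emotion, PLAYLISTS[emotion]
```

In a real-time setting, this function would be called on frames captured from a webcam, with the returned playlist passed to the music player whenever the detected emotion changes.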