Existing anime recommendation systems focus on genre preferences and viewing history without considering users' emotional states, leading to context-blind recommendations that may exacerbate negative moods and reduce satisfaction. Most existing systems employ outdated architectures with limited accuracy and lack diversification mechanisms to prevent filter bubbles. This study develops an emotion-based anime recommendation system integrating YOLOv11 for facial emotion recognition with hybrid collaborative filtering using LightFM and Maximum Marginal Relevance (MMR) diversification. The primary novelty lies in combining YOLOv11's strong emotion recognition performance, LightFM's hybrid matrix factorization for cold-start mitigation, and MMR diversification to prevent filter bubbles while maintaining emotional congruence. The methodology employed the KDEF dataset (3,597 images, five emotion classes) for training YOLOv11 with data augmentation, and the MyAnimeList dataset (744,330 interactions) for recommendation modeling. Emotion-to-genre mappings informed by survey data from 51 participants were implemented with MMR diversification to balance relevance and variety. The YOLOv11 model achieved 93.70% validation accuracy, outperforming CNN-LSTM approaches by 37.55 percentage points. The hybrid recommendation model demonstrated a test AUC of 0.8567 and a Precision@10 of 0.1457, a 417% improvement over pure collaborative filtering, while diversification increased genre representation by 20.9% with minimal precision loss. The system demonstrates real-time applicability for streaming platforms through camera-based emotion capture and immediate recommendation generation, enhancing user engagement and emotional well-being. The integration represents a significant step toward affective computing in entertainment media.
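The MMR diversification step described above can be sketched as a greedy re-ranking that trades off model relevance against redundancy with already-selected items. This is a minimal illustrative sketch, not the paper's implementation: the item names, genre tags, relevance scores, the Jaccard genre similarity, and the trade-off weight λ = 0.7 are all hypothetical choices made for the example.

```python
# Minimal sketch of Maximum Marginal Relevance (MMR) re-ranking.
# All items, genres, and scores below are hypothetical examples.

def jaccard(a, b):
    """Genre-overlap similarity between two genre lists (illustrative choice)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def mmr_rerank(candidates, genres, scores, lam=0.7, k=3):
    """Greedily pick items maximizing lam*relevance - (1-lam)*max similarity
    to items already selected, balancing relevance and variety."""
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def mmr_score(item):
            redundancy = max(
                (jaccard(genres[item], genres[s]) for s in selected),
                default=0.0,
            )
            return lam * scores[item] - (1 - lam) * redundancy
        best = max(pool, key=mmr_score)
        selected.append(best)
        pool.remove(best)
    return selected

# Hypothetical candidate pool: A and B share genres, C is distinct.
genres = {"A": ["comedy", "slice-of-life"],
          "B": ["comedy", "slice-of-life"],
          "C": ["action"]}
scores = {"A": 0.95, "B": 0.90, "C": 0.80}
print(mmr_rerank(["A", "B", "C"], genres, scores))  # → ['A', 'C', 'B']
```

Note that plain relevance ranking would return A, B, C; MMR promotes C above B because B is redundant with the already-selected A, which is how diversification can raise genre representation with only a small precision cost.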
Copyright © 2026