Emotion recognition plays a critical role in intelligent e-learning systems by enabling adaptive feedback and timely pedagogical interventions based on students’ affective states. However, most existing approaches rely heavily on visual facial cues, which are highly vulnerable to real-world conditions commonly encountered in online learning environments, such as low-resolution video, partial facial occlusion, poor lighting, and unstable network connections. These limitations significantly degrade the performance of unimodal deep learning models. To address this challenge, this study proposes a multimodal deep learning framework for student emotion recognition that is robust to low-quality and occluded video input. The proposed model integrates visual and audio modalities through a hybrid architecture, combining a lightweight CNN-based visual feature extractor with a BiLSTM-based speech emotion model. An attention-based fusion mechanism adaptively weights cross-modal features, allowing the system to compensate for degraded or missing visual information using complementary acoustic cues. Experimental evaluations are conducted on publicly available datasets representative of realistic online learning scenarios, including DAiSEE and RAVDESS, with additional augmentation to simulate varying levels of occlusion and video degradation. The results demonstrate that the multimodal approach consistently outperforms unimodal baselines, particularly under high-occlusion conditions, while maintaining computational efficiency suitable for near-real-time deployment. These findings confirm that multimodal fusion with attention mechanisms provides a more resilient and practical solution for emotion-aware e-learning systems operating under non-ideal input conditions.
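To make the fusion step concrete, the following is a minimal PyTorch sketch of one way the described attention-based cross-modal weighting could be realized. The backbone outputs, layer dimensions, class count, and the specific gating formulation are illustrative assumptions, not the exact implementation evaluated in this study.

```python
# A minimal sketch of attention-based fusion of visual (CNN) and audio
# (BiLSTM) embeddings. All module names, sizes, and the scalar-gate
# formulation below are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Adaptively weights visual and audio embeddings before classification.

    When the visual embedding is degraded (e.g. heavy occlusion), the
    learned attention weights can shift toward the acoustic embedding.
    """
    def __init__(self, visual_dim=128, audio_dim=128, fused_dim=128,
                 num_classes=7):  # class count is an assumption
        super().__init__()
        # Project both modalities into a shared space so they are comparable.
        self.visual_proj = nn.Linear(visual_dim, fused_dim)
        self.audio_proj = nn.Linear(audio_dim, fused_dim)
        # One scalar attention score per modality, computed from its embedding.
        self.score = nn.Linear(fused_dim, 1)
        self.classifier = nn.Linear(fused_dim, num_classes)

    def forward(self, visual_feat, audio_feat):
        # visual_feat: (B, visual_dim) from a lightweight CNN over face frames
        # audio_feat:  (B, audio_dim)  from a BiLSTM over speech features
        v = torch.tanh(self.visual_proj(visual_feat))  # (B, fused_dim)
        a = torch.tanh(self.audio_proj(audio_feat))    # (B, fused_dim)
        # Softmax over the two modality scores -> adaptive per-sample weights.
        scores = torch.cat([self.score(v), self.score(a)], dim=1)  # (B, 2)
        weights = torch.softmax(scores, dim=1)                     # (B, 2)
        fused = weights[:, 0:1] * v + weights[:, 1:2] * a          # (B, fused_dim)
        return self.classifier(fused), weights

if __name__ == "__main__":
    model = AttentionFusion()
    v = torch.randn(4, 128)  # stand-in for CNN visual embeddings
    a = torch.randn(4, 128)  # stand-in for BiLSTM audio embeddings
    logits, weights = model(v, a)
    print(logits.shape, weights.shape)  # torch.Size([4, 7]) torch.Size([4, 2])
```

Under this formulation, the returned per-modality weights also offer a simple diagnostic: under simulated occlusion, one would expect the audio weight to rise for the affected samples, which is one way to verify that the fusion is behaving as intended.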