Application of the MobileNetV2 Algorithm for Hand Gesture-Based Hijaiyah Letter Classification Riswan, Muh.; Wahyuni, Titin; Danuputri, Chyquitha; Habi Talib, Emil Agusalim; Faisal, Muhammad; Anas, Lukman; Agung, Andi
PROGRESS Vol 18 No 1 (2026): April
Publisher: P3M STMIK Profesional Makassar

DOI: 10.56708/progres.v18i1.535

Abstract

The digitalization of religious education offers significant opportunities to enhance Hijaiyah letter learning, particularly for the hearing-impaired community through visual gesture recognition. This study aims to develop and evaluate a real-time web-based classification system for 28 Hijaiyah hand gestures using the MobileNetV2 architecture. The research methodology involves a quantitative approach utilizing transfer learning with a balanced dataset of augmented images. The model was trained using fine-tuning techniques and deployed on a web platform using TensorFlow.js and MediaPipe for efficient on-device inference. Experimental results demonstrate that the model achieved an overall accuracy of 84% on the independent test set, with specific classes reaching near-perfect detection in real-time scenarios, although misclassification persisted among visually similar gestures. The system effectively balances computational efficiency with classification performance, minimizing latency during user interaction. In conclusion, the implementation of MobileNetV2 facilitates a responsive and accessible educational tool, proving the viability of computer vision in creating inclusive religious learning environments without requiring complex server-side infrastructure.
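The following is a minimal Keras sketch of the transfer-learning setup the abstract describes: an ImageNet-pretrained MobileNetV2 backbone with a new 28-class head, trained first as a frozen feature extractor and then fine-tuned. The input size, dropout rate, number of unfrozen layers, and learning rates are illustrative assumptions, not values reported in the paper:

import tensorflow as tf

NUM_CLASSES = 28  # one class per Hijaiyah hand gesture, as stated in the abstract

# Stage 1: ImageNet-pretrained backbone used as a frozen feature extractor.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# Stage 2: fine-tuning -- unfreeze the top of the backbone at a much lower
# learning rate so the pretrained features are only gently adjusted.
base.trainable = True
for layer in base.layers[:-30]:
    layer.trainable = False
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=10)

A model trained this way could then be converted with the tensorflowjs_converter command-line tool for the browser-side inference with TensorFlow.js and MediaPipe that the abstract mentions.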
Student Emotion Recognition from Low-Quality Videos Using Multimodal Deep Learning Taiba, Andi Mawadda; Bakti, Rizki Yusliana; Faisal, Muhammad; S. Kuba, Muhammad Syafaat; Anas, Lukman; H. T, Emil Agusalim; Rahman, Fahrim I.
JURNAL INFOTEL Vol 18 No 1 (2026): February
Publisher: LPPM INSTITUT TEKNOLOGI TELKOM PURWOKERTO

DOI: 10.20895/infotel.v18i1.1523

Abstract

Emotion recognition plays a critical role in intelligent e-learning systems by enabling adaptive feedback and timely pedagogical interventions based on students’ affective states. However, most existing approaches rely heavily on visual facial cues, which are highly vulnerable to real-world conditions such as low-resolution video, partial facial occlusion, poor lighting, and unstable network connections commonly encountered in online learning environments. These limitations significantly degrade the performance of unimodal deep learning models. To address this challenge, this study proposes a multimodal deep learning framework for student emotion recognition that is robust to low-quality and occluded video input. The proposed model integrates visual and audio modalities through a hybrid architecture, combining a lightweight CNN-based visual feature extractor with a BiLSTM-based speech emotion model. An attention-based fusion mechanism is employed to adaptively weight cross-modal features, allowing the system to compensate for degraded or missing visual information using complementary acoustic cues. Experimental evaluations are conducted using publicly available datasets representative of realistic online learning scenarios, including DAiSEE and RAVDESS, with additional augmentation to simulate varying levels of occlusion and video degradation. The results demonstrate that the multimodal approach consistently outperforms unimodal baselines, particularly under high occlusion conditions, while maintaining computational efficiency suitable for near real-time deployment. These findings confirm that multimodal fusion with attention mechanisms provides a more resilient and practical solution for emotion-aware e-learning systems operating under non-ideal input conditions.
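The following is a minimal Keras sketch of the attention-based audio-visual fusion the abstract describes: a lightweight CNN branch for (possibly degraded) face frames, a BiLSTM branch for acoustic features, and learned per-modality attention weights that let the model lean on audio when the visual stream is occluded. Layer sizes, input shapes, and the 7-class output are illustrative assumptions, not values taken from the paper:

import tensorflow as tf


class AttentionFusionEmotionModel(tf.keras.Model):
    def __init__(self, num_emotions=7):
        super().__init__()
        # Visual branch: small CNN over a single face crop.
        self.visual = tf.keras.Sequential([
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(64, 3, activation="relu"),
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(128, activation="relu"),
        ])
        # Audio branch: BiLSTM over a sequence of acoustic features (e.g. MFCCs).
        self.audio = tf.keras.Sequential([
            tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
            tf.keras.layers.Dense(128, activation="relu"),
        ])
        # One score per modality embedding; softmax turns scores into weights.
        self.score = tf.keras.layers.Dense(1)
        self.classifier = tf.keras.layers.Dense(num_emotions, activation="softmax")

    def call(self, inputs):
        frame, mfcc = inputs                    # frame: (B, 64, 64, 3), mfcc: (B, T, 40)
        v = self.visual(frame)                  # (B, 128)
        a = self.audio(mfcc)                    # (B, 128)
        stacked = tf.stack([v, a], axis=1)      # (B, 2, 128)
        weights = tf.nn.softmax(self.score(stacked), axis=1)  # (B, 2, 1)
        # Weighted sum over modalities: a degraded visual embedding can be
        # down-weighted in favour of the complementary acoustic embedding.
        fused = tf.reduce_sum(weights * stacked, axis=1)       # (B, 128)
        return self.classifier(fused)


model = AttentionFusionEmotionModel()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

Calling model([frame_batch, mfcc_batch]) returns per-class probabilities, and model.fit can be trained on paired frame/audio inputs; the softmax over modality scores is what allows the adaptive cross-modal weighting described in the abstract.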