The digitalization of religious education offers significant opportunities to enhance Hijaiyah letter learning, particularly for the hearing-impaired community, through visual gesture recognition. This study aims to develop and evaluate a real-time, web-based classification system for 28 Hijaiyah hand gestures using the MobileNetV2 architecture. The research follows a quantitative approach, applying transfer learning to a balanced dataset of augmented images. The model was fine-tuned and deployed on a web platform with TensorFlow.js and MediaPipe for efficient on-device inference. Experimental results show that the model achieved an overall accuracy of 84% on an independent test set, with some classes reaching near-perfect detection in real-time scenarios, although misclassifications persisted among visually similar gestures. The system balances computational efficiency with classification performance, minimizing latency during user interaction. In conclusion, the MobileNetV2 implementation yields a responsive and accessible educational tool, demonstrating the viability of computer vision for creating inclusive religious learning environments without complex server-side infrastructure.
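To make the transfer-learning setup concrete, the following is a minimal sketch of a standard Keras workflow for fine-tuning MobileNetV2 on 28 gesture classes. The input resolution, dropout rate, learning rates, and frozen-layer cut-off are illustrative assumptions, not values reported by the study.

```python
import tensorflow as tf

NUM_CLASSES = 28          # one class per Hijaiyah gesture
IMG_SIZE = (224, 224)     # assumed input resolution for MobileNetV2

# Load MobileNetV2 pretrained on ImageNet, without its classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False    # freeze the backbone for the initial training phase

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",  # assumes integer labels
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# Fine-tuning phase: unfreeze the upper part of the backbone and retrain
# at a much lower learning rate to avoid destroying pretrained features.
base.trainable = True
for layer in base.layers[:100]:   # keep early layers frozen (assumed cut-off)
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```

A model trained this way can be converted for in-browser inference with the tensorflowjs_converter tool, which would match the TensorFlow.js deployment the abstract describes.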
Copyright © 2026