This study addresses the limited availability of automated recognition systems for Arabic Alphabet Sign Language (ArSL), particularly in facilitating Qur’anic Tadarus for the deaf and hard-of-hearing community. While research on American and Indonesian sign languages has advanced significantly, ArSL studies, especially for static alphabet gestures, remain underrepresented. The aim of this research is to develop an accurate and efficient ArSL classifier using the VGG16 convolutional neural network with transfer learning. The study employs the publicly available RGB Arabic Alphabets Sign Language Dataset, comprising 7,856 annotated images across 31 Hijaiyah letters, collected under varied backgrounds and lighting conditions. The proposed model integrates pretrained ImageNet weights with a customized classification head, trained through a two-stage fine-tuning process with data augmentation. The model achieves 97.07% test accuracy, performing competitively against a ResNet-18 baseline (98.0%) while offering a simpler architecture suitable for resource-constrained deployments. Evaluation using precision, recall, F1-score, and confusion matrix shows consistently high performance, with minor misclassifications among visually similar letters. This work demonstrates a novel application of VGG16-based deep learning for ArSL recognition, contributing to inclusive religious education and accessibility technologies.
Copyright © 2025