Found 2 Documents
Journal : Jurnal Teknik Informatika (JUTIF)

A STUDY OF WORLDWIDE PATTERNS IN ALPHABET SIGN LANGUAGE RECOGNITION USING CONVOLUTIONAL AND RECURRENT NEURAL NETWORKS
Rakhmadi, Aris; Yudhana, Anton; Sunardi, Sunardi
Jurnal Teknik Informatika (Jutif) Vol. 6 No. 1 (2025): JUTIF Volume 6, Number 1, February 2025
Publisher : Informatika, Universitas Jenderal Soedirman

DOI: 10.52436/1.jutif.2025.6.1.4202

Abstract

Sign Language Recognition (SLR) has become an essential area of research due to its potential to bridge communication between the deaf and hearing communities. This paper provides an in-depth study of the methodologies and models employed in SLR, focusing on Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN). We analyze their application to datasets from various sign languages, such as Arabic Sign Language (ArSL), American Sign Language (ASL), and British Sign Language (BSL), and explore how these models improve the recognition of dynamic, multi-dimensional hand gestures. This research not only advances the understanding of deep learning applications in sign language recognition but also addresses critical challenges in data processing and real-time applications, paving the way for inclusive technologies in informatics and human-computer interaction. Despite the progress in applying deep learning techniques to SLR, several challenges remain, particularly dataset limitations, handling large vocabularies, and ensuring consistent performance across diverse environments and signers. The paper also investigates the broader applications of SLR, such as virtual reality, healthcare, education, and accessibility, and discusses the integration of SLR with human-computer interaction systems. Furthermore, it highlights current limitations in the field, such as difficulties with video data handling, the need for standard datasets, and issues related to training computational models. Finally, the paper outlines future research directions, including developing more robust SLR systems that can function effectively in uncontrolled environments, improving data collection methodologies, and creating real-time, user-friendly applications to assist the community of deaf and hard-of-hearing individuals.
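The CNN+RNN combination the survey describes — a CNN extracting per-frame features and an RNN modeling the gesture over time — can be sketched as below. This is an illustrative PyTorch example, not code from the paper; the network sizes, frame resolution, and the use of an LSTM are assumptions chosen to keep the sketch small.

```python
import torch
import torch.nn as nn

class CNNRNNSignClassifier(nn.Module):
    """Per-frame CNN features fed to an LSTM over the gesture sequence."""
    def __init__(self, num_classes: int, feat_dim: int = 64, hidden: int = 128):
        super().__init__()
        # Small CNN: one (3, H, W) frame -> a feat_dim feature vector
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # LSTM consumes the sequence of per-frame features
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.rnn(feats)      # h: (num_layers, batch, hidden)
        return self.head(h[-1])          # logits: (batch, num_classes)

model = CNNRNNSignClassifier(num_classes=26)   # e.g., a 26-letter alphabet
logits = model(torch.randn(2, 8, 3, 64, 64))   # 2 clips of 8 frames each
print(logits.shape)                            # torch.Size([2, 26])
```

The CNN handles the spatial structure of each hand shape, while the LSTM captures the temporal dynamics that distinguish otherwise similar gestures.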
VGG16-Based Feature Extraction for Arabic Alphabet Sign Language Classification to Support Qur'anic Tadarus Accessibility
Rakhmadi, Aris; Yudhana, Anton; Sunardi, Sunardi
Jurnal Teknik Informatika (Jutif) Vol. 6 No. 4 (2025): JUTIF Volume 6, Number 4, August 2025
Publisher : Informatika, Universitas Jenderal Soedirman

DOI: 10.52436/1.jutif.2025.6.4.4953

Abstract

This study addresses the limited availability of automated recognition systems for Arabic Alphabet Sign Language (ArSL), particularly in facilitating Qur’anic Tadarus for the deaf and hard-of-hearing community. While research on American and Indonesian sign languages has advanced significantly, ArSL studies, especially for static alphabet gestures, remain underrepresented. The aim of this research is to develop an accurate and efficient ArSL classifier using the VGG16 convolutional neural network with transfer learning. The study employs the publicly available RGB Arabic Alphabets Sign Language Dataset, comprising 7,856 annotated images across 31 Hijaiyah letters, collected under varied backgrounds and lighting conditions. The proposed model integrates pretrained ImageNet weights with a customized classification head, trained through a two-stage fine-tuning process with data augmentation. The model achieves 97.07% test accuracy, performing competitively against a ResNet-18 baseline (98.0%) while offering a simpler architecture suitable for resource-constrained deployments. Evaluation using precision, recall, F1-score, and confusion matrix shows consistently high performance, with minor misclassifications among visually similar letters. This work demonstrates a novel application of VGG16-based deep learning for ArSL recognition, contributing to inclusive religious education and accessibility technologies.