This author has published in the following journal:
Jurnal Mandiri IT
AlFatrah, M. Ilham
Unknown Affiliation

Published: 1 document
Articles

Found 1 document

Artificial intelligence-based hand gesture recognition for sign language interpretation
Rais, M. Fazil; AlFatrah, M. Ilham; Noorta, Chadafa Zulti; Rimbawa, H.A Danang; Atturoybi, Abdurrosyid
Jurnal Mandiri IT Vol. 14 No. 1 (2025): July: Computer Science and Field.
Publisher : Institute of Computer Science (IOCS)

DOI: 10.35335/mandiri.v14i1.395

Abstract

This paper presents an artificial intelligence-based system for real-time hand gesture recognition to support sign language interpretation for the deaf and hard-of-hearing community. The proposed system integrates computer vision techniques with deep learning models to accurately identify static hand gestures representing alphabetic signs. The MediaPipe framework is employed to detect and track hand landmarks from live video input, which are then processed and classified using a Convolutional Neural Network (CNN) model. The model is trained on a publicly available BISINDO (Bahasa Isyarat Indonesia) gesture dataset retrieved from Kaggle, comprising 312 images across 26 hand gestures captured under multiple background conditions. Preprocessing includes resizing, grayscale conversion, data augmentation, and landmark extraction; innovations such as advanced data augmentation and landmark normalization significantly enhance gesture identification accuracy and model robustness. Experimental results show that the system achieves an average classification accuracy of 88.03% and maintains stable performance in real-time applications. Despite these promising results, the system exhibits limitations, including challenges with dynamic gesture recognition, background interference, and limited handling of complex hand movements, all of which can be explored in future research to improve the system’s accuracy and generalization. These findings highlight the system’s potential as an inclusive communication tool to bridge language barriers between deaf individuals and non-signers. This research contributes to the development of accessible assistive technologies by demonstrating a non-intrusive, vision-based approach to sign language interpretation. Future development may involve dynamic gesture translation, sentence-level recognition, and deployment on mobile platforms.
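
The pipeline the abstract describes (MediaPipe hand-landmark detection followed by CNN classification of 26 static signs) can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the abstract names MediaPipe and a CNN, but the wrist-centred landmark normalization, the small Conv1D classifier, the TensorFlow/Keras framework, and the webcam loop are all assumptions standing in for details the abstract does not specify.

```python
# Minimal sketch (not the authors' code): MediaPipe hand landmarks -> small CNN
# over 21 landmark points, assuming 26 static alphabetic sign classes.
import cv2
import numpy as np
import mediapipe as mp
import tensorflow as tf

NUM_CLASSES = 26  # assumption: one class per BISINDO letter

mp_hands = mp.solutions.hands

def extract_landmarks(frame_bgr, hands):
    """Return a normalized (21, 3) landmark array, or None if no hand is detected."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    result = hands.process(rgb)
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    pts = np.array([[p.x, p.y, p.z] for p in lm], dtype=np.float32)
    # Illustrative landmark normalization: translate to the wrist and rescale
    # so classification is less sensitive to hand position and size.
    pts -= pts[0]
    scale = float(np.max(np.abs(pts)))
    return pts / scale if scale > 0 else pts

def build_model():
    """Small 1D CNN over the landmark sequence; a stand-in for the paper's CNN."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(21, 3)),
        tf.keras.layers.Conv1D(32, 3, activation="relu"),
        tf.keras.layers.Conv1D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

if __name__ == "__main__":
    model = build_model()  # in practice, load weights trained on the BISINDO dataset
    cap = cv2.VideoCapture(0)
    with mp_hands.Hands(static_image_mode=False, max_num_hands=1,
                        min_detection_confidence=0.5) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            pts = extract_landmarks(frame, hands)
            if pts is not None:
                probs = model.predict(pts[None, ...], verbose=0)[0]
                letter = chr(ord("A") + int(np.argmax(probs)))
                cv2.putText(frame, letter, (10, 40),
                            cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
            cv2.imshow("gesture", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    cap.release()
    cv2.destroyAllWindows()
```

Classifying normalized landmarks rather than raw pixels is one plausible reading of the abstract's combination of landmark extraction with a CNN; a pixel-based CNN over cropped hand images would be an equally valid interpretation.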