
Application of You Only Look Once (YOLO) Method for Sign Language Identification
Reni Triyaningsih; Pradita Eko Prasetyo Utomo; Benedika Ferdian Hutabarat
Jurnal Nasional Teknik Elektro dan Teknologi Informasi Vol 14 No 4: November 2025
Publisher: Department of Electrical and Information Engineering, Faculty of Engineering, Universitas Gadjah Mada.

DOI: 10.22146/jnteti.v14i4.21931

Abstract

Limited understanding of sign language has widened the social gap for deaf people, creating barriers to communication and social interaction. Addressing this challenge requires technology-based solutions that facilitate inclusive communication. Deep learning-based detection methods, particularly the You Only Look Once (YOLO) algorithm, have gained attention for their speed and accuracy in real-time object detection. This research aims to develop and evaluate a YOLO training model for identifying the Indonesian sign language system (sistem isyarat bahasa Indonesia, SIBI). The dataset was obtained from a resource person at the State Special School Prof. Dr. Sri Soedewi Masjchun Sofwan, SH, Jambi, and enriched with additional images collected from external subjects. Augmentation techniques in Roboflow were applied to expand the dataset, and several training schemes were implemented. Model performance was assessed using a confusion matrix, considering both accuracy and indications of overfitting. The results showed that the quality and quantity of the training data, as well as the number of epochs, strongly influenced the accuracy of the trained model. The best performance was achieved with 40 primary images per label class, augmented to 60 images, and trained over 24 epochs, yielding a confusion-matrix accuracy of 99.9%. The implemented model was able to recognize SIBI gestures in real time from a webcam with fast processing. Overall, the proposed YOLO-based model successfully identifies sign language in real time and demonstrates strong potential for reducing communication barriers among deaf people. However, further refinement and expansion of the dataset are recommended to improve effectiveness and enable broader real-world applications.
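As a minimal sketch of the evaluation step described above (the paper does not publish its code, so the class labels and counts below are hypothetical), the overall accuracy reported from a confusion matrix is simply the sum of the diagonal (correct predictions) divided by the total number of samples:

```python
def confusion_matrix_accuracy(cm):
    """Overall accuracy from a square confusion matrix.

    cm[i][j] = number of samples whose true class is i and
    predicted class is j; the diagonal holds correct predictions.
    """
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(len(cm)))
    return correct / total

# Hypothetical 3-class confusion matrix for illustration only
# (rows: true label, columns: predicted label).
cm = [
    [58, 1, 1],
    [0, 60, 0],
    [1, 0, 59],
]
print(f"accuracy = {confusion_matrix_accuracy(cm):.3f}")  # 177 / 180
```

A per-class breakdown (precision/recall per gesture) would use the same matrix, which is why the abstract can speak of both overall accuracy and overfitting indications from one evaluation artifact.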