Deaf people in Indonesia face communication barriers due to the limited understanding of Indonesian Sign Language (BISINDO) among the general public, which restricts social interaction between deaf people and their surroundings. This study aims to develop a Deep Learning and Computer Vision-based BISINDO alphabet translator model using the Convolutional Neural Network (CNN) method, addressing the limited availability of publicly documented BISINDO datasets for alphabet recognition. The model was trained on a dataset of 3,120 BISINDO alphabet images covering the letters A to Z, divided into 80% for training and 20% for testing. The training process included model architecture design, parameter tuning, selection of the best model based on accuracy, and performance evaluation. The evaluation results showed that the developed CNN model achieved an accuracy of 99.84% in classifying BISINDO letters; however, challenges remain in generalizing the model to variations in lighting, hand orientation, and user differences. Nevertheless, the high accuracy achieved indicates the model's potential to support effective BISINDO translation and improve communication accessibility. This research also opens up opportunities for further development towards comprehensive translation of gestures or sentences in BISINDO.
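The abstract's pipeline (an 80/20 split of the 3,120-image dataset, followed by CNN-based classification) can be illustrated with a minimal sketch. The code below is a hypothetical NumPy illustration, not the paper's actual implementation: it computes the reported split sizes and runs a toy image through one convolution + ReLU + max-pooling stage, the basic building block of a CNN feature extractor. All array sizes and the random toy image are assumptions for demonstration only.

```python
import numpy as np

# The paper reports 3,120 BISINDO alphabet images (letters A-Z, 26 classes)
# split 80% / 20% into training and test sets.
TOTAL_IMAGES = 3120
train_count = int(TOTAL_IMAGES * 0.8)    # 2,496 images for training
test_count = TOTAL_IMAGES - train_count  # 624 images for testing

def conv2d_relu(image, kernel):
    """Single-channel 'valid' convolution followed by ReLU activation."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)  # ReLU: keep only positive responses

def max_pool(feature_map, size=2):
    """2x2 max pooling: downsample by keeping the strongest response."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size
    return (feature_map[:h, :w]
            .reshape(h // size, size, w // size, size)
            .max(axis=(1, 3)))

# Toy 28x28 "image" (hypothetical size) through one conv + pool stage.
rng = np.random.default_rng(0)
image = rng.random((28, 28))
kernel = rng.random((3, 3))
features = max_pool(conv2d_relu(image, kernel))
print(train_count, test_count, features.shape)  # 2496 624 (13, 13)
```

In a full CNN such stages are stacked, flattened, and fed to a dense softmax layer with 26 outputs (one per letter); a deep-learning framework would also handle the training loop and parameter tuning described in the study.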
Copyright © 2025