Sign language is the primary means of communication for people with hearing impairments, yet the public's limited understanding of Indonesian Sign Language (BISINDO) remains a communication barrier. This study applied machine learning with a Convolutional Neural Network (CNN) model to automatically recognize BISINDO gestures. The dataset consists of 2,600 manually captured hand images representing the letters A–Z. Training involved data pre-processing, image augmentation, and CNN parameter optimization. Test results showed that the system recognized BISINDO letters with high accuracy and could combine recognized letters into simple words such as "HAI", "SAYA", and "UMI" in real time. These findings demonstrate that a CNN can effectively support a computer-based sign language translation system, providing an inclusive communication aid for people with hearing impairments.
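As a rough illustration of the pipeline described above, the following is a minimal NumPy sketch of a CNN forward pass for 26-class letter classification (A–Z). The input size (64×64 grayscale), number of filters, and layer shapes are illustrative assumptions, not the architecture used in this study, and the weights are random rather than trained.

```python
import numpy as np

def conv2d(x, kernels):
    """Valid convolution: x (H, W), kernels (n, k, k) -> (n, H-k+1, W-k+1)."""
    n, k, _ = kernels.shape
    H, W = x.shape
    out = np.zeros((n, H - k + 1, W - k + 1))
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            patch = x[i:i + k, j:j + k]
            out[:, i, j] = np.tensordot(kernels, patch, axes=([1, 2], [0, 1]))
    return out

def max_pool(x, s=2):
    """s x s max pooling over each feature map."""
    n, H, W = x.shape
    return x[:, :H // s * s, :W // s * s].reshape(n, H // s, s, W // s, s).max(axis=(2, 4))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
img = rng.random((64, 64))                   # stand-in for a pre-processed hand image
kernels = rng.standard_normal((8, 3, 3))     # 8 illustrative 3x3 filters
feat = np.maximum(conv2d(img, kernels), 0)   # convolution + ReLU -> (8, 62, 62)
feat = max_pool(feat)                        # 2x2 pooling -> (8, 31, 31)
W_fc = rng.standard_normal((26, feat.size)) * 0.01  # fully connected layer, 26 classes
probs = softmax(W_fc @ feat.ravel())         # probability per letter A-Z
letter = chr(ord("A") + int(np.argmax(probs)))
print(feat.shape, letter)
```

With trained weights, taking the arg-max letter per frame and buffering consecutive predictions is one simple way such a system can spell out words like "HAI" in real time.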
Copyright © 2025