Communication for the deaf and hard of hearing is often hindered by the limited availability of sign language interpreters. This research develops a web-based sign-language-to-text translation system using a Convolutional Neural Network (CNN) to bridge this communication gap. The system is built on the ASL Alphabet dataset of 87,000 images across 29 classes (A-Z, SPACE, DELETE, NOTHING). The CNN was designed with three convolutional layers and trained for 15 epochs on 80% of the data, with the remaining 20% held out for testing. The user interface was developed with Streamlit for ease of use. Training reached 98.96% training accuracy and 98.61% validation accuracy at the 15th epoch, and model evaluation yielded an overall accuracy of 98%, with high precision, recall, and F1-scores for most classes. These results demonstrate the significant potential of CNNs in developing automatic sign language translators, which is expected to improve information accessibility and inclusivity for the deaf community.
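
The following is a minimal sketch of the kind of three-convolutional-layer classifier the abstract describes, assuming a Keras implementation. The layer widths, kernel sizes, input resolution, and optimizer are assumptions for illustration; the abstract specifies only the layer count, the 29 classes, the 80/20 split, and the 15 training epochs.

import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 29  # A-Z, SPACE, DELETE, NOTHING

# Three convolutional blocks followed by a small dense classifier head.
model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),            # assumed input resolution
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Training as described: 80% of the images for training, 20% held out,
# 15 epochs. `train_ds` and `val_ds` are hypothetical names standing in
# for the ASL Alphabet image pipelines.
# model.fit(train_ds, validation_data=val_ds, epochs=15)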