Sign language is a harmonious combination of hand gestures, postures, and facial expressions. American Sign Language (ASL) is one of the most widely used and most researched sign languages because it is relatively easy to implement and common in daily use. A growing body of ASL research aims to make it easier for people with speech impairments to communicate with others. ASL research is now turning to computer vision so that ASL can be understood more widely through machine learning. Technology for sign language translation continues to develop, particularly for ASL using Convolutional Neural Networks. This study uses the DenseNet201 and DenseNet201 PyTorch architectures to translate American Sign Language and display the translation in written form on a monitor screen. Four train-test data splits were compared: 90:10, 80:20, 70:30, and 60:40. DenseNet201 PyTorch achieved the best results at the 70:30 train-test split, with an accuracy of 0.99732, precision of 0.99737, recall (sensitivity) of 0.99732, specificity of 0.99990, F1-score of 0.99731, and error of 0.00268. The translation of ASL into written form was evaluated using ROUGE-1 and ROUGE-L, yielding a precision of 0.14286, recall (sensitivity) of 0.14286, and F1-score.
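The abstract reports accuracy, precision, recall (sensitivity), specificity, F1-score, and error for the classifier. As a minimal sketch of how such multi-class metrics can be macro-averaged from a confusion matrix (the exact evaluation code and class set used in the study are not given here, so this is an illustration, not the authors' implementation):

```python
def classification_metrics(cm):
    """Macro-averaged metrics from a square confusion matrix.

    cm[i][j] = number of samples of true class i predicted as class j.
    Returns accuracy, macro precision/recall/specificity/F1, and error.
    """
    n = len(cm)
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(n))
    accuracy = correct / total

    precisions, recalls, specificities, f1s = [], [], [], []
    for k in range(n):
        tp = cm[k][k]
        fp = sum(cm[i][k] for i in range(n)) - tp   # predicted k, not k
        fn = sum(cm[k]) - tp                        # true k, missed
        tn = total - tp - fp - fn
        p = tp / (tp + fp) if (tp + fp) else 0.0
        r = tp / (tp + fn) if (tp + fn) else 0.0
        s = tn / (tn + fp) if (tn + fp) else 0.0
        f1 = 2 * p * r / (p + r) if (p + r) else 0.0
        precisions.append(p)
        recalls.append(r)
        specificities.append(s)
        f1s.append(f1)

    avg = lambda xs: sum(xs) / len(xs)
    return {
        "accuracy": accuracy,
        "precision": avg(precisions),
        "recall": avg(recalls),          # macro sensitivity
        "specificity": avg(specificities),
        "f1": avg(f1s),
        "error": 1.0 - accuracy,
    }
```

For example, a two-class confusion matrix `[[9, 1], [0, 10]]` gives an accuracy of 0.95 and an error of 0.05; with the 29-class ASL alphabet datasets commonly used in this line of work, the same computation applies with a 29x29 matrix.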
Copyright © 2024