Sign language recognition (SLR) plays a crucial role in improving communication for deaf individuals. This paper investigates sign language recognition with deep learning models built on action features derived from skeleton data in the Argentinian Sign Language (LSA64) dataset. The models explored are a Multi-Layer Perceptron (MLP) neural network and a Long Short-Term Memory (LSTM) network. The MLP, which stacks multiple fully connected layers of perceptrons, reached an accuracy of 96.10%. The LSTM model, which excels at processing sequential data, attained the highest accuracy at 98.60%. These results demonstrate the effectiveness of deep learning models for sign language recognition, with the LSTM showing the most promise owing to its ability to capture temporal dynamics. This study therefore opens prospects for applying sign language recognition technology in practice, helping to enhance the quality of life of deaf individuals.
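To make the LSTM pipeline concrete, the following is a minimal sketch (not the authors' implementation) of a skeleton-sequence classifier in PyTorch. It assumes per-frame keypoints have already been extracted with a pose estimator and padded to a fixed length; the keypoint count, sequence length, and hidden size are illustrative assumptions, while the 64-class output matches the 64 signs in LSA64.

```python
# Hypothetical sketch: LSTM classifier over per-frame skeleton keypoints.
# Assumes sequences are pre-extracted and padded to a fixed length.
import torch
import torch.nn as nn

NUM_KEYPOINTS = 21           # assumption: 21 hand landmarks per frame
FEATURES = NUM_KEYPOINTS * 2  # (x, y) coordinates per keypoint
NUM_CLASSES = 64             # LSA64 contains 64 sign classes
SEQ_LEN = 60                 # assumption: fixed-length padded sequences

class SkeletonLSTM(nn.Module):
    def __init__(self, input_size=FEATURES, hidden_size=128,
                 num_classes=NUM_CLASSES):
        super().__init__()
        # Two stacked LSTM layers read the skeleton sequence frame by frame.
        self.lstm = nn.LSTM(input_size, hidden_size,
                            num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, seq_len, features)
        out, _ = self.lstm(x)
        # Classify from the hidden state at the final time step,
        # which summarizes the temporal dynamics of the whole sign.
        return self.fc(out[:, -1, :])

model = SkeletonLSTM()
dummy = torch.randn(8, SEQ_LEN, FEATURES)  # a batch of 8 skeleton sequences
logits = model(dummy)                      # (8, 64) class scores
```

Using the final hidden state as the sequence summary is one common design choice; pooling over all time steps or a bidirectional LSTM would be equally plausible variants.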