Sign Language Recognition (SLR) has become an essential area of research due to its potential to bridge communication between the deaf and hearing communities. This paper provides an in-depth study of the methodologies and models employed in SLR, focusing on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). We analyze their application to datasets from several sign languages, including Arabic Sign Language (ArSL), American Sign Language (ASL), and British Sign Language (BSL), and explore how these models improve the recognition of dynamic, multi-dimensional hand gestures. This research not only advances the understanding of deep learning applications in SLR but also addresses critical challenges in data processing and real-time deployment, paving the way for inclusive technologies in informatics and human-computer interaction. Despite the progress in applying deep learning techniques to SLR, several challenges remain, particularly dataset limitations, handling large vocabularies, and ensuring consistent performance across diverse environments and signers. The paper also investigates the broader applications of SLR, including virtual reality, healthcare, education, and accessibility, and discusses the integration of SLR with human-computer interaction systems. Furthermore, it highlights current limitations in the field, such as the difficulty of handling video data, the lack of standard datasets, and the computational cost of training models. Finally, the paper outlines future research directions, including developing more robust SLR systems that function effectively in uncontrolled environments, improving data collection methodologies, and building real-time, user-friendly applications that assist the deaf and hard-of-hearing community.
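To make the CNN+RNN pattern concrete, the sketch below shows one common way such models are combined for dynamic gesture recognition: a small per-frame CNN extracts spatial features, and an LSTM models the temporal dynamics of the sign. This is a minimal, illustrative PyTorch sketch, not any specific model from the surveyed literature; the class name, layer sizes, and hyperparameters are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class CNNRNNSignClassifier(nn.Module):
    """Minimal CNN+RNN pipeline for isolated sign recognition (illustrative only).

    A small CNN encodes each video frame into a feature vector; an LSTM
    then models the temporal dynamics of the gesture across frames.
    All layer sizes are hypothetical placeholders, not a surveyed model.
    """

    def __init__(self, num_classes: int, feat_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        # Per-frame spatial encoder (CNN).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Temporal model (RNN) over the sequence of frame features.
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, time, channels, height, width)
        b, t, c, h, w = video.shape
        frames = video.reshape(b * t, c, h, w)      # fold time into batch
        feats = self.cnn(frames).reshape(b, t, -1)  # (batch, time, feat_dim)
        _, (h_n, _) = self.rnn(feats)               # final LSTM hidden state
        return self.head(h_n[-1])                   # one logit vector per clip


# Usage: classify a batch of 2 clips, 16 frames each, into 50 sign classes.
model = CNNRNNSignClassifier(num_classes=50)
clips = torch.randn(2, 16, 3, 64, 64)
logits = model(clips)  # shape: (2, 50)
```

The design choice illustrated here is the division of labor discussed throughout the paper: the CNN handles the spatial structure of each frame, while the RNN captures the motion that makes sign gestures dynamic and multi-dimensional.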