Sign language recognition plays a vital role in facilitating communication for individuals with hearing impairments. This study proposes a Convolutional Neural Network (CNN) model trained to recognize patterns in sign language images, with the aim of improving the accuracy and efficiency of sign language recognition systems. The model was trained in two stages: the first training session reached a validation accuracy of around 63%, while the second exceeded 92% at epoch 29. This marked improvement indicates that the model can effectively learn and generalize complex patterns in sign language images, signaling its potential for practical use in sign language interpretation. The accuracy achieved suggests the model is suitable for a variety of real-world scenarios, such as assistive technology for the deaf community or automation systems requiring hand gesture recognition. The trained CNN model therefore has the potential to be a valuable tool for improving the accessibility and efficiency of communication for individuals who rely on sign language.
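The abstract does not specify the network's implementation. As a rough illustration of the building blocks a CNN of this kind stacks to recognize spatial patterns in sign language images (convolution, ReLU activation, and max pooling), here is a minimal NumPy sketch; the function names and shapes are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a kernel.
    (Illustrative only; real CNN layers also learn the kernel weights.)"""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear activation: zeroes out negative responses."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling, downsampling the feature map."""
    h, w = x.shape
    h, w = h - h % size, w - w % size  # trim to a multiple of the pool size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# One conv -> relu -> pool pass over a toy 6x6 "image" with a
# hypothetical vertical-edge kernel.
img = np.zeros((6, 6))
img[:, 3:] = 1.0
edge_kernel = np.array([[-1.0, 1.0],
                        [-1.0, 1.0]])
features = max_pool(relu(conv2d(img, edge_kernel)))
```

In a trained CNN these stages are repeated with many learned kernels, and the pooled feature maps feed a classifier that outputs one of the sign classes.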