Communication with people who have hearing and speech disabilities is often challenging. Sign language is their primary tool for conveying thoughts and feelings, but it is often difficult to understand for those unfamiliar with it. This project aims to develop a machine learning model that recognizes the hand gestures of American Sign Language (ASL) fingerspelling. The model uses image data and computer vision techniques to train a deep learning algorithm that can recognize signs in real time through a camera. The system relies on deep neural networks, whose layers of nodes process, classify, and predict signs accurately.
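The idea of a network whose layers of nodes turn a camera frame into a letter prediction can be sketched in miniature. The snippet below is only an illustration, not the project's actual architecture: the image size, the single hidden layer, the 24-letter output (the static ASL letters, excluding the motion-based J and Z), and the randomly initialised weights standing in for trained parameters are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

INPUT_DIM = 28 * 28   # flattened grayscale image (size is an assumption)
HIDDEN_DIM = 128      # one hidden layer of nodes
NUM_CLASSES = 24      # static ASL letters (J and Z require motion)

# Random weights stand in for parameters a real model would learn from data.
W1 = rng.normal(0, 0.01, (INPUT_DIM, HIDDEN_DIM))
b1 = np.zeros(HIDDEN_DIM)
W2 = rng.normal(0, 0.01, (HIDDEN_DIM, NUM_CLASSES))
b2 = np.zeros(NUM_CLASSES)

def predict(image: np.ndarray) -> np.ndarray:
    """Forward pass: layers of nodes map pixels to letter probabilities."""
    x = image.reshape(-1) / 255.0          # normalise pixel values to [0, 1]
    h = np.maximum(0, x @ W1 + b1)         # ReLU hidden layer
    logits = h @ W2 + b2                   # one score per letter class
    exp = np.exp(logits - logits.max())    # numerically stable softmax
    return exp / exp.sum()

if __name__ == "__main__":
    fake_frame = rng.integers(0, 256, (28, 28))  # stand-in for a camera frame
    probs = predict(fake_frame)
    print(probs.shape)                     # 24 class probabilities
```

In a real pipeline the hidden layer would typically be replaced by convolutional layers trained on labelled fingerspelling images, and the input would come from live camera frames rather than random data.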