Advances in technology have paved the way for innovative solutions aimed at improving the well-being of deaf and speech-impaired individuals. This paper presents the design of TFOOD (TensorFlow Object Detection), a smart application intended to facilitate digital communication for deaf and speech-impaired people. Utilizing TensorFlow object detection technology, TFOOD translates visual cues, such as SIBI and BISINDO sign language gestures, into text or audio output that is easily understood by others. The "TFOOD" prototype is designed to help reduce the difficulties faced by people with hearing and speech impairments. Building the application involves training models on diverse datasets and optimizing algorithms to ensure high accuracy and responsiveness in real-world scenarios. Results show that TFOOD significantly increases communication accessibility for deaf and speech-impaired individuals, providing an effective tool for interaction in a variety of social and professional contexts. This paper also discusses the challenges faced during implementation, including model accuracy and integration, as well as potential future developments to further improve the system's capabilities and accessibility. Through this exploration, TFOOD demonstrates its contribution to digital inclusion and offers insight into the development of assistive communication technologies.
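
The abstract describes recognizing SIBI/BISINDO gestures with a TensorFlow object detection model and converting the result to text. The sketch below is a minimal illustration of that idea, assuming a model exported as a SavedModel with the TensorFlow Object Detection API; the model path, label map, and score threshold are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch: run a TF Object Detection SavedModel on webcam frames
# and print the recognised sign label. Paths and labels are assumptions.
import cv2
import numpy as np
import tensorflow as tf

MODEL_DIR = "exported_model/saved_model"   # assumed export location
LABELS = {1: "A", 2: "B", 3: "C"}          # hypothetical SIBI/BISINDO class map
SCORE_THRESHOLD = 0.5

detect_fn = tf.saved_model.load(MODEL_DIR)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # TF Object Detection API exports expect a uint8 batch of shape [1, H, W, 3].
    input_tensor = tf.convert_to_tensor(frame[np.newaxis, ...], dtype=tf.uint8)
    detections = detect_fn(input_tensor)

    scores = detections["detection_scores"][0].numpy()
    classes = detections["detection_classes"][0].numpy().astype(int)
    for score, cls in zip(scores, classes):
        if score >= SCORE_THRESHOLD:
            print(f"Detected sign: {LABELS.get(cls, 'unknown')} ({score:.2f})")
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```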