This study developed a web-based application to facilitate two-way communication between individuals with hearing impairments and the general public. The application translated hand gestures based on the Indonesian Sign System into text using a Convolutional Neural Network model with real-time landmark detection, and it converted spoken language into text through speech recognition, displaying the transcription alongside the corresponding sign language images. A camera captured hand gestures, which were processed into landmark data and classified into the letters A to Z; voice input was processed directly in the browser without additional installations. The application was designed to be lightweight, interactive, and compatible with a range of devices. Testing showed that the gesture recognition feature achieved high accuracy, ranging from 98.71% to 100%, and the speech-to-text feature produced accurate transcriptions for both individual letters and complete sentences. However, accuracy decreased at capture distances beyond 30 cm and in noisy environments. Integrating gesture recognition and speech-to-text conversion in a single web platform offers an effective, accessible, and inclusive communication solution for users with special needs.
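The landmark-based classification pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes 21 two-dimensional hand landmarks per frame (the layout used by common hand-tracking libraries) and stands in for the trained CNN with a hypothetical linear softmax classifier whose weights here are random placeholders.

```python
import numpy as np

def normalize_landmarks(landmarks):
    """Translate so the wrist (landmark 0) is the origin, then scale to
    unit maximum coordinate -- making the features roughly invariant to
    hand position and camera distance."""
    pts = np.asarray(landmarks, dtype=float)  # shape (21, 2)
    pts = pts - pts[0]                        # wrist-relative coordinates
    scale = np.abs(pts).max()
    if scale > 0:
        pts = pts / scale
    return pts.flatten()                      # 42-dimensional feature vector

LETTERS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]

def classify(features, weights, bias):
    """Softmax over the 26 letters A-Z. `weights` and `bias` stand in for
    a trained model (hypothetical random values in this demo)."""
    logits = features @ weights + bias
    exp = np.exp(logits - logits.max())       # numerically stable softmax
    probs = exp / exp.sum()
    return LETTERS[int(probs.argmax())], float(probs.max())

# Demo with untrained placeholder parameters -- illustrative only.
rng = np.random.default_rng(0)
landmarks = rng.uniform(0, 1, size=(21, 2))   # stand-in for detector output
w = rng.normal(size=(42, 26))
b = np.zeros(26)
letter, confidence = classify(normalize_landmarks(landmarks), w, b)
print(letter, round(confidence, 3))
```

Normalizing relative to the wrist before classification is one plausible way to keep recognition stable as the hand moves toward or away from the camera, which is consistent with the reported accuracy drop beyond 30 cm: at larger distances the landmark detector itself, not the classifier, degrades.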
Copyright © 2025