This research is motivated by the difficulties the wider community faces when communicating with people with disabilities, and it aims to apply computer vision to translate SIBI (Sistem Isyarat Bahasa Indonesia) sign language into text through an Android application. Data were collected through interviews and a literature study, and the system was developed using an agile method. The front end was built with Kotlin and Jetpack Compose, while the back end uses a TensorFlow model in .tflite format. The trained model alone achieved 88% accuracy. The application was tested by comparing its output against manual translations, yielding 81.48% accuracy in brightly lit rooms, 76.92% in dim rooms, and 80.77% outdoors. Future research could improve accuracy in dimly lit environments by converting captured images to negatives or by adding a feature to turn on the flashlight.
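As an illustration of the on-device inference pipeline described above, the sketch below shows how a .tflite model might be loaded and run with the TensorFlow Lite Interpreter from Kotlin. The asset name, input shape, and number of output classes are assumptions for illustration, not details taken from the paper.

```kotlin
import android.content.Context
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.support.common.FileUtil

class SibiClassifier(context: Context) {
    // "sibi_model.tflite" is a hypothetical asset name; the paper only
    // states that the back end uses a TensorFlow model in .tflite format.
    private val interpreter =
        Interpreter(FileUtil.loadMappedFile(context, "sibi_model.tflite"))

    // Assumed input: one 224x224 RGB frame as a [1][224][224][3] float array;
    // assumed output: confidence scores over 26 SIBI alphabet classes.
    fun classify(frame: Array<Array<Array<FloatArray>>>): Int {
        val scores = Array(1) { FloatArray(26) }
        interpreter.run(frame, scores)
        // Return the index of the highest-scoring class.
        return scores[0].indices.maxByOrNull { scores[0][it] } ?: -1
    }
}
```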