Sign language is the primary means of communication for deaf individuals and has evolved within the deaf community. It has many variations, which makes it unfamiliar and difficult to interpret for many hearing or untrained people. This research aims to develop a real-time detection system for sign language gestures and facial expressions using the Random Forest method. The main challenge in this task is the complexity and variability of the movements and facial expressions. In this study, MediaPipe is used to extract features from video input, which are then classified with the Random Forest algorithm. The model is evaluated with a confusion matrix under testing scenarios based on different splits of training and testing data, achieving an accuracy of 99%. This research is expected to help deaf individuals communicate with hearing people, thereby reducing social gaps.
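The pipeline described above (landmark features fed to a Random Forest, evaluated with a confusion matrix) can be sketched as follows. This is a minimal illustration, not the authors' implementation: synthetic feature vectors stand in for the landmarks MediaPipe would extract from video frames, and the class count, feature dimension, and hyperparameters are assumptions.

```python
# Sketch of the classification stage: landmark-style feature vectors
# (in the real system, MediaPipe would supply these from video frames)
# classified with a Random Forest and scored with a confusion matrix.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Illustrative sizes: e.g. 21 hand landmarks x (x, y, z) = 63 features.
N_CLASSES, N_PER_CLASS, N_FEATURES = 3, 100, 63

# Synthetic stand-in data: each sign class clusters around its own pattern.
X = np.vstack([rng.normal(loc=c, scale=0.2, size=(N_PER_CLASS, N_FEATURES))
               for c in range(N_CLASSES)])
y = np.repeat(np.arange(N_CLASSES), N_PER_CLASS)

# Split into training and testing data, as in the evaluation scenario.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

print("accuracy:", accuracy_score(y_test, pred))
print("confusion matrix:\n", confusion_matrix(y_test, pred))
```

With well-separated synthetic clusters the forest classifies the held-out split nearly perfectly; real landmark data would of course be noisier and require the evaluation splits described in the study.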
Copyright © 2025