Eka Prakarsa Mandyartha
Informatics, Faculty of Computer Science, Universitas Pembangunan Nasional “Veteran” Jawa Timur, Indonesia

Published: 1 Document

DETECTION OF ACTIONS BISINDO (INDONESIAN SIGN LANGUAGE) INTO TEXT-TO-SPEECH USING LONG SHORT-TERM MEMORY WITH MEDIAPIPE HOLISTICS
Risda Rosdiana Agustin; Hendra Maulana; Eka Prakarsa Mandyartha
Jurnal Teknik Informatika (Jutif) Vol. 5 No. 4 (2024): JUTIF Volume 5, Number 4, August 2024
Publisher: Informatika, Universitas Jenderal Soedirman

DOI: 10.52436/1.jutif.2024.5.4.1492

Abstract

Sign language is frequently used by people who have difficulty hearing or speaking. Because it is a non-verbal language that expresses meaning through hand and body gestures, sign language is an essential form of communication for those who rely on it. The objective of this work is to develop a detection system that can recognize actions made in Indonesian Sign Language (BISINDO), translate them into text, and use speech synthesis (text-to-speech) to produce audio responses. The main aim is to assist and enhance communication for persons with disabilities, particularly at Sekolah Luar Biasa (special-needs schools). Long Short-Term Memory (LSTM) and MediaPipe Holistic are used to achieve these objectives. How LSTM and MediaPipe Holistic affect performance and accuracy is demonstrated using two dataset types: the first contains landmarks extracted with the MediaPipe Holistic model, while the second contains the original footage without landmarks. Training and testing require several parameter settings, among them batch size and number of epochs. The model trained on the landmark-free dataset reaches an accuracy of approximately 89.33%, while the model trained on MediaPipe landmarks reaches about 96.67%. Furthermore, the landmark-based model exhibits strong F1-score, recall, and precision. The research successfully recognizes a number of BISINDO gestures, such as "saya" (I), "kamu" (you), "ayah" (father), and "ibu" (mother), among others present in the dataset. The system can also generate speech from the gestures it has identified.
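
As a rough illustration of the pipeline the abstract describes (not the authors' published code), the sketch below shows how per-frame landmarks could be extracted with MediaPipe Holistic and fed to a stacked LSTM classifier in Keras. The sequence length, layer sizes, class count, and the pyttsx3 text-to-speech backend are all assumptions, since the abstract does not specify them.

import numpy as np
import mediapipe as mp
import pyttsx3
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

mp_holistic = mp.solutions.holistic

def extract_keypoints(results):
    # Flatten pose (33 x 4) and hand (21 x 3 each) landmarks into one vector.
    pose = (np.array([[lm.x, lm.y, lm.z, lm.visibility]
                      for lm in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    left = (np.array([[lm.x, lm.y, lm.z]
                      for lm in results.left_hand_landmarks.landmark]).flatten()
            if results.left_hand_landmarks else np.zeros(21 * 3))
    right = (np.array([[lm.x, lm.y, lm.z]
                       for lm in results.right_hand_landmarks.landmark]).flatten()
             if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, left, right])  # 258 values per frame

# Landmark extraction on one (stand-in) RGB frame.
frame_rgb = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder for a real video frame
with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    keypoints = extract_keypoints(holistic.process(frame_rgb))

SEQ_LEN = 30      # assumed frames per gesture clip (not stated in the abstract)
NUM_CLASSES = 10  # assumed number of BISINDO gesture classes

model = Sequential([
    LSTM(64, return_sequences=True, input_shape=(SEQ_LEN, 258)),
    LSTM(128),
    Dense(64, activation='relu'),
    Dense(NUM_CLASSES, activation='softmax'),  # one unit per gesture label
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['categorical_accuracy'])

def speak(label):
    # Voice the recognized gesture label (assumed pyttsx3 backend).
    engine = pyttsx3.init()
    engine.say(label)
    engine.runAndWait()

Each frame yields a fixed-length 258-value vector (33 pose landmarks with visibility plus 21 landmarks per hand), so a gesture clip becomes a (SEQ_LEN, 258) sequence the LSTM can consume directly; the softmax over gesture labels gives the text that is then voiced.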