Eka Srijayarni
Unknown Affiliation

Published: 1 Document
Articles

Journal: Information Technology Education Journal

Development of an Inclusive Computer Vision–Based Learning Media for Gesture Recognition among Deaf and Hard of Hearing Students
Andi Sriwangi B; Abdul Riyadi Lessy; Aswar Munandar; Ade Ayu Permataleli; Eka Srijayarni; Gita Damayanti
Information Technology Education Journal Vol. 3, No. 1, January 2024
Publisher: Jurusan Teknik Informatika dan Komputer

DOI: 10.59562/intec.v3i1.02419

Abstract

This study aimed to develop and evaluate an inclusive computer vision–based learning media for gesture recognition among deaf and hard-of-hearing (DHH) students. The motivation was to address the lack of interactive, curriculum-aligned tools for BISINDO gesture acquisition, enhancing both learning accuracy and engagement. The research followed a research and development (R&D) design using the ADDIE model, integrated with a quasi-experimental evaluation. The system employed a CNN-LSTM hybrid model with MediaPipe pose estimation for real-time gesture recognition. A purposive sample of 60 junior secondary DHH students in Makassar, Indonesia, participated in the study. Pretest–posttest scores, learning engagement questionnaires, and system usability scales were administered. Data analysis included paired and independent samples t-tests, descriptive statistics, and Pearson correlation. The developed media demonstrated high recognition accuracy (93.4%) and excellent usability (SUS = 84.3/100). Students using the system significantly outperformed the control group in posttest scores (Mean gain = 8.36 vs. 3.54, p < 0.001, Cohen’s d = 1.56). Engagement positively correlated with learning gains (r = 0.68, p < 0.01), indicating that interactive feedback mechanisms enhanced motivation and gesture mastery. The study highlights the pedagogical value of AI-assisted gesture learning for DHH students, but is limited to isolated gesture recognition, one geographic region, and quasi-experimental design constraints. Environmental lighting and camera quality may also affect system performance. This study bridges the gap between computer vision–based sign recognition research and inclusive pedagogy, demonstrating both technological feasibility and educational impact. Future research could extend the system to continuous sign sequences, multimodal feedback, and cloud-based deployment for broader scalability.
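The effect size reported above (mean gain 8.36 vs. 3.54, Cohen's d = 1.56) follows the standard pooled-standard-deviation formula for two independent groups. As a minimal sketch, using hypothetical gain scores rather than the study's actual data, the computation looks like this:

```python
import math
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    # Pool each group's variance weighted by its degrees of freedom (n - 1).
    pooled_sd = math.sqrt(
        ((na - 1) * stdev(group_a) ** 2 + (nb - 1) * stdev(group_b) ** 2)
        / (na + nb - 2)
    )
    # Effect size: difference in means in units of the pooled SD.
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Hypothetical pretest-to-posttest gain scores (illustrative only, not the study's data).
treatment_gains = [8, 9, 7, 10, 8, 9]
control_gains = [3, 4, 3, 5, 4, 3]
d = cohens_d(treatment_gains, control_gains)
```

By the conventional benchmarks (0.2 small, 0.5 medium, 0.8 large), the d = 1.56 reported in the abstract indicates a large treatment effect.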