p-Index (2020–2025): 0.408
This author has published in the following journal: Jurnal Mandiri IT
Articles

Found 2 Documents

Artificial intelligence-based hand gesture recognition for sign language interpretation
Authors: Rais, M. Fazil; AlFatrah, M. Ilham; Noorta, Chadafa Zulti; Rimbawa, H.A Danang; Atturoybi, Abdurrosyid
Jurnal Mandiri IT Vol. 14 No. 1 (2025): July: Computer Science and Field.
Publisher : Institute of Computer Science (IOCS)

DOI: 10.35335/mandiri.v14i1.395

Abstract

This paper presents an artificial intelligence-based system for real-time hand gesture recognition to support sign language interpretation for the deaf and hard-of-hearing community. The proposed system integrates computer vision techniques with deep learning models to accurately identify static hand gestures representing alphabetic signs. The MediaPipe framework is employed to detect and track hand landmarks from live video input, which are then processed and classified using a Convolutional Neural Network (CNN) model. The model is trained on a publicly available BISINDO (Bahasa Isyarat Indonesia) gesture dataset retrieved from Kaggle, comprising 312 images across 26 hand gestures captured under multiple background conditions. Preprocessing includes resizing, grayscale conversion, data augmentation, and landmark extraction; specific innovations, such as advanced data augmentation methods and landmark normalization, significantly enhance gesture identification accuracy and model robustness. Experimental results show that the system achieves an average classification accuracy of 88.03% and maintains stable performance in real-time applications. Despite these promising results, the system exhibits limitations, including challenges with dynamic gesture recognition, background interference, and limited handling of complex hand movements, all of which can be explored in future research to improve the system’s accuracy and generalization. These findings highlight the system’s potential as an inclusive communication tool to bridge language barriers between deaf individuals and non-signers. This research contributes to the development of accessible assistive technologies by demonstrating a non-intrusive, vision-based approach to sign language interpretation. Future development may involve dynamic gesture translation, sentence-level recognition, and deployment on mobile platforms.
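The abstract above outlines a landmark-then-classify pipeline (MediaPipe hand tracking feeding a CNN). As a rough, non-authoritative illustration of how such a pipeline can be wired together in Python, the sketch below extracts 21 hand landmarks per image and classifies them with a small 1D CNN. The architecture, the normalization scheme, and the helper names (extract_landmarks, build_classifier) are assumptions chosen for illustration; the paper's actual model and preprocessing details are not given in the abstract.

```python
# Minimal sketch of a MediaPipe-landmarks + CNN gesture classifier.
# Illustrative only: the paper's architecture and preprocessing are not
# specified in the abstract; helper names and normalization are assumptions.
import cv2
import mediapipe as mp
import numpy as np
from tensorflow import keras

mp_hands = mp.solutions.hands

def extract_landmarks(image_bgr):
    """Return a (21, 3) array of wrist-centered, scale-normalized hand landmarks, or None."""
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        result = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None  # no hand detected in this image
    pts = result.multi_hand_landmarks[0].landmark
    coords = np.array([[p.x, p.y, p.z] for p in pts], dtype=np.float32)
    coords -= coords[0]                                  # translate so the wrist is the origin
    scale = np.linalg.norm(coords, axis=1).max() or 1.0  # scale by the farthest landmark
    return coords / scale

def build_classifier(num_classes=26):
    """Small 1D CNN over the 21 landmark rows; a generic stand-in for the paper's CNN."""
    return keras.Sequential([
        keras.layers.Input(shape=(21, 3)),
        keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
        keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
        keras.layers.GlobalAveragePooling1D(),
        keras.layers.Dense(num_classes, activation="softmax"),
    ])
```

Under these assumptions, training would amount to running each BISINDO image through extract_landmarks, stacking the results, and fitting build_classifier() with a sparse categorical cross-entropy loss, with augmentation applied to the images before landmark extraction.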
Machine learning-based approach for evaluating physical fitness through motion detection
Authors: Rais, M. Fazil; Noorta, Chadafa Zulti; AlFatrah, M. Ilham; Rimbawa, H.A Danang; Fatmawati, Uvi Desi
Jurnal Mandiri IT Vol. 14 No. 1 (2025): July: Computer Science and Field.
Publisher : Institute of Computer Science (IOCS)

DOI: 10.35335/mandiri.v14i1.406

Abstract

Physical fitness assessment is crucial for evaluating an individual's physical performance and endurance. However, traditional methods often rely on manual observation, which can lead to subjectivity and inconsistent results. This study proposes a machine learning-based approach for physical fitness evaluation through motion detection using pose estimation and exercise classification models. A quantitative method was employed to train and evaluate models for four exercise types: push-ups, sit-ups, pull-ups, and chinning. Each model was trained separately and assessed using accuracy, precision, recall, and F1-score metrics, achieving accuracies of 97.50% for push-ups, 97.67% for sit-ups, 97.00% for pull-ups, and 98.50% for chinning. The maximum error margin compared to manual counting was 2.48%. System-generated outputs were validated against manual observations using standard evaluation metrics. These findings indicate that machine learning can offer a reliable, consistent, and automated solution for physical fitness assessment, with the potential to enhance training programs, support remote fitness monitoring, and reduce human error in performance evaluation.
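The abstract describes pose-estimation-based motion detection with per-exercise models whose counts are validated against manual observation. As a minimal, hypothetical illustration of how pose landmarks can drive repetition counting for one exercise (push-ups), the sketch below uses MediaPipe Pose with a simple elbow-angle state machine; the angle thresholds, the joint_angle and count_pushups helpers, and the single-exercise scope are assumptions for illustration, not the paper's classifiers.

```python
# Hypothetical push-up repetition counter built on MediaPipe Pose.
# Illustrative only: thresholds, helper names, and the state machine are
# assumptions; the paper's exercise-specific models are not described here.
import cv2
import mediapipe as mp
import numpy as np

mp_pose = mp.solutions.pose

def joint_angle(a, b, c):
    """Angle in degrees at joint b formed by points a-b-c (2D)."""
    ba, bc = np.subtract(a, b), np.subtract(c, b)
    cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc) + 1e-6)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def count_pushups(video_path, down_thresh=90.0, up_thresh=160.0):
    """Count push-up reps by tracking the left elbow angle frame by frame."""
    cap = cv2.VideoCapture(video_path)
    reps, phase = 0, "up"
    with mp_pose.Pose() as pose:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not result.pose_landmarks:
                continue  # skip frames where no person is detected
            lm = result.pose_landmarks.landmark
            P = mp_pose.PoseLandmark
            angle = joint_angle(
                (lm[P.LEFT_SHOULDER].x, lm[P.LEFT_SHOULDER].y),
                (lm[P.LEFT_ELBOW].x, lm[P.LEFT_ELBOW].y),
                (lm[P.LEFT_WRIST].x, lm[P.LEFT_WRIST].y),
            )
            if phase == "up" and angle < down_thresh:
                phase = "down"                 # arms bent: descending half of the rep
            elif phase == "down" and angle > up_thresh:
                phase, reps = "up", reps + 1   # arms extended again: one full rep
    cap.release()
    return reps
```

A per-exercise classifier of the kind evaluated in the paper would operate on features like these joint angles; the thresholded state machine here is only a stand-in for the counting step.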