This author has published in the following journal:
ILKOM Jurnal Ilmiah
Islami, Megan Shahra
Unknown Affiliation

Published: 1 document
Articles

Found 1 document

SqueezeNet Image Embedding and Support Vector Machine for Recognizing Hand Gestures in Indonesian Sign Language System
Islami, Megan Shahra; Jamzuri, Eko Rudiawan
ILKOM Jurnal Ilmiah Vol 17, No 2 (2025)
Publisher: Prodi Teknik Informatika FIK Universitas Muslim Indonesia

DOI: 10.33096/ilkom.v17i2.2476.98-106

Abstract

This research proposes a hand gesture recognition method for Sistem Isyarat Bahasa Indonesia (SIBI), the Indonesian sign language system, integrating SqueezeNet for image feature extraction and a Support Vector Machine (SVM) for classification. The study covers 24 static gestures representing alphabetic letters, excluding J and Z because they require motion. The dataset consists of 5280 RGB images (227×227 pixels), 220 samples per gesture, obtained from a public Kaggle source. SqueezeNet, a lightweight CNN architecture, generates 1000-dimensional feature vectors, which are then classified by an SVM with an RBF kernel (C = 1.0) to handle non-linear decision boundaries. Ten-fold cross-validation was applied without data augmentation to evaluate baseline performance. The proposed method achieved 99.51% classification accuracy, with an average precision of 94.04%, recall of 94.02%, and F1-score of 94.02%. Certain gestures, such as G, H, and Q, were recognized almost perfectly, while others, such as V, proved harder to classify, with a recall of 80.5%. Compared to existing models evaluated on the same dataset, such as MobileNet (98% accuracy) and VGG16 (86% accuracy), the SqueezeNet–SVM combination delivers competitive or superior accuracy with significantly lower computational requirements. These results highlight the method's potential for real-time integration into mobile or embedded sign language translation applications, bridging communication gaps between the deaf and hearing communities. Future work will focus on improving performance on difficult gestures, applying data augmentation to improve generalization, and developing a prototype mobile application for real-world testing.
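The abstract outlines a two-stage pipeline: a pretrained SqueezeNet maps each 227×227 RGB image to a 1000-dimensional vector, and an RBF-kernel SVM (C = 1.0) classifies those vectors under 10-fold cross-validation. The sketch below illustrates that flow with torchvision and scikit-learn. It is an assumed reconstruction, not the authors' code: the random demo image and the synthetic X/y arrays are placeholders for the Kaggle SIBI dataset, and the ImageNet normalization is a conventional choice the abstract does not specify.

import numpy as np
import torch
from PIL import Image
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from torchvision import models, transforms

# Pretrained SqueezeNet; its final 1000-way output serves as the embedding.
model = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((227, 227)),  # input size stated in the abstract
    transforms.ToTensor(),
    # ImageNet statistics, the usual choice for torchvision pretrained
    # weights (an assumption; the abstract does not specify normalization).
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(img: Image.Image) -> np.ndarray:
    """Return the 1000-dimensional SqueezeNet output for one RGB image."""
    return model(preprocess(img).unsqueeze(0)).squeeze(0).numpy()

rng = np.random.default_rng(0)

# Demo embedding on a random image (a stand-in for one SIBI dataset sample).
demo = Image.fromarray(rng.integers(0, 256, (227, 227, 3), dtype=np.uint8))
vec = embed(demo)  # shape: (1000,)

# Stand-in features and labels: in the paper, X would hold embeddings of all
# 5280 Kaggle SIBI images and y the 24 static letter labels (J, Z excluded).
X = rng.normal(size=(240, 1000))
y = np.repeat(np.arange(24), 10)

svm = SVC(kernel="rbf", C=1.0)              # kernel and C from the abstract
scores = cross_val_score(svm, X, y, cv=10)  # 10-fold cross-validation
print(f"mean CV accuracy: {scores.mean():.4f}")

Treating SqueezeNet's 1000-way output as a fixed embedding leaves only the SVM to train, which is what keeps the computational cost low enough for the mobile and embedded deployment scenarios the abstract targets.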