The recognition of Sign Language Alphabets (SLA) plays a vital role in human-computer interaction, especially for individuals with auditory disabilities. This study evaluates the impact of different hardware configurations—specifically CPU, GPU, and memory setups—on the training efficiency and recognition performance of a Convolutional Neural Network (CNN)-based model for SLA using the SIBI dataset. The novelty of this research lies in its focus on hardware-aware deep learning optimization for Indonesian sign language (SIBI), an underexplored area. The model was trained on 3,468 labeled hand gesture images representing 24 SIBI alphabet signs. Experiments were conducted on CPU (Intel Xeon 2.00 GHz) and GPU (Nvidia Tesla T4) platforms using an identical CNN architecture. Training time on the GPU was reduced by 45.5%, from 1 hour 39 minutes to 54 minutes, while accuracy remained consistent at 96.7% across both setups. These results highlight the role of parallel processing and memory bandwidth in accelerating model convergence without compromising generalization. The findings are relevant to real-time SLA deployment under hardware constraints on embedded or mobile platforms. Overall, the study underscores the importance of hardware optimization in accelerating CNN training and improving the practicality of sign language recognition systems.
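The abstract does not specify the exact network architecture, framework, image resolution, or hyperparameters, so the following is only a minimal illustrative sketch of the experimental pattern it describes: the same CNN definition is trained unchanged on whichever device is available (CPU or GPU), and only the device placement and resulting training time differ. PyTorch, the layer sizes, the 64x64 input resolution, and the learning rate are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): a small CNN for 24-class
# SIBI alphabet classification, run with an identical architecture on CPU
# or GPU, with a per-step timing to illustrate the hardware comparison.
import time
import torch
import torch.nn as nn

class SibiCNN(nn.Module):
    def __init__(self, num_classes: int = 24):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# The same model definition is used on both platforms; only the device changes.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = SibiCNN().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Dummy batch standing in for 64x64 RGB hand gesture images and labels.
images = torch.randn(32, 3, 64, 64, device=device)
labels = torch.randint(0, 24, (32,), device=device)

start = time.perf_counter()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"device={device}, one training step took {time.perf_counter() - start:.3f}s")
```

In this pattern, timing the same training loop on an Intel Xeon CPU and an Nvidia Tesla T4 GPU isolates the hardware contribution, since the architecture, data, and hyperparameters are held constant.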