Electromyography (EMG) is one of the most important biosignals for developing human–machine interfaces that translate muscle activity into motion commands, particularly in prosthetic and assistive robotic systems. However, the nonlinear characteristics of EMG, its susceptibility to noise, and its strong dependence on electrode placement make gesture classification challenging. This study classifies EMG signals for robotic hand control using a deep learning approach based on Convolutional Neural Networks (CNNs). The dataset consisted of 11,678 samples recorded from eight EMG channels across four hand gestures, preprocessed with a Butterworth filter and normalization before training a lightweight CNN architecture. Model performance was evaluated using accuracy, precision, recall, and F1-score. The proposed model achieved 93% accuracy, outperforming Support Vector Machines (SVM), k-Nearest Neighbors (k-NN), and Random Forests under identical experimental conditions. The novelty of this study lies in an efficient CNN architecture that extracts spatio-temporal features end-to-end from raw EMG signals for real-time robotic control. Despite these promising results, the study is limited to four gesture classes, and classification remains sensitive to variability in electrode placement. These findings provide a foundation for developing more responsive, adaptive, and easily deployable prosthetic and robotic control systems.
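To make the pipeline concrete, the sketch below shows one plausible realization of the steps named above: Butterworth bandpass filtering, per-channel normalization, and a lightweight 1D CNN over eight-channel EMG windows with four output classes. The channel and class counts come from the abstract; the sampling rate, filter order and cutoffs, window length, and layer sizes are illustrative assumptions, not the authors' reported configuration.

```python
# Illustrative sketch of the EMG classification pipeline described above.
# Hyperparameters (sampling rate, cutoffs, window length, layer sizes)
# are assumptions; the abstract does not specify them.
import numpy as np
from scipy.signal import butter, filtfilt
import torch
import torch.nn as nn

FS = 1000                      # assumed sampling rate (Hz)
LOW, HIGH = 20, 450            # assumed Butterworth bandpass cutoffs (Hz)
WINDOW = 200                   # assumed samples per classification window
N_CHANNELS, N_CLASSES = 8, 4   # from the abstract

def preprocess(raw):
    """Bandpass-filter and z-score normalize one (channels, samples) window."""
    b, a = butter(4, [LOW / (FS / 2), HIGH / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, raw, axis=-1)
    # Per-channel normalization to zero mean and unit variance.
    mean = filtered.mean(axis=-1, keepdims=True)
    std = filtered.std(axis=-1, keepdims=True) + 1e-8
    return (filtered - mean) / std

class LightweightCNN(nn.Module):
    """A small 1D CNN that learns spatio-temporal features end-to-end."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Two pooling stages halve the time axis twice: WINDOW // 4 samples remain.
        self.classifier = nn.Linear(32 * (WINDOW // 4), N_CLASSES)

    def forward(self, x):  # x: (batch, channels, samples)
        h = self.features(x)
        return self.classifier(h.flatten(1))

# Usage: classify one simulated EMG window.
window = preprocess(np.random.randn(N_CHANNELS, WINDOW))
x = torch.tensor(window.copy(), dtype=torch.float32).unsqueeze(0)
logits = LightweightCNN()(x)
print(logits.argmax(dim=1))  # predicted gesture class index
```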