Facial electromyography (EMG) signals offer a promising modality for intuitive human-machine interfaces (HMIs). The development of robust control systems, however, remains challenging owing to the inherent complexity, noise susceptibility, and significant inter-subject variability of EMG signals in the facial region. This study addresses these technical challenges by developing and validating an optimized deep learning framework for facial gesture recognition. The primary objective is to create a reliable classification model for five essential facial movements: 'Rest', 'Smile', 'Eyebrow Raise', 'Right Lip Movement', and 'Left Lip Movement', whose outputs serve as precise control inputs for assistive devices. The proposed methodology employs a systematic workflow comprising signal preprocessing (filtering, normalization, and segmentation) followed by automated hyperparameter optimization of a one-dimensional (1D) convolutional neural network (CNN). The experimental results demonstrate that the optimized model achieved a classification accuracy of 90% on internal test data, with the learning rate identified as the hyperparameter most critical to performance. Furthermore, validation of the model on entirely new participants yielded an accuracy of 71%; while this result underscores the persistent challenge of generalizing across different users, it establishes a reliable baseline. Ultimately, this work provides a validated, optimization-based framework built on low-cost instrumentation, thereby offering a substantial pathway towards more accessible and personalized hands-free assistive technologies that restore autonomy to individuals with severe motor impairments.
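To make the described workflow concrete, the following is a minimal sketch of a preprocessing-plus-1D-CNN pipeline of the kind outlined above. It is illustrative only: the channel count, sampling rate, window length, filter band, layer sizes, and the simple learning-rate sweep are assumptions standing in for the study's actual configuration and automated optimization procedure, none of which are specified in this abstract.

```python
# Illustrative sketch (assumed parameters): band-pass filtering, z-score
# normalization, fixed-length segmentation, and a small 1D CNN whose
# learning rate is exposed as the key hyperparameter.
import numpy as np
from scipy.signal import butter, filtfilt
import tensorflow as tf

FS = 1000          # assumed sampling rate (Hz)
WINDOW = 200       # assumed segment length (samples)
N_CHANNELS = 4     # assumed number of facial EMG channels
N_CLASSES = 5      # Rest, Smile, Eyebrow Raise, Right Lip, Left Lip

def preprocess(raw, low=20.0, high=450.0):
    """Band-pass filter, normalize, and segment a (samples, channels) recording."""
    b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, raw, axis=0)
    normalized = (filtered - filtered.mean(axis=0)) / (filtered.std(axis=0) + 1e-8)
    n_windows = normalized.shape[0] // WINDOW
    return normalized[: n_windows * WINDOW].reshape(n_windows, WINDOW, N_CHANNELS)

def build_cnn(learning_rate=1e-3):
    """Small 1D CNN classifier; the learning rate is a tunable argument because
    the study identifies it as the most influential hyperparameter."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, N_CHANNELS)),
        tf.keras.layers.Conv1D(32, 7, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(64, 5, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

if __name__ == "__main__":
    # Synthetic data stands in for recorded facial EMG.
    raw = np.random.randn(10 * FS, N_CHANNELS)
    X = preprocess(raw)
    y = np.random.randint(0, N_CLASSES, size=len(X))
    # A simple grid over learning rates stands in for the automated
    # hyperparameter optimization described in the paper.
    for lr in (1e-2, 1e-3, 1e-4):
        model = build_cnn(learning_rate=lr)
        history = model.fit(X, y, epochs=2, batch_size=16, verbose=0)
        print(f"lr={lr}: final training accuracy={history.history['accuracy'][-1]:.2f}")
```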