Brain-Computer Interfaces (BCIs) based on motor imagery (MI) offer a direct communication pathway for assistive technologies and neurorehabilitation. A significant challenge lies in the inherent non-stationarity and inter-subject variability of electroencephalography (EEG) signals, which limit the performance and adaptability of conventional systems. This paper proposes an adaptive BCI framework that leverages a hybrid Convolutional and Recurrent Neural Network (CNN-RNN) to dynamically learn spatio-temporal features from raw, multi-channel EEG data, with the aim of providing a lightweight and stable model for accurate MI classification. The model was designed for efficiency, using a streamlined architecture with only 41,860 parameters, and was evaluated on the public BCI Competition IV 2a dataset for four-class MI classification across nine subjects. It achieved a validation accuracy of 62.17%, well above the 25% chance level for four-class classification. Crucially, the model converged rapidly and maintained consistent performance without overfitting, while remaining computationally efficient. These results support the viability of lightweight, adaptive deep learning models for building more reliable and practical BCIs, and constitute a foundational step towards their application in clinical rehabilitation and smart device control.
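To make the described pipeline concrete, the sketch below shows one possible lightweight CNN-RNN for the 22-channel, 250 Hz, four-class setup of BCI Competition IV 2a, written in PyTorch. It is an illustrative assumption only: the class name, layer sizes, and the use of a GRU as the recurrent component are hypothetical choices and do not reproduce the paper's exact 41,860-parameter architecture, which is not specified in the abstract.

```python
import torch
import torch.nn as nn


class CNNGRUClassifier(nn.Module):
    """Illustrative lightweight CNN-RNN for 4-class motor imagery EEG.

    All layer sizes are assumptions for illustration; the paper's exact
    configuration (reported at ~41,860 parameters) is not given in the abstract.
    BCI Competition IV 2a trials: 22 EEG channels sampled at 250 Hz.
    """

    def __init__(self, n_channels: int = 22, n_classes: int = 4):
        super().__init__()
        # Temporal convolution over the raw signal, then a depthwise spatial
        # convolution across all EEG channels (a common EEG-CNN pattern).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),
            nn.Conv2d(16, 16, kernel_size=(n_channels, 1), groups=16),
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 8)),  # downsample in time
            nn.Dropout(0.25),
        )
        # Recurrent layer models the remaining temporal dynamics.
        self.rnn = nn.GRU(input_size=16, hidden_size=32, batch_first=True)
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time), e.g. (B, 22, 1000) for a 4 s trial at 250 Hz
        x = x.unsqueeze(1)                          # -> (B, 1, 22, T)
        feats = self.cnn(x)                         # -> (B, 16, 1, T')
        feats = feats.squeeze(2).permute(0, 2, 1)   # -> (B, T', 16)
        _, h = self.rnn(feats)                      # final hidden state (1, B, 32)
        return self.fc(h.squeeze(0))                # -> (B, n_classes) logits


if __name__ == "__main__":
    model = CNNGRUClassifier()
    print(sum(p.numel() for p in model.parameters()), "parameters")
    logits = model(torch.randn(8, 22, 1000))  # dummy batch of 8 trials
    print(logits.shape)                       # torch.Size([8, 4])
```

The convolutional front end extracts spatio-temporal features directly from the raw signal, and the recurrent layer summarizes the resulting feature sequence before classification; any comparable stacking of the two components would fit the CNN-RNN description in the abstract.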