Accurately decoding motor imagery (MI) brain-computer interface (BCI) tasks remains a challenge for both neuroscience research and clinical diagnosis. The limited subject-specific information and low signal-to-noise ratio of MI electroencephalography (EEG) signals make it difficult to decode users' movement intentions. In this study, we proposed an end-to-end deep learning model, a multi-branch spectral-temporal convolutional neural network with an efficient channel attention mechanism and LightGBM (MBSTCNN-ECA-LightGBM), to decode MI-EEG tasks. We first constructed a multi-branch CNN module to learn spectral-temporal domain features. We then added an efficient channel attention module to obtain more discriminative features. Finally, LightGBM was applied to decode the multi-class MI tasks. A within-subject cross-session training strategy was used to validate the classification results. The experimental results showed that the model achieved an average accuracy of 86% on the two-class MI-BCI data and 74% on the four-class MI-BCI data, outperforming current state-of-the-art methods. The proposed MBSTCNN-ECA-LightGBM efficiently decodes the spectral and temporal information of EEG, improving the performance of MI-based BCIs.
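The abstract describes a three-stage pipeline: a multi-branch spectral-temporal CNN feature extractor, an efficient channel attention (ECA) module, and a LightGBM classifier on the attended features. The sketch below is a minimal, hypothetical PyTorch/LightGBM illustration of that pipeline; the branch count, kernel lengths, layer widths, and input dimensions are assumptions for demonstration and are not the authors' published configuration.

```python
# Hypothetical sketch of an MBSTCNN-ECA-LightGBM-style pipeline.
# All hyperparameters below are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn
import lightgbm as lgb

class ECA(nn.Module):
    """Efficient channel attention: 1-D conv over channel-wise pooled statistics."""
    def __init__(self, k_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)

    def forward(self, x):                                   # x: (B, C, 1, T)
        y = self.pool(x).squeeze(-1).transpose(1, 2)        # (B, 1, C)
        y = torch.sigmoid(self.conv(y))                     # channel weights
        y = y.transpose(1, 2).unsqueeze(-1)                 # (B, C, 1, 1)
        return x * y                                        # reweight feature maps

class Branch(nn.Module):
    """One spectral-temporal branch: temporal conv followed by a spatial (electrode) conv."""
    def __init__(self, n_electrodes, temporal_kernel, out_ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, out_ch, (1, temporal_kernel), padding=(0, temporal_kernel // 2)),
            nn.BatchNorm2d(out_ch),
            nn.Conv2d(out_ch, out_ch, (n_electrodes, 1), groups=out_ch),  # spatial filtering
            nn.BatchNorm2d(out_ch),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),                           # temporal down-sampling
        )

    def forward(self, x):                                   # x: (B, 1, electrodes, samples)
        return self.net(x)

class MBSTCNN_ECA(nn.Module):
    """Multi-branch CNN feature extractor with efficient channel attention."""
    def __init__(self, n_electrodes=22, kernels=(16, 32, 64)):
        super().__init__()
        self.branches = nn.ModuleList(Branch(n_electrodes, k) for k in kernels)
        self.eca = ECA()

    def forward(self, x):
        feats = [b(x) for b in self.branches]
        t = min(f.shape[-1] for f in feats)                 # align branch lengths
        z = torch.cat([f[..., :t] for f in feats], dim=1)   # concat along channel axis
        z = self.eca(z)
        return z.flatten(1)                                 # one feature vector per trial

# Usage sketch: extract deep features, then decode the MI classes with LightGBM.
extractor = MBSTCNN_ECA(n_electrodes=22).eval()
eeg = torch.randn(64, 1, 22, 1000)                          # 64 synthetic trials
labels = torch.randint(0, 4, (64,)).numpy()                 # four-class MI labels
with torch.no_grad():
    features = extractor(eeg).numpy()
clf = lgb.LGBMClassifier(n_estimators=100)
clf.fit(features, labels)
```

In this reading of the abstract, the CNN and ECA stages act as a learned feature extractor and LightGBM replaces a softmax layer as the final decoder; in practice the CNN would be trained first (or jointly with an auxiliary head) before its features are handed to LightGBM.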
DOI: 10.1109/TNSRE.2023.3243992 (http://dx.doi.org/10.1109/TNSRE.2023.3243992)