Motor imagery electroencephalography (EEG) decoding is an essential part of brain-computer interfaces (BCIs), which help motor-disabled patients communicate with the outside world through external devices. Deep learning algorithms that use decomposed EEG spectra as inputs may omit important spatial dependencies and information at different temporal scales, leading to poor decoding performance. In this paper, we propose an end-to-end EEG decoding framework that employs raw multi-channel EEG as input and boosts decoding accuracy with a channel-projection mixed-scale convolutional neural network (CP-MixedNet) aided by amplitude-perturbation data augmentation. Specifically, the first block in CP-MixedNet learns primary spatial and temporal representations from EEG signals. The mixed-scale convolutional block then captures mixed-scale temporal information, which effectively reduces the number of training parameters while expanding the receptive fields of the network. Finally, based on the features extracted in the previous blocks, a classification block classifies the EEG tasks. Experiments are conducted on two public EEG datasets (BCI competition IV 2a and the High Gamma dataset) to validate the effectiveness of the proposed approach against state-of-the-art methods. The competitive results demonstrate that our proposed method is a promising solution for improving the decoding performance of motor imagery BCIs.
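The abstract does not specify the exact form of the amplitude-perturbation augmentation, so the following is only a minimal sketch of one plausible variant: each channel of a raw EEG trial is rescaled by a random factor close to 1.0, perturbing amplitude while leaving temporal structure intact. The function name `amplitude_perturb` and the parameter `sigma` are illustrative assumptions, not the paper's API.

```python
import numpy as np

def amplitude_perturb(eeg, sigma=0.1, rng=None):
    """Hypothetical amplitude-perturbation augmentation for raw EEG.

    eeg:   array of shape (channels, samples), one trial.
    sigma: std. dev. of the per-channel scaling factor around 1.0
           (an assumed parameterization, not from the paper).
    """
    rng = np.random.default_rng(rng)
    # Draw one scaling factor per channel; broadcasting applies it
    # across the whole time axis of that channel.
    scale = rng.normal(loc=1.0, scale=sigma, size=(eeg.shape[0], 1))
    return eeg * scale

# Usage: augment one 22-channel, 1000-sample trial
# (22 channels matches BCI competition IV 2a).
trial = np.random.default_rng(1).standard_normal((22, 1000))
augmented = amplitude_perturb(trial, sigma=0.1, rng=0)
```

Perturbing amplitude rather than timing is a natural choice for EEG, where inter-trial and inter-session amplitude variability is a dominant nuisance factor.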
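On the claim that the mixed-scale block expands receptive fields while reducing training parameters: one standard mechanism with this property is stacking small kernels at multiple dilation rates, where the receptive field grows with the sum of dilations but the parameter count grows only with the number of small kernels. Whether CP-MixedNet uses this exact mechanism is not stated in the abstract; the arithmetic below is a generic illustration, not the paper's architecture.

```python
# Receptive field of stacked dilated 1-D convolutions vs. a single
# wide kernel spanning the same temporal extent.
# A stack of kernel-size-k layers with dilations d_i has receptive
# field 1 + sum_i (k - 1) * d_i.

def receptive_field(kernel_size, dilations):
    return 1 + sum((kernel_size - 1) * d for d in dilations)

k = 3                      # small kernel, illustrative value
dilations = [1, 2, 4, 8]   # illustrative dilation schedule
rf = receptive_field(k, dilations)

params_stack = k * len(dilations)  # weights per channel pair across the stack
params_single = rf                 # one plain kernel covering the same span
print(rf, params_stack, params_single)  # 31 12 31
```

The stack covers a 31-sample receptive field with 12 weights per channel pair, versus 31 for a single plain convolution of the same span, which is the parameter-saving effect the abstract alludes to.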
DOI: http://dx.doi.org/10.1109/TNSRE.2019.2915621