Objective: Among brain-computer interface (BCI) paradigms, motor imagery (MI) has gained favor among researchers because it allows users to control external devices by imagining movements rather than actually performing them. This property holds important promise for clinical applications, especially stroke rehabilitation. Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) are two of the most popular neuroimaging techniques for acquiring MI signals from the brain. However, unimodal MI classification methods perform poorly due to the inherent limitations of EEG or fNIRS alone.

Approach: In this paper, we propose ECA-FusionNet, a new multimodal fusion classification method that combines the potentially complementary advantages of EEG and fNIRS. First, we design a feature extraction network that extracts spatio-temporal features from EEG-based and fNIRS-based MI signals. Then, we successively fuse the EEG and fNIRS representations at the feature level and the decision level to improve the adaptability and robustness of the model.
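To make the two fusion stages concrete: feature-level fusion concatenates per-modality feature vectors before classification, while decision-level fusion combines per-modality class probabilities. The sketch below is an illustration only, not the ECA-FusionNet architecture; the channel counts, the toy variance-based feature extractor, and the random linear classifiers are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed trial shapes: 30 EEG channels x 200 samples, 36 fNIRS channels x 100 samples
eeg_trial = rng.standard_normal((30, 200))
fnirs_trial = rng.standard_normal((36, 100))

def spatiotemporal_features(trial, n_windows=4):
    """Toy stand-in for a spatio-temporal feature extractor:
    per-channel variance (spatial) over consecutive time windows (temporal)."""
    windows = np.array_split(trial, n_windows, axis=1)
    return np.concatenate([w.var(axis=1) for w in windows])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# --- Feature-level fusion: concatenate the modality feature vectors ---
f_eeg = spatiotemporal_features(eeg_trial)      # 30 channels * 4 windows = 120
f_fnirs = spatiotemporal_features(fnirs_trial)  # 36 channels * 4 windows = 144
fused_features = np.concatenate([f_eeg, f_fnirs])  # length 264

# --- Decision-level fusion: average per-modality class probabilities ---
# Hypothetical linear classifiers for a 2-class MI task (e.g. left vs right hand)
W_eeg = rng.standard_normal((2, f_eeg.size)) * 0.1
W_fnirs = rng.standard_normal((2, f_fnirs.size)) * 0.1
p_eeg = softmax(W_eeg @ f_eeg)
p_fnirs = softmax(W_fnirs @ f_fnirs)
p_fused = 0.5 * (p_eeg + p_fnirs)  # equal-weight soft vote over modalities

print(fused_features.shape, p_fused)
```

In practice the fusion weights and the classifiers would be learned jointly, but the two-stage structure (concatenate features, then combine decisions) is the same.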

Main Results: We validate the performance of ECA-FusionNet on a publicly available EEG-fNIRS dataset. The results show that ECA-FusionNet outperforms unimodal classification methods, as well as existing fusion classification methods, in terms of classification accuracy for MI.

Significance: ECA-FusionNet may provide a useful reference for the field of multimodal fusion classification.

Source
http://dx.doi.org/10.1088/1741-2552/adaf58
