Emotion recognition is a critical research topic within affective computing, with potential applications across many domains. EEG-based emotion recognition built on deep learning frameworks has been applied effectively and achieves commendable performance. However, existing deep learning models struggle to capture the spatial activity features and the spatial topology features of EEG signals simultaneously. To address this challenge, a domain-adaptation spatial-feature perception network, named DSP-EmotionNet, is proposed for cross-subject EEG emotion recognition. First, a spatial activity topological feature extractor module, named SATFEM, is designed to capture both the spatial activity features and the spatial topology features of EEG signals. Then, using SATFEM as the feature extractor, DSP-EmotionNet is constructed, significantly improving accuracy in cross-subject EEG emotion recognition. The proposed model surpasses state-of-the-art methods, achieving average recognition accuracies of 82.5% on the SEED dataset and 65.9% on the SEED-IV dataset.
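The abstract describes fusing two kinds of spatial features from multi-channel EEG: activity features (per-channel signal energy) and topology features (inter-channel relationships). The paper does not give implementation details, so the sketch below is only an illustrative assumption of such a fusion: per-channel log-variance stands in for activity features, and a channel correlation matrix stands in for a functional-connectivity topology. The function names and feature choices are hypothetical, not the actual SATFEM design.

```python
import numpy as np

def spatial_activity_features(eeg):
    # eeg: array of shape (channels, samples).
    # Per-channel log-variance as a simple stand-in for
    # activity (e.g. band-power) features.
    return np.log(eeg.var(axis=1) + 1e-8)

def spatial_topology_features(eeg):
    # Channel-by-channel correlation matrix as a stand-in for a
    # functional-connectivity graph; keep the upper triangle only,
    # since the matrix is symmetric.
    corr = np.corrcoef(eeg)
    iu = np.triu_indices(corr.shape[0], k=1)
    return corr[iu]

def fused_features(eeg):
    # Concatenate both views into one feature vector, mirroring the
    # idea of capturing activity and topology simultaneously.
    return np.concatenate([spatial_activity_features(eeg),
                           spatial_topology_features(eeg)])

rng = np.random.default_rng(0)
segment = rng.standard_normal((62, 200))  # 62 channels, as in SEED recordings
feats = fused_features(segment)
print(feats.shape)  # 62 activity + 62*61/2 topology values -> (1953,)
```

In a learned model such as DSP-EmotionNet, hand-crafted statistics like these would be replaced by trainable layers, but the fused vector illustrates the two feature families the abstract contrasts.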
Source links:

- PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11685119
- DOI: http://dx.doi.org/10.3389/fnhum.2024.1471634