Emotion recognition from electroencephalography (EEG) requires computational models that capture the crucial features of the emotional response to external stimuli. Spatial, spectral, and temporal information are all relevant to emotion recognition; however, learning temporal dynamics is challenging, and efficient approaches for capturing such information are lacking. In this work, we present a deep learning framework, MTDN, designed to capture spectral features with a filterbank module and to learn spatial features with a spatial convolution block. Multiple temporal dynamics are jointly learned with parallel long short-term memory (LSTM) embedding and self-attention modules: the LSTM module embeds the time segments, and self-attention then learns the temporal dynamics by intercorrelating every embedded time segment. The multiple temporal dynamics representations are then aggregated to form the final extracted features for classification. We evaluate the proposed framework on the publicly available DEAP dataset and compare MTDN with previously published results. The results demonstrate improvement over current state-of-the-art methods on the valence dimension of the DEAP dataset.
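To make the described pipeline concrete, below is a minimal PyTorch sketch of an MTDN-style model. Everything beyond what the abstract states is an assumption: the filterbank module is approximated by a grouped temporal convolution, segments are mean-pooled before the LSTM, two parallel LSTM/self-attention branches are aggregated by concatenation, and all layer sizes (32 electrodes, 4 bands, 15 segments, 64-dimensional embeddings) are hypothetical placeholders rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class MTDNSketch(nn.Module):
    """Illustrative MTDN-style model; layer sizes are placeholders."""

    def __init__(self, n_channels=32, n_bands=4, n_segments=15,
                 embed_dim=64, n_heads=4, n_branches=2, n_classes=2):
        super().__init__()
        self.n_segments = n_segments
        # Filterbank module: approximated here by a grouped temporal
        # convolution producing n_bands spectral streams per electrode.
        self.filterbank = nn.Conv1d(n_channels, n_channels * n_bands,
                                    kernel_size=15, padding=7,
                                    groups=n_channels)
        # Spatial convolution block: a 1x1 convolution mixing electrodes.
        self.spatial = nn.Conv1d(n_channels * n_bands, embed_dim,
                                 kernel_size=1)
        # Parallel temporal branches: each embeds the time segments with an
        # LSTM, then intercorrelates the embeddings with self-attention.
        self.branches = nn.ModuleList(
            nn.ModuleDict({
                "lstm": nn.LSTM(embed_dim, embed_dim, batch_first=True),
                "attn": nn.MultiheadAttention(embed_dim, n_heads,
                                              batch_first=True),
            }) for _ in range(n_branches)
        )
        self.classifier = nn.Linear(embed_dim * n_branches, n_classes)

    def forward(self, x):
        # x: (batch, channels, time) raw EEG.
        h = self.spatial(self.filterbank(x))          # (batch, embed_dim, time)
        # Split the time axis into segments; mean-pool within each segment.
        segs = h.chunk(self.n_segments, dim=-1)
        seg_feats = torch.stack([s.mean(dim=-1) for s in segs], dim=1)
        reps = []
        for branch in self.branches:
            emb, _ = branch["lstm"](seg_feats)        # segment embeddings
            ctx, _ = branch["attn"](emb, emb, emb)    # temporal dynamics
            reps.append(ctx.mean(dim=1))              # pool over segments
        # Aggregate the multiple temporal-dynamics representations.
        return self.classifier(torch.cat(reps, dim=-1))

model = MTDNSketch()
logits = model(torch.randn(8, 32, 1920))  # 8 trials, 32 channels, 15 s @ 128 Hz
print(logits.shape)  # torch.Size([8, 2])
```

The input shape mirrors DEAP-style recordings (32 EEG channels at 128 Hz) purely to show the tensor flow, and n_classes=2 assumes the common binary high/low split of the valence dimension; neither detail is specified in the abstract.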

Source: http://dx.doi.org/10.1109/EMBC40787.2023.10340760
