Decoding emotional states from human brain activity plays an important role in brain-computer interfaces. Existing emotion decoding methods have two main limitations: first, they decode only a single, coarse-grained emotion category from a brain activity pattern, which is inconsistent with the complexity of human emotional expression; second, they ignore the discrepancy in emotion expression between the left and right hemispheres of the human brain. In this article, we propose a novel multi-view multi-label hybrid model for fine-grained emotion decoding (up to 80 emotion categories) that learns expressive neural representations and predicts multiple emotional states simultaneously. Specifically, the generative component of our hybrid model is parameterized by a multi-view variational autoencoder, in which we regard the brain activity of the left and right hemispheres and their difference as three distinct views and use a product-of-experts mechanism in its inference network. The discriminative component of our hybrid model is implemented by a multi-label classification network with an asymmetric focal loss. For more accurate emotion decoding, we first adopt a label-aware module for emotion-specific neural representation learning and then model the dependency among emotional states with a masked self-attention mechanism. Extensive experiments on two visually evoked emotional datasets demonstrate the superiority of our method.
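The product-of-experts (PoE) fusion used in the inference network has a convenient closed form when each view's approximate posterior is Gaussian: the fused precision is the sum of the per-view precisions, and the fused mean is the precision-weighted average of the per-view means. The sketch below illustrates this fusion for the paper's three views (left hemisphere, right hemisphere, and their difference); the tensor names, shapes, and the inclusion of a standard-normal prior expert are illustrative assumptions, not the authors' released code.

```python
import torch

def product_of_experts(mu, logvar, eps=1e-8):
    """Fuse per-view Gaussian posteriors N(mu_v, var_v) via a product of experts.

    For Gaussian experts the product is again Gaussian, with
        precision = sum_v 1/var_v,   mean = (sum_v mu_v/var_v) / precision.

    mu, logvar: tensors of shape (num_views, batch, latent_dim).
    """
    precision = torch.exp(-logvar)            # 1 / var_v for each view
    joint_precision = precision.sum(dim=0)    # precisions add under the product
    joint_var = 1.0 / (joint_precision + eps)
    joint_mu = (mu * precision).sum(dim=0) * joint_var
    return joint_mu, torch.log(joint_var)

# Illustrative usage: three views (left hemisphere, right hemisphere, difference)
# plus a standard-normal "prior expert" N(0, I), a common choice in multi-view
# VAEs (an assumption here, not stated in the abstract).
batch, latent = 4, 32
mu_views = torch.randn(3, batch, latent)      # per-view encoder means
logvar_views = torch.zeros(3, batch, latent)  # per-view encoder log-variances
prior_mu = torch.zeros(1, batch, latent)
prior_logvar = torch.zeros(1, batch, latent)
mu_z, logvar_z = product_of_experts(
    torch.cat([prior_mu, mu_views], dim=0),
    torch.cat([prior_logvar, logvar_views], dim=0),
)
```

Similarly, an asymmetric focal loss down-weights easy negatives more aggressively than positives, which helps with the extreme label imbalance of an 80-category multi-label problem. A minimal sketch follows, assuming an asymmetric variant of the focal loss with separate focusing parameters gamma_pos and gamma_neg (and omitting the probability-margin shift used in some formulations); the abstract does not give the exact form the authors use.

```python
import torch

def asymmetric_focal_loss(logits, targets, gamma_pos=0.0, gamma_neg=4.0):
    """Binary multi-label loss with asymmetric focusing (a sketch).

    logits, targets: tensors of shape (batch, num_labels), targets in {0, 1}.
    """
    p = torch.sigmoid(logits)
    loss_pos = targets * (1 - p).pow(gamma_pos) * torch.log(p.clamp(min=1e-8))
    loss_neg = (1 - targets) * p.pow(gamma_neg) * torch.log((1 - p).clamp(min=1e-8))
    return -(loss_pos + loss_neg).mean()
```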

Source
http://dx.doi.org/10.1109/TNNLS.2022.3217767

Publication Analysis

Top Keywords

emotion decoding (16)
brain activity (16)
human brain (12)
emotional states (12)
hybrid model (12)
multi-view multi-label (8)
emotion (8)
fine-grained emotion (8)
emotion categories (8)
left hemispheres (8)
