In this paper, a novel EEG emotion recognition method based on a residual graph attention neural network is proposed. The method constructs a three-dimensional sparse feature matrix according to the relative positions of the electrode channels and feeds it into a residual network to extract high-level abstract features that encode electrode spatial position information. In parallel, an adjacency matrix representing the connection relationships among the electrode channels is constructed, and the time-domain features of the multi-channel EEG are modeled as a graph. A graph attention neural network then learns the intrinsic connections between EEG channels located in different brain regions from the adjacency matrix and the constructed graph-structured data. Finally, the high-level abstract features extracted by the two networks are fused to classify the emotional state. Experiments are carried out on the DEAP dataset. The results show that the spatial-domain information of the electrode channels and the intrinsic connections between different channels carry salient information related to emotional state, and that the proposed model can effectively fuse this information to improve the performance of multi-channel EEG emotion recognition.
Full text: PMC http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10407101 | DOI http://dx.doi.org/10.3389/fnins.2023.1135850
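The abstract above describes attending over electrode channels using an adjacency matrix. As a rough illustration of the graph-attention step only (not the authors' implementation; the layer sizes, weights, and toy adjacency below are arbitrary), a single graph-attention layer over EEG channels can be sketched in NumPy:

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0.0, x, slope * x)

def gat_layer(H, A, W, a):
    """Single graph-attention layer over EEG electrode channels.

    H : (N, F)   per-channel feature vectors (N electrodes)
    A : (N, N)   binary adjacency matrix of channel connections
    W : (F, F2)  shared linear projection
    a : (2*F2,)  attention weight vector
    """
    Wh = H @ W                       # project features: (N, F2)
    F2 = Wh.shape[1]
    # e[i, j] = LeakyReLU(a^T [Wh_i || Wh_j]), computed without explicit concat
    e = leaky_relu((Wh @ a[:F2])[:, None] + (Wh @ a[F2:])[None, :])
    e = np.where(A > 0, e, -np.inf)  # attend only to connected channels
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)
    return alpha @ Wh                # aggregate neighbour features

# Toy example: 4 electrode channels, 8 time-domain features each.
rng = np.random.default_rng(0)
N, F, F2 = 4, 8, 16
H = rng.standard_normal((N, F))
A = np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)  # chain of neighbours + self-loops
out = gat_layer(H, A, rng.standard_normal((F, F2)), rng.standard_normal(2 * F2))
print(out.shape)  # (4, 16)
```

Masking the attention scores with `-inf` before the softmax restricts each channel to its neighbours in the adjacency matrix; self-loops keep every row well-defined.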
Wearable Technol
November 2024
Embedded Systems and Robotics Lab, Tezpur University, Tezpur, Assam, India.
Electromyogram (EMG) has been a fundamental approach for prosthetic hand control. However, it is limited by the functionality of residual muscles and by muscle fatigue. Exploring temporal shifts in brain networks and accurately classifying noninvasive electroencephalogram (EEG) signals for prosthetic hand control currently remain challenging.
Int J Audiol
January 2025
Department of Neurosciences, Research Group ExpORL, KU Leuven, Leuven, Belgium.
Objective: Auditory steady-state responses (ASSRs) to stimuli modulated at different frequencies may differ between children and adults. These differences in response characteristics or latency may reflect developmental changes. This study investigates age-related differences in the response strength, latencies, and hemispheric laterality indices of ASSRs for different modulation frequencies.
Sensors (Basel)
December 2024
School of Electrical Engineering, University of Belgrade, 11000 Belgrade, Serbia.
Traditional tactile brain-computer interfaces (BCIs), particularly those based on steady-state somatosensory-evoked potentials, face challenges such as lower accuracy, reduced bit rates, and the need for spatially distant stimulation points. In contrast, using transient electrical stimuli offers a promising alternative for generating tactile BCI control signals: somatosensory event-related potentials (sERPs). This study aimed to optimize the performance of a novel electrotactile BCI by employing advanced feature extraction and machine learning techniques on sERP signals for the classification of users' selective tactile attention.
Brain Sci
December 2024
West China Institute of Children's Brain and Cognition, Chongqing University of Education, Chongqing 400065, China.
Background: Emotions play a crucial role in people's lives, profoundly affecting their cognition, decision-making, and interpersonal communication. Emotion recognition based on brain signals has become a significant challenge in the fields of affective computing and human-computer interaction.
Methods: To address the inaccurate feature extraction and low accuracy of existing deep learning models in emotion recognition, this paper proposes DACB, a multi-channel automatic classification model for emotional EEG signals based on dual attention mechanisms, convolutional neural networks, and bidirectional long short-term memory networks.
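The DACB description above combines dual attention with CNN and BiLSTM stages. A minimal NumPy sketch of the dual-attention idea alone (channel weighting followed by temporal weighting) may help fix the concept; the weight matrices here are hypothetical placeholders, and the CNN and BiLSTM stages are omitted entirely:

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def dual_attention(X, Wc, Wt):
    """Channel attention followed by temporal attention on an EEG segment.

    X  : (C, T) segment (C channels, T time samples)
    Wc : (C, C) channel-attention weights (hypothetical, random here)
    Wt : (T, T) temporal-attention weights (hypothetical, random here)
    """
    c_w = softmax(Wc @ X.mean(axis=1))   # one weight per channel
    Xc = X * c_w[:, None]                # re-weight channels
    t_w = softmax(Wt @ Xc.mean(axis=0))  # one weight per time step
    return Xc * t_w[None, :]             # re-weight time steps

rng = np.random.default_rng(1)
C, T = 32, 128                           # e.g. 32 channels, 128 samples
X = rng.standard_normal((C, T))
Y = dual_attention(X, rng.standard_normal((C, C)), rng.standard_normal((T, T)))
print(Y.shape)  # (32, 128)
```

In a full model the attention weights would be learned, and the re-weighted segment would feed the convolutional and recurrent stages.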
Brain Sci
November 2024
Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Harvard University, Cambridge, MA 02215, USA.
Manually labeling sleep stages is time-consuming and labor-intensive, making automatic sleep staging methods crucial for practical sleep monitoring. While both single-channel and multi-channel data are commonly used in automatic sleep staging, limited research has adequately investigated the differences in their effectiveness. In this study, four public datasets (Sleep-SC, APPLES, SHHS1, and MrOS1) are utilized, and an advanced hybrid attention neural network composed of a multi-branch convolutional neural network and a multi-head attention mechanism is employed for automatic sleep staging.
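The hybrid network described above pairs a multi-branch CNN with a multi-head attention mechanism. The attention component is standard scaled dot-product multi-head self-attention; a self-contained NumPy sketch follows (the sequence length, feature width, and head count are arbitrary, not the paper's configuration, and the CNN branches are omitted):

```python
import numpy as np

def softmax(x, axis=-1):
    z = np.exp(x - x.max(axis=axis, keepdims=True))
    return z / z.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, n_heads):
    """Scaled dot-product multi-head self-attention.

    X : (T, D) sequence of T feature vectors of width D
    Wq, Wk, Wv : (D, D) projections; D must be divisible by n_heads
    """
    T, D = X.shape
    d = D // n_heads
    # Project, then split into heads: (n_heads, T, d).
    split = lambda M: (X @ M).reshape(T, n_heads, d).transpose(1, 0, 2)
    Q, K, V = split(Wq), split(Wk), split(Wv)
    att = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(d), axis=-1)  # (h, T, T)
    return (att @ V).transpose(1, 0, 2).reshape(T, D)              # concat heads

rng = np.random.default_rng(2)
T, D, h = 10, 32, 4                      # 10 epochs, 32-dim features, 4 heads
X = rng.standard_normal((T, D))
out = multi_head_attention(X, *[rng.standard_normal((D, D)) for _ in range(3)], h)
print(out.shape)  # (10, 32)
```

In a sleep-staging pipeline, each of the T positions would typically be the CNN-extracted feature vector of one 30-second epoch, letting attention relate epochs across the night.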