Emotion recognition is one of the most important problems in human-computer interaction (HCI), neuroscience, and psychology. It is generally accepted that emotion recognition from neural data such as electroencephalography (EEG) signals, functional magnetic resonance imaging (fMRI), and near-infrared spectroscopy (NIRS) is more reliable and accurate than detection from behavioural cues such as speech, body language, and facial expressions. EEG signals in particular are bioelectrical signals frequently used in emotion recognition because of the many advantages they offer. This study proposes an improved approach for EEG-based emotion recognition on a newly published, publicly available dataset, VREED. Differential entropy (DE) features were extracted from four wavebands (theta 4-8 Hz, alpha 8-13 Hz, beta 13-30 Hz, and gamma 30-49 Hz) to classify two emotional states (positive/negative). Five classifiers, namely Support Vector Machine (SVM), k-Nearest Neighbor (kNN), Naïve Bayes (NB), Decision Tree (DT), and Logistic Regression (LR), were employed with the DE features for automated classification of the two states. The best average accuracy, 76.22% ± 2.06, was obtained with the SVM classifier. Moreover, the results show that the highest average accuracy was produced by the gamma band, consistent with previous reports in EEG-based emotion recognition.
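The feature pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the standard closed-form DE of a Gaussian signal, 0.5·ln(2πeσ²), computed per channel within each waveband after bandpass filtering; the sampling rate, filter order, and the synthetic stand-in data (VREED loading is omitted) are all assumptions for the demo.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Band edges (Hz) as given in the abstract
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 49)}
FS = 128  # assumed sampling rate for this sketch

def differential_entropy(x):
    # For a Gaussian signal, DE = 0.5 * ln(2 * pi * e * sigma^2)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def de_features(eeg, fs=FS):
    # eeg: (n_channels, n_samples); returns one DE value per channel per band
    feats = []
    for lo, hi in BANDS.values():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, eeg, axis=-1)
        feats.extend(differential_entropy(ch) for ch in filtered)
    return np.array(feats)

# Synthetic demo: two "classes" of 8-channel noise epochs that differ in variance,
# standing in for positive/negative EEG trials
rng = np.random.default_rng(0)
X = np.array([de_features(rng.normal(scale=1 + label, size=(8, 512)))
              for label in (0, 1) for _ in range(20)])
y = np.repeat([0, 1], 20)

# SVM with stratified 5-fold cross-validation on the DE features
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
```

With 8 channels and 4 bands, each epoch yields a 32-dimensional DE feature vector; on real data the same `de_features` call would be applied to each VREED trial before classification.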
Full text: PMC (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9601226) | DOI (http://dx.doi.org/10.3390/diagnostics12102508)
Indian J Psychol Med
January 2025
Dept. of Psychiatry, VMMC and Safdarjung Hospital, New Delhi, India.
Background: Facial emotion recognition is one of the significant domains of social cognition that underlie social interactions. Deficits in this domain can impair functional outcomes in individuals with schizophrenia by distorting judgments of others and reducing the capacity to function. We aimed to assess facial emotion recognition deficits in individuals with schizophrenia in comparison to healthy individuals and to examine their association with clinical and demographic profiles.
Behav Res Methods
January 2025
Department of Psychology, University of Quebec at Trois-Rivières, Trois-Rivières, Canada.
Frequently, we perceive emotional information through multiple channels (e.g., face, voice, posture).
JMIR Form Res
December 2024
Department of Child and Adolescent Psychiatry, Schneider Children's Medical Center, Petach Tikvah, Israel.
Background: The prevalence of mental health disorders among children and adolescents presents a significant public health challenge. Children exposed to armed conflicts are at a particularly high risk of developing mental health problems, necessitating prompt and robust intervention. The acute need for early intervention in these situations is well recognized, as timely support can mitigate long-term negative outcomes.
Scand J Psychol
January 2025
Department of Psychology and Behavioural Sciences, Aarhus University, Aarhus, Denmark.
The concept of social invisibility describes the devaluation of the perceived social and personal worth of an individual. This paper presents the theoretical foundation for this construct, and the development and validation of the "Invisibility Scale" capturing experiences of and needs for social (in)visibility within (i) intimate, (ii) legal, and (iii) communal relations. We developed and validated the Invisibility Scale in two studies.
Cogn Emot
January 2025
Department of Psychology, University of Wisconsin - Madison, Madison, WI, USA.
People routinely use facial expressions to communicate successfully and to regulate others' behaviour, yet modelling the form and meaning of these facial behaviours has proven surprisingly complex. One reason for this difficulty may lie in an over-reliance on the assumptions inherent in existing theories of facial expression, specifically that (1) there is a putative set of facial expressions that signal an internal state of emotion, (2) patterns of facial movement have been empirically linked to the prototypical emotions in this set, and (3) static, non-social, posed images from convenience samples are adequate to validate the first two assumptions. These assumptions have guided the creation of datasets, which are then used to train unrepresentative computational models of facial expression.