An investigation of facial emotion recognition impairments in alexithymia and its neural correlates.

Behav Brain Res

Department of Psychiatry and Psychotherapy, University of Bonn, Germany; Department of Psychosomatic Medicine and Psychotherapy, LWL-University Clinic Bochum, Ruhr-University Bochum, Germany.

Published: September 2014

Alexithymia is a personality trait that involves difficulties identifying emotions and describing feelings. This difficulty is hypothesized to extend to facial emotion recognition, but little is known about the possible neural correlates of this assumed deficit. We therefore tested thirty-seven healthy subjects with either a relatively high or low degree of alexithymia (HDA versus LDA), who performed a reliable and standardized test of facial emotion recognition (FEEL, Facially Expressed Emotion Labeling) during functional MRI. LDA subjects had significantly better emotion-recognition scores and showed relatively more activity in several brain areas associated with alexithymia and emotional awareness (anterior cingulate cortex) and in the extended system of facial perception concerned with aspects of social communication and emotion (amygdala, insula, striatum). Additionally, LDA subjects had more activity in the visual area of social perception (posterior part of the superior temporal sulcus) and the inferior frontal cortex. HDA subjects, on the other hand, exhibited greater activity in the superior parietal lobule. With differences in behaviour and brain responses between two groups of otherwise healthy subjects, our results indirectly support recent conceptualizations and epidemiological data suggesting that alexithymia is a dimensional personality trait apparent in clinically healthy subjects rather than a categorical diagnosis applicable only to clinical populations.


Source
DOI: http://dx.doi.org/10.1016/j.bbr.2014.05.069

Publication Analysis

Top Keywords

emotion recognition (16)
facial emotion (12)
healthy subjects (12)
neural correlates (8)
personality trait (8)
LDA subjects (8)
emotion (6)
subjects (6)
alexithymia (5)
investigation facial (4)

Similar Publications

Determination of the Time-frequency Features for Impulse Components in EEG Signals.

Neuroinformatics

January 2025

Institute of Mathematics, University of Kassel, Heinrich-Plett-Str. 40, Kassel, 34132, Germany.

Accurately identifying the timing and frequency characteristics of impulse components in EEG signals is essential but limited by the Heisenberg uncertainty principle. Inspired by the visual system's ability to identify objects and their locations, we propose a new method that integrates a visual system model with wavelet analysis to calculate both time and frequency features of local impulses in EEG signals. We develop a mathematical model based on invariant pattern recognition by the visual system, combined with wavelet analysis using Krawtchouk functions as the mother wavelet.
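The abstract above describes locating impulse components of EEG signals in both time and frequency via wavelet analysis. As a rough illustration only, and not the authors' method (their Krawtchouk-based mother wavelet and visual-system model are not reproduced here), the following sketch uses a plain Morlet continuous wavelet transform implemented with NumPy to localize a synthetic impulse; all signal parameters are assumptions.

```python
# Hypothetical sketch: localizing an impulse in a noisy EEG-like trace with a
# Morlet continuous wavelet transform. This is a generic CWT, NOT the paper's
# Krawtchouk-wavelet method; all parameters below are illustrative assumptions.
import numpy as np

def morlet_cwt(signal, widths, w0=5.0):
    """CWT with a real, symmetric Morlet mother wavelet at the given scales."""
    out = np.empty((len(widths), len(signal)))
    for i, s in enumerate(widths):
        half = int(10 * s)                      # wavelet support per scale
        t = np.arange(-half, half + 1) / s
        wavelet = np.cos(w0 * t) * np.exp(-t**2 / 2) / np.sqrt(s)
        out[i] = np.convolve(signal, wavelet, mode="same")
    return out

# Synthetic trace: low-level noise plus a sharp impulse at sample 300.
rng = np.random.default_rng(0)
x = 0.1 * rng.standard_normal(1000)
x[300] += 5.0

widths = np.arange(1, 16)                       # scales to analyze
coeffs = np.abs(morlet_cwt(x, widths))
scale_idx, time_idx = np.unravel_index(np.argmax(coeffs), coeffs.shape)
print(time_idx)  # energy maximum falls at (or very near) the impulse
```

The CWT trades time resolution against frequency resolution per scale, which is exactly the Heisenberg-type limit the paper aims to work around with its visual-system-inspired model.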


Cognitive mechanisms of aversive prediction error-induced memory enhancements.

J Exp Psychol Gen

January 2025

Department of Cognitive Psychology, Institute of Psychology, Universität Hamburg.

While prediction errors (PEs) have long been recognized as critical in associative learning, emerging evidence indicates their significant role in episodic memory formation. This series of four experiments sought to elucidate the cognitive mechanisms underlying the enhancing effects of PEs related to aversive events on memory for surrounding neutral events. Specifically, we aimed to determine whether these PE effects are specific to predictive stimuli preceding the PE or if PEs create a transient window of enhanced, unselective memory formation.


Biological, linguistic, and individual factors govern voice quality.

J Acoust Soc Am

January 2025

USC Viterbi School of Engineering, University of Southern California, Los Angeles, California 90089-1455, USA.

Voice quality serves as a rich source of information about speakers, providing listeners with impressions of identity, emotional state, age, sex, reproductive fitness, and other biologically and socially salient characteristics. Understanding how this information is transmitted, accessed, and exploited requires knowledge of the psychoacoustic dimensions along which voices vary, an area that remains largely unexplored. Recent studies of English speakers have shown that two factors related to speaker size and arousal consistently emerge as the most important determinants of quality, regardless of who is speaking.


Multimodal recognition systems must integrate color-emotion space information from multiple feature sources, and fusing this information effectively presents a significant challenge. This article proposes a three-dimensional (3D) color-emotion space visual feature extraction model for multimodal data integration, based on an improved Gaussian mixture model, to address this challenge. Unlike traditional methods, which often struggle with redundant information and high model complexity, our approach optimizes feature fusion by employing entropy and visual feature sequences.
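The abstract mentions fusing multi-source features using entropy. As a minimal, hypothetical sketch only (the article's improved Gaussian mixture model is not reproduced; the inverse-entropy weighting rule and all feature values below are assumptions), one common entropy-based fusion idea is to down-weight sources whose feature distributions are more diffuse:

```python
# Hypothetical sketch: entropy-weighted fusion of same-length feature vectors
# from multiple sources. Lower-entropy (more concentrated) sources get higher
# weight. This is NOT the article's model; the weighting rule is an assumption.
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a nonnegative vector, normalized to sum 1."""
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def fuse(sources):
    """Weighted average of feature vectors, with weight = 1 / entropy."""
    weights = np.array([1.0 / entropy(np.abs(s) + 1e-12) for s in sources])
    weights /= weights.sum()
    return np.average(np.stack(sources), axis=0, weights=weights)

color_feat = np.array([0.9, 0.05, 0.05])   # concentrated -> low entropy
texture_feat = np.array([0.4, 0.3, 0.3])   # diffuse -> high entropy
fused = fuse([color_feat, texture_feat])
```

Here the concentrated color features dominate the fused vector, which is the intended behavior of entropy-based weighting: noisier, less informative sources contribute less.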

