Biases in facial and vocal emotion recognition in chronic schizophrenia.

Front Psychol

EA 4712 'Behavior and Basal Ganglia' Laboratory, Université de Rennes 1, Rennes, France; Psychiatry Unit, Guillaume Régnier Hospital, Rennes, France.

Published: September 2014

There has been extensive research on impaired emotion recognition in schizophrenia in the facial and vocal modalities. The literature points to biases toward non-relevant emotions for emotional faces, but few studies have examined biases in emotion recognition across different modalities (facial and vocal). To test emotion recognition biases, we exposed 23 patients with stabilized chronic schizophrenia and 23 healthy controls (HCs) to emotional facial and vocal tasks, asking them to rate emotional intensity on visual analog scales. We showed that patients with schizophrenia provided higher intensity ratings on the non-target scales (e.g., the surprise scale for fear stimuli) than HCs on both tasks. Furthermore, with the exception of neutral vocal stimuli, they provided the same intensity ratings on the target scales as the HCs. These findings suggest that patients with chronic schizophrenia show emotional biases when judging emotional stimuli in the visual and vocal modalities. These biases may stem from a basic sensory deficit, a high-order cognitive dysfunction, or both. The respective roles of prefrontal-subcortical circuitry and the basal ganglia are discussed.
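The key contrast in this design is between ratings on the target scale (the scale matching the presented emotion) and the non-target scales. A minimal sketch of that aggregation is given below; the table layout, column names, and values are illustrative assumptions, not the study's actual data or analysis code.

```python
import pandas as pd

# Hypothetical long-format ratings table; columns and values are assumptions
# for illustration only, not taken from the published study.
ratings = pd.DataFrame({
    "group":            ["patient", "patient", "control", "control"],
    "stimulus_emotion": ["fear",    "fear",    "fear",    "fear"],
    "rating_scale":     ["fear",    "surprise", "fear",   "surprise"],
    "intensity":        [72,        35,         70,       12],  # 0-100 visual analog scale
})

# A rating is "target" when the rating scale matches the emotion actually presented.
ratings["scale_type"] = (ratings["rating_scale"] == ratings["stimulus_emotion"]) \
    .map({True: "target", False: "non-target"})

# Mean intensity per group on target vs. non-target scales; the reported bias
# corresponds to elevated non-target ratings in the patient group.
summary = ratings.groupby(["group", "scale_type"])["intensity"].mean()
print(summary)
```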


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4141280
DOI: http://dx.doi.org/10.3389/fpsyg.2014.00900

Publication Analysis

Top Keywords (occurrence counts): facial vocal (16); emotion recognition (12); chronic schizophrenia (12); vocal modalities (8); intensity ratings (8); biases (6); vocal (6); emotional (6); schizophrenia (5); biases facial (4)

Similar Publications

Elderly Female with Cranial Polyneuropathy.

Indian J Otolaryngol Head Neck Surg

January 2025

Department of ENT, Government General Hospital, Karaikal, India.

The recrudescence of Varicella Zoster Virus in the head and neck region often manifests as Ramsay Hunt Syndrome, characterised by facial nerve palsy, a vesicular rash in the distribution of the facial nerve, and neuralgia. Rarely, it causes cranial polyneuropathy (CP). We present a case of herpes zoster with CP, highlighting the diagnostic challenges and management in a resource-limited setting.


Objective: This study aims to evaluate the feasibility and utility of a novel, open-source 3D printed simulator for practicing laryngeal surgery skills in the clinic setting.

Study Design: Device development and validation.

Setting: A tertiary medical center.


Primitive audiovisual integration of speech.

Atten Percept Psychophys

March 2025

Division of Pediatric Dentistry, Saint Barnabas Hospital, Bronx, NY, USA.

An unintelligible video recording of a face uttering a sentence and an unintelligible acoustic sinusoid following the frequency variation of a single vocal resonance of the utterance were intelligible when presented together at their veridical synchrony. The intelligibility resulted from audiovisual sensory integration and phonetic perceptual analysis, which depended neither on the separate resolution of linguistic impressions in each modality nor on closed-set reports about a single pair of minimal phonemic contrast features. Likewise, audiovisual integration could not be attributed to Gestalt-derived similarity principles applied unimodally or bimodally.


This study focuses on how different modalities of human communication can be used to distinguish between healthy controls and subjects with schizophrenia who exhibit strong positive symptoms. We developed a multi-modal schizophrenia classification system using audio, video, and text. Facial action units and vocal tract variables were extracted as low-level features from video and audio, respectively, and were then used to compute high-level coordination features that served as the inputs from the audio and video modalities, as sketched below.
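Coordination features of this kind are commonly built from correlations among time-delayed copies of each feature channel, summarized by the eigenvalue spectrum of the resulting correlation matrix. The sketch below shows one such variant on hypothetical NumPy arrays; the delay values, channel counts, and summary choice are assumptions for illustration, not the authors' actual pipeline.

```python
import numpy as np

def coordination_features(channels: np.ndarray, delays=(0, 1, 3, 7, 15)) -> np.ndarray:
    """Summarize coupling among feature channels (e.g., facial action units or
    vocal tract variables) as the eigenvalue spectrum of a correlation matrix
    built over time-delayed copies of each channel.

    channels: array of shape (n_frames, n_channels). Layout and delay values
    are illustrative assumptions, not the published configuration.
    """
    max_d = max(delays)
    delayed = []
    for d in delays:
        # Trim every delayed copy to a common window length so they align.
        delayed.append(channels[d:channels.shape[0] - (max_d - d)])
    stacked = np.concatenate(delayed, axis=1)           # (frames, n_channels * n_delays)
    corr = np.corrcoef(stacked, rowvar=False)           # correlation across all delayed channels
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]   # descending eigenvalue spectrum
    return eigvals

# Example: 300 frames of 10 facial action-unit intensities (synthetic data).
rng = np.random.default_rng(0)
aus = rng.normal(size=(300, 10))
print(coordination_features(aus)[:5])
```

A flatter eigenvalue spectrum indicates weaker coupling across channels, which is the kind of high-level signal a downstream classifier could use alongside text features.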


Depression, a prevalent mental health disorder with severe health and economic consequences, can be costly and difficult to detect. To alleviate this burden, recent research has been exploring the depression screening capabilities of deep learning (DL) models trained on videos of clinical interviews conducted by a virtual agent. Such DL models need to consider the challenges of modality representation, alignment, and fusion as well as small sample sizes.

