Adaptation to vocal expressions reveals multistep perception of auditory emotion.

J Neurosci

Institut de Neurosciences de La Timone, Unité Mixte de Recherche 7289, Aix-Marseille Université, Centre National de la Recherche Scientifique, 13385 Marseille, France, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, United Kingdom, and International Laboratory for Brain, Music and Sound Research, University of Montréal/McGill University, Montréal H3C 3J7, Canada.

Published: June 2014

The human voice carries speech as well as important nonlinguistic signals that influence our social interactions. Among the cues that impact our behavior and communication with other people is the perceived emotional state of the speaker. A theoretical framework for the neural processing stages of emotional prosody has suggested that auditory emotion is perceived in multiple steps (Schirmer and Kotz, 2006) involving low-level auditory analysis and integration of the acoustic information, followed by higher-level cognition. Empirical evidence for this multistep processing chain, however, is still sparse. We examined this question using functional magnetic resonance imaging and a continuous carry-over design (Aguirre, 2007) to measure brain activity while volunteers listened to non-speech affective vocalizations morphed on a continuum between anger and fear. Analyses dissociated neuronal adaptation effects induced by similarity in perceived emotional content between consecutive stimuli from those induced by their acoustic similarity. We found that bilateral voice-sensitive auditory regions, as well as the right amygdala, coded the physical difference between consecutive stimuli. In contrast, activity in bilateral anterior insulae, medial superior frontal cortex, precuneus, and subcortical regions such as bilateral hippocampi depended predominantly on the perceptual difference between morphs. Our results suggest that vocal affect recognition is a multistep process involving largely distinct neural networks. Amygdala and auditory areas predominantly code emotion-related acoustic information, while more anterior insular and prefrontal regions respond to the abstract, cognitive representation of vocal affect.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4051968
DOI: http://dx.doi.org/10.1523/JNEUROSCI.4820-13.2014

Publication Analysis

Top Keywords

auditory emotion (8), perceived emotional (8), consecutive stimuli (8), vocal affect (8), auditory (5), adaptation vocal (4), vocal expressions (4), expressions reveals (4), reveals multistep (4), multistep perception (4)

Similar Publications

The clinical trial Effect of Modulated Auditory Stimulation on Interaural Auditory Perception (NCT0544189) aimed to determine whether an auditory intervention (AI), "Bérard in 10", can enhance the effect of standard therapies for people with anxiety and/or depression. Design: unblinded, randomized, controlled clinical trial.

Location: Mejorada del Campo Health Centre, Madrid (Primary Care).


The extraction and analysis of pitch underpin speech and music recognition, sound segregation, and other auditory tasks. Perceptually, pitch can be represented as a helix composed of two factors: height monotonically aligns with frequency, while chroma cyclically repeats at doubled frequencies. Although the early perceptual and neurophysiological mechanisms for extracting pitch from acoustic signals have been extensively investigated, the equally essential subsequent stages that bridge to high-level auditory cognition remain less well understood.


Establishing the effect of limited English proficiency (LEP) on cognitive performance within linguistically diverse populations is central to cross-cultural neuropsychological assessments. The present study was designed to replicate previous research on cognitive profiles in Romanian-English bilinguals. Seventy-six participants (54 women, MAge = 23.


In cognitive science, the sensation of "groove" has been defined as the pleasurable urge to move to music. When listeners rate rhythmic stimuli on derived pleasure and urge to move, ratings on these dimensions are highly correlated. However, recent behavioural and brain imaging work has shown that these two components may be separable.


Making a Difference from Day One: The Urgent Need for Universal Neonatal Hearing Screening.

Children (Basel)

December 2024

Department of Audiology, Otology, Neurotology & Cochlear Implant Unit, Athens Pediatric Center, 15125 Athens, Greece.

Neonatal hearing screening (NHS) is a critical public health measure for early identification of hearing loss, ensuring timely access to interventions that can dramatically improve a child's language development, cognitive abilities, and social inclusion. Beyond clinical benefits, NHS provides long-term advantages in education and quality of life. Given that congenital hearing loss affects approximately 1-2 in every 1000 newborns worldwide, the case for universal screening is clear.

