How do humans discriminate emotion from non-emotion? The specific psychophysical cues and neural responses involved in resolving emotional information in sound are unknown. In this study we used a psychophysical discrimination paradigm combined with sparse-sampling fMRI to locate threshold responses to happy and sad acoustic stimuli. The fine structure and envelope of the auditory signals were covaried to manipulate emotional certainty. We report that emotion identification at threshold in music relies on fine structure cues. The auditory cortex was activated, but its response did not vary with emotional uncertainty. Amygdala activation was modulated by emotion identification and was absent when emotional stimuli were identifiable only at chance, especially in the left hemisphere. The right amygdala was considerably more deactivated in response to uncertain emotion. The threshold of emotion was marked by right amygdala deactivation and a shift toward greater left than right amygdala activation. Functional sex differences emerged during binaural presentation of uncertain emotional stimuli, with the right amygdala showing larger activation in females. Negative control experiments sparse-sampled silent stimuli to confirm that the modulation effects were specific to emotional resolvability. No functional modulation of Heschl's gyrus occurred during silence; however, during rest the amygdala's baseline state was asymmetrically lateralized. The evidence indicates that shifting patterns of activation and deactivation between the left and right amygdala are a hallmark of discriminating emotion from non-emotion in music.
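The central manipulation, covarying the envelope and temporal fine structure of the stimuli, is conventionally implemented with a Hilbert decomposition, as in auditory "chimera" stimuli. The Python sketch below illustrates that general technique only; it is an assumption about the approach, not the authors' actual stimulus pipeline, and the function name, sample rate, and demo tone are invented for the example.

import numpy as np
from scipy.signal import hilbert

def envelope_and_fine_structure(signal):
    """Return (envelope, fine_structure) of a real-valued 1-D signal.

    Illustrative sketch: the Hilbert analytic signal separates the slow
    amplitude envelope from the rapid temporal fine structure carrier.
    """
    analytic = hilbert(signal)                   # analytic signal: x + i*Hilbert(x)
    envelope = np.abs(analytic)                  # slow amplitude modulation
    fine_structure = np.cos(np.angle(analytic))  # carrier from instantaneous phase
    return envelope, fine_structure

# Demo: a 440 Hz tone whose amplitude is modulated at 4 Hz (values chosen for the example).
fs = 16_000
t = np.arange(0, 1.0, 1 / fs)
tone = (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 440 * t)

env, tfs = envelope_and_fine_structure(tone)
# A chimera-style stimulus would pair the envelope of one sound with the fine
# structure of another; here we simply recombine the original components.
reconstruction = env * tfs
print(np.max(np.abs(tone - reconstruction)))     # small for a narrowband signal

Covarying the two cues across stimuli then amounts to choosing which sound contributes the envelope and which contributes the fine structure before recombining them.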


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6794305
DOI: http://dx.doi.org/10.1038/s41598-019-50042-1


Similar Publications

Hemispheric difference of adaptation lifetime in human auditory cortex measured with MEG.
Hear Res, December 2024. Leibniz Institute for Neurobiology, Research Group Comparative Neuroscience, Magdeburg, Germany; Department of Psychology, Lancaster University, Lancaster, UK.

Adaptation is the attenuation of a neuronal response when a stimulus is repeatedly presented. The phenomenon has been linked to sensory memory, but its exact neuronal mechanisms are under debate. One defining feature of adaptation is its lifetime, that is, the timespan over which the attenuating effect of previous stimulation persists.


Phantom perceptions like tinnitus occur without any identifiable environmental or bodily source. The mechanisms and key drivers behind tinnitus are poorly understood. The dominant framework, suggesting that tinnitus results from neural hyperactivity in the auditory pathway following hearing damage, has been difficult to investigate in humans and has reached explanatory limits.


Cognitive processes such as action planning and decision-making require the integration of multiple sensory modalities in response to temporal cues, yet the underlying mechanism is not fully understood. Sleep has a crucial role for memory consolidation and promoting cognitive flexibility. Our aim is to identify the role of sleep in integrating different modalities to enhance cognitive flexibility and temporal task execution while identifying the specific brain regions that mediate this process.


Background: Migraine is a neurological disorder characterized by severe, unilateral, pulsating headaches with visual, olfactory, and auditory hypersensitivity, as well as autonomic symptoms. Currently, triptans are the standard treatment, but they often fail to relieve symptoms. Herbal medicines are alternative treatments to overcome these limitations.


Perceptual learning of modulation filtered speech.
J Exp Psychol Hum Percept Perform, January 2025. School of Psychology, University of Sussex.

Human listeners have a remarkable capacity to adapt to severe distortions of the speech signal. Previous work indicates that perceptual learning of degraded speech reflects changes to sublexical representations, though the precise format of these representations has not yet been established. Inspired by the neurophysiology of auditory cortex, we hypothesized that perceptual learning involves changes to perceptual representations that are tuned to acoustic modulations of the speech signal.

