We investigated the cortical representation of emotional prosody in normal-hearing listeners using functional near-infrared spectroscopy (fNIRS) and behavioural assessments. Consistent with previous reports, listeners relied most heavily on F0 cues when recognising vocal emotions; performance was relatively poor, and highly variable between listeners, when only intensity and speech-rate cues were available. Using fNIRS to image cortical activity to speech utterances containing natural and reduced prosodic cues, we found right superior temporal gyrus (STG) to be most sensitive to emotional prosody, but no emotion-specific cortical activations, suggesting that while fNIRS might be suited to investigating cortical mechanisms supporting speech processing, it is less suited to investigating cortical haemodynamic responses to individual vocal emotions. Manipulating emotional speech to render F0 cues less informative, we found the amplitude of the haemodynamic response in right STG to be significantly correlated with listeners' ability to recognise vocal emotions with uninformative F0 cues. Specifically, listeners who were better able to assign emotions to speech with degraded F0 cues showed lower haemodynamic responses to these degraded signals. This suggests a potential objective measure of behavioural sensitivity to vocal emotions that might benefit neurodiverse populations less sensitive to emotional prosody, or hearing-impaired listeners, many of whom rely on listening technologies such as hearing aids and cochlear implants, neither of which restores, and both of which often further degrade, the F0 cues essential to parsing the emotional prosody conveyed in speech.
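
The abstract does not specify how F0 cues were rendered less informative, so the following is only a minimal sketch of one common approach: flattening the pitch contour to the utterance's median F0 while leaving intensity and speech-rate cues intact. It uses the parselmouth interface to Praat; the file names, the pitch-analysis range (75-600 Hz), and the choice of the median F0 as the flattening target are illustrative assumptions rather than the authors' procedure.

```python
# Illustrative sketch only: flatten an utterance's F0 contour so that pitch
# carries little emotion information, keeping intensity and duration cues.
# Assumes the parselmouth package (a Python interface to Praat).
import numpy as np
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("utterance.wav")  # hypothetical input file

# Target F0 for the flat contour: median of the voiced frames.
pitch = snd.to_pitch()
f0 = pitch.selected_array["frequency"]    # 0 Hz marks unvoiced frames
median_f0 = float(np.median(f0[f0 > 0]))

# Build a Praat Manipulation object and replace its pitch tier with a
# single point, giving a flat F0 contour at the median value.
manipulation = call(snd, "To Manipulation", 0.01, 75, 600)
pitch_tier = call(manipulation, "Extract pitch tier")
call(pitch_tier, "Remove points between", snd.xmin, snd.xmax)
call(pitch_tier, "Add point", snd.xmin, median_f0)
call([pitch_tier, manipulation], "Replace pitch tier")

# Resynthesise the utterance with the flattened contour.
flattened = call(manipulation, "Get resynthesis (overlap-add)")
flattened.save("utterance_flatF0.wav", "WAV")
```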

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10203806
DOI: http://dx.doi.org/10.1002/hbm.26305

Publication Analysis

Top Keywords

vocal emotions (16)
emotional prosody (16)
haemodynamic responses (12)
cues (9)
cortical haemodynamic (8)
recognise vocal (8)
emotions uninformative (8)
sensitive emotional (8)
suited investigating (8)
investigating cortical (8)

Similar Publications

Purpose: We investigate the extent to which automated audiovisual metrics extracted during an affect production task show statistically significant differences between a cohort of children diagnosed with autism spectrum disorder (ASD) and typically developing controls.

Method: Forty children with ASD and 21 neurotypical controls interacted with a multimodal conversational platform with a virtual agent, Tina, who guided them through tasks prompting facial and vocal communication of four emotions (happy, angry, sad, and afraid) under conditions of high and low verbal and social cognitive task demands.

Results: Individuals with ASD exhibited greater standard deviation of the fundamental frequency of the voice, with the minima and maxima of the pitch contour occurring at an earlier time point compared with controls.
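
As a rough illustration of the metrics described above, the F0 standard deviation and the relative timing of the pitch-contour minimum and maximum could be computed along the following lines; the file name, analysis settings, and use of parselmouth are assumptions, not the study's actual pipeline.

```python
# Sketch of the acoustic metrics named above: F0 standard deviation and the
# relative times at which the pitch contour reaches its minimum and maximum.
import numpy as np
import parselmouth

snd = parselmouth.Sound("child_utterance.wav")  # placeholder file name
pitch = snd.to_pitch()

f0 = pitch.selected_array["frequency"]          # 0 Hz marks unvoiced frames
times = pitch.xs()
voiced = f0 > 0

f0_sd = float(np.std(f0[voiced]))
t_min = times[voiced][np.argmin(f0[voiced])]
t_max = times[voiced][np.argmax(f0[voiced])]

print(f"F0 SD: {f0_sd:.1f} Hz")
print(f"Pitch minimum at {t_min / snd.duration:.2f} of utterance duration")
print(f"Pitch maximum at {t_max / snd.duration:.2f} of utterance duration")
```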

Epilepsy Aphasia Syndrome (EAS) is a spectrum of childhood disorders that exhibit complex co-morbidities that include epilepsy and the emergence of cognitive and language disorders. CNKSR2 is an X-linked gene in which mutations are linked to EAS. We previously demonstrated Cnksr2 knockout (KO) mice model key phenotypes of EAS analogous to those present in clinical patients with mutations in the gene.

Behavioral contagion is widespread in primates, with yawn contagion (YC) being a well-known example. Often associated with ingroup dynamics and synchronization, the possible functions and evolutionary pathways of YC remain subjects of active debate. Among nonhuman animals, geladas (Theropithecus gelada) are the only species known to occasionally emit a distinct vocalization while yawning.

Introduction: Recent studies have suggested a role of motor symptom asymmetry in impaired emotion recognition in Parkinson's disease, with greater vulnerability in patients with predominantly left-sided symptoms. However, none of these studies explored the interaction between motor symptom asymmetry and dopamine replacement therapy at different stages of the disease.

Methodology: We explored the recognition of vocal emotion (i.

Facial mimicry of visually observed emotional facial actions is a robust phenomenon. Here, we examined whether such facial mimicry extends to auditory emotional stimuli. We also examined whether participants' facial responses differ for sounds more strongly associated with congruent facial movements, such as vocal emotional expressions (e.
