Speech and emotion perception are dynamic processes in which it may be optimal to integrate synchronous signals emitted from different sources. Studies of audio-visual (AV) perception of neutrally expressed speech demonstrate supra-additive (i.e., AV > [unimodal auditory + unimodal visual]) responses in the left superior temporal sulcus (STS) to crossmodal speech stimuli. However, emotions are often conveyed simultaneously with speech: through the voice in the form of speech prosody and through the face in the form of facial expression. Previous studies of AV nonverbal emotion integration showed a role for the right (rather than the left) STS. The current study therefore examined whether the integration of facial and prosodic signals of emotional speech is associated with supra-additive responses in the left STS (cf. results for speech integration) or the right STS (due to the emotional content). As emotional displays are sometimes difficult to interpret, we also examined whether supra-additive responses were affected by emotional incongruence (i.e., ambiguity). Using magnetoencephalography, we continuously recorded eighteen participants as they viewed and heard emotionally congruent and emotionally incongruent AV speech stimuli. Significant supra-additive responses were observed in the right STS within the first 250 ms for both emotionally congruent and emotionally incongruent AV speech stimuli, which further underscores the role of the right STS in processing crossmodal emotive signals.
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3741276 | PMC
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0070648 | PLOS
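The supra-additivity criterion in the abstract reduces to a simple contrast, sketched below in Python; the function name and toy amplitudes are illustrative assumptions, not values from the study.

```python
import numpy as np

# Supra-additivity criterion from the abstract: an AV response is
# supra-additive when it exceeds the sum of the unimodal responses,
# i.e. AV > (A + V). All names and amplitudes below are illustrative.

def supra_additive_contrast(av, a, v):
    """Return AV - (A + V); positive values indicate supra-additivity."""
    return av - (a + v)

# Hypothetical trial-averaged response amplitudes (arbitrary units)
# for one region of interest across three latency windows.
av_resp = np.array([2.9, 3.1, 3.4])
a_resp = np.array([1.2, 1.3, 1.1])
v_resp = np.array([0.9, 1.0, 1.2])

contrast = supra_additive_contrast(av_resp, a_resp, v_resp)
print("AV - (A + V) per window:", contrast)
print("supra-additive?", contrast > 0)
```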
Hear Res
January 2025
Institute of Sound and Vibration Research, University of Southampton, Southampton, United Kingdom.
The cortical tracking of the acoustic envelope is a phenomenon in which the brain's electrical activity, as recorded by electroencephalography (EEG), fluctuates in accordance with changes in stimulus intensity (the acoustic envelope of the stimulus). Understanding speech in a noisy background is a key challenge for people with hearing impairments. Speech stimuli are therefore more ecologically valid than clicks, tone pips, or speech tokens.
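As a rough illustration of what envelope tracking involves, the sketch below recovers the envelope of an amplitude-modulated noise stimulus via the Hilbert transform and cross-correlates it with a simulated EEG channel; all signals and parameters are assumed for demonstration and do not come from the article.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Toy envelope-tracking demo: extract the acoustic envelope of an
# amplitude-modulated noise stimulus, then cross-correlate it with a
# simulated EEG channel. Rates, cutoffs, and lags are assumptions.
rng = np.random.default_rng(0)
fs = 128                                  # Hz, analysis sampling rate
t = np.arange(0, 10, 1 / fs)

true_env = 0.5 * (1 + np.sin(2 * np.pi * 3 * t))   # 3 Hz modulator
stimulus = true_env * rng.standard_normal(t.size)  # AM noise carrier

# Envelope: magnitude of the analytic signal, low-passed below 8 Hz.
b, a = butter(4, 8 / (fs / 2), btype="low")
envelope = filtfilt(b, a, np.abs(hilbert(stimulus)))

# Simulated EEG: the envelope delayed by ~100 ms plus sensor noise.
delay = int(0.1 * fs)
eeg = np.roll(envelope, delay) + 0.5 * rng.standard_normal(t.size)

# Correlate at lags from 0 to 300 ms; the peak estimates tracking latency.
lags = np.arange(int(0.3 * fs))
r = [np.corrcoef(envelope[: t.size - d], eeg[d:])[0, 1] for d in lags]
best = int(np.argmax(r))
print(f"peak r = {max(r):.2f} at {lags[best] / fs * 1000:.0f} ms")
```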
Am J Audiol
January 2025
Department of Communication Sciences and Disorders, University of Wisconsin-Madison.
Purpose: Prior work estimating sound exposure dose from earphone use has typically measured earphone use time with retrospective questionnaires or device-based tracking, both of which have limitations. This research note presents an exploratory analysis of sound exposure dose from earphone use among college-aged adults using real-ear measures to estimate exposure level and ecological momentary assessment (EMA) to estimate use time.
Method: Earphone levels were measured at the eardrum of 53 college students using their own devices, earphones, and preferred music and speech stimuli at their normal listening volume.
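The dose arithmetic implied by this design can be sketched as follows, assuming the NIOSH criterion (85 dBA for 8 h with a 3-dB exchange rate); the level and duration are hypothetical, and the sketch ignores the correction needed to compare eardrum levels with free-field criterion levels.

```python
# NIOSH-style dose arithmetic: 85 dBA is allowed for 8 h, and every
# 3 dB increase halves the allowed time. Input values are hypothetical.

def allowed_hours(level_dba, criterion=85.0, exchange_rate=3.0):
    """Permissible daily exposure duration (hours) at a given level."""
    return 8.0 / (2.0 ** ((level_dba - criterion) / exchange_rate))

def daily_dose_percent(level_dba, hours):
    """Exposure as a percentage of the daily allowance (100% = limit)."""
    return 100.0 * hours / allowed_hours(level_dba)

# Example: 88 dBA permits 4 h per day, so 2.5 h of earphone use at
# that level consumes 62.5% of the daily allowance.
print(f"{daily_dose_percent(88.0, 2.5):.1f}% of daily allowance")
```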
eNeuro
January 2025
Paris-Lodron-University of Salzburg, Department of Psychology, Centre for Cognitive Neuroscience, Salzburg, Austria.
Observing the lip movements of a speaker facilitates speech understanding, especially in challenging listening situations. Converging evidence from neuroscientific studies shows stronger neural responses to audiovisual stimuli than to audio-only stimuli. However, the interindividual variability in the contribution of lip-movement information, and its consequences for behavior, are unknown.
PLoS One
January 2025
Department of Speech, Language, and Hearing Sciences, University of Colorado, Boulder, Colorado, United States of America.
Binaural speech intelligibility in rooms is a complex process that is affected by many factors including room acoustics, hearing loss, and hearing aid (HA) signal processing. Intelligibility is evaluated in this paper for a simulated room combined with a simulated hearing aid. The test conditions comprise three spatial configurations of the speech and noise sources, simulated anechoic and concert hall acoustics, three amounts of multitalker babble interference, the hearing status of the listeners, and three degrees of simulated HA processing provided to compensate for the noise and/or hearing loss.
Acta Otolaryngol
January 2025
School of Audiology and Speech Language Pathology, Bharati Vidyapeeth (Deemed to be University), Pune, India.
Background: Meniere's disease (MD) affects 0.2% to 0.5% of the global population, with regional variations.