Listening to degraded speech is associated with decreased intelligibility and increased effort. However, listeners are generally able to adapt to certain types of degradations. While intelligibility of degraded speech is modulated by talker acoustics, it is unclear whether talker acoustics also affect effort and adaptation. Moreover, it has been demonstrated that talker differences are preserved across spectral degradations, but it is not known whether this effect extends to temporal degradations and which acoustic-phonetic characteristics are responsible. In a listening experiment combined with pupillometry, participants were presented with speech in quiet as well as speech in masking noise, time-compressed speech, and noise-vocoded speech by 16 Southern British English speakers. Results showed that intelligibility, but not adaptation, was modulated by talker acoustics. Talkers who were more intelligible under noise-vocoding were also more intelligible under masking and time-compression. This effect was linked to acoustic-phonetic profiles with greater vowel space dispersion (VSD) and energy in mid-range frequencies, as well as slower speaking rate. While pupil dilation indicated increasing effort with decreasing intelligibility, this study also linked reduced effort in quiet to talkers with greater VSD. The results emphasize the relevance of talker acoustics for intelligibility and effort in degraded listening conditions.
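
The abstract links higher intelligibility and reduced listening effort to talkers with greater vowel space dispersion (VSD). As a rough illustration of what such a metric captures, the sketch below computes one common variant of VSD: the mean Euclidean distance of a talker's vowel tokens from that talker's centroid in F1-F2 space. This is a minimal sketch under assumed conventions, not the study's actual procedure; the function name, the unit (Hz rather than, e.g., Bark), and the formant values are all illustrative.

# Minimal sketch: vowel space dispersion as the mean Euclidean distance of
# vowel tokens from the talker's F1-F2 centroid (assumed definition, Hz units).
import numpy as np

def vowel_space_dispersion(formants):
    """formants: array-like of shape (n_tokens, 2) with (F1, F2) per vowel token."""
    formants = np.asarray(formants, dtype=float)
    centroid = formants.mean(axis=0)                    # talker-specific F1-F2 centroid
    distances = np.linalg.norm(formants - centroid, axis=1)
    return distances.mean()                             # larger value = more dispersed vowel space

# Hypothetical F1/F2 measurements (Hz) for a handful of vowel tokens from one talker
tokens = [(300, 2300), (700, 1200), (450, 1000), (650, 1800), (350, 900)]
print(round(vowel_space_dispersion(tokens), 1))

A talker-level score computed in this way could then serve as a predictor of intelligibility or pupil dilation, which is the kind of relation the abstract reports.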

Source
http://dx.doi.org/10.1121/10.0001212

Similar Publications

Prosodic Modifications to Challenging Communicative Environments in Preschoolers.

Lang Speech

January 2025

Department of Educational Psychology, Leadership, & Counseling, Texas Tech University, USA.

Adapting one's speaking style is particularly crucial as children start interacting with diverse conversational partners in various communication contexts. The study investigated the capacity of preschool children aged 3-5 years (n = 28) to modify their speaking styles in response to background noise, referred to as noise-adapted speech, and when talking to an interlocutor who pretended to have hearing loss, referred to as clear speech. We examined how the two modified speaking styles differed across this age range.

Multi-talker speech intelligibility requires successful separation of the target speech from background speech. Successful speech segregation relies on bottom-up neural coding fidelity of sensory information and top-down effortful listening. Here, we studied the interaction between temporal processing, measured using Envelope Following Responses (EFRs) to amplitude-modulated tones, and pupil-indexed listening effort, as it related to performance on the Quick Speech-in-Noise (QuickSIN) test in normal-hearing adults.

The effect of speech masking on the human subcortical response to continuous speech.

bioRxiv

December 2024

Kresge Hearing Research Institute, Department of Otolaryngology - Head and Neck Surgery, University of Michigan, Ann Arbor, MI.

Auditory masking, the interference with the encoding and processing of an acoustic stimulus imposed by one or more competing stimuli, is nearly omnipresent in daily life and presents a critical barrier to many listeners, including people with hearing loss, users of hearing aids and cochlear implants, and people with auditory processing disorders. The perceptual aspects of masking have been actively studied for several decades, and particular emphasis has been placed on the masking of speech by other speech sounds. The neural effects of such masking, especially at the subcortical level, have been much less studied, in large part due to the technical limitations of making such measurements.

Objectives: Speech intelligibility is supported by the sound of a talker's voice and visual cues related to articulatory movements. The relative contribution of auditory and visual cues to an integrated audiovisual percept varies depending on a listener's environment and sensory acuity. Cochlear implant users rely more on visual cues than those with acoustic hearing to help compensate for the fact that the auditory signal produced by their implant is poorly resolved relative to that of the typically developed cochlea.

This study investigated the effects of noise and hearing impairment on conversational dynamics between pairs of young normal-hearing and older hearing-impaired interlocutors. Twelve pairs of normal-hearing and hearing-impaired individuals completed a spot-the-difference task in quiet and in three levels of multitalker babble. To achieve the rapid response timing of turn taking that has been observed in normal conversations, people must simultaneously comprehend incoming speech, plan a response, and predict when their partners will end their turn.
