Changing the balance between early and late reflections in an impulse response affects speech clarity, and manipulating the direction of the early reflections also alters the spatial perception of the sound source. While the effect of noise on early reflections has long been investigated in speech intelligibility studies, it is unclear whether and how noise alters the spatial characteristics of the source, and whether any such alteration influences speech intelligibility. The aim of the present work was to analyze the spatial perception of a speech source in noise and its relationship, if any, with speech intelligibility. Impulse responses with specular or scattered early reflections and two different reverberant tails were used to create sound fields with controlled clarity and reverberation. The results show that, relative to the reverberation-only (quiet) condition, noise affects the spatial cues: ratings change accordingly, and most spatial percepts are distorted. Speech intelligibility is likewise sensitive to changes in the acoustic variables and to the type of reflection, but the direct association between spatial percepts and speech intelligibility is weak.
DOI: http://dx.doi.org/10.1121/10.0011403
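The clarity controlled in the stimuli above is commonly quantified by the clarity index C50, the ratio in dB of the energy in the first 50 ms of the impulse response to the energy in the remaining tail. The sketch below is a minimal illustration of that calculation, not the authors' procedure; it assumes a mono room impulse response aligned to the direct sound and its sample rate, and the synthetic impulse response is a placeholder.

```python
import numpy as np

def clarity_c50(ir: np.ndarray, fs: int, early_ms: float = 50.0) -> float:
    """Clarity index C50 = 10*log10(early energy / late energy) of an impulse response.

    ir : mono room impulse response, assumed to start at the direct sound
    fs : sample rate in Hz
    """
    split = int(round(early_ms * 1e-3 * fs))   # boundary between early and late parts
    early = np.sum(ir[:split] ** 2)            # energy in the first 50 ms
    late = np.sum(ir[split:] ** 2)             # energy in the remaining tail
    return 10.0 * np.log10(early / late)

# Illustration only: a crude synthetic IR with an exponentially decaying noise tail
fs = 48000
t = np.arange(int(0.5 * fs)) / fs
ir = np.random.randn(t.size) * np.exp(-t / 0.15)
ir[0] = 1.0                                    # direct sound
print(f"C50 = {clarity_c50(ir, fs):.1f} dB")
```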
Hear Res
January 2025
Institute of Sound and Vibration Research, University of Southampton, Southampton, United Kingdom.
The cortical tracking of the acoustic envelope is a phenomenon whereby the brain's electrical activity, as recorded in electroencephalography (EEG) signals, fluctuates in accordance with changes in stimulus intensity (the acoustic envelope of the stimulus). Understanding speech in a noisy background is a key challenge for people with hearing impairments; for probing this ability, speech stimuli are therefore more ecologically valid than clicks, tone pips, or speech tokens (e.g., …).
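As a rough illustration of what tracking the acoustic envelope involves, the sketch below extracts a broadband speech envelope with the Hilbert transform, low-pass filters and resamples it to the EEG rate, and correlates it with a single (here simulated) EEG channel. The signals, sample rates, 8 Hz cutoff, and zero-lag correlation measure are simplifying assumptions, not the study's pipeline.

```python
import numpy as np
from scipy.signal import hilbert, resample_poly, butter, filtfilt

def speech_envelope(audio: np.ndarray, fs_audio: int, fs_eeg: int) -> np.ndarray:
    """Broadband amplitude envelope of a speech signal, resampled to the EEG rate."""
    env = np.abs(hilbert(audio))                   # instantaneous amplitude
    b, a = butter(2, 8.0 / (fs_audio / 2), "low")  # keep slow (<8 Hz) fluctuations
    env = filtfilt(b, a, env)
    return resample_poly(env, fs_eeg, fs_audio)    # match the EEG sampling rate

# Simulated data: 10 s of "speech" and one EEG channel that partly follows its envelope
fs_audio, fs_eeg, dur = 16000, 128, 10
audio = np.random.randn(fs_audio * dur)
env = speech_envelope(audio, fs_audio, fs_eeg)
eeg = 0.3 * env + np.random.randn(env.size)        # envelope-following component + noise

r = np.corrcoef(env, eeg)[0, 1]                    # simple tracking measure (no lags)
print(f"envelope-EEG correlation: {r:.2f}")
```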
Trends Hear
January 2025
Department of Otolaryngology - Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA.
When listening to speech under adverse conditions, listeners compensate using neurocognitive resources. A clinically relevant form of adverse listening is listening through a cochlear implant (CI), which provides a spectrally degraded signal. CI listening is often simulated through noise-vocoding.
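Noise-vocoding, named above as the usual cochlear-implant simulation, splits speech into a few frequency bands, extracts each band's envelope, and uses it to modulate band-limited noise carriers. A minimal sketch follows; the band count, log-spaced band edges, filter orders, and placeholder input signal are illustrative assumptions rather than the study's parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech: np.ndarray, fs: int, n_bands: int = 8,
                 f_lo: float = 100.0, f_hi: float = 7000.0) -> np.ndarray:
    """Noise-vocode a speech signal: per-band envelopes modulate band-limited noise."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)   # log-spaced analysis band edges
    noise = np.random.randn(speech.size)
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)             # analysis band of the speech
        env = np.abs(hilbert(band))                 # band envelope
        carrier = sosfiltfilt(sos, noise)           # noise carrier in the same band
        out += env * carrier                        # envelope-modulated noise band
    return out / np.max(np.abs(out))                # rough normalization

# Usage with a placeholder signal (a real test would load recorded speech)
fs = 16000
speech = np.random.randn(fs * 2)
vocoded = noise_vocode(speech, fs)
```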
PLoS One
January 2025
Department of Speech, Language, and Hearing Sciences, University of Colorado, Boulder, Colorado, United States of America.
Binaural speech intelligibility in rooms is a complex process that is affected by many factors, including room acoustics, hearing loss, and hearing aid (HA) signal processing. In this paper, intelligibility is evaluated for a simulated room combined with a simulated hearing aid. The test conditions comprise three spatial configurations of the speech and noise sources, simulated anechoic and concert-hall acoustics, three amounts of multitalker babble interference, the hearing status of the listeners, and three degrees of simulated HA processing provided to compensate for the noise and/or the hearing loss.
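A much-simplified version of constructing such test conditions is sketched below: a mono target and masker are spatialized by convolution with binaural room impulse responses (BRIRs) and mixed at a prescribed signal-to-noise ratio. The signals and BRIRs here are random placeholders, and no hearing-aid processing or hearing-loss simulation is included.

```python
import numpy as np
from scipy.signal import fftconvolve

def mix_at_snr(target: np.ndarray, masker: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale the masker so the target/masker power ratio equals snr_db, then sum."""
    n = min(target.shape[-1], masker.shape[-1])
    target, masker = target[..., :n], masker[..., :n]
    gain = np.sqrt(np.mean(target ** 2) / (np.mean(masker ** 2) * 10 ** (snr_db / 10)))
    return target + gain * masker

def spatialize(mono: np.ndarray, brir: np.ndarray) -> np.ndarray:
    """Convolve a mono signal with a 2-channel (left/right) binaural room impulse response."""
    return np.stack([fftconvolve(mono, brir[ch]) for ch in range(2)])

# Placeholder signals and BRIRs (a real experiment would use measured or simulated responses)
fs = 44100
speech = np.random.randn(fs * 3)
babble = np.random.randn(fs * 3)
brir_front = np.random.randn(2, 4096) * 0.01   # hypothetical BRIR, source at 0 degrees
brir_side = np.random.randn(2, 4096) * 0.01    # hypothetical BRIR, source at 90 degrees

binaural_mix = mix_at_snr(spatialize(speech, brir_front),
                          spatialize(babble, brir_side), snr_db=0.0)
```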
J Speech Lang Hear Res
January 2025
Centre for Language Studies, Radboud University, Nijmegen, the Netherlands.
Purpose: In this review article, we present an extensive overview of recent developments in the area of dysarthric speech research. One of the key objectives of speech technology research is to improve the quality of life of its users, as evidenced by the current focus on creating inclusive conversational interfaces that cater to pathological speech, of which dysarthric speech is an important example. Applications of speech technology to dysarthric speech demand a clear understanding of the acoustics of dysarthric speech as well as of the speech technologies themselves, including machine learning and deep neural networks for speech processing.
Trends Hear
January 2025
Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Head and Neck Surgery, University of Cologne, Cologne, Germany.
Speech-on-speech masking is a common and challenging situation in everyday verbal communication. The ability to segregate competing auditory streams is a prerequisite for focusing attention on the target speech. The Visual World Paradigm (VWP) provides insight into speech processing by capturing gaze fixations on visually presented icons that reflect the speech signal.