The loudness of a tone can be reduced by preceding it with a more intense tone. This effect, known as induced loudness reduction (ILR), has been reported to last for several seconds, but the underlying neural mechanisms are unknown. One possible contributor is a change in cochlear gain via the medial olivocochlear (MOC) efferents. Because cochlear implants (CIs) bypass the cochlea, investigating whether and how CI users experience ILR should help clarify the underlying mechanisms. In the present study, ILR was examined in both normal-hearing listeners and CI users by measuring the effect of an intense precursor (50 or 500 ms) on the loudness of a 50-ms target, judged by comparison with a spectrally remote 50-ms comparison sound. The interstimulus interval (ISI) between the precursor and the target was varied between 10 and 1000 ms to estimate the time course of ILR. In general, the patterns of results from the CI users were similar to those found in the normal-hearing listeners. However, in the short-precursor, short-ISI condition, an enhancement in the loudness of the target was observed in the CI users that was not present in the normal-hearing listeners, consistent with an additional attenuation acting in the normal-hearing listeners but not in the CI users. The results suggest that the MOC may play a role but that it is not the only source of these loudness context effects.
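As a rough illustration of the paradigm described above, the sketch below assembles a single precursor–target trial. The precursor and target durations and the ISI range follow the abstract; the 1-kHz carrier, the relative precursor/target amplitudes, and the 5-ms ramps are hypothetical placeholders rather than values taken from the study.

```python
# Sketch of a precursor/ISI/target trial as described in the abstract.
# Carrier frequency, amplitudes, and ramp duration are assumed values.
import numpy as np

FS = 44100  # sampling rate in Hz (assumed)

def tone(duration_s, freq_hz=1000.0, amplitude=1.0, fs=FS):
    """Sine tone with 5-ms raised-cosine onset/offset ramps."""
    t = np.arange(int(round(duration_s * fs))) / fs
    sig = amplitude * np.sin(2 * np.pi * freq_hz * t)
    n_ramp = int(0.005 * fs)
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    sig[:n_ramp] *= ramp
    sig[-n_ramp:] *= ramp[::-1]
    return sig

def ilr_trial(precursor_ms, isi_ms, target_ms=50,
              precursor_amp=1.0, target_amp=0.25, fs=FS):
    """Concatenate an intense precursor, a silent ISI, and a weaker target."""
    precursor = tone(precursor_ms / 1000.0, amplitude=precursor_amp, fs=fs)
    gap = np.zeros(int(round(isi_ms / 1000.0 * fs)))
    target = tone(target_ms / 1000.0, amplitude=target_amp, fs=fs)
    return np.concatenate([precursor, gap, target])

# Example: 500-ms precursor, 100-ms ISI, 50-ms target.
trial = ilr_trial(precursor_ms=500, isi_ms=100)
```

Varying `isi_ms` between 10 and 1000 ms reproduces the timing manipulation used to trace the time course of ILR.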
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4940285 | PMC |
| http://dx.doi.org/10.1007/s10162-016-0563-y | DOI Listing |
Hear Res
January 2025
Institute of Sound and Vibration Research, University of Southampton, Southampton, United Kingdom.
The cortical tracking of the acoustic envelope is a phenomenon in which the brain's electrical activity, as recorded in electroencephalography (EEG) signals, fluctuates in accordance with changes in stimulus intensity (the acoustic envelope of the stimulus). Understanding speech in a noisy background is a key challenge for people with hearing impairments. Speech stimuli are therefore more ecologically valid than clicks, tone pips, or speech tokens (e.g., …).
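To make the idea of envelope tracking concrete, here is a minimal sketch of one common analysis style: compute the broadband acoustic envelope, bring it to the EEG sampling rate, and correlate it with an EEG channel at a range of lags. The sampling rates, the 8-Hz cutoff, and the lag range are assumptions for illustration, not the pipeline of the study summarized above.

```python
# Generic envelope-tracking sketch (assumed parameters, not the cited pipeline).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, resample

def acoustic_envelope(audio, fs_audio, fs_eeg, cutoff_hz=8.0):
    """Hilbert envelope, low-pass filtered and resampled to the EEG rate."""
    env = np.abs(hilbert(audio))
    b, a = butter(4, cutoff_hz / (fs_audio / 2), btype="low")
    env = filtfilt(b, a, env)
    n_out = int(round(len(env) * fs_eeg / fs_audio))
    return resample(env, n_out)

def lagged_correlation(envelope, eeg_channel, fs_eeg, max_lag_ms=300):
    """Pearson correlation between envelope and EEG for lags of 0..max_lag_ms.

    Both inputs are assumed to have the same length and sampling rate.
    """
    max_lag = int(max_lag_ms / 1000.0 * fs_eeg)
    corrs = []
    for lag in range(max_lag + 1):
        x = envelope[: len(envelope) - lag]
        y = eeg_channel[lag: lag + len(x)]
        corrs.append(np.corrcoef(x, y)[0, 1])
    return np.array(corrs)
```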
eNeuro
January 2025
Paris-Lodron-University of Salzburg, Department of Psychology, Centre for Cognitive Neuroscience, Salzburg, Austria
Observing the lip movements of a speaker facilitates speech understanding, especially in challenging listening situations. Converging evidence from neuroscientific studies shows stronger neural responses to audiovisual stimuli than to audio-only stimuli. However, the interindividual variability of this lip-movement contribution, and its consequences for behavior, are unknown.
Trends Hear
January 2025
Department of Otolaryngology - Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA.
When listening to speech under adverse conditions, listeners compensate using neurocognitive resources. A clinically relevant form of adverse listening is listening through a cochlear implant (CI), which provides a spectrally degraded signal. CI listening is often simulated through noise-vocoding.
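For context on the noise-vocoding mentioned above, the sketch below implements a generic noise vocoder of the kind often used to simulate CI listening: the signal is split into a handful of bands, each band's slowly varying envelope modulates band-limited noise, and the modulated bands are summed. The channel count, band edges, and filter settings are generic assumptions, not the parameters of the cited study.

```python
# Generic noise vocoder (assumed parameters; not the cited study's simulation).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(speech, fs, n_channels=8, f_lo=100.0, f_hi=7000.0):
    """Replace each band's fine structure with envelope-modulated noise."""
    # Logarithmically spaced band edges; assumes fs is comfortably above 2*f_hi.
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    b_env, a_env = butter(2, 160.0 / (fs / 2), btype="low")  # envelope smoother
    rng = np.random.default_rng(0)
    out = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, speech)
        envelope = filtfilt(b_env, a_env, np.abs(hilbert(band)))
        carrier = filtfilt(b, a, rng.standard_normal(len(speech)))
        carrier /= np.sqrt(np.mean(carrier ** 2)) + 1e-12
        out += envelope * carrier
    return out
```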
Brain Commun
January 2025
Centre for Cognitive Neuroscience, University of Salzburg, 5020 Salzburg, Austria.
Previous studies have established that individuals who receive a cochlear implant (CI) to treat single-sided deafness show improved speech processing after implantation. However, it is not clear how each ear separately contributes to improved speech perception over time at the behavioural and neural levels. In this longitudinal EEG study with four time points, we measured neural activity in response to various temporally and spectrally degraded spoken words presented monaurally to the CI and non-CI ears (5 left and 5 right ears) in 10 single-sided CI users and 10 age- and sex-matched individuals with normal hearing.
PLoS One
January 2025
Department of Speech, Language, and Hearing Sciences, University of Colorado, Boulder, Colorado, United States of America.
Binaural speech intelligibility in rooms is a complex process that is affected by many factors including room acoustics, hearing loss, and hearing aid (HA) signal processing. Intelligibility is evaluated in this paper for a simulated room combined with a simulated hearing aid. The test conditions comprise three spatial configurations of the speech and noise sources, simulated anechoic and concert hall acoustics, three amounts of multitalker babble interference, the hearing status of the listeners, and three degrees of simulated HA processing provided to compensate for the noise and/or hearing loss.
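As a small illustration of how different "amounts of multitalker babble interference" are typically realized, the helper below mixes speech with babble at a chosen signal-to-noise ratio. It is a generic sketch under the assumption that the babble recording is at least as long as the speech; it is not code from the cited paper.

```python
# Generic SNR mixer (illustrative only; not from the cited paper).
import numpy as np

def mix_at_snr(speech, babble, snr_db):
    """Scale babble so that the speech-to-babble power ratio equals snr_db."""
    babble = babble[: len(speech)]          # assumes babble >= speech in length
    p_speech = np.mean(speech ** 2)
    p_babble = np.mean(babble ** 2)
    gain = np.sqrt(p_speech / (p_babble * 10.0 ** (snr_db / 10.0)))
    return speech + gain * babble
```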