Speech perception by cochlear implant (CI) users can be very good in quiet, but their speech intelligibility (SI) decreases in noisy environments. Because recent studies have shown that transient parts of the speech envelope are most important for SI in normal-hearing (NH) listeners, the enhanced envelope (EE) strategy was developed to emphasize onset cues of the speech envelope in the CI signal processing chain. The influence of this onset enhancement on SI was investigated in CI users for speech in stationary speech-shaped noise (SSN) and for speech with an interfering talker. All CI users showed an immediate benefit when a priori knowledge was used for the onset enhancement. An SI improvement was obtained at signal-to-noise ratios (SNRs) below 6 dB, corresponding to a speech reception threshold (SRT) improvement of 2.1 dB. Furthermore, stop consonant reception was improved with the EE strategy in quiet and in SSN at 6 dB SNR. For speech in speech, the SRT improved by 2.1 dB when the onsets of the target speaker were enhanced using a priori knowledge of the signal components, and by 1 dB when the onsets of the mixture of the target and the interfering speaker were enhanced. The latter demonstrates that a small benefit can be obtained without a priori knowledge.
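As a rough illustration of what envelope-onset emphasis involves, here is a minimal sketch, not the published EE implementation; the analysis band, onset detector, and gain value are assumptions made for illustration only:

```python
# Minimal onset-enhancement sketch: boost samples where the band envelope
# rises steeply, leave the rest unchanged. Parameters are illustrative only.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def enhance_onsets(x, fs, band=(300.0, 4000.0), onset_gain_db=6.0):
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    sub = sosfilt(sos, x)                     # band-limited signal
    env = np.abs(hilbert(sub))                # amplitude envelope
    rise = np.clip(np.diff(env, prepend=env[0]), 0.0, None)  # rising slope only
    rise /= rise.max() + 1e-12                # normalize onset strength to [0, 1]
    gain = 1.0 + (10.0 ** (onset_gain_db / 20.0) - 1.0) * rise
    return sub * gain                         # onset-emphasized band signal

# Hypothetical usage on one second of noise at 16 kHz
fs = 16000
y = enhance_onsets(np.random.randn(fs), fs)
```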
DOI: http://dx.doi.org/10.1016/j.heares.2016.09.002
Sci Rep
January 2025
RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Forskningsveien 3A, Oslo, 0373, Norway.
Periodic sensory inputs entrain oscillatory brain activity, reflecting a neural mechanism that might be fundamental to temporal prediction and perception. Most environmental rhythms and patterns in human behavior, such as walking, dancing, and speech, do not, however, display strict isochrony but are instead quasi-periodic. Research has shown that neural tracking of speech is driven by modulations of the amplitude envelope, especially via sharp acoustic edges, which serve as prominent temporal landmarks.
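As a rough sketch of how such envelope landmarks can be extracted (the smoothing cutoff and minimum spacing below are assumptions, not the study's parameters):

```python
# Sketch: find sharp acoustic edges as peaks in the rectified derivative
# of the smoothed amplitude envelope. Parameter values are illustrative.
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt, find_peaks

def acoustic_edges(x, fs, lp_hz=10.0, min_gap_s=0.1):
    env = np.abs(hilbert(x))                                 # amplitude envelope
    sos = butter(2, lp_hz, btype="low", fs=fs, output="sos")
    env = sosfiltfilt(sos, env)                              # keep slow modulations
    rate = np.clip(np.diff(env, prepend=env[0]), 0.0, None)  # rising slope only
    peaks, _ = find_peaks(rate, distance=max(1, int(min_gap_s * fs)))
    return peaks / fs                                        # edge times in seconds
```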
J Cogn Neurosci
January 2025
National Central University, Taoyuan City, Taiwan.
Pitch variation of the fundamental frequency (F0) is critical to speech understanding, especially in noisy environments. Degrading the F0 contour reduces behaviorally measured speech intelligibility, posing greater challenges for tonal languages like Mandarin Chinese, where the F0 pattern determines semantic meaning. However, neural tracking of Mandarin speech with degraded F0 information in noisy environments remains unclear.
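Studies of this kind typically degrade the F0 contour by vocoder analysis and resynthesis; the sketch below flattens F0 to its voiced-frame mean using the WORLD vocoder via `pyworld`, which is an assumed choice of tool rather than this study's exact method, and the input file name is hypothetical:

```python
# Sketch: flatten the F0 contour of an utterance via WORLD analysis-resynthesis.
# "utterance.wav" is a hypothetical mono file; pyworld/soundfile are assumed deps.
import numpy as np
import soundfile as sf
import pyworld as pw

x, fs = sf.read("utterance.wav")
x = np.ascontiguousarray(x, dtype=np.float64)

f0, t = pw.harvest(x, fs)                  # frame-wise F0 estimate
sp = pw.cheaptrick(x, f0, t, fs)           # spectral envelope
ap = pw.d4c(x, f0, t, fs)                  # aperiodicity

voiced = f0 > 0
f0_flat = np.where(voiced, f0[voiced].mean(), 0.0)  # constant F0 in voiced frames
y = pw.synthesize(f0_flat, sp, ap, fs)     # resynthesized, F0-degraded speech
sf.write("utterance_flat_f0.wav", y, fs)
```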
Imaging Neurosci (Camb)
April 2024
Department of Electrical Engineering, Columbia University, New York, NY, United States.
Listeners with hearing loss have trouble following a conversation in multitalker environments. While modern hearing aids can generally amplify speech, these devices are unable to tune into a target speaker without first knowing to which speaker a user aims to attend. Brain-controlled hearing aids based on auditory attention decoding (AAD) methods have been proposed, but current methods use the same model to compare the speech stimulus and the neural response, regardless of the dynamic overlap between talkers, which is known to influence neural encoding.
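In the standard stimulus-reconstruction formulation of AAD, a single decoder reconstructs the speech envelope from EEG, and attention is assigned to the talker whose envelope correlates best with that reconstruction. A minimal sketch of the selection step, using placeholder data, follows:

```python
# Sketch: attention selection step of stimulus-reconstruction AAD.
# The reconstructed envelope and talker envelopes below are placeholders.
import numpy as np

def decode_attention(recon_env, env_talker_a, env_talker_b):
    r_a = np.corrcoef(recon_env, env_talker_a)[0, 1]
    r_b = np.corrcoef(recon_env, env_talker_b)[0, 1]
    return "talker A" if r_a >= r_b else "talker B"

rng = np.random.default_rng(0)
recon = rng.standard_normal(6400)               # envelope reconstructed from EEG
env_a, env_b = rng.standard_normal((2, 6400))   # candidate talker envelopes
print(decode_attention(recon, env_a, env_b))
```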
J Neurosci
January 2025
Department of Psychology, University of Lübeck, Lübeck, Germany.
Amplitude compression is an indispensable feature of contemporary audio production and is especially relevant in modern hearing aids. The cortical fate of amplitude-compressed speech signals is not well studied, however, and compression may yield undesired side effects: we hypothesize that compressing the amplitude envelope of continuous speech reduces neural tracking. Yet leveraging such a 'compression side effect' on unwanted, distracting sounds could potentially support attentive listening if it effectively reduces their neural tracking.
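A minimal broadband compressor of the kind described might look as follows; the threshold, ratio, and smoothing cutoff are illustrative values, not hearing-aid settings:

```python
# Sketch: static broadband amplitude compression driven by a smoothed
# envelope-based level estimate. Parameter values are illustrative only.
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def compress(x, fs, threshold_db=-30.0, ratio=4.0, lp_hz=25.0):
    env = np.abs(hilbert(x))                            # amplitude envelope
    sos = butter(2, lp_hz, btype="low", fs=fs, output="sos")
    env = np.maximum(sosfiltfilt(sos, env), 1e-12)      # smoothed level estimate
    level_db = 20.0 * np.log10(env)
    over_db = np.maximum(level_db - threshold_db, 0.0)  # dB above threshold
    gain_db = -over_db * (1.0 - 1.0 / ratio)            # attenuate the excess
    return x * 10.0 ** (gain_db / 20.0)                 # compressed signal
```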
Front Hum Neurosci
January 2025
Center for Ear-EEG, Department of Electrical and Computer Engineering, Aarhus University, Aarhus, Denmark.
Recent progress in auditory attention decoding (AAD) builds on algorithms that relate the audio envelope to the neurophysiological response. The most popular approach reconstructs the audio envelope from electroencephalogram (EEG) signals. These methods rely primarily on the exogenous response driven by the physical characteristics of the stimuli.
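The envelope-reconstruction ('backward') model referred to here is usually a regularized linear mapping from time-lagged EEG to the audio envelope; a minimal ridge-regression sketch, with assumed lag count and regularization strength and placeholder data, is:

```python
# Sketch: train a linear backward model mapping time-lagged EEG to the audio
# envelope via ridge regression. Lag count and regularization are assumptions.
import numpy as np

def train_backward_model(eeg, envelope, n_lags=32, lam=1e2):
    """eeg: (samples x channels); envelope: (samples,). Returns decoder weights."""
    n, c = eeg.shape
    # Circular shifts used for brevity when building the lagged design matrix.
    X = np.hstack([np.roll(eeg, lag, axis=0) for lag in range(n_lags)])
    w = np.linalg.solve(X.T @ X + lam * np.eye(c * n_lags), X.T @ envelope)
    return w  # reconstruction on new data: X_new @ w

# Hypothetical usage with random placeholder data
rng = np.random.default_rng(0)
eeg = rng.standard_normal((6400, 64))       # placeholder EEG recording
env = rng.standard_normal(6400)             # placeholder audio envelope
w = train_backward_model(eeg, env)
```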