This study investigated the influence of hearing-aid (HA) and cochlear-implant (CI) processing on consonant perception in normal-hearing (NH) listeners. Measured data were compared to predictions obtained with a speech perception model [Zaar and Dau (2017). J. Acoust. Soc. Am. 141, 1051-1064] that combines an auditory processing front end with a correlation-based template-matching back end. In terms of HA processing, the effects of strong nonlinear frequency compression and impulse-noise suppression were measured in 10 NH listeners using consonant-vowel stimuli. Regarding CI processing, the consonant perception data from DiNino et al. [(2016). J. Acoust. Soc. Am. 140, 4404-4418] were considered, which were obtained with noise-vocoded vowel-consonant-vowel stimuli in 12 NH listeners. The inputs to the model were the same stimuli as those used in the corresponding experiments. The model predictions for the two data sets showed close agreement with the perceptual data in terms of both consonant recognition and confusions, demonstrating the model's sensitivity to supra-threshold effects of hearing-instrument signal processing on consonant perception. The results could be useful for the evaluation of hearing-instrument processing strategies, particularly when combined with simulations of individual hearing impairment.
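The correlation-based template-matching back end described above can be sketched as follows. This is a minimal illustration only: the function name `template_match`, the use of Pearson correlation over flattened time-frequency representations, and the argmax decision rule are assumptions for exposition, not the published Zaar and Dau (2017) implementation.

```python
import numpy as np

def template_match(test_rep, templates):
    """Classify a stimulus by correlating its internal representation
    against stored consonant templates and picking the best match.

    test_rep  -- 2-D array (e.g., a time-frequency representation)
    templates -- dict mapping consonant labels to arrays of the same shape

    Returns the winning label and the per-label correlation scores.
    Hypothetical simplification of a correlation-based back end.
    """
    scores = {}
    for label, tmpl in templates.items():
        # Pearson correlation between the flattened representations
        r = np.corrcoef(test_rep.ravel(), tmpl.ravel())[0, 1]
        scores[label] = r
    best = max(scores, key=scores.get)
    return best, scores
```

In such a scheme, confusion matrices fall out naturally: each misclassification of a test token contributes to an off-diagonal cell, which is what allows comparison against perceptual confusion data.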
DOI: http://dx.doi.org/10.1121/1.5011737
J Acoust Soc Am, January 2025
Department of Electronics Engineering, Pusan National University, Busan, South Korea.
The amount of information contained in speech signals is a fundamental concern of speech-based technologies and is particularly relevant in speech perception. Measuring the mutual information of actual speech signals is non-trivial, and quantitative measurements have not been extensively conducted to date. Recent advancements in machine learning have made it possible to directly measure mutual information using data.
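As a point of reference for the mutual-information measurement discussed above, a classical histogram (plug-in) estimate of I(X;Y) for paired scalar samples can be written in a few lines. This is an illustrative baseline only; the article refers to machine-learning-based estimators, which scale far better to high-dimensional speech features than this binned approach.

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Plug-in estimate of the mutual information I(X;Y) in bits,
    computed from a 2-D histogram of paired samples x and y.
    Minimal baseline sketch, not the estimator used in the article."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()            # joint distribution estimate
    px = pxy.sum(axis=1, keepdims=True)  # marginal of X
    py = pxy.sum(axis=0, keepdims=True)  # marginal of Y
    nz = pxy > 0                         # avoid log(0) terms
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

For a perfectly dependent binary pair this returns 1 bit, and for independent samples it approaches 0, which is the sanity check one would expect of any MI estimator.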
Front Hum Neurosci, December 2024
Ph.D. Program in Speech-Language-Hearing Sciences, The Graduate Center, City University of New York, New York, NY, United States.
Introduction: Lateral temporal neural measures (Na and T-complex Ta and Tb) of the auditory evoked potential (AEP) index auditory/speech processing and have been observed in children and adults. While Na is already present in children under 4 years of age, Ta emerges from 4 years of age, and Tb appears even later. The T-complex has been found to be sensitive to language experience in Spanish-English and Turkish-German children and adults.
Front Neurosci, December 2024
Department of Otorhinolaryngology Head and Neck Surgery, Tianjin First Central Hospital, Tianjin, China.
Background: Cochlear implants (CIs) have the potential to facilitate auditory restoration in deaf children and contribute to the maturation of the auditory cortex. The type of CI may impact hearing rehabilitation in children with CI. We aimed to study central auditory processing activation patterns during speech perception in Mandarin-speaking pediatric CI recipients with different device characteristics.
J Voice, December 2024
Neurology Department II, Fuyang People's Hospital, Fuyang, China.
Purpose: Parkinson's disease (PD) is a progressive neurodegenerative disorder. The aim of this study is to investigate the association between acoustic features and cortical brain features in patients with PD.
Methods: We recruited 19 PD patients (eight females, 11 males) and 19 healthy controls (eight females, 11 males) to participate in the experiment.
Cogn Sci, December 2024
Université Côte d'Azur, CNRS, BCL.
In this paper, we explore the effect of musical expertise on whistled word perception by naive listeners. In whistled words of nontonal languages, vowels are transposed to relatively stable pitches, while consonants are translated into pitch movements or interruptions. Previous behavioral studies have demonstrated that naive listeners can categorize isolated consonants, vowels, and words well over chance.