Hearing aids provide nonlinear amplification to improve speech audibility and loudness perception. While more audibility typically increases speech intelligibility at low levels, the same is not true for above-conversational levels, where decreases in intelligibility ("rollover") can occur. In a previous study, we found rollover in speech intelligibility measurements made in quiet for 35 out of 74 test ears with a hearing loss. Furthermore, we found rollover occurrence in quiet to be associated with poorer speech intelligibility in noise as measured with linear amplification. Here, we retested 16 participants with rollover with three amplitude-compression settings. Two were designed to prevent rollover by applying slow- or fast-acting compression with a 5:1 compression ratio around the "sweet spot," that is, the area in an individual performance-intensity function with high intelligibility and listening comfort. The third, reference setting used gains and compression ratios prescribed by the "National Acoustic Laboratories Non-Linear 1" rule. Speech intelligibility was assessed in quiet and in noise. Pairwise preference judgments were also collected. For speech levels of 70 dB SPL and above, slow-acting sweet-spot compression gave better intelligibility in quiet and noise than the reference setting. Additionally, the participants clearly preferred slow-acting sweet-spot compression over the other settings. At lower levels, the three settings gave comparable speech intelligibility, and the participants preferred the reference setting over both sweet-spot settings. Overall, these results suggest that, for listeners with rollover, slow-acting sweet-spot compression is beneficial at 70 dB SPL and above, while at lower levels clinically established gain targets are more suited.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10771052
DOI: http://dx.doi.org/10.1177/23312165231224597
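The settings described above rely on dynamic range compression: below a knee point the signal is passed (or amplified) linearly, and above it the output level grows at only 1/ratio the rate of the input. The following is a minimal, single-channel sketch of that idea, not the multichannel hearing-aid processing used in the study; the knee point, time constants, and SPL calibration are illustrative assumptions.

```python
import numpy as np

def slow_compressor(x, fs, knee_db=65.0, ratio=5.0, attack_s=0.5, release_s=2.0):
    """Single-channel level-compressor sketch.

    Levels above `knee_db` (a hypothetical SPL calibration) are compressed
    with the given ratio; the envelope detector uses long (slow-acting)
    attack/release time constants, so gain tracks the overall speech level
    rather than individual syllables.
    """
    # One-pole envelope-follower coefficients
    a_att = np.exp(-1.0 / (attack_s * fs))
    a_rel = np.exp(-1.0 / (release_s * fs))

    env = 0.0
    y = np.zeros_like(x, dtype=float)
    for n, s in enumerate(x):
        mag = abs(s)
        a = a_att if mag > env else a_rel
        env = a * env + (1.0 - a) * mag

        # Level in dB, shifted by an assumed calibration (0 dBFS == 100 dB SPL).
        level_db = 20.0 * np.log10(max(env, 1e-9)) + 100.0

        # Above the knee, reduce gain so output grows at 1/ratio the input rate.
        excess = max(level_db - knee_db, 0.0)
        gain_db = -excess * (1.0 - 1.0 / ratio)
        y[n] = s * 10.0 ** (gain_db / 20.0)
    return y
```

With the assumed 65 dB knee and a 5:1 ratio, a steady 75 dB SPL input exceeds the knee by 10 dB and receives 8 dB of gain reduction, so the output rises only 2 dB for that 10 dB increase in input level.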
Acta Otorhinolaryngol Ital
December 2024
Audiology and Phoniatrics Service, ENT Department, University of Modena and Reggio Emilia, Modena, Italy.
Objectives: The SARS-CoV-2 pandemic required the use of personal protective equipment (PPE) in medical and social contexts to reduce exposure and prevent pathogen transmission. This study aims to analyse possible changes in voice and speech parameters with and without PPE.
Methods: Speech samples were obtained while talkers wore different types of PPE.
Lang Speech
January 2025
Department of Educational Psychology, Leadership, & Counseling, Texas Tech University, USA.
Adapting one's speaking style is particularly crucial as children start interacting with diverse conversational partners in various communication contexts. The study investigated the capacity of preschool children aged 3-5 years (N = 28) to modify their speaking styles in response to background noise, referred to as noise-adapted speech, and when talking to an interlocutor who pretended to have hearing loss, referred to as clear speech. We examined how the two modified speaking styles differed across this age range.
Eur Arch Otorhinolaryngol
January 2025
Department of Otolaryngology, China-Japan Friendship Hospital, Beijing, China.
Objectives: This study examined the relationships between electrophysiological measures of the electrically evoked auditory brainstem response (EABR) and speech perception measured in quiet after cochlear implantation (CI), to determine whether the EABR can predict postoperative CI outcomes.
Methods: Thirty-four patients with congenital prelingual hearing loss, implanted with the same manufacturer's CI, were recruited. In each participant, the EABR was evoked at apical, middle, and basal electrode locations.
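As an illustration of the kind of relationship such a study tests, the sketch below correlates an EABR measure (wave eV latency) with speech-recognition-in-quiet scores. The data, variable names, and choice of Pearson correlation are invented for the example and are not taken from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant data: EABR wave eV latency (ms) at one
# electrode location and speech recognition in quiet (% correct).
ev_latency_ms = np.array([3.9, 4.1, 4.4, 3.8, 4.6, 4.2, 4.0, 4.5])
speech_quiet_pct = np.array([82, 78, 60, 85, 55, 70, 80, 58])

# Pearson correlation tests whether shorter eV latencies are associated
# with better postoperative speech perception.
r, p = stats.pearsonr(ev_latency_ms, speech_quiet_pct)
print(f"r = {r:.2f}, p = {p:.3f}")
```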
Background: Cochlear implantation is an effective method of auditory rehabilitation. Nevertheless, outcomes vary across individuals depending on several factors.
Aim: To evaluate cochlear implantation outcomes based on the APCEI profile (Acceptance, Perception, Comprehension, Oral Expression and Intelligibility) and audiometric results.
Sci Rep
December 2024
Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, 15260, USA.
Multi-talker speech intelligibility requires successful separation of the target speech from background speech. Successful speech segregation relies on bottom-up neural coding fidelity of sensory information and top-down effortful listening. Here, we studied the interaction between temporal processing, measured using Envelope Following Responses (EFRs) to amplitude-modulated tones, and pupil-indexed listening effort, as it related to performance on the Quick Speech-in-Noise (QuickSIN) test in normal-hearing adults.
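An EFR is typically quantified as the magnitude of the recorded neural response at the stimulus modulation frequency. The sketch below shows that idea on toy data; the sampling rate, carrier and modulation frequencies, and the simulated "recording" are assumptions for illustration, not the study's recording or analysis pipeline.

```python
import numpy as np

fs = 8000            # sampling rate (Hz), assumed
f_carrier = 1000.0   # carrier frequency of the AM tone (Hz), assumed
f_mod = 110.0        # modulation frequency (Hz), assumed
dur = 1.0            # seconds

t = np.arange(int(fs * dur)) / fs
# Amplitude-modulated tone: a carrier whose envelope fluctuates at f_mod.
# This is the stimulus presented to the listener while the response is recorded.
stimulus = (1.0 + 0.8 * np.sin(2 * np.pi * f_mod * t)) * np.sin(2 * np.pi * f_carrier * t)

def efr_magnitude(response, fs, f_mod):
    """Magnitude of the recorded response at the modulation frequency.

    A response phase-locked to the stimulus envelope (the EFR) shows up
    as a spectral peak at f_mod.
    """
    spec = np.fft.rfft(response * np.hanning(len(response)))
    freqs = np.fft.rfftfreq(len(response), 1.0 / fs)
    idx = np.argmin(np.abs(freqs - f_mod))
    return np.abs(spec[idx])

# Toy "recording": a scaled envelope component plus noise, standing in for EEG.
rng = np.random.default_rng(0)
toy_response = 0.1 * np.sin(2 * np.pi * f_mod * t) + rng.normal(0, 0.05, t.size)
print(efr_magnitude(toy_response, fs, f_mod))
```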