Objective: The aims of this study were to: 1) quantify the amount of change in signal-to-noise ratio (SNR) as a result of compression and noise reduction (NR) processing in devices from three hearing aid (HA) manufacturers and 2) use the SNR changes to predict changes in speech perception. We hypothesised that the SNR change would differ across processing type and manufacturer, and that improvements in SNR would relate to improvements in performance.

Design: SNR at the output of the HAs was quantified using a phase-inversion technique. A linear mixed model was used to determine whether changes in SNR across HA conditions were predictive of changes in aided speech perception in noise.

Study Sample: Two groups participated: 25 participants had normal hearing and 25 had mild to moderately severe sensorineural hearing loss.

Results: The HAs programmed for both groups changed the SNR by a small but statistically significant amount. Significant interactions in SNR changes were observed between HA devices and processing types. However, the change in SNR was not predictive of changes in speech perception.

Conclusion: Although the significant changes in SNR produced by compression and NR did not translate into changes in speech perception, these algorithms may serve other purposes.
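
The phase-inversion technique mentioned in the Design section estimates the SNR at the hearing aid output by recording the aid twice, once with the noise waveform polarity-inverted: summing and differencing the two recordings separates the processed speech from the processed noise. Below is a minimal sketch of that computation, assuming two time-aligned output recordings of equal length; the function and variable names are illustrative and not taken from the study.

```python
import numpy as np

def output_snr_phase_inversion(rec_noise_original, rec_noise_inverted):
    """Estimate hearing-aid output SNR via the phase-inversion technique.

    rec_noise_original : HA output recorded with speech + noise
    rec_noise_inverted : HA output recorded with speech + polarity-inverted noise
    Both recordings are assumed to be time-aligned and of equal length.
    """
    a = np.asarray(rec_noise_original, dtype=float)
    b = np.asarray(rec_noise_inverted, dtype=float)

    speech_est = (a + b) / 2.0  # inverted noise cancels; speech reinforces
    noise_est = (a - b) / 2.0   # speech cancels; noise reinforces

    # Long-term output SNR in dB
    return 10.0 * np.log10(np.sum(speech_est ** 2) / np.sum(noise_est ** 2))
```

The separation is only approximate when the processing is strongly nonlinear or time-varying, which is a known limitation of the technique; the recordings also need to be sample-accurately aligned before summing and differencing.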


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6076442
DOI: http://dx.doi.org/10.1080/14992027.2017.1305128

Publication Analysis

Top Keywords: speech perception (16), changes speech (12), snr (10), signal-to-noise ratio (8), changes (8), snr changes (8), changes snr (8), predictive changes (8), speech (5), output signal-to-noise (4)

Similar Publications

Comprehension of acoustically degraded emotional prosody in Alzheimer's disease and primary progressive aphasia.

Sci Rep

December 2024

Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 1st Floor, 8-11 Queen Square, London, WC1N 3AR, UK.

Previous research suggests that emotional prosody perception is impaired in neurodegenerative diseases like Alzheimer's disease (AD) and primary progressive aphasia (PPA). However, no previous research has investigated emotional prosody perception in these diseases under non-ideal listening conditions. We recruited 18 patients with AD and 31 with PPA (nine logopenic (lvPPA), 11 nonfluent/agrammatic (nfvPPA), and 11 semantic (svPPA)), together with 24 healthy age-matched individuals.


Background: Theories highlight the important role of chronic stress in remodeling HPA-axis responsivity under stress. The Perceived Stress Scale (PSS) is one of the most widely used measures of enduring stress perceptions, yet no previous studies have evaluated whether greater perceptions of stress on the PSS are associated with cortisol hypo- or hyperactivity responses to the Trier Social Stress Test (TSST).

Objective: To examine if high perceived stress over the past month, as measured by the PSS, alters cortisol and subjective acute stress reactivity to the TSST in healthy young adults.


Multi-talker speech intelligibility requires successful separation of the target speech from background speech. Successful speech segregation relies on bottom-up neural coding fidelity of sensory information and top-down effortful listening. Here, we studied the interaction between temporal processing measured using Envelope Following Responses (EFRs) to amplitude modulated tones, and pupil-indexed listening effort, as it related to performance on the Quick Speech-in-Noise (QuickSIN) test in normal-hearing adults.


How Does Deep Neural Network-Based Noise Reduction in Hearing Aids Impact Cochlear Implant Candidacy?

Audiol Res

December 2024

Division of Audiology, Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, MN 55902, USA.

Background/objectives: Adult hearing-impaired patients qualifying for cochlear implants typically exhibit less than 60% sentence recognition under the best hearing aid conditions, either in quiet or noisy environments, with speech and noise presented through a single speaker. This study examines the influence of deep neural network-based (DNN-based) noise reduction on cochlear implant evaluation.

Methods: Speech perception was assessed using AzBio sentences in both quiet and noisy conditions (multi-talker babble) at 5 and 10 dB signal-to-noise ratios (SNRs) through one loudspeaker.


Background/objectives: Understanding speech in background noise is a challenging task for listeners with normal hearing and even more so for individuals with hearing impairments. The primary objective of this study was to develop Romanian speech material in noise to assess speech perception in diverse auditory populations, including individuals with normal hearing and those with various types of hearing loss. The goal was to create a versatile tool that can be used in different configurations and expanded for future studies examining auditory performance across various populations and rehabilitation methods.

