Objectives: Model-based hearing aid development involves the assessment of speech recognition using a master hearing aid (MHA). Aided speech recognition in noise is known to be related to cognitive factors such as working memory capacity (WMC), and this relationship might be mediated by hearing aid experience (HAE). The aim of this study was to examine the relationship between WMC and speech recognition with an MHA for listeners with different HAE.
Design: Using the MHA, unaided and aided 80% speech recognition thresholds in noise were determined. Individual WMC was assessed using the Verbal Learning and Memory Test (VLMT) and the Reading Span Test (RST).
Study Sample: Forty-nine hearing aid users with mild to moderate sensorineural hearing loss, divided into three groups differing in HAE.
Results: Whereas unaided speech recognition showed no significant relationship with WMC, a significant correlation was observed between WMC and aided speech recognition. However, this applied only to listeners with up to approximately three years of HAE, and the correlation weakened consistently with greater experience.
Conclusions: Speech recognition scores obtained in acute experiments with an MHA are less influenced by individual cognitive capacity when experienced hearing aid users are taken into account.
DOI: http://dx.doi.org/10.1080/14992027.2017.1319079
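The group-wise correlation analysis summarized in the abstract above can be illustrated with a short sketch. This is a minimal example assuming hypothetical WMC scores and aided speech recognition thresholds (SRTs) for three experience groups; the data are placeholders and the code is not the authors' analysis.

```python
# Minimal sketch: per-group Pearson correlation between WMC and aided SRT.
# All values are hypothetical placeholders, not data from the study.
from scipy.stats import pearsonr

groups = {
    "HAE < 1 yr":  {"wmc": [42, 35, 50, 38, 45], "srt": [-3.1, -1.8, -4.2, -2.5, -3.6]},
    "HAE 1-3 yr":  {"wmc": [40, 48, 36, 44, 52], "srt": [-2.9, -3.8, -2.2, -3.3, -4.1]},
    "HAE > 3 yr":  {"wmc": [39, 47, 43, 51, 37], "srt": [-3.0, -3.2, -2.8, -3.1, -2.9]},
}

for name, data in groups.items():
    r, p = pearsonr(data["wmc"], data["srt"])
    print(f"{name}: r = {r:+.2f}, p = {p:.3f}")
```

In the pattern reported above, such an analysis would show a clear correlation for the less experienced groups and a markedly weaker one for the most experienced group (more negative SRTs indicate better recognition).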
eNeuro
January 2025
Hearing Technology @ WAVES, Department of Information Technology, Ghent University, Technologiepark 216, 9052 Zwijnaarde, Belgium
Speech intelligibility declines with age and with sensorineural hearing loss (SNHL). However, it remains unclear whether cochlear synaptopathy (CS), a recently discovered form of SNHL, contributes significantly to this decline. CS refers to damage to the auditory-nerve synapses that innervate the inner hair cells, and no established diagnostic test for it is currently available.
J Speech Lang Hear Res
January 2025
Department of Communication Sciences and Disorders, Baylor University, Waco, TX.
Purpose: The aim of this study was to measure the effects of frequency spacing (i.e., F2 minus F1) on spectral integration for vowel perception in simulated bilateral electric-acoustic stimulation (BiEAS), electric-acoustic stimulation (EAS), and bimodal hearing.
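As a concrete illustration of the frequency-spacing measure, the sketch below computes F2 minus F1 for a few vowels using rough textbook formant values; these numbers are illustrative assumptions, not the study's stimuli.

```python
# Illustrative only: F2 - F1 spacing (Hz) for rough textbook formant values.
vowel_formants = {
    "/i/": (270, 2290),  # (F1, F2) in Hz
    "/u/": (300, 870),
    "/a/": (730, 1090),
}

for vowel, (f1, f2) in vowel_formants.items():
    print(f"{vowel}: spacing = F2 - F1 = {f2 - f1} Hz")
```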
Ear Hear
December 2024
Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, USA.
Objectives: To investigate the influence of frequency-specific audibility on audiovisual benefit in children, this study examined the impact of high- and low-pass acoustic filtering on auditory-only and audiovisual word and sentence recognition in children with typical hearing. Previous studies show that visual speech provides greater access to consonant place of articulation than to other consonant features, and that low-pass filtering strongly affects the perception of acoustic cues to consonant place of articulation. This suggests that visual speech may be particularly useful when acoustic speech is low-pass filtered, because it provides complementary information about consonant place of articulation.
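For readers unfamiliar with the manipulation, the sketch below shows one common way to low-pass filter a speech signal; it is a generic SciPy illustration with an assumed cutoff and sampling rate, not the filtering procedure used in the study.

```python
# Generic low-pass filtering sketch (assumed parameters, not the study's procedure).
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 16000        # assumed sampling rate (Hz)
cutoff = 2000     # assumed low-pass cutoff (Hz)

# Placeholder signal: 1 s of noise standing in for recorded speech.
rng = np.random.default_rng(0)
speech = rng.standard_normal(fs)

# 4th-order Butterworth low-pass, applied forward-backward for zero phase.
sos = butter(4, cutoff, btype="lowpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos, speech)
```

A high-pass condition would use the same call with btype="highpass".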
JAMA Otolaryngol Head Neck Surg
January 2025
Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee.
Importance: Cochlear implants enable improvements in speech perception, but music perception outcomes remain variable. Image-guided cochlear implant programming has emerged as a potential programming strategy for increasing the quality of spectral information delivered through the cochlear implant to improve outcomes.
Objectives: To perform 2 experiments, the first of which modeled the variance in music perception scores as a function of electrode positioning factors, and the second of which evaluated image-guided cochlear implant programming as a strategy to improve music perception with a cochlear implant.
Alzheimers Dement
December 2024
Cognitive Neuroscience Center, University of San Andrés, Victoria, Buenos Aires, Argentina
Background: Digital health research on Alzheimer's disease (AD) points to automated speech and language analysis (ASLA) as a globally scalable approach for diagnosis and monitoring. However, most studies target uninterpretable features in Anglophone samples, casting doubt on the approach's clinical utility and cross-linguistic validity. The present study was designed to tackle both issues.
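To make the contrast with uninterpretable features concrete, the sketch below computes one simple, interpretable speech feature (the proportion of pause-like, low-energy frames) from a recording; the librosa calls, the file name, and the energy threshold are all assumptions for illustration, not the study's pipeline.

```python
# Illustrative interpretable ASLA-style feature: proportion of pause-like frames.
# The file name and threshold heuristic are assumptions, not the study's method.
import librosa
import numpy as np

y, sr = librosa.load("speech_sample.wav", sr=16000)  # hypothetical recording
rms = librosa.feature.rms(y=y)[0]                    # frame-wise RMS energy

# Frames below 10% of the median energy are treated as pause-like (assumed heuristic).
threshold = 0.1 * np.median(rms)
pause_ratio = float(np.mean(rms < threshold))
print(f"Pause-like frame proportion: {pause_ratio:.2f}")
```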