Purpose: Speech recognition in noise is challenging for listeners and appears to require support from executive functions to focus attention on rapidly unfolding target speech, track misunderstanding, and sustain attention. The current study was designed to test the hypothesis that lower executive function abilities explain poorer speech recognition in noise, including among older participants with hearing loss who often exhibit diminished speech recognition in noise and cognitive abilities.
Method: A cross-sectional sample of 400 younger-to-older adult participants (19 to < 90 years of age) from the community-based Medical University of South Carolina Longitudinal Cohort Study of Age-related Hearing Loss were administered tasks with executive control demands to assess individual variability in a card-sorting measure of set-shifting/performance monitoring, a dichotic listening measure of selective attention/working memory, sustained attention, and processing speed. Key word recognition in the high- and low-context speech perception-in-noise (SPIN) tests provided measures of speech recognition in noise. The SPIN scores were adjusted for audibility using the Articulation Index to characterize the impact of varied hearing sensitivity unrelated to reduced audibility on cognitive and speech recognition associations.
Results: Set-shifting, dichotic listening, and processing speed each explained unique and significant variance in audibility-adjusted, low-context SPIN scores (ps < .001), including after controlling for age, pure-tone threshold average (PTA), sex, and education level. The dichotic listening and processing speed effect sizes were significantly diminished when controlling for PTA, indicating that participants with poorer hearing sensitivity were also likely to have lower executive function and lower audibility-adjusted speech recognition.
Conclusions: Poor set-shifting/performance monitoring, slow processing speed, and poor selective attention/working memory appeared to partially explain difficulties with speech recognition in noise after accounting for audibility. These results are consistent with the premise that distinct executive functions support speech recognition in noise.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11666980
DOI: http://dx.doi.org/10.1044/2024_JSLHR-24-00333
Polymers (Basel)
December 2024
Chongqing Academy of Metrology and Quality Inspection, Chongqing 401120, China.
Dynamic hydrogels have attracted considerable attention for application in flexible electronics, as they possess injectable and self-healing abilities. However, it remains a challenge to combine high conductivity and antibacterial properties in dynamic hydrogels. In this work, we fabricated a dynamic hydrogel based on acylhydrazone bonds between a thermo-responsive copolymer and silver nanoparticles (AgNPs) functionalized with hydrazide groups.
J Neurosci
January 2025
Department of Psychology, Chinese University of Hong Kong, Hong Kong SAR, China
The extraction and analysis of pitch underpin speech and music recognition, sound segregation, and other auditory tasks. Perceptually, pitch can be represented as a helix composed of two factors: height monotonically aligns with frequency, while chroma cyclically repeats at doubled frequencies. Although the early perceptual and neurophysiological mechanisms for extracting pitch from acoustic signals have been extensively investigated, the equally essential subsequent stages that bridge to high-level auditory cognition remain less well understood.
eNeuro
January 2025
Hearing Technology @ WAVES, Department of Information Technology, Ghent University, Technologiepark 216, 9052 Zwijnaarde, Belgium
Speech intelligibility declines with age and sensorineural hearing damage (SNHL). However, it remains unclear whether cochlear synaptopathy (CS), a recently discovered form of SNHL, significantly contributes to this decline. CS refers to damage to the auditory-nerve synapses that innervate the inner hair cells, and there is currently no established diagnostic test for it.
J Speech Lang Hear Res
January 2025
Department of Communication Sciences and Disorders, Baylor University, Waco, TX.
Purpose: The aim of this study was to measure the effects of frequency spacing (i.e., F2 minus F1) on spectral integration for vowel perception in simulated bilateral electric-acoustic stimulation (BiEAS), electric-acoustic stimulation (EAS), and bimodal hearing.
Ear Hear
December 2024
Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, USA.
Objectives: To investigate the influence of frequency-specific audibility on audiovisual benefit in children, this study examined the impact of high- and low-pass acoustic filtering on auditory-only and audiovisual word and sentence recognition in children with typical hearing. Previous studies show that visual speech provides greater access to consonant place of articulation than other consonant features and that low-pass filtering has a strong impact on perception of acoustic consonant place of articulation. This suggests visual speech may be particularly useful when acoustic speech is low-pass filtered because it provides complementary information about consonant place of articulation.