Purpose: To assess the ability of older-adult hearing-impaired (OHI) listeners to identify verbal expressions of emotions, and to evaluate whether hearing-aid (HA) use improves identification performance in those listeners.
Methods: Twenty-nine OHI listeners, all experienced bilateral-HA users, participated in the study. They listened to a 20-sentence speech passage rendered with six different emotional expressions ("happiness", "pleasant surprise", "sadness", "anger", "fear", and "neutral"). The task was to identify the emotion portrayed in each version of the passage. Listeners completed the task twice, in random order: once unaided and once wearing their own bilateral HAs. Seventeen young-adult normal-hearing (YNH) listeners were also tested unaided as controls.
Results: Most YNH listeners (89.2%) correctly identified the emotions, compared with just over half of the OHI listeners (58.7%). Within the OHI group, verbal-emotion identification was significantly correlated with age, but not with audibility-related factors. The proportion of OHI listeners who correctly identified the different emotions did not change significantly when HAs were worn (54.8%).
Conclusion: In line with previous investigations using shorter speech stimuli, there were clear age differences in the recognition of verbal emotions: OHI listeners showed significantly poorer unaided verbal-emotion identification, and performance declined progressively with age across older adulthood. Rehabilitation with HAs did not compensate for the impaired ability to perceive emotions conveyed by speech.
Full text:
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7648619
DOI: http://dx.doi.org/10.2147/CIA.S281469
Related articles:

PLoS One
January 2025
Dept. of Medical Physics and Acoustics, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany.
Music pre-processing is becoming a recognized area of research, with the goal of making music more accessible to listeners with a hearing impairment. Our previous study showed that hearing-impaired listeners preferred spectrally manipulated multi-track mixes. Nevertheless, the acoustical basis of mixing for hearing-impaired listeners remains poorly understood.
JASA Express Lett
November 2024
Department of Medical Physics & Acoustics, Carl von Ossietzky Universität Oldenburg, Oldenburg, 26129, Germany.
This study assessed musical scene analysis (MSA) performance and subjective quality ratings of multi-track mixes as a function of spectral manipulations using the EQ-transform (% EQT). This transform exaggerates or reduces the spectral shape changes in a given track with respect to a relatively flat, smooth reference spectrum. Data from 30 younger normal hearing (yNH) and 23 older hearing-impaired (oHI) participants showed that MSA performance was robust to changes in % EQT.
Prog Neuropsychopharmacol Biol Psychiatry
December 2024
Department of Psychiatry, Gifu University Graduate School of Medicine, Gifu, Japan.
J Acoust Soc Am
November 2021
Department of Electrical and Electronics Engineering, Birla Institute of Technology and Science, Pilani Campus, Vidya Vihar, Pilani, Rajasthan 333031, India.
A difference in fundamental frequency (F0) between two vowels is an important segregation cue prior to identifying concurrent vowels. To understand the effects of this cue on identification due to age and hearing loss, Chintanpalli, Ahlstrom, and Dubno [(2016). J.
Otol Neurotol
December 2021
Department of Otolaryngology - Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina.
Objective: To examine audiologic outcomes and operative considerations for patients undergoing subtotal petrosectomy (STP) followed by implantable hearing restoration.
Study Design: Retrospective review.
Setting: Tertiary academic referral hospital.