Cochlear Implant Facilitates the Use of Talker Sex and Spatial Cues to Segregate Competing Speech in Unilaterally Deaf Listeners.

Ear Hear

Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California, USA.

Published: December 2022

AI Article Synopsis

  • This study examines how cochlear implants (CIs) and acoustic hearing in the nonimplanted ear affect the ability to segregate competing speech using talker sex and spatial cues.
  • It involved 32 participants (16 normal-hearing listeners and 16 unilaterally deaf CI users), whose recognition of competing speech was tested in conditions with male and female masker talkers, colocated or spatially separated from the target.
  • Results showed that normal-hearing listeners segregated competing speech significantly better than CI users, with the largest benefit coming from the combination of talker sex and spatial cues.

Article Abstract

Objectives: Talker sex and spatial cues can facilitate segregation of competing speech. However, the spectrotemporal degradation associated with cochlear implants (CIs) can limit the benefit of talker sex and spatial cues. Acoustic hearing in the nonimplanted ear can improve access to talker sex cues in CI users. However, it is unclear whether the CI can improve segregation of competing speech when maskers are symmetrically placed around the target (i.e., when spatial cues are available), compared with acoustic hearing alone. The aim of this study was to investigate whether a CI can improve segregation of competing speech by individuals with unilateral hearing loss.

Design: Speech recognition thresholds (SRTs) for competing speech were measured in 16 normal-hearing (NH) adults and 16 unilaterally deaf CI users. All participants were native speakers of Mandarin Chinese. CI users were divided into two groups according to thresholds in the nonimplanted ear: (1) single-sided deafness (SSD), with pure-tone thresholds <25 dB HL at all audiometric frequencies, and (2) asymmetric hearing loss (AHL), with one or more thresholds >25 dB HL. SRTs were measured for target sentences produced by a male talker in the presence of two masker talkers (different male or female talkers). The target sentence was always presented via a loudspeaker directly in front of the listener (0°), and the maskers were either colocated with the target (0°) or spatially separated from the target at ±90°. Three segregation cue conditions were tested to measure masking release (MR) relative to the baseline condition: (1) Talker sex, (2) Spatial, and (3) Talker sex + Spatial. For CI users, SRTs were measured with the CI on or off.
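
As a rough illustration of how the MR values reported below are derived (a minimal sketch with made-up SRT values, not data from the study), MR for each cue condition is the baseline SRT minus the SRT measured with that cue, so a larger MR means a larger benefit from the cue:

    # Minimal sketch (hypothetical numbers, not the study's data):
    # masking release (MR) is the improvement in speech recognition
    # threshold (SRT) for a cue condition relative to the baseline
    # condition. Lower SRTs are better, so MR = SRT_baseline - SRT_cue.

    baseline_srt_db = 0.0   # hypothetical baseline SRT (dB target-to-masker ratio)

    cue_srts_db = {         # hypothetical SRTs per cue condition (dB)
        "Talker sex": -6.0,
        "Spatial": -8.0,
        "Talker sex + Spatial": -12.0,
    }

    masking_release_db = {cue: baseline_srt_db - srt for cue, srt in cue_srts_db.items()}

    for cue, mr in masking_release_db.items():
        print(f"{cue}: MR = {mr:.1f} dB")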

Results: Binaural MR was significantly better for the NH group than for the AHL or SSD groups (P < 0.001 in all cases). For the NH group, mean MR was largest with the Talker sex + Spatial cues (18.8 dB) and smallest with the Talker sex cues (10.7 dB). In contrast, mean MR for the SSD group was largest with the Talker sex + Spatial cues (14.7 dB) and smallest with the Spatial cues (4.8 dB). For the AHL group, mean MR was largest with the Talker sex + Spatial cues (7.8 dB) and smallest with the Talker sex (4.8 dB) and the Spatial cues (4.8 dB). MR was significantly better with the CI on than off for both the AHL (P = 0.014) and SSD (P < 0.001) groups. Across all unilaterally deaf CI users, monaural (acoustic ear alone) and binaural MR were significantly correlated with unaided pure-tone average thresholds in the nonimplanted ear for the Talker sex and Talker sex + Spatial conditions (P < 0.001 in both cases), but not for the Spatial condition.
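
The reported correlation analysis can be illustrated with a short sketch using hypothetical thresholds and MR values; the choice of a 4-frequency pure-tone average (0.5, 1, 2, and 4 kHz) is an assumption here, since the abstract does not specify which frequencies were averaged:

    # Minimal sketch (hypothetical data): relating unaided pure-tone
    # average (PTA) thresholds in the nonimplanted ear to masking release.
    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical unaided thresholds (dB HL) at 0.5, 1, 2, 4 kHz, one row per listener.
    thresholds_db_hl = np.array([
        [15, 20, 20, 25],
        [30, 35, 45, 50],
        [10, 15, 15, 20],
        [40, 50, 55, 60],
        [20, 25, 30, 35],
    ])
    pta_db_hl = thresholds_db_hl.mean(axis=1)  # assumed 4-frequency PTA per listener

    # Hypothetical MR (dB) for the Talker sex condition, same listeners.
    mr_talker_sex_db = np.array([12.0, 4.5, 14.0, 2.0, 8.5])

    r, p = pearsonr(pta_db_hl, mr_talker_sex_db)
    print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # poorer hearing -> smaller MR in this toy example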

Conclusion: Although the CI benefitted unilaterally deaf listeners' segregation of competing speech, MR was much poorer than that observed in NH listeners. In contrast to previous findings with steady noise maskers, the CI benefit for segregating competing speech produced by talkers of a different sex was greater in the SSD group than in the AHL group.

Source
http://dx.doi.org/10.1097/AUD.0000000000001254

Publication Analysis

Top Keywords

talker sex: 56
sex spatial: 40
spatial cues: 36
competing speech: 28
segregation competing: 20
unilaterally deaf: 16
talker: 15
sex: 14
spatial: 13
nonimplanted ear: 12

Similar Publications

Gender and language effects on the long-term average speech spectrum (LTASS) have been reported, but typically using recordings that were bandlimited and/or failed to accurately capture extended high frequencies (EHFs). Accurate characterization of the full-band LTASS is warranted given recent data on the contribution of EHFs to speech perception. The present study characterized the LTASS for high-fidelity, anechoic recordings of males and females producing Bamford-Kowal-Bench sentences, digits, and unscripted narratives.


Background: Public health measures implemented during the COVID-19 pandemic fundamentally altered the socioecological context in which children were developing.

Methods: Using Bronfenbrenner's socioecological theory, we investigate language acquisition among 2-year-old children (n = 4037) born during the pandemic. We focus on "late talkers", defined as children below the 10th percentile on the MacArthur-Bates Communicative Development Inventories-III.


Talker change detection by listeners varying in age and hearing loss.

J Acoust Soc Am

April 2024

Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina 29425, USA.

Despite a vast literature on how speech intelligibility is affected by hearing loss and advanced age, remarkably little is known about the perception of talker-related information in these populations. Here, we assessed the ability of listeners to detect whether a change in talker occurred while listening to and identifying sentence-length sequences of words. Participants were recruited in four groups that differed in their age (younger/older) and hearing status (normal/impaired).


Tonal language experience facilitates the use of spatial cues for segregating competing speech in bimodal cochlear implant listeners.

JASA Express Lett

March 2024

Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90095,

Article Synopsis
  • English-speaking cochlear implant users rely on talker sex cues for speech segregation but struggle with spatial cues.
  • Mandarin-speaking cochlear implant users, however, show a stronger ability to utilize spatial cues, especially when using both hearing aids and cochlear implants (bimodal listening).
  • The study highlights the differences in speech recognition abilities between English and Mandarin CI users, particularly regarding their use of tonal language cues.

Objectives: This study examined the neural mechanisms by which remote microphone (RM) systems might lead to improved behavioral performance on listening-in-noise tasks in autistic and non-autistic youth.

Design: Cortical auditory evoked potentials (CAEPs) were recorded in autistic (n = 25) and non-autistic (n = 22) youth who were matched at the group level on chronological age (M = 14.21 ± 3.

