The benefits of combining a cochlear implant (CI) and a hearing aid (HA) in opposite ears on speech perception were examined in 15 adult unilateral CI recipients who regularly use a contralateral HA. A within-subjects design was used to assess speech intelligibility, listening effort ratings, and sound quality questionnaire responses in the conditions CI alone, CI and HA together (CIHA), and HA alone where applicable. The primary outcome of bimodal benefit, defined as the difference between CIHA and CI alone, was statistically significant for speech intelligibility in quiet as well as in noise across the tested spatial conditions. Beyond intelligibility, a reduction in listening effort was found at the highest tested signal-to-noise ratio. Moreover, bimodal listening was rated as sounding more voluminous, less tinny, and less unpleasant than CI alone. Listening effort and sound quality emerged as feasible and relevant measures for demonstrating bimodal benefit across a clinically representative range of bimodal users. These extended dimensions of speech perception can shed more light on the array of benefits gained from complementing a CI with a contralateral HA.
Source: PMC (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5604840) | DOI (http://dx.doi.org/10.1177/2331216517727900)
J Speech Lang Hear Res
January 2025
School of Humanities, Shenzhen University, China.
Purpose: This study investigated the influence of vowel quality on loudness perception and stress judgment in Mongolian, an agglutinative language with free word stress. We aimed to explore the effects of intrinsic vowel features, presentation order, and intensity conditions on loudness perception and stress assignment.
Method: Eight Mongolian short vowel phonemes (/ɐ/, /ə/, /i/, /ɪ/, /ɔ/, /o/, /ʊ/, and /u/) were recorded by a native Mongolian speaker of the Urad subdialect (the Chahar dialect group) in Inner Mongolia.
J Speech Lang Hear Res
January 2025
Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China.
Purpose: Neurotypical individuals show a robust "global precedence effect" (GPE) when processing hierarchically structured visual information. However, the auditory domain remains understudied. The current research fills this knowledge gap on auditory global-local processing across the broader autism phenotype in a tonal-language context.
Codas
January 2025
Departamento de Saúde Interdisciplinaridade e Reabilitação, Faculdade de Ciências Médicas, Universidade Estadual de Campinas - UNICAMP - Campinas (SP), Brasil.
Purpose: To verify possible correlations between fundamental frequency (fo) and voice satisfaction among Brazilian transgender people.
Methods: An observational, cross-sectional quantitative study was conducted with the Trans Woman Voice Questionnaire (TWVQ), voice recording (sustained vowel and automatic speech) and extraction of seven acoustic measurements related to fo position and variability in transgender people. Participants were divided into two groups according to gender.
JASA Express Lett
January 2025
Department of Otolaryngology-Head and Neck Surgery, University of Washington, Seattle, Washington 98103, USA.
Pitch perception affects children's ability to perceive speech, appreciate music, and learn in noisy environments, such as their classrooms. Here, we investigated pitch perception for pure tones as well as for resolved and unresolved complex tones with a fundamental frequency of 400 Hz in 8- to 11-year-old children and adults. Pitch perception in children, as in adults, was better for resolved than for unresolved complex tones.
Sci Rep
January 2025
RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Forskningsveien 3A, Oslo, 0373, Norway.
Periodic sensory inputs entrain oscillatory brain activity, reflecting a neural mechanism that may be fundamental to temporal prediction and perception. However, most environmental rhythms and patterns in human behavior, such as walking, dancing, and speech, are not strictly isochronous but quasi-periodic. Research has shown that neural tracking of speech is driven by modulations of the amplitude envelope, especially via sharp acoustic edges, which serve as prominent temporal landmarks.