Objective: Pediatric cochlear implant (CI) recipients with unilateral hearing loss (UHL) and functional low-frequency acoustic hearing in the implanted ear can be fit with an electric-acoustic stimulation (EAS) device, which combines acoustic amplification and CI technology in a single device. Outcomes for this unique patient population are currently unknown. The present study assessed the speech recognition of pediatric EAS users with UHL.
Study Design: Retrospective review.
Setting: Tertiary academic referral center.
Patients: Pediatric CI recipients with functional acoustic hearing in the implanted ear (i.e., ≤ 80 dB HL) and a contralateral pure-tone average (0.5, 1, 2, and 4 kHz) ≤ 25 dB HL.
Main Outcome Measures: Speech recognition was assessed with the consonant-nucleus-consonant (CNC) test for the affected ear preoperatively and at 6 and 12 months postactivation. Masked speech recognition was assessed with the Bamford-Kowal-Bench speech-in-noise test in the bilateral condition for three spatial configurations: target from the front and masker colocated with the target or presented 90° toward the implanted or contralateral ear.
Results: Children showed a significant improvement in CNC scores with EAS compared with their preoperative performance with a hearing aid (F(2,7) = 10.0, p = 0.009; see the analysis sketch below). Preliminary masked sentence recognition data suggest better performance when the target was spatially separated from the masker, and a benefit with EAS compared with listening unaided.
Conclusions: Children with UHL and functional acoustic hearing in the implanted ear experience better speech recognition with EAS as compared to preoperative abilities or listening unaided.
DOI: http://dx.doi.org/10.1097/MAO.0000000000004460
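For readers curious how the group-level effect above is typically computed, the following is a minimal sketch (not the authors' analysis code) of a one-way repeated-measures ANOVA across the three test intervals. The subject count, scores, and column names are hypothetical, for illustration only.

```python
# Minimal sketch of a repeated-measures ANOVA across three test intervals.
# All data below are hypothetical percent-correct CNC scores.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

data = pd.DataFrame({
    "subject":  [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "interval": ["pre", "6mo", "12mo"] * 4,
    "cnc":      [12, 40, 52, 8, 35, 48, 20, 44, 60, 16, 38, 55],
})

# One within-subject factor (test interval) with three levels.
result = AnovaRM(data, depvar="cnc", subject="subject",
                 within=["interval"]).fit()
print(result)  # prints the F statistic, degrees of freedom, and p value
```

With real scores in place of the hypothetical ones, the same call yields an F(df1, df2) and p value of the form reported in the Results.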
J Med Internet Res
March 2025
Westmead Applied Research Centre, Faculty of Medicine and Health, The University of Sydney, Westmead, Australia.
Background: Conversational artificial intelligence (AI) enables engaging interactions; however, its acceptability, and the barriers and enablers to using it to support patients with atrial fibrillation (AF), are unknown.
Objective: This work stems from the Coordinating Health care with AI-supported Technology for patients with AF (CHAT-AF) trial and aims to explore patient perspectives on receiving support from a conversational AI program.
Methods: Patients with AF who were recruited for a randomized controlled trial and received the intervention were approached for semistructured interviews, selected by purposive sampling.
IEEE Trans Vis Comput Graph
March 2025
Trust in agents within Virtual Reality is becoming increasingly important, as they provide advice and influence people's decision-making. However, previous studies show that encountering speech recognition errors can reduce users' trust in agents. Such errors lead users to ignore the agent's advice and make suboptimal decisions.
Brain Inj
March 2025
Interdisciplinary Health Sciences & Sociology, Oakland University, Rochester, Minnesota, USA.
Objective: To synthesize requirements and recommendations addressing sport-related concussion (SRC).
Design: Qualitative study.
Setting: Scholastic and non-scholastic athletic programs.
Sci Rep
March 2025
Basque Center on Cognition, Brain and Language, Paseo Mikeletegi 69, Donostia-San Sebastián, 20009, Spain.
Learning to read affects speech perception. For example, the ability of listeners to recognize consistently spelled words faster than inconsistently spelled words is a robust finding called the Orthographic Consistency Effect (OCE). Previous studies located the OCE at the rime level and focused on languages with opaque orthographies.
Cochlear Implants Int
March 2025
Department of Speech Language Pathology & Audiology, Towson University, Towson, MD, USA.
Objective: To determine how the presentation of unprocessed speech, either ipsilaterally (simulating electro-acoustic stimulation, EAS) or contralaterally (simulating bimodal stimulation), alongside vocoder-processed speech affects the efficiency of spoken word processing.
Method: Gated word recognition was performed under four listening conditions: full-spectrum speech, vocoder-processed speech, electro-acoustic stimulation (EAS), and bimodal stimulation. In the EAS condition, low-frequency unprocessed speech and high-frequency vocoder-processed speech were presented to the same ear, while in the bimodal condition, full-spectrum speech was presented to one ear and vocoder-processed speech to the other.
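The EAS simulation described here lends itself to a short illustration. Below is a minimal sketch in Python, assuming a sine-carrier vocoder with eight log-spaced channels; the study's actual vocoder type, channel count, filter design, and crossover frequency are not stated in this excerpt, so every parameter here is an assumption.

```python
# Minimal sketch of an EAS simulation: low-frequency unprocessed speech
# mixed with vocoder-processed speech in the same channel (same ear).
# Parameters (8 channels, 4th-order Butterworth filters, 500 Hz low-pass,
# 1-8 kHz vocoded region) are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def vocode(x, fs, n_channels=8, lo=1000.0, hi=8000.0):
    """Replace temporal fine structure with sine carriers in log-spaced bands.

    x: mono waveform as a float array; fs: sampling rate in Hz.
    """
    edges = np.geomspace(lo, hi, n_channels + 1)
    t = np.arange(len(x)) / fs
    out = np.zeros_like(x)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        env = np.abs(hilbert(sosfilt(sos, x)))                  # band envelope
        out += env * np.sin(2 * np.pi * np.sqrt(f1 * f2) * t)   # carrier at band center
    return out

def eas_simulation(x, fs, crossover=500.0):
    """EAS condition: low-pass unprocessed speech plus vocoded speech, one ear."""
    sos = butter(4, crossover, btype="lowpass", fs=fs, output="sos")
    return sosfilt(sos, x) + vocode(x, fs)

# In the bimodal condition, full-spectrum speech would instead be routed to
# one ear and vocode(x, fs) to the other, rather than mixed into one signal.
```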