Purpose: For patients with single-sided deafness (SSD), choosing between bone conduction devices (BCDs) and contralateral routing of signal (CROS) hearing aids is challenging due to mixed evidence on their benefits, and the lack of clear guidelines complicates clinical decision making. This study explores whether realistic spatial listening measures can reveal a clinically valid benefit and whether the optimal choice varies among patients. By assessing listening effort through objective and subjective measures, this research evaluates the efficacy of BCD and CROS, seeking to provide evidence-based recommendations anchored in the effectiveness of these devices in real-world scenarios.
Method: Thirteen participants with SSD performed the Hearing-in-Noise Test while using a BCD, CROS hearing aids, and no hearing device (unaided). Subjective listening effort was assessed with the National Aeronautics and Space Administration Task Load Index (NASA-TLX) questionnaire after each testing block. An objective measure of listening effort was obtained by recording peak pupil dilation (PPD) during the task with eye-tracking glasses.
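The abstract does not specify how PPD was computed; a minimal sketch of a common approach, baseline-correcting each trial's pupil trace and taking the maximum dilation within an analysis window (hypothetical function name, baseline, and window choices), is shown below.

```python
import numpy as np

def peak_pupil_dilation(trace, t, baseline=(-1.0, 0.0), window=(0.0, 3.0)):
    """Baseline-corrected peak pupil dilation for one trial (a sketch,
    not the study's exact pipeline).

    trace : pupil diameter samples for one trial
    t     : time stamps (s) relative to sentence onset
    baseline, window : intervals (s) for baseline and analysis
    """
    base = np.mean(trace[(t >= baseline[0]) & (t < baseline[1])])
    corrected = trace[(t >= window[0]) & (t <= window[1])] - base
    return np.max(corrected)

# Hypothetical usage: mean PPD across trials of one device condition
# ppd = np.mean([peak_pupil_dilation(tr, t) for tr in traces])
```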
Results: No significant difference in either PPD or NASA-TLX scores was observed among the three device conditions (BCD, CROS, and unaided). However, a trend toward reduced PPD was noted in the BCD and CROS conditions. The lack of significance in the pupillometry results does not stem from technical issues: the findings confirm that pupillometry was sensitive to task difficulty and validate its use for assessing listening effort.
Conclusions: Although the results of the present study cannot significantly differentiate the hearing devices, we observed a trend toward reduced listening effort when using a hearing device. Future investigations should aim to optimize metrics of listening effort, perhaps making them clinically useful on an individual level.
DOI: http://dx.doi.org/10.1044/2024_AJA-24-00073
Proc Natl Acad Sci U S A
January 2025
Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA 15213.
The auditory system is unique among sensory systems in its ability to phase lock to and precisely follow very fast cycle-by-cycle fluctuations in the phase of sound-driven cochlear vibrations. Yet, the perceptual role of this temporal fine structure (TFS) code is debated. This fundamental gap is attributable to our inability to experimentally manipulate TFS cues without altering other perceptually relevant cues.
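The conventional way to isolate TFS for such manipulations is the Hilbert decomposition of a band-limited signal into an envelope and a unit-amplitude carrier; the sketch below illustrates that standard approach (not the authors' new method) whose side effects on other cues motivate the gap described above.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_and_tfs(band_signal):
    """Split a band-limited signal into its Hilbert envelope and temporal
    fine structure (TFS). A conventional decomposition; as noted above,
    manipulating TFS this way also alters other perceptually relevant cues."""
    analytic = hilbert(band_signal)          # analytic signal
    envelope = np.abs(analytic)              # slow amplitude fluctuations
    tfs = np.cos(np.angle(analytic))         # unit-amplitude fine-structure carrier
    return envelope, tfs
```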
J Neurosci Methods
December 2024
Politecnico di Milano, Piazza Leonardo da Vinci, 32, Milan, 20133, Italy.
Background: Acoustic challenges impose demands on cognitive resources, a load known as listening effort (LE), which can substantially influence speech perception and communication. Standardized assessment protocols for monitoring LE are lacking, hindering the development of adaptive hearing assistive technology.
New Method: We employed an adaptive protocol, including a speech-in-noise test and personalized definition of task demand, to assess LE and its physiological correlates.
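The adaptive protocol is not detailed in the abstract; a minimal sketch of a generic one-up/one-down SNR staircase, which converges on a listener-specific 50% intelligibility point that could serve as a personalized task demand (hypothetical callback and step size), is given below.

```python
def adaptive_snr_track(score_trial, start_snr=0.0, step=2.0, n_trials=20):
    """Generic 1-up/1-down adaptive SNR track for a speech-in-noise test
    (a sketch, not the study's protocol).

    score_trial(snr) -> True if the sentence was repeated correctly at
    that SNR (hypothetical callback supplied by the test software).
    """
    snr, track = start_snr, []
    for _ in range(n_trials):
        correct = score_trial(snr)
        track.append((snr, correct))
        snr += -step if correct else step   # harder after a hit, easier after a miss
    return track
```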
Sci Rep
December 2024
Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, 15260, USA.
Multi-talker speech intelligibility requires successful separation of the target speech from background speech. Successful speech segregation relies on bottom-up neural coding fidelity of sensory information and top-down effortful listening. Here, we studied the interaction between temporal processing measured using Envelope Following Responses (EFRs) to amplitude modulated tones, and pupil-indexed listening effort, as it related to performance on the Quick Speech-in-Noise (QuickSIN) test in normal-hearing adults.
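EFR strength is often summarized as the spectral magnitude of the averaged response at the stimulus modulation frequency; a minimal sketch under that assumption (hypothetical sampling rate and modulation frequency supplied by the caller) follows.

```python
import numpy as np

def efr_amplitude(avg_response, fs, mod_freq):
    """Spectral amplitude of a trial-averaged response at the amplitude-
    modulation frequency, a common EFR summary (a sketch, not the study's
    exact analysis).

    avg_response : trial-averaged EEG epoch (1-D array)
    fs           : sampling rate (Hz)
    mod_freq     : stimulus modulation frequency (Hz)
    """
    spectrum = np.fft.rfft(avg_response)
    freqs = np.fft.rfftfreq(len(avg_response), d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - mod_freq))       # nearest FFT bin
    return 2.0 * np.abs(spectrum[idx]) / len(avg_response)
```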
Ear Hear
December 2024
Laboratorio de Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain.
Objectives: We compared sound quality and performance for a conventional cochlear-implant (CI) audio processing strategy based on the short-time fast Fourier transform (Crystalis) and an experimental strategy based on spectral feature extraction (SFE). In the latter, the most salient spectral features (acoustic events) were extracted and mapped onto the CI stimulation electrodes. We hypothesized that (1) SFE would be superior to Crystalis because it can encode acoustic spectral features without the constraints imposed by the short-time fast Fourier transform bin width, and (2) the potential benefit of SFE would be greater for CI users who have fewer neural cross-channel interactions.
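The abstract does not describe the SFE algorithm itself; the sketch below only illustrates the general idea of picking the most salient spectral peaks in a short-time frame (hypothetical frame handling and number of events), which would then be mapped to electrodes.

```python
import numpy as np
from scipy.signal import find_peaks

def salient_peaks(frame, fs, n_events=8):
    """Select the most salient spectral peaks in one short-time frame
    (an illustration of the general idea, not the Crystalis or SFE
    implementation).
    """
    mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    peaks, _ = find_peaks(mag)                       # local spectral maxima
    top = peaks[np.argsort(mag[peaks])[-n_events:]]  # keep the largest n_events
    return freqs[top], mag[top]
```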
Eur Arch Otorhinolaryngol
December 2024
Department of Audiology, All India Institute of Speech and Hearing, Mysuru, Karnataka, 570006, India.
Purpose: To compare listening effort, assessed with an objective test (dual-task paradigm), a parent report using the abbreviated version of the Speech, Spatial and Quality questionnaire (SSQ-P10), and the Teachers' Evaluation of Aural/Oral Performance of Children and Ease of Listening (TEACH), as well as working memory and attention span, between children using cochlear implants (CI) and age-matched peers with normal hearing sensitivity, and to assess the relationship between listening effort and real-life benefit in children using CI.
Method: Group I included 25 children with normal hearing sensitivity. Group II included 25 children with bimodal cochlear implantation with bilateral severe to profound hearing loss.
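In the dual-task paradigm referenced above, listening effort is inferred from the drop in secondary-task performance when it is performed together with the listening task; a minimal sketch of that computation (hypothetical accuracy values) is shown below.

```python
def dual_task_cost(secondary_alone, secondary_dual):
    """Proportional dual-task cost: the decline in secondary-task
    performance when paired with the listening task. A larger cost is
    interpreted as greater listening effort (a sketch of the general
    metric, not this study's scoring)."""
    return (secondary_alone - secondary_dual) / secondary_alone

# Hypothetical example: 95% accuracy alone vs. 78% in the dual-task block
# dual_task_cost(0.95, 0.78)   # ~0.18
```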