Technology options for children with limited hearing unilaterally that improve the signal-to-noise ratio are expected to improve speech recognition and also reduce listening effort in challenging listening situations, although previous studies have not confirmed this. Employing behavioral and subjective indices of listening effort, this study aimed to evaluate the effects of two intervention options, a remote microphone system (RMS) and a contralateral routing of signal (CROS) system, in school-aged children with limited hearing unilaterally. Nineteen children (aged 7-12 years) with limited hearing unilaterally completed a digit triplet recognition task in three loudspeaker conditions (midline, monaural direct, and monaural indirect) and three intervention options (unaided, RMS, and CROS system). Verbal response times were interpreted as a behavioral measure of listening effort. Participants provided subjective ratings immediately following the behavioral measures. The RMS significantly improved digit triplet recognition across loudspeaker conditions and reduced verbal response times in the midline and indirect conditions. The CROS system improved speech recognition and listening effort only in the indirect condition. Analyses of subjective ratings revealed that significantly more participants indicated that the remote microphone made it easier for them to listen and to stay motivated. Behavioral and subjective indices of listening effort indicated that an RMS provided the most consistent benefit for speech recognition and listening effort for children with limited hearing unilaterally. RMSs could therefore be a beneficial technology option in classrooms for children with limited hearing unilaterally.
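Both interventions above work by raising the signal-to-noise ratio (SNR) at the child's hearing ear. As a minimal sketch of what that quantity means, SNR in decibels compares the mean power of the target signal to that of the background noise; the waveforms and amplitudes below are illustrative stand-ins, not the study's stimuli.

```python
import numpy as np

def snr_db(signal: np.ndarray, noise: np.ndarray) -> float:
    """Signal-to-noise ratio in dB from mean signal power vs. mean noise power."""
    p_signal = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    return 10.0 * np.log10(p_signal / p_noise)

rng = np.random.default_rng(0)
speech = rng.normal(0, 1.0, 16000)  # stand-in for one second of target speech
babble = rng.normal(0, 0.5, 16000)  # stand-in for background noise at half amplitude

# Halving the noise amplitude quarters its power, i.e. roughly a +6 dB SNR gain --
# the same kind of improvement a remote microphone achieves by shortening the
# talker-to-microphone distance.
print(f"{snr_db(speech, babble):.1f} dB")
```

A remote microphone improves this ratio at the source (the teacher's voice is picked up close to the mouth), whereas a CROS system only reroutes sound that has already mixed with classroom noise, which is consistent with the RMS showing the more robust benefit here.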
Download full-text PDF:
- PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7903353
- DOI: http://dx.doi.org/10.1177/2331216520984700
Proc Natl Acad Sci U S A
January 2025
Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA 15213.
The auditory system is unique among sensory systems in its ability to phase lock to and precisely follow very fast cycle-by-cycle fluctuations in the phase of sound-driven cochlear vibrations. Yet, the perceptual role of this temporal fine structure (TFS) code is debated. This fundamental gap is attributable to our inability to experimentally manipulate TFS cues without altering other perceptually relevant cues.
J Neurosci Methods
December 2024
Politecnico di Milano, Piazza Leonardo da Vinci, 32, Milan, 20133, Italy.
Background: Acoustic challenges impose demands on cognitive resources, known as listening effort (LE), which can substantially influence speech perception and communication. Standardized assessment protocols for monitoring LE are lacking, hindering the development of adaptive hearing assistive technology.
New Method: We employed an adaptive protocol, including a speech-in-noise test and personalized definition of task demand, to assess LE and its physiological correlates.
Sci Rep
December 2024
Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, 15260, USA.
Multi-talker speech intelligibility requires successful separation of the target speech from background speech. Successful speech segregation relies on bottom-up neural coding fidelity of sensory information and top-down effortful listening. Here, we studied the interaction between temporal processing measured using Envelope Following Responses (EFRs) to amplitude modulated tones, and pupil-indexed listening effort, as it related to performance on the Quick Speech-in-Noise (QuickSIN) test in normal-hearing adults.
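Envelope Following Responses of the kind described above are evoked by amplitude-modulated tones, where the neural response tracks the periodic envelope of the stimulus. Below is a minimal sketch of synthesizing such a stimulus; the carrier frequency, modulation rate, and duration are illustrative choices, not the study's actual parameters.

```python
import numpy as np

fs = 44100             # sample rate in Hz (illustrative)
dur = 0.5              # stimulus duration in seconds
fc, fm = 1000.0, 40.0  # carrier and modulation frequencies in Hz (illustrative)
m = 1.0                # modulation depth (100% modulation)

t = np.arange(int(fs * dur)) / fs
carrier = np.sin(2 * np.pi * fc * t)
envelope = 1.0 + m * np.sin(2 * np.pi * fm * t)

# Scale by 1/(1+m) so the waveform stays within [-1, 1].
am_tone = envelope * carrier / (1.0 + m)
```

In the spectrum, such a stimulus has energy at the carrier frequency plus sidebands at fc ± fm; the EFR reflects phase locking to the fm-periodic envelope.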
Ear Hear
December 2024
Laboratorio de Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain.
Objectives: We compared sound quality and performance for a conventional cochlear-implant (CI) audio processing strategy based on the short-time fast-Fourier transform (Crystalis) and an experimental strategy based on spectral feature extraction (SFE). In the latter, the more salient spectral features (acoustic events) were extracted and mapped into the CI stimulation electrodes. We hypothesized that (1) SFE would be superior to Crystalis because it can encode acoustic spectral features without the constraints imposed by the short-time fast-Fourier transform bin width, and (2) the potential benefit of SFE would be greater for CI users who have fewer neural cross-channel interactions.
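The bin-width constraint mentioned above follows directly from FFT arithmetic: an N-point FFT at sample rate fs resolves frequencies only on a grid of width fs/N, so a spectral peak falling between bins is smeared across neighbors. A minimal sketch of this tradeoff, with illustrative parameters:

```python
fs = 16000  # illustrative sample rate in Hz

# Frequency resolution of an N-point FFT is fs / N: longer windows give
# finer frequency bins, at the cost of coarser time resolution.
for n_fft in (128, 256, 512):
    delta_f = fs / n_fft
    print(f"N = {n_fft:4d}: bin width = {delta_f:6.2f} Hz")
```

A feature-extraction strategy that estimates peak frequencies directly, as the SFE strategy is described to do, is not tied to this grid, which is the basis of the authors' first hypothesis.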
Eur Arch Otorhinolaryngol
December 2024
Department of Audiology, All India Institute of Speech and Hearing, Mysuru, Karnataka, 570006, India.
Purpose: To compare listening effort between children using cochlear implants (CI) and age-matched peers with normal hearing sensitivity, using an objective test (dual-task paradigm), a parent report via the abbreviated Speech, Spatial and Quality questionnaire (SSQ-P10), and the Teachers' Evaluation of Aural/Oral Performance of Children and Ease of Listening (TEACH); to compare working memory and attention span between the groups; and to assess the relationship between listening effort and real-life benefit in children using CI.
Method: Group I included 25 children with normal hearing sensitivity. Group II included 25 children with bimodal cochlear implantation with bilateral severe to profound hearing loss.