Introduction: Cochlear implant (CI) programming is based on measuring both the minimum levels required to stimulate the auditory nerve and the maximum levels that produce loud yet comfortable sounds. As a guide to the adequacy of this programming, cortical auditory evoked potentials (CAEP) have been gaining ground as an important tool in the evaluation of CI users, providing information on the central auditory system.

Objective: To evaluate the influence of misadjusted electrical stimulation levels in speech processor programming on hearing thresholds, speech recognition, and cortical auditory evoked potentials in adult CI users.

Material And Methods: This was a prospective cross-sectional study of adult unilateral CI users of both sexes, aged at least 18 years, with post-lingual deafness and at least 12 months of device use. Selected subjects were required to have average free-field hearing thresholds with the cochlear implant equal to or better than 34 dB HL and monosyllable recognition greater than 0%. Individuals who could not cooperate with the procedures or who had no CAEP recordings were excluded. Participants were programmed routinely, and the resulting map was named MO (optimized original map). Three deliberately mis-set maps were then created: the optimized original map with 10 current units below the maximum comfort level (C), named MC- (map minus C); the optimized original map with 10 current units below the minimum threshold level (T), named MT- (map minus T); and the optimized original map with 10 current units above the minimum threshold level (T), named MT+ (map plus T). With each program, participants underwent free-field auditory threshold testing from 250 Hz to 6000 Hz, recorded sentence and monosyllable recognition tests presented at 65 dB SPL in quiet and in noise, and free-field CAEP evaluation. All tests were performed in an acoustically treated booth, with the maps presented in randomized order. Data were compared with the Wilcoxon test.
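The three mis-set maps are simple per-electrode offsets of the optimized map. The following is a hypothetical sketch of that relationship; the dictionary-based map structure, electrode numbering, and level values are illustrative, not the format of any clinical fitting software:

```python
# Hypothetical model of the experimental maps: a CI map is represented as
# per-electrode minimum threshold (T) and maximum comfort (C) stimulation
# levels in current units (CU). Structure and values are illustrative only.

def make_experimental_maps(mo):
    """Derive the three deliberately mis-set maps from the optimized map MO."""
    return {
        "MO":  mo,
        "MC-": {e: {"T": lv["T"], "C": lv["C"] - 10} for e, lv in mo.items()},
        "MT-": {e: {"T": lv["T"] - 10, "C": lv["C"]} for e, lv in mo.items()},
        "MT+": {e: {"T": lv["T"] + 10, "C": lv["C"]} for e, lv in mo.items()},
    }

# Example: a 4-electrode optimized map with arbitrary levels
mo = {e: {"T": 100 + e, "C": 180 + e} for e in range(1, 5)}
maps = make_experimental_maps(mo)
print(maps["MC-"][1])  # {'T': 101, 'C': 171}
```

Note that only one parameter is shifted per map, which is what lets the study attribute each performance change to the C level or the T level alone.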

Results: Thirty individuals were selected and signed an informed consent form. The MC- map worsened all free-field thresholds and speech recognition in quiet and in noise, and delayed the P1 wave latency, with significant differences from the results obtained with the MO map. The MT- map worsened hearing thresholds and reduced the P2 wave latency with statistical significance; the MT+ map improved free-field thresholds except at 6000 Hz and worsened speech recognition, without statistical significance.
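Because every participant was tested with every map, the comparisons above are paired, which is what the Wilcoxon signed-rank test handles. A minimal sketch follows; the scores are fabricated for illustration and are not the study's data:

```python
# Paired within-subject comparison with the Wilcoxon signed-rank test.
# Scores are fabricated monosyllable percentages for the same ten listeners
# under the MO (optimized) and MC- (reduced comfort level) maps.
from scipy.stats import wilcoxon

mo_scores  = [72, 64, 80, 56, 68, 76, 60, 84, 70, 66]
mcm_scores = [60, 50, 72, 40, 58, 64, 52, 70, 62, 54]

stat, p = wilcoxon(mo_scores, mcm_scores)  # H0: no systematic paired difference
print(f"W = {stat}, p = {p:.4f}")
```

In this fabricated example every listener scores lower with MC-, so the signed-rank statistic is 0 and the test rejects the null hypothesis at the conventional 0.05 level, mirroring the direction of the significant MC- effect reported above.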

Conclusions: The results suggest that maximum levels below the optimal settings lead to worse cochlear implant performance on both hearing thresholds and speech recognition tests in quiet and in noise, and increase the latency of the CAEP P1 component. Manipulation of the minimum threshold levels, on the other hand, altered audibility without a significant impact on speech recognition.

DOI: http://dx.doi.org/10.1016/j.heares.2021.108206


Similar Publications

Objective: Measuring listening effort using pupillometry is challenging in cochlear implant (CI) users. We assess three validated speech tests (Matrix, LIST, and DIN) to identify the optimal speech material for measuring peak-pupil-dilation (PPD) in CI users as a function of signal-to-noise ratio (SNR).

Design: Speech tests were administered in quiet and two noisy conditions, namely at the speech recognition threshold (0 dB re SRT), i.

Tibetan-Chinese speech-to-speech translation based on discrete units.

Sci Rep

January 2025

Key Laboratory of Ethnic Language Intelligent Analysis and Security Governance of MOE, Minzu University of China, Beijing, 100081, China.

Speech-to-speech translation (S2ST) has evolved from cascade systems which integrate Automatic Speech Recognition (ASR), Machine Translation (MT), and Text-to-Speech (TTS), to end-to-end models. This evolution has been driven by advancements in model performance and the expansion of cross-lingual speech datasets. Despite the paucity of research on Tibetan speech translation, this paper endeavors to tackle the challenge of Tibetan-to-Chinese direct speech-to-speech translation within the multi-task learning framework, employing self-supervised learning (SSL) and sequence-to-sequence model training.

Some Challenging Questions About Outcomes in Children With Cochlear Implants.

Perspect ASHA Spec Interest Groups

December 2024

DeVault Otologic Research Laboratory, Department of Otolaryngology-Head and Neck Surgery, Indiana University School of Medicine, Indianapolis.

Purpose: Cochlear implants (CIs) have improved the quality of life for many children with severe-to-profound sensorineural hearing loss. Despite the reported CI benefits of improved speech recognition, speech intelligibility, and spoken language processing, large individual differences in speech and language outcomes are still consistently reported in the literature. The enormous variability in CI outcomes has made it challenging to predict which children may be at high risk for limited benefits and how potential risk factors can be improved with interventions.

Introduction: It is still under debate whether and how semantic content modulates emotional prosody perception in children with autism spectrum disorder (ASD). The current study investigated this issue in two experiments by systematically manipulating semantic information in Chinese disyllabic words.

Method: The present study explored the potential modulation of semantic content complexity on emotional prosody perception in Mandarin-speaking children with ASD.

Artificial intelligence (AI) scribe applications in the healthcare community are in the early adoption phase and offer unprecedented efficiency for medical documentation. They typically use an application programming interface to a large language model (LLM), for example Generative Pre-trained Transformer 4 (GPT-4). They apply automatic speech recognition to the physician-patient interaction and generate a full medical note for the encounter, together with a draft follow-up e-mail for the patient and, often, recommendations, all within seconds or minutes.
