Speech recognition scores were analyzed in 34 carriers of a DFNA5 mutation. Cross-sectional linear regression analysis (at the last visit, of the maximum recognition score in % correct on age or on PTA1,2,4 kHz) placed the onset age (score of 90%) at 16 years and the onset PTA1,2,4 kHz level (score of 90%) at 41 dB hearing level. The deterioration rate was 0.7%/y in the plot of maximum score against age, whereas the deterioration gradient was 0.4%/dB in the plot of maximum score against PTA1,2,4 kHz. Given the previously demonstrated rapid progression of hearing impairment, speech recognition remained relatively good: at age 70, the score was still above 50%.
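The analysis described above amounts to an ordinary least-squares fit of score against age, with the onset age read off where the fitted line crosses 90% and the deterioration rate given by the (negated) slope. A minimal sketch of that procedure, using synthetic data chosen to mimic the reported values (0.7%/y, onset at 16 years), not the study's actual measurements:

```python
import numpy as np

# Synthetic illustration only: 34 simulated carriers whose maximum
# recognition score declines linearly with age from the reported onset.
rng = np.random.default_rng(0)
age = rng.uniform(16, 80, 34)                 # 34 carriers, as in the study
onset_true, rate_true = 16.0, 0.7             # assumed: onset 16 y, 0.7 %/y
score = 90.0 - rate_true * (age - onset_true) + rng.normal(0, 2, age.size)

# Cross-sectional linear regression: score = intercept + slope * age
slope, intercept = np.polyfit(age, score, 1)

deterioration_rate = -slope                   # % per year of decline
onset_age = (90.0 - intercept) / slope        # age at which the fit crosses 90%
```

The same template applies to the second regression in the abstract (score on PTA1,2,4 kHz), yielding the deterioration gradient in %/dB and the onset level where the fit crosses 90%.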

Source: http://dx.doi.org/10.1177/000348940211100712

Similar Publications

Objectives: This study examined the relationships between electrophysiological measures of the electrically evoked auditory brainstem response (EABR) with speech perception measured in quiet after cochlear implantation (CI) to identify the ability of EABR to predict postoperative CI outcomes.

Methods: Thirty-four patients with congenital prelingual hearing loss, implanted with the same manufacturer's CI, were recruited. In each participant, the EABR was evoked at apical, middle, and basal electrode locations.

Background: Continuous speech analysis is considered an efficient and convenient approach for early detection of Alzheimer's Disease (AD). However, the traditional approach generally requires human transcribers to transcribe audio data accurately. This study applied automatic speech recognition (ASR) in conjunction with natural language processing (NLP) techniques to automatically extract linguistic features from Chinese speech data.

Background: There is growing evidence that discourse (i.e., connected speech) could serve as a cost-effective and ecologically valid means of identifying individuals with prodromal Alzheimer's disease.

Emerging Wearable Acoustic Sensing Technologies.

Adv Sci (Weinh)

January 2025

Key Laboratory of Optoelectronic Technology & Systems of Ministry of Education, International R&D Center of Micro-Nano Systems and New Materials Technology, Chongqing University, Chongqing, 400044, China.

Sound signals not only serve as the primary communication medium but also find application in fields such as medical diagnosis and fault detection. With public healthcare resources increasingly under pressure and disabled individuals facing daily challenges, solutions that facilitate low-cost private healthcare hold considerable promise. Acoustic methods have been widely studied because of their lower technical complexity compared to other medical solutions, as well as the high safety threshold of the human body to acoustic energy.

Stress classification with in-ear heartbeat sounds.

Comput Biol Med

December 2024

École de technologie supérieure, 1100 Notre-Dame St W, Montreal, H3C 1K3, Quebec, Canada; Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), 527 Rue Sherbrooke O #8, Montréal, QC H3A 1E3, Canada.

Background: Although stress plays a key role in tinnitus and decreased sound tolerance, conventional hearing devices used to manage these conditions are not currently capable of monitoring the wearer's stress level. The aim of this study was to assess the feasibility of stress monitoring with an in-ear device.

Method: In-ear heartbeat sounds and clinical-grade electrocardiography (ECG) signals were simultaneously recorded while 30 healthy young adults underwent a stress protocol.