Associations between speech recognition at high levels, the middle ear muscle reflex and noise exposure in individuals with normal audiograms.

Hear Res

Heuser Hearing Research Center, 117 E Kentucky St., Louisville, KY, 40203, USA; Department of Otolaryngology and Communicative Disorders, University of Louisville, 529 S. Jackson Street, Third Floor, Louisville, KY, 40202, USA; Department of Psychological and Brain Sciences, University of Louisville, 317 Life Sciences Building, Louisville, KY, 40292, USA.

Published: July 2020

It has been hypothesized that noise-induced cochlear synaptopathy in humans may result in functional deficits such as a weakened middle ear muscle reflex (MEMR) and degraded speech perception in complex environments. Although relationships between noise-induced synaptic loss and the MEMR have been demonstrated in animals, effects of noise exposure on the MEMR have not been observed in humans. The hypothesized relationship between noise exposure and speech perception has also been difficult to demonstrate conclusively. Given that the MEMR is engaged at high sound levels, relationships between speech recognition in complex listening environments and noise exposure might be more evident at high speech presentation levels. In this exploratory study with 41 audiometrically normal listeners, a combination of behavioral and physiologic measures thought to be sensitive to synaptopathy was used to determine potential links with speech recognition at high presentation levels. We found that speech recognition decreased as a function of presentation level (from 74 to 104 dBA), and that this decline was associated with reduced MEMR magnitude. We also found that reduced MEMR magnitude was associated with higher estimated lifetime noise exposure. Together, these results suggest that the MEMR may be sensitive to noise-induced synaptopathy in humans, and that this may underlie functional speech recognition deficits at high sound levels.
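As a rough illustration of the kind of analysis reported here, the sketch below fits a per-listener slope of recognition score against presentation level and correlates it with MEMR magnitude. This is not the authors' code: the data are synthetic and the effect sizes are assumptions; only the general shape of the analysis is meant to match the study.

```python
# Illustrative sketch only (not the authors' analysis): synthetic data stand
# in for measured speech recognition scores and MEMR magnitudes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 41                                        # listeners, as in the study
levels = np.array([74.0, 84.0, 94.0, 104.0])  # presentation levels (dBA)

memr = rng.normal(1.0, 0.3, n)                # hypothetical MEMR magnitudes
# Hypothetical scores: recognition declines with level, and more steeply
# when MEMR magnitude is low (mirroring the reported association).
scores = np.clip(
    95 - 0.4 * (levels[None, :] - 74) * (1.5 - memr[:, None])
    + rng.normal(0, 3, (n, len(levels))),
    0, 100)

# Per-listener slope of score vs. level (negative = decline at high levels).
slopes = np.polyfit(levels, scores.T, 1)[0]

# Association between the level-dependent decline and MEMR magnitude.
r, p = stats.pearsonr(slopes, memr)
print(f"slope vs. MEMR magnitude: r = {r:.2f}, p = {p:.3g}")
```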


Source: http://dx.doi.org/10.1016/j.heares.2020.107982

Publication Analysis

Top Keywords

speech recognition (20); noise exposure (20); recognition high (8); middle ear (8); ear muscle (8); muscle reflex (8); synaptopathy humans (8); speech perception (8); exposure memr (8); high sound (8)

Similar Publications

Objective: Measuring listening effort using pupillometry is challenging in cochlear implant (CI) users. We assess three validated speech tests (Matrix, LIST, and DIN) to identify the optimal speech material for measuring peak-pupil-dilation (PPD) in CI users as a function of signal-to-noise ratio (SNR).

Design: Speech tests were administered in quiet and two noisy conditions, namely at the speech recognition threshold (0 dB re SRT), i.
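For context, peak pupil dilation is typically computed by baseline-correcting each trial's pupil trace and taking the maximum within an analysis window. The sketch below shows that computation on a synthetic trace; the sampling rate, window bounds, and trace are illustrative assumptions, not details from this study.

```python
# Sketch of a typical peak-pupil-dilation (PPD) computation: baseline-correct
# the pupil trace, then take the maximum within an analysis window.
import numpy as np

def peak_pupil_dilation(trace, fs, baseline_s=1.0, window_s=(0.0, 3.0)):
    """trace: pupil-diameter samples for one trial; fs: sampling rate (Hz)."""
    base_n = int(baseline_s * fs)
    baseline = trace[:base_n].mean()             # pre-stimulus mean
    start = base_n + int(window_s[0] * fs)
    stop = base_n + int(window_s[1] * fs)
    return (trace[start:stop] - baseline).max()  # PPD relative to baseline

fs = 60                                          # Hz (assumed tracker rate)
t = np.arange(0, 5, 1 / fs)
trace = 4.0 + 0.3 * np.exp(-((t - 2.5) ** 2))    # synthetic single trial
print(f"PPD = {peak_pupil_dilation(trace, fs):.3f} (arbitrary units)")
```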


Tibetan-Chinese speech-to-speech translation based on discrete units.

Sci Rep

January 2025

Key Laboratory of Ethnic Language Intelligent Analysis and Security Governance of MOE, Minzu University of China, Beijing, 100081, China.

Speech-to-speech translation (S2ST) has evolved from cascade systems which integrate Automatic Speech Recognition (ASR), Machine Translation (MT), and Text-to-Speech (TTS), to end-to-end models. This evolution has been driven by advancements in model performance and the expansion of cross-lingual speech datasets. Although research on Tibetan speech translation remains scarce, this paper tackles direct Tibetan-to-Chinese speech-to-speech translation within a multi-task learning framework, employing self-supervised learning (SSL) and sequence-to-sequence model training.
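The core of the discrete-unit approach is quantizing continuous self-supervised speech features into unit IDs that a sequence-to-sequence model can treat as tokens. The sketch below illustrates that step with random features standing in for real SSL output (e.g., HuBERT); the cluster count and feature dimension are assumptions, not this paper's configuration.

```python
# Minimal sketch of the discrete-unit idea: quantize SSL speech features with
# k-means so each frame becomes a token ID for a seq2seq translation model.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
ssl_features = rng.normal(size=(200, 768))  # [frames, dim]; SSL stand-in

kmeans = KMeans(n_clusters=100, n_init=10, random_state=0).fit(ssl_features)
units = kmeans.predict(ssl_features)        # one discrete unit ID per frame

# Collapse consecutive repeated units, a common step before seq2seq training.
keep = np.insert(np.diff(units) != 0, 0, True)
print(units[keep][:20])
```

In a full pipeline, the deduplicated source-language unit sequence would be translated into target-language units and then rendered to a waveform by a unit-based vocoder.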


Some Challenging Questions About Outcomes in Children With Cochlear Implants.

Perspect ASHA Spec Interest Groups

December 2024

DeVault Otologic Research Laboratory, Department of Otolaryngology-Head and Neck Surgery, Indiana University School of Medicine, Indianapolis.

Purpose: Cochlear implants (CIs) have improved the quality of life for many children with severe-to-profound sensorineural hearing loss. Despite the reported CI benefits of improved speech recognition, speech intelligibility, and spoken language processing, large individual differences in speech and language outcomes are still consistently reported in the literature. The enormous variability in CI outcomes has made it challenging to predict which children may be at high risk for limited benefits and how potential risk factors can be improved with interventions.


Introduction: It is still under debate whether and how semantic content modulates emotional prosody perception in children with autism spectrum disorder (ASD). The current study investigated this issue in two experiments by systematically manipulating semantic information in Chinese disyllabic words.

Method: The present study explored how the complexity of semantic content modulates emotional prosody perception in Mandarin-speaking children with ASD.


Artificial intelligence (AI) scribe applications in healthcare are in the early adoption phase and offer unprecedented efficiency for medical documentation. They typically call a large language model (LLM), for example GPT-4, through an application programming interface. Automatic speech recognition is applied to the physician-patient interaction to generate a full medical note for the encounter, together with a draft follow-up e-mail for the patient and, often, recommendations, all within seconds or minutes.
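The data flow such a scribe implies is simple: transcribe the encounter, then prompt an LLM to draft the documents. The sketch below shows only that flow; `transcribe` and `complete` are hypothetical stand-ins for a real ASR engine and LLM API, and the prompts are illustrative.

```python
# Schematic of the AI-scribe data flow described above. The two helpers are
# hypothetical placeholders, not any particular vendor's API.

def transcribe(audio_path: str) -> str:
    """Hypothetical ASR call returning a transcript of the encounter."""
    raise NotImplementedError("plug in a real speech-to-text service")

def complete(prompt: str) -> str:
    """Hypothetical LLM call returning generated text."""
    raise NotImplementedError("plug in a real LLM API")

def draft_encounter_documents(audio_path: str) -> dict:
    transcript = transcribe(audio_path)
    note = complete(
        "Write a structured medical note for this physician-patient "
        f"conversation:\n\n{transcript}")
    email = complete(
        "Draft a brief follow-up e-mail to the patient based on this "
        f"note:\n\n{note}")
    return {"note": note, "follow_up_email": email}
```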

