Modeling speech intelligibility in quiet and noise in listeners with normal and impaired hearing.

J Acoust Soc Am

Department of Clinical and Experimental Audiology, Academic Medical Center, Meibergdreef 9, 1105 AZ Amsterdam, The Netherlands.

Published: March 2010

The speech intelligibility index (SII) is an often-used calculation method for estimating the proportion of audible speech in noise. For speech reception thresholds (SRTs), measured in normally hearing listeners using various types of stationary noise, this model predicts a fairly constant speech proportion of about 0.33 as necessary for Dutch sentence intelligibility. However, when the SII model is applied to SRTs in quiet, the estimated speech proportions are often higher, and show larger inter-subject variability, than those found for speech in noise near normal speech levels [65 dB sound pressure level (SPL)]. The present model attempts to alleviate this problem by including cochlear compression. It is based on the loudness model for normally hearing and hearing-impaired listeners of Moore and Glasberg [(2004). Hear. Res. 188, 70-88]. It estimates internal excitation levels for speech and noise and then calculates the proportion of speech above noise and threshold, using spectral weighting similar to that in the SII. The present model and the standard SII were used to predict SII values in quiet and in stationary noise for normally hearing and hearing-impaired listeners. The present model predicted SIIs for three listener types (normal hearing, noise-induced hearing loss, and age-induced hearing loss) with markedly less variability than the standard SII.
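For readers unfamiliar with the SII, the band-audibility idea the abstract builds on can be sketched as follows. This is a minimal illustration in the spirit of the ANSI S3.5-style calculation, not the paper's loudness-based model: per frequency band, the audible fraction of the 30-dB speech dynamic range above the higher of the noise level and the hearing threshold is clipped to [0, 1] and weighted by a band-importance value.

```python
# Minimal SII-style sketch (illustrative only, not the paper's model):
# per band, the audible speech proportion is the part of the 30-dB
# speech dynamic range above the higher of noise and threshold,
# clipped to [0, 1], then weighted by band importance.

def sii(speech_db, noise_db, threshold_db, importance):
    """All inputs are per-band levels in dB; importance values sum to 1."""
    total = 0.0
    for s, n, t, w in zip(speech_db, noise_db, threshold_db, importance):
        masker = max(n, t)                    # effective floor in this band
        audible = (s - masker + 15.0) / 30.0  # 30-dB speech dynamic range
        total += w * min(1.0, max(0.0, audible))
    return total

# Band 1 fully audible, band 2 one-third audible -> about 0.67 overall
print(sii([65, 50], [40, 55], [20, 25], [0.5, 0.5]))
```

An SRT then corresponds to the speech level at which this weighted sum reaches the criterion proportion (about 0.33 for the Dutch sentences discussed above).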

Source: http://dx.doi.org/10.1121/1.3291000

Similar Publications

Probing Sensorimotor Memory through the Human Speech-Audiomotor System.

J Neurophysiol

December 2024

Yale Child Study Center, Yale School of Medicine, Yale University, New Haven, CT, USA.

Our knowledge of human sensorimotor learning and memory is predominantly based on the visuo-spatial workspace and limb movements. Humans also have a remarkable ability to produce and perceive speech sounds. We asked if the human speech-auditory system could serve as a model to characterize retention of sensorimotor memory in a workspace which is functionally independent of the visuo-spatial one.

Associations of Traumatic Brain Injury and Hearing: Results From the Atherosclerosis Risk in Communities Neurocognitive Study (ARIC-NCS).

J Head Trauma Rehabil

December 2024

Author Affiliations: Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania (Dr Schneider); Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania (Dr Schneider); Department of Psychiatry and Behavioral Sciences, School of Medicine, Johns Hopkins University, Baltimore, Maryland (Dr Kamath); Department of Epidemiology, Johns Hopkins University Bloomberg School of Public Health, Baltimore, Maryland (Drs Reed, Sharrett, Lin, and Deal); The MIND Center, University of Mississippi Medical Center, Jackson, Mississippi (Dr Mosley); National Institute of Neurological Disorders and Stroke Intramural Research Program, Bethesda, Maryland (Dr Gottesman); Department of Otolaryngology, School of Medicine, Johns Hopkins University, Baltimore, Maryland (Drs Lin and Deal); and Cochlear Center for Hearing and Public Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland (Drs Lin and Deal).

Objective: To examine associations of traumatic brain injury (TBI) with self-reported and clinical measures of hearing function.

Setting: Four US communities.

Participants: A total of 3176 Atherosclerosis Risk in Communities Study participants who attended the sixth study visit in 2016-2017, when hearing was assessed.

Harmonic-to-noise ratio as speech biomarker for fatigue: K-nearest neighbour machine learning algorithm.

Med J Armed Forces India

December 2024

Associate Professor, Dayanand Sagar University, Bengaluru, India.

Background: Vital information about a person's physical and emotional health can be perceived in their voice. After sleep loss, altered voice quality is noticed. The circadian rhythm controls the sleep cycle, and when it is disrupted, the resulting fatigue is manifested in speech.
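The title names harmonic-to-noise ratio (HNR) as the speech biomarker. As an illustration only (not this paper's pipeline), HNR for a voiced frame is commonly estimated from the normalized autocorrelation peak at the pitch period:

```python
import numpy as np

def hnr_db(frame, fs, f0_min=75.0, f0_max=500.0):
    """Autocorrelation-based HNR estimate (dB) for one voiced frame."""
    x = frame - np.mean(frame)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / ac[0]                        # normalize so r(0) = 1
    lo, hi = int(fs / f0_max), int(fs / f0_min)
    r_max = np.max(ac[lo:hi])              # peak at the pitch period
    r_max = min(r_max, 0.999999)           # guard against division by zero
    return 10.0 * np.log10(r_max / (1.0 - r_max))

# A noisy 150 Hz tone should yield a clearly positive HNR
fs = 16000
t = np.arange(int(0.04 * fs)) / fs
rng = np.random.default_rng(0)
frame = np.sin(2 * np.pi * 150 * t) + 0.1 * rng.standard_normal(t.size)
print(hnr_db(frame, fs))
```

Lower HNR (a noisier, more breathy voice) is the kind of degradation such a fatigue classifier would feed, alongside other acoustic features, to a k-nearest-neighbour model.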

Audiological performance and subjective satisfaction of the ADHEAR system in experienced pediatric users with unilateral microtia and aural atresia.

Int J Pediatr Otorhinolaryngol

December 2024

Division of Otology, Department of Otorhinolaryngology & Head and Neck Surgery, Linkou Chang Gung Memorial Hospital, Taoyuan, Taiwan; School of Medicine, Chang Gung University, Taoyuan, Taiwan. Electronic address:

Introduction: Despite the reported auditory deficits and developmental challenges in children with unilateral microtia and aural atresia (UMAA), there remains a lack of consensus on early intervention with bone conduction hearing aids (BCHAs) to restore binaural hearing, owing to the uncertain clinical benefits and poor tolerability of conventional devices. Previous studies investigating the auditory benefits under the binaural hearing condition were limited and reported conflicting findings in heterogeneous patient groups with various devices. Our study aimed to evaluate the audiological performance, including monaural and binaural hearing, and subjective satisfaction of the ADHEAR system, a novel adhesive BCHA, in experienced pediatric users with UMAA.

Comprehension of acoustically degraded emotional prosody in Alzheimer's disease and primary progressive aphasia.

Sci Rep

December 2024

Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 1st Floor, 8-11 Queen Square, London, WC1N 3AR, UK.

Previous research suggests that emotional prosody perception is impaired in neurodegenerative diseases like Alzheimer's disease (AD) and primary progressive aphasia (PPA). However, no previous research has investigated emotional prosody perception in these diseases under non-ideal listening conditions. We recruited 18 patients with AD and 31 with PPA (nine logopenic (lvPPA), 11 nonfluent/agrammatic (nfvPPA), and 11 semantic (svPPA)), together with 24 healthy age-matched individuals.
