Purpose: The purpose of this study was to examine how sentence intelligibility relates to self-reported communication in tracheoesophageal speakers when intelligibility is measured in quiet and in noise.

Method: Twenty-four tracheoesophageal speakers who were at least 1 year postlaryngectomy provided audio recordings of 5 sentences from the Sentence Intelligibility Test. Speakers also completed two self-reported measures of communication: the Voice Handicap Index-10 and the Communicative Participation Item Bank short form. The speech recordings were presented to 2 groups of inexperienced listeners, who heard the sentences either in quiet or in noise. Listeners transcribed the sentences to yield speech intelligibility scores.
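
For context, sentence intelligibility from listener transcriptions is typically scored as the percentage of target words transcribed correctly. The snippet below is a minimal sketch of that idea in Python; the simple word-matching rule is an assumption for illustration, not the Sentence Intelligibility Test's actual scoring algorithm.

```python
def percent_words_correct(target: str, transcription: str) -> float:
    """Score a transcription as the percentage of target words reproduced.

    Position-independent matching is a simplification; the Sentence
    Intelligibility Test software applies stricter scoring rules.
    """
    target_words = target.lower().split()
    heard = transcription.lower().split()
    correct = 0
    for word in target_words:
        if word in heard:
            heard.remove(word)  # each transcribed word may match only once
            correct += 1
    return 100.0 * correct / len(target_words)

# 5 of 6 target words transcribed correctly -> about 83% intelligible
print(percent_words_correct("the boy ran down the hill",
                            "the boy ran down a hill"))
```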

Results: Very weak relationships were found between intelligibility in quiet and measures of voice handicap and communicative participation. Slightly stronger, but still weak and nonsignificant, relationships were observed between intelligibility in noise and both self-reported measures. However, for the 12 speakers who were more than 65% intelligible in noise, relationships with both self-reported measures were strong and statistically significant (R² = .76-.79).
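
To illustrate the kind of analysis behind those R² values, here is a minimal sketch of an ordinary least-squares fit of a self-reported score on intelligibility. The numbers are invented placeholders, not the study's data.

```python
import numpy as np

# Hypothetical placeholder data: intelligibility in noise (%) and a
# self-reported communication score; NOT values from the study.
intelligibility = np.array([68.0, 72.5, 75.0, 81.0, 88.5, 93.0])
self_report = np.array([22.0, 24.5, 27.0, 30.0, 33.5, 36.0])

# OLS fit: self_report ~ intercept + slope * intelligibility
slope, intercept = np.polyfit(intelligibility, self_report, deg=1)
predicted = intercept + slope * intelligibility

# R^2 = 1 - SS_residual / SS_total
ss_res = np.sum((self_report - predicted) ** 2)
ss_tot = np.sum((self_report - self_report.mean()) ** 2)
print(f"R^2 = {1 - ss_res / ss_tot:.2f}")
```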

Conclusions: Speech intelligibility in quiet is a weak predictor of self-reported communication measures in tracheoesophageal speakers. Speech intelligibility in noise may be a better metric of self-reported communicative function for speakers who demonstrate higher speech intelligibility in noise.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5270639
DOI: http://dx.doi.org/10.1044/2016_AJSLP-15-0081

Similar Publications

The cortical tracking of the acoustic envelope is a phenomenon in which the brain's electrical activity, as recorded by electroencephalography (EEG), fluctuates in accordance with changes in stimulus intensity (the acoustic envelope of the stimulus). Understanding speech in a noisy background is a key challenge for people with hearing impairments. Speech stimuli are therefore more ecologically valid than clicks, tone pips, or isolated speech tokens.
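
As background for the envelope-tracking idea, the acoustic envelope is often estimated as the magnitude of the analytic signal, low-pass filtered to keep the slow modulations that cortical activity follows. A minimal sketch using scipy; the 8 Hz cutoff and filter order are illustrative assumptions, not values from the article.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def acoustic_envelope(audio: np.ndarray, fs: int, cutoff_hz: float = 8.0) -> np.ndarray:
    """Estimate the slow acoustic envelope that EEG is thought to track."""
    envelope = np.abs(hilbert(audio))       # instantaneous amplitude
    b, a = butter(4, cutoff_hz / (fs / 2))  # 4th-order low-pass filter
    return filtfilt(b, a, envelope)         # zero-phase filtering
```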

When listening to speech under adverse conditions, listeners compensate using neurocognitive resources. A clinically relevant form of adverse listening is listening through a cochlear implant (CI), which provides a spectrally degraded signal. CI listening is often simulated through noise-vocoding.
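
For readers unfamiliar with the technique, a noise vocoder replaces the fine structure in each frequency band with noise while preserving that band's envelope. The sketch below is a simplified illustration; the band edges, band count, and filter order are assumptions, not parameters from the cited study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(audio: np.ndarray, fs: int, n_bands: int = 4) -> np.ndarray:
    """Simulate cochlear-implant-like hearing with a simple noise vocoder."""
    rng = np.random.default_rng(0)
    edges = np.geomspace(100.0, min(8000.0, fs / 2 - 1), n_bands + 1)
    out = np.zeros(len(audio), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, audio)                 # analysis band
        env = np.abs(hilbert(band))                    # band envelope
        noise = sosfiltfilt(sos, rng.standard_normal(len(audio)))
        out += env * noise                             # envelope-modulated noise
    return out / np.max(np.abs(out))                   # peak-normalize
```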

Binaural speech intelligibility in rooms is a complex process that is affected by many factors including room acoustics, hearing loss, and hearing aid (HA) signal processing. Intelligibility is evaluated in this paper for a simulated room combined with a simulated hearing aid. The test conditions comprise three spatial configurations of the speech and noise sources, simulated anechoic and concert hall acoustics, three amounts of multitalker babble interference, the hearing status of the listeners, and three degrees of simulated HA processing provided to compensate for the noise and/or hearing loss.
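
A small building block for test conditions like these is mixing speech with babble at a controlled signal-to-noise ratio. A minimal sketch, with RMS-based scaling as an illustrative assumption:

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix speech with (babble) noise at a target SNR in dB."""
    noise = noise[: len(speech)]                       # match lengths
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
    return speech + gain * noise
```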

Speech Technology for Automatic Recognition and Assessment of Dysarthric Speech: An Overview.

J Speech Lang Hear Res

January 2025

Centre for Language Studies, Radboud University, Nijmegen, the Netherlands.

Purpose: In this review article, we present an extensive overview of recent developments in the area of dysarthric speech research. One of the key objectives of speech technology research is to improve the quality of life of its users, as evidenced by current research trends toward inclusive conversational interfaces that cater to pathological speech, of which dysarthric speech is an important example. Applications of speech technology research for dysarthric speech demand a clear understanding of the acoustics of dysarthric speech as well as of speech technologies, including machine learning and deep neural networks for speech processing.

Assessment of Speech Processing and Listening Effort Associated With Speech-on-Speech Masking Using the Visual World Paradigm and Pupillometry.

Trends Hear

January 2025

Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Head and Neck Surgery, University of Cologne, Cologne, Germany.

Speech-on-speech masking is a common and challenging situation in everyday verbal communication. The ability to segregate competing auditory streams is a necessary requirement for focusing attention on the target speech. The Visual World Paradigm (VWP) provides insight into speech processing by capturing gaze fixations on visually presented icons that reflect the speech signal.
