Weak consonants (e.g., stops) are more susceptible to noise than vowels, owing partially to their lower intensity. This raises the question whether hearing-impaired (HI) listeners are able to perceive (and utilize effectively) the high-frequency cues present in consonants. To answer this question, HI listeners were presented with clean (noise absent) weak consonants in otherwise noise-corrupted sentences. Results indicated that HI listeners received significant benefit in intelligibility (4 dB decrease in speech reception threshold) when they had access to clean consonant information. At extremely low signal-to-noise ratio (SNR) levels, however, HI listeners received only 64% of the benefit obtained by normal-hearing listeners. This lack of equitable benefit was investigated in Experiment 2 by testing the hypothesis that the high-frequency cues present in consonants were not audible to HI listeners. This was tested by selectively amplifying the noisy consonants while leaving the noisy sonorant sounds (e.g., vowels) unaltered. Listening tests indicated small (∼10%), but statistically significant, improvements in intelligibility at low SNR conditions when the consonants were amplified in the high-frequency region. Selective consonant amplification provided reliable low-frequency acoustic landmarks that in turn facilitated a better lexical segmentation of the speech stream and contributed to the small improvement in intelligibility.
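The manipulation in Experiment 2 — boosting the high-frequency content of noisy consonant segments while leaving sonorant segments untouched — can be sketched as follows. This is a minimal illustration, not the authors' processing chain: the function name, the use of labeled time spans, and the FFT-based shelf boost are all assumptions for the sketch.

```python
import numpy as np

def amplify_consonants(signal, sr, consonant_spans, cutoff_hz=2000.0, gain_db=10.0):
    """Boost high-frequency energy only inside labeled consonant spans,
    leaving the rest of the signal (e.g., vowels) unaltered.

    consonant_spans: list of (start_s, end_s) times, in seconds, marking
    the weak-consonant segments (assumed known from a transcription).
    """
    out = signal.astype(float).copy()
    gain = 10.0 ** (gain_db / 20.0)  # convert dB to linear amplitude gain
    for start_s, end_s in consonant_spans:
        i0, i1 = int(start_s * sr), int(end_s * sr)
        seg = out[i0:i1]
        spec = np.fft.rfft(seg)
        freqs = np.fft.rfftfreq(seg.size, d=1.0 / sr)
        spec[freqs >= cutoff_hz] *= gain  # shelf boost above the cutoff
        out[i0:i1] = np.fft.irfft(spec, n=seg.size)
    return out

# Toy usage: 0.5 s of noise at 16 kHz with one hypothetical consonant span.
sr = 16000
rng = np.random.default_rng(0)
x = rng.standard_normal(sr // 2)
y = amplify_consonants(x, sr, [(0.1, 0.2)], cutoff_hz=2000.0, gain_db=10.0)
```

Samples outside the labeled span are returned unchanged, so the vowel regions keep their original (noisy) levels, mirroring the selective nature of the manipulation described above.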


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3248061
DOI: http://dx.doi.org/10.1121/1.3641407


Similar Publications

We reanalyzed data originally published by Berman and Friedman (1995), who recorded event related potentials (ERPs) while children and adults with low, medium, and high socioeconomic status (SES) detected oddball auditory targets (tones and consonant-vowel sequences) among distractors. The ERP differential measuring how much attention was allocated to the targets vs. distractors increased significantly with SES, independently of age.


Does the type of cleft have an impact on language results? Validation of the Nasalance test in French.

J Stomatol Oral Maxillofac Surg

November 2024

Service de chirurgie maxillofaciale et chirurgie plastique, APHP, Necker Enfants-Malades, Paris 75015, France; Centre de Référence des Fentes et Malformations Faciales, APHP, Necker Enfants-Malades, Paris 75015, France; Université de Paris, UFR de Médecine, Paris 75006, France. Electronic address:

Objectives: The nasometer is the most widely used tool for objective assessment of phonation in both research and clinical practice. French standards have been validated in cases of total cleft lip and palate. The objective of this research is to propose a second validation study on velopalatal clefts.


Objectives And Methods: Cochlear implant listeners show difficulties in understanding speech in noise. Channel interactions from activating overlapping neural populations reduce the signal accuracy necessary to interpret complex signals. Optimizing programming strategies based on focused detection thresholds to reduce channel interactions has led to improved performance.


Using Twang and Medialization Techniques to Gain Feminine-Sounding Speech in Trans Women.

J Voice

November 2024

Department of Speech and Language Disorders, Statped, Holmestrand, Oslo, Norway. Electronic address:

Objectives: In this study, we introduce an intervention based on two techniques: twang and medialization. The hypothesis is that a combination of these two techniques will enable trans women to gain feminine-sounding speech without vocal strain or harm.

Method: Five trans women took part in the study.


Subthalamic nucleus neurons encode syllable sequence and phonetic characteristics during speech.

J Neurophysiol

November 2024

Brain Modulation Lab, Department of Neurosurgery, Massachusetts General Hospital, Boston, Massachusetts, United States.

Speech is a complex behavior that can be used to study unique contributions of the basal ganglia to motor control in the human brain. Computational models suggest that the basal ganglia encode either the phonetic content or the sequence of speech elements. To explore this question, we investigated the relationship between phoneme and sequence features of a spoken syllable triplet and the firing rate of subthalamic nucleus (STN) neurons recorded during the implantation of deep brain stimulation (DBS) electrodes in individuals with Parkinson's disease.

