This study measured the influence of masker fluctuations on phoneme recognition. The first part of the study compared the benefit of masker modulations for consonant and vowel recognition in normal-hearing (NH) listeners. Recognition scores were measured in steady-state and sinusoidally amplitude-modulated noise maskers (100% modulation depth) at several modulation rates and signal-to-noise ratios. Masker modulation rates were 4, 8, 16, and 32 Hz for the consonant recognition task and 2, 4, 12, and 32 Hz for the vowel recognition task. Vowel recognition scores showed more modulation benefit and a more pronounced effect of masker modulation rate than consonant scores. The modulation benefit for word recognition from other studies was found to be more similar to the benefit for vowel recognition than to that for consonant recognition. The second part of the study measured the effect of modulation rate on the benefit of masker modulations for vowel recognition in aided hearing-impaired (HI) listeners. HI listeners achieved as much modulation benefit as NH listeners for the slower masker modulation rates (2, 4, and 12 Hz), but showed a reduced benefit at the fastest masker modulation rate of 32 Hz.
DOI: http://dx.doi.org/10.1121/1.4742718
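The masker described in the abstract is straightforward to reproduce in code. Below is a minimal Python sketch of a sinusoidally amplitude-modulated (SAM) noise generator and an SNR mixer; the sample rate, Gaussian noise carrier, and function names are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def sam_noise(duration_s, fs=44100, mod_rate_hz=8.0, mod_depth=1.0, rng=None):
    """Sinusoidally amplitude-modulated (SAM) Gaussian noise masker.

    mod_depth=1.0 gives 100% modulation depth (envelope dips to zero),
    matching the maskers described in the abstract.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = int(duration_s * fs)
    carrier = rng.standard_normal(n)                        # broadband Gaussian noise
    t = np.arange(n) / fs
    envelope = 1.0 + mod_depth * np.sin(2 * np.pi * mod_rate_hz * t)
    return carrier * envelope

def mix_at_snr(speech, masker, snr_db):
    """Scale the masker so the speech-to-masker power ratio equals snr_db."""
    p_speech = np.mean(speech ** 2)
    p_masker = np.mean(masker ** 2)
    gain = np.sqrt(p_speech / (p_masker * 10 ** (snr_db / 10)))
    return speech + gain * masker
```

Under these assumptions, the consonant-task conditions would correspond to calls like `sam_noise(1.0, mod_rate_hz=4.0)` through `mod_rate_hz=32.0`, each mixed with the speech token at the tested SNRs via `mix_at_snr`.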
J Acoust Soc Am
December 2024
Vanderbilt University, Nashville, Tennessee 37232, USA.
This study (1) characterized the effects of channel interaction using spectral blurring, (2) evaluated an image-guided electrode selection (IGES) method aiming to reduce channel interaction, and (3) investigated the impact of electrode placement factors on the change in performance by condition. Twelve adult MED-EL (Innsbruck, Austria) cochlear implant recipients participated. Performance was compared across six conditions: baseline (no blurring), all blurred, apical blurred, middle blurred, basal blurred, and IGES.
Cogn Sci
December 2024
Université Côte d'Azur, CNRS, BCL.
In this paper, we explore the effect of musical expertise on whistled word perception by naive listeners. In whistled words of nontonal languages, vowels are transposed to relatively stable pitches, while consonants are translated into pitch movements or interruptions. Previous behavioral studies have demonstrated that naive listeners can categorize isolated consonants, vowels, and words well above chance.
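As a toy illustration of the encoding just described (vowels as relatively stable pitches, consonants as pitch movements or interruptions), the sketch below builds a naive pitch track for a word. The pitch values, vowel inventory, and function name are invented for illustration and do not come from the study.

```python
import numpy as np

# Illustrative only: assign each vowel a stable pitch (Hz); consonants are
# rendered here as interruptions (glide-type consonants would instead be
# pitch movements between the neighbouring vowel pitches).
VOWEL_PITCH = {"i": 2800.0, "e": 2400.0, "a": 1800.0, "o": 1400.0, "u": 1200.0}

def whistled_contour(word, fs=8000, seg_dur=0.12):
    """Naive pitch track, one value per sample; 0 marks an interruption."""
    n = int(seg_dur * fs)
    segments = []
    for ch in word:
        if ch in VOWEL_PITCH:
            segments.append(np.full(n, VOWEL_PITCH[ch]))  # vowel: stable pitch
        else:
            segments.append(np.zeros(n))                  # consonant: interruption
    return np.concatenate(segments)

contour = whistled_contour("kasa")
```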
Cogn Sci
December 2024
Hanyang Institute for Phonetics and Cognitive Science, Department of English Language and Literature, Hanyang University.
This study investigates whether listeners' cue weighting predicts their real-time use of asynchronous acoustic information in spoken word recognition at both group and individual levels. By focusing on the time course of cue integration, we seek to distinguish between two theoretical views: the associated view (cue weighting is linked to cue integration strategy) and the independent view (no such relationship). The current study examines Seoul Korean listeners' (n = 62) weighting of voice onset time (VOT, available earlier in time) and onset fundamental frequency of the following vowel (F0, available later in time) when perceiving Korean stop contrasts (Experiment 1: cue-weighting perception task) and the timing of VOT integration when recognizing Korean words that begin with a stop (Experiment 2: visual-world eye-tracking task).
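Cue weighting of this kind is often quantified with a trial-level logistic regression over standardized cue values, with the relative magnitudes of the fitted coefficients serving as weights. Below is a minimal sketch under that assumption; the data, variable names, and binary response coding are hypothetical and not the study's actual analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical trial-level data: one row per categorization response.
# vot_ms and f0_hz are the manipulated cue values; resp codes the chosen category.
vot_ms = np.array([10, 10, 40, 40, 70, 70, 10, 40, 70, 70], dtype=float)
f0_hz  = np.array([120, 180, 120, 180, 120, 180, 180, 150, 150, 120], dtype=float)
resp   = np.array([0, 0, 0, 1, 1, 1, 1, 0, 1, 1])

# Standardize the cues so the coefficients are on a comparable scale,
# then read relative cue weights off the absolute coefficient magnitudes.
X = np.column_stack([vot_ms, f0_hz])
X = (X - X.mean(axis=0)) / X.std(axis=0)
model = LogisticRegression().fit(X, resp)
w_vot, w_f0 = np.abs(model.coef_[0])
print(f"relative VOT weight: {w_vot / (w_vot + w_f0):.2f}")
```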
Diagnostics (Basel)
November 2024
Department of Medical Rehabilitation Sciences, College of Applied Medical Sciences, King Khalid University, Abha 61481, Saudi Arabia.
Background/objectives: The present study investigates the reasons for better recognition of disyllabic words in Malayalam among individuals with hearing loss. This research was conducted in three experiments. Experiment 1 measured the psychometric properties (slope, intercept, and maximum scores) of disyllabic wordlists.
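Psychometric properties like these are commonly estimated by fitting a scaled logistic function to recognition scores measured across presentation levels. A sketch under that assumption follows; the data, units, and parameterization (maximum, steepness, midpoint standing in for the reported intercept) are illustrative and may differ from the wordlist study's actual procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(level_db, maximum, steepness, midpoint):
    """Scaled logistic: recognition score (%) as a function of level."""
    return maximum / (1.0 + np.exp(-steepness * (level_db - midpoint)))

# Hypothetical word-recognition scores (%) at several presentation levels (dB HL).
levels = np.array([20, 30, 40, 50, 60, 70], dtype=float)
scores = np.array([5, 22, 55, 82, 94, 96], dtype=float)

(maximum, steepness, midpoint), _ = curve_fit(
    psychometric, levels, scores, p0=[100.0, 0.2, 45.0]
)
print(f"max = {maximum:.1f}%, steepness = {steepness:.3f} per dB, "
      f"midpoint = {midpoint:.1f} dB")
```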
Sci Rep
December 2024
James Watt School of Engineering, University of Glasgow, Glasgow, G12 8QQ, UK.
In recent years, lip-reading has emerged as a significant research challenge. The aim is to recognise speech by analysing lip movements. The majority of lip-reading technologies are based on cameras and wearable devices.