The goal of the present research was to determine how well observers utilize cues to consonant identification from different spectral regions when those cues occur asynchronously rather than synchronously across frequency; such an ability would be useful for processing speech in the presence of a spectro-temporally complex masker (e.g., competing speech). This was assessed by obtaining masked identification thresholds for VCV speech material, of the form /a/ C /a/, under various conditions of 10-Hz or 20-Hz square-wave amplitude modulation (AM). The speech tokens were filtered into 2, 4, 8, or 16 contiguous log-spaced frequency bands spanning 0.1 to 10 kHz. The bands were then amplitude modulated, with the AM pattern either coherent across all bands or 180 degrees out of phase between adjacent bands. In the out-of-phase conditions, the AM was coherent within the odd-numbered bands and coherent within the even-numbered bands, but the AM patterns of these two subsets were 180 degrees out of phase with each other. Results from these two conditions, together with further conditions employing only the modulated even- or odd-numbered bands, allowed performance to be compared between stimuli carrying synchronous and asynchronous cues. The results indicate that observers can utilize asynchronously presented cues to consonant identification efficiently across a range of conditions.
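The band-splitting and modulation procedure lends itself to a short illustration. The following Python sketch (assuming NumPy/SciPy; the Butterworth filter order, on/off modulation depth, and default parameter values are illustrative assumptions, not the exact parameters of the original stimuli) splits a token into contiguous log-spaced bands and gates each band with a square-wave modulator that is either coherent across bands or phase-inverted for alternate bands.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, square

def make_modulated_bands(x, fs, n_bands=8, f_lo=100.0, f_hi=10_000.0,
                         am_rate=10.0, out_of_phase=True):
    """Split a signal into contiguous log-spaced bands and apply square-wave AM.

    Illustrative sketch only: the filter type/order and the 0/1 (on/off)
    modulation depth are assumptions, not the original stimulus parameters.
    """
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)   # contiguous log-spaced band edges
    t = np.arange(len(x)) / fs
    y = np.zeros_like(x, dtype=float)
    for i in range(n_bands):
        sos = butter(4, [edges[i], edges[i + 1]], btype="bandpass",
                     fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        # 180-degree phase offset for alternate bands in the out-of-phase condition
        phase = np.pi if (out_of_phase and i % 2 == 1) else 0.0
        gate = 0.5 * (1.0 + square(2 * np.pi * am_rate * t + phase))  # 0/1 square-wave gating
        y += band * gate
    return y
```

With out_of_phase=True the even- and odd-indexed bands are gated in counterphase, mirroring the odd-/even-numbered band manipulation described above; with out_of_phase=False all bands are gated in step, corresponding to the coherent condition.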

Source
http://dx.doi.org/10.1121/1.1691035

Publication Analysis

Top Keywords

cues consonant (12); consonant identification (12); synchronous asynchronous (8); asynchronous cues (8); observers utilize (8); bands (8); odd-numbered bands (8); bands coherent (8); coherent phase (8); conditions (5)

Similar Publications

Purpose: The objective of the present study is to investigate nasal and oral vowel production in French-speaking children with cochlear implants (CIs) and children with typical hearing (TH). Vowel nasality relies primarily on acoustic cues that may be less effectively transmitted by the implant. The study investigates how children with CIs manage to produce these segments in French, a language with contrastive vowel nasalization.

Despite the interest in animacy perception, few studies have considered sensory modalities other than vision. However, even everyday experience suggests that the auditory sense can also contribute to the recognition of animate beings, for example through the identification of voice-like sounds or through the perception of sounds that are the by-products of locomotion. Here we review the studies that have investigated the responses of humans and other animals to different acoustic features that may indicate the presence of a living entity, with particular attention to the neurophysiological mechanisms underlying such perception.

Introduction: Anatomy-based fitting (ABF), a relatively new technique for cochlear implant (CI) programming, attempts to lessen the impact of the frequency-to-place mismatch (FPM) related to electrode insertion location. This study aimed to compare vowel and consonant perception in quiet and in noise among experienced adult CI users using the ABF map and the conventional-based fitting (CBF) map (pre-ABF) over 6 months.

Methods: Nine ears from eight experienced adult CI users were included in this experimental, longitudinal study.

Mapping the spectrotemporal regions influencing perception of French stop consonants in noise.

Sci Rep

November 2024

Laboratoire des systèmes perceptifs, Département d'études cognitives, École normale supérieure, PSL University, CNRS, 75005, Paris, France.

Article Synopsis
  • This study investigates how listeners decode French stop consonants amid background noise, using a reverse-correlation approach for detailed analysis.
  • Thirty-two participants completed a discrimination task, allowing researchers to map the specific acoustic cues they relied on, such as formant transitions and voicing cues.
  • The findings highlight the complexity of speech perception, revealing that individuals utilize a variety of cues with significant differences in how each person processes sounds.

Introduction: Electric-acoustic stimulation (EAS) provides cochlear implant (CI) recipients with preserved low-frequency acoustic hearing in the implanted ear, affording auditory cues that are not reliably transmitted by the CI, including fundamental frequency, temporal fine structure, and interaural time differences (ITDs). A prospective US multicenter clinical trial was conducted examining the safety and effectiveness of a hybrid CI for delivering EAS.

Materials And Methods: Fifty-two adults (mean age 59.
