The infant literature suggests that humans enter the world with impressive built-in talker processing abilities. For example, newborns prefer the sound of their mother's voice over the sound of another woman's voice, and well before their first birthday, infants tune in to language-specific speech cues for distinguishing between unfamiliar talkers. The early childhood literature, however, suggests that preschoolers are unable to learn to identify the voices of two unfamiliar talkers unless these voices are highly distinct from one another, and that adult-level talker recognition does not emerge until children near adolescence. How can we reconcile these apparently paradoxical messages conveyed by the infant and early childhood literatures? Here, we address this question by testing 16.5-month-old infants (N = 80) in three talker recognition experiments. Our results demonstrate that infants at this age have difficulty recognizing unfamiliar talkers, suggesting that talker recognition (associating voices with people) is mastered later in life than talker discrimination (telling voices apart). We conclude that methodological differences across the infant and early childhood literatures, rather than a true developmental discontinuity, account for the performance differences in talker processing between these two age groups. Related findings in other areas of developmental psychology are discussed.


Source
http://dx.doi.org/10.1111/infa.12290

Publication Analysis

Top Keywords: talker recognition (16); unfamiliar talkers (12); early childhood (12); literature suggests (8); talker processing (8); infant early (8); talker (7); resolving apparent (4); apparent talker (4); recognition (4)

Similar Publications

How Does Deep Neural Network-Based Noise Reduction in Hearing Aids Impact Cochlear Implant Candidacy?

Audiol Res

December 2024

Division of Audiology, Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, MN 55902, USA.

Background/objectives: Adult hearing-impaired patients qualifying for cochlear implants typically exhibit less than 60% sentence recognition under the best hearing aid conditions, either in quiet or noisy environments, with speech and noise presented through a single speaker. This study examines the influence of deep neural network-based (DNN-based) noise reduction on cochlear implant evaluation.

Methods: Speech perception was assessed using AzBio sentences in both quiet and noisy conditions (multi-talker babble) at 5 and 10 dB signal-to-noise ratios (SNRs) through one loudspeaker.


Fluid Intelligence Partially Mediates the Effect of Working Memory on Speech Recognition in Noise.

J Speech Lang Hear Res

January 2025

Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden.

Purpose: Although the existing literature has explored the link between cognitive functioning and speech recognition in noise, the specific role of fluid intelligence remains understudied. Given the established association between working memory capacity (WMC) and fluid intelligence, and the predictive power of WMC for speech recognition in noise, we aimed to elucidate the mediating role of fluid intelligence.

Method: We used data from the n200 study, a longitudinal investigation into aging, hearing ability, and cognitive functioning.

Article Synopsis
  • Word identification accuracy is influenced by factors like word frequency, listening environments, and listener age, with younger and older adults showing different levels of performance, particularly in noisy settings.
  • This study investigates how both age groups perceive speech-in-noise, specifically focusing on medically related terms that vary in familiarity and frequency within simulated hospital noise, highlighting the challenges older adults face.
  • Findings revealed that older adults struggle more with low-familiarity medical words in hospital noise compared to younger adults, emphasizing the need for better communication strategies in healthcare settings.

Introduction: We currently lack speech testing materials faithful to broader aspects of real-world auditory scenes such as speech directivity and extended high frequency (EHF; > 8 kHz) content that have demonstrable effects on speech perception. Here, we describe the development of a multidirectional, high-fidelity speech corpus using multichannel anechoic recordings that can be used for future studies of speech perception in complex environments by diverse listeners.

Design: Fifteen male and 15 female talkers.


Relationship Between Working Memory, Compression, and Beamformers in Ideal Conditions.

Ear Hear

November 2024

Department of Communication Sciences & Disorders, Northwestern University, Evanston, Illinois, USA.

Objectives: Previous research has shown that speech recognition with different wide dynamic range compression (WDRC) time-constants (fast-acting, or Fast, and slow-acting, or Slow) is associated with individual working memory ability, especially in adverse listening conditions. Until recently, much of this research was limited to omnidirectional hearing aid settings and colocated speech and noise, whereas most hearing aids are fit with directional processing that may improve the listening environment in spatially separated conditions and interact with WDRC processing. The primary objective of this study was to determine whether there is an association between individual working memory ability and speech recognition in noise with different WDRC time-constants, with and without microphone directionality (binaural beamformer, or Beam, versus omnidirectional, or Omni) in a spatial condition ideal for the beamformer (speech at 0°, noise at 180°).

