The infant literature suggests that humans enter the world with impressive built-in talker processing abilities. For example, newborns prefer the sound of their mother's voice over the sound of another woman's voice, and well before their first birthday, infants tune in to language-specific speech cues for distinguishing between unfamiliar talkers. The early childhood literature, however, suggests that preschoolers are unable to learn to identify the voices of two unfamiliar talkers unless these voices are highly distinct from one another, and that adult-level talker recognition does not emerge until children near adolescence. How can we reconcile these apparently paradoxical messages conveyed by the infant and early childhood literatures? Here, we address this question by testing 16.5-month-old infants (N = 80) in three talker recognition experiments. Our results demonstrate that infants at this age have difficulty recognizing unfamiliar talkers, suggesting that talker recognition (associating voices with people) is mastered later in life than talker discrimination (telling voices apart). We conclude that methodological differences across the infant and early childhood literatures, rather than a true developmental discontinuity, account for the performance differences in talker processing between these two age groups. Related findings in other areas of developmental psychology are discussed.
DOI: http://dx.doi.org/10.1111/infa.12290
Audiol Res
December 2024
Division of Audiology, Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, MN 55902, USA.
Background/objectives: Adult hearing-impaired patients qualifying for cochlear implants typically exhibit less than 60% sentence recognition under the best hearing aid conditions, in either quiet or noisy environments, with speech and noise presented through a single speaker. This study examines the influence of deep neural network-based (DNN-based) noise reduction on cochlear implant evaluation.
Methods: Speech perception was assessed using AzBio sentences in both quiet and noisy conditions (multi-talker babble) at 5 and 10 dB signal-to-noise ratios (SNRs) through one loudspeaker.
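To make the SNR conditions concrete, here is a minimal Python sketch of mixing a sentence with multi-talker babble at a target SNR; the function name and the random placeholder waveforms are illustrative assumptions, not materials from the study.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, babble: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `babble` so the speech-to-noise power ratio equals `snr_db`, then mix."""
    speech_power = np.mean(speech ** 2)
    babble_power = np.mean(babble ** 2)
    # Required noise power for the target SNR: snr_db = 10 * log10(Ps / Pn)
    target_noise_power = speech_power / (10 ** (snr_db / 10))
    return speech + babble * np.sqrt(target_noise_power / babble_power)

# Placeholder waveforms standing in for an AzBio sentence and recorded babble.
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)
babble = rng.standard_normal(16000)
mixtures = {snr: mix_at_snr(speech, babble, snr) for snr in (5, 10)}
```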
J Speech Lang Hear Res
January 2025
Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden.
Purpose: Although the existing literature has explored the link between cognitive functioning and speech recognition in noise, the specific role of fluid intelligence still needs to be studied. Given the established association between working memory capacity (WMC) and fluid intelligence and the predictive power of WMC for speech recognition in noise, we aimed to elucidate the mediating role of fluid intelligence.
Method: We used data from the n200 study, a longitudinal investigation into aging, hearing ability, and cognitive functioning.
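As a sketch of what a mediation test of this kind involves, the following product-of-coefficients example regresses a synthetic outcome on a predictor and a mediator; the variable names (`wmc`, `gf`, `srn`) and all data are fabricated for illustration and are not the n200 variables or analysis, which would typically also bootstrap a confidence interval for the indirect effect.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data: WMC predicts fluid intelligence (Gf), which predicts
# speech recognition in noise (SRN); all effect sizes are made up.
rng = np.random.default_rng(42)
n = 200
wmc = rng.standard_normal(n)
gf = 0.6 * wmc + rng.standard_normal(n)
srn = 0.3 * wmc + 0.5 * gf + rng.standard_normal(n)

# Path a: predictor -> mediator
a = sm.OLS(gf, sm.add_constant(wmc)).fit().params[1]
# Paths c' (direct) and b (mediator), from outcome ~ predictor + mediator
fit = sm.OLS(srn, sm.add_constant(np.column_stack([wmc, gf]))).fit()
c_prime, b = fit.params[1], fit.params[2]

print(f"direct c' = {c_prime:.2f}, indirect a*b = {a * b:.2f}")
```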
Cogn Res Princ Implic
December 2024
Division of Geriatrics, Gerontology and Palliative Medicine, University of Nebraska Medical Center Department of Internal Medicine, Omaha, USA.
J Speech Lang Hear Res
January 2025
Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE.
Introduction: We currently lack speech testing materials faithful to broader aspects of real-world auditory scenes such as speech directivity and extended high frequency (EHF; > 8 kHz) content that have demonstrable effects on speech perception. Here, we describe the development of a multidirectional, high-fidelity speech corpus using multichannel anechoic recordings that can be used for future studies of speech perception in complex environments by diverse listeners.
Design: Fifteen male and 15 female talkers (21.
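As a rough illustration of what EHF (> 8 kHz) content means in a recording, the sketch below estimates the fraction of spectral power above 8 kHz with a Welch periodogram; the function and its parameters are assumptions for illustration, not part of the corpus pipeline.

```python
import numpy as np
from scipy.signal import welch

def ehf_power_fraction(x: np.ndarray, fs: int, cutoff_hz: float = 8000.0) -> float:
    """Fraction of total spectral power above `cutoff_hz` in a mono waveform."""
    freqs, psd = welch(x, fs=fs, nperseg=4096)
    return psd[freqs >= cutoff_hz].sum() / psd.sum()

# White noise at 48 kHz: power spreads evenly up to 24 kHz, so roughly
# (24000 - 8000) / 24000 = 2/3 of it should sit above 8 kHz.
rng = np.random.default_rng(1)
print(ehf_power_fraction(rng.standard_normal(48000), fs=48000))
```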
Ear Hear
November 2024
Department of Communication Sciences & Disorders, Northwestern University, Evanston, Illinois, USA.
Objectives: Previous research has shown that speech recognition with different wide dynamic range compression (WDRC) time-constants (fast-acting, or Fast, and slow-acting, or Slow) is associated with individual working memory ability, especially in adverse listening conditions. Until recently, much of this research was limited to omnidirectional hearing aid settings and colocated speech and noise, whereas most hearing aids are fit with directional processing that may improve the listening environment in spatially separated conditions and interact with WDRC processing. The primary objective of this study was to determine whether there is an association between individual working memory ability and speech recognition in noise with different WDRC time-constants, with and without microphone directionality (binaural beamformer, or Beam, versus omnidirectional, or Omni), in a spatial condition ideal for the beamformer (speech at 0°, noise at 180°).
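For readers unfamiliar with WDRC time-constants, the sketch below implements a minimal single-band compressor whose attack and release times set how quickly gain tracks the signal envelope; short values approximate Fast-acting processing and long values Slow. All parameter values are illustrative assumptions, not settings from the study's hearing aids.

```python
import numpy as np

def wdrc(x: np.ndarray, fs: int, ratio: float = 3.0, threshold_db: float = -40.0,
         attack_ms: float = 5.0, release_ms: float = 50.0) -> np.ndarray:
    """Minimal single-band WDRC: levels above threshold are compressed by `ratio`."""
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env, out = 0.0, np.empty_like(x)
    for i, s in enumerate(x):
        mag = abs(s)
        # One-pole envelope follower: fast rise (attack), slow fall (release).
        coef = atk if mag > env else rel
        env = coef * env + (1.0 - coef) * mag
        level_db = 20.0 * np.log10(max(env, 1e-9))
        # Above threshold, output grows only 1/ratio dB per input dB.
        gain_db = min(0.0, (threshold_db - level_db) * (1.0 - 1.0 / ratio))
        out[i] = s * 10.0 ** (gain_db / 20.0)
    return out

# Fast versus Slow processing of the same ramped tone.
fs = 16000
x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs) * np.linspace(0.0, 1.0, fs)
fast = wdrc(x, fs, attack_ms=5.0, release_ms=50.0)
slow = wdrc(x, fs, attack_ms=50.0, release_ms=1500.0)
```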