Infants perceptually tune to the phonemes of their native languages in the first year of life, thereby losing the ability to discriminate non-native phonemes. Infants who perceptually tune earlier have been shown to develop stronger language skills later in childhood. We hypothesized that socioeconomic disparities, which have been associated with differences in the quality and quantity of language in the home, would contribute to individual differences in phonetic discrimination. Seventy-five infants were assessed on measures of phonetic discrimination at 9 months, on the quality of the home environment at 15 months, and on language abilities at both ages. Phonetic discrimination did not vary according to socioeconomic status (SES), but was significantly associated with the quality of the home environment. This association persisted when controlling for 9-month expressive language abilities, rendering it less likely that infants with better expressive language skills were simply engendering higher quality home interactions. This suggests that infants from linguistically richer home environments may be more tuned to their native language and therefore less able to discriminate non-native contrasts at 9 months relative to infants whose home environments are less responsive. These findings indicate that home language environments may be more critical than SES in contributing to early language perception, with possible implications for language development more broadly.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7458123
DOI: http://dx.doi.org/10.1111/infa.12145
Brain Lang
February 2025
Univ. Lille, CNRS, UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, F-59000 Lille, France; Univ. Lille, Inria, CNRS, Centrale Lille, UMR 9189 - CRIStAL, F-59000 Lille, France.
Although previous research has shown that speakers adapt the words they use, it remains unclear whether speakers also adapt their phonological representations, leading them to perceive new phonemic contrasts following a social interaction. This event-related potential (ERP) study investigates whether the neuronal responses of Northern French speakers to the /e/-/ε/ vowel merger show evidence of discrimination between the /e/ and /ε/ phonemes after interacting with a speaker who produced this contrast. Northern French participants engaged in an interactive map task, and we measured their ERP responses elicited by the presentation of a final syllable that was either phonemically identical to or different from the preceding syllables.
Brain
January 2025
Department of Neurology, Medical College of Wisconsin, Milwaukee, WI 53226, USA.
Acoustic-phonetic perception refers to the ability to perceive and discriminate between speech sounds. Acquired impairment of acoustic-phonetic perception is known historically as "pure word deafness" and typically follows bilateral lesions of the cortical auditory system. The extent to which this deficit occurs after unilateral left hemisphere damage and the critical left hemisphere areas involved are not well defined.
Autism Res
December 2024
Psychiatry and Addictology Department, CIUSSS-NIM Research Center, University of Montreal, Montreal, Quebec, Canada.
Child-directed speech (CDS), which amplifies acoustic and social features of speech during interactions with young children, promotes typical phonetic and language development. In autism, both behavioral and brain data indicate reduced sensitivity to human speech, which predicts absent, decreased, or atypical benefits of exaggerated speech signals such as CDS. This study investigates the impact of exaggerated fundamental frequency (F0) and voice-onset time on the neural processing of speech sounds in 22 Chinese-speaking autistic children aged 2-7 years with a history of speech delays, compared with 25 typically developing (TD) peers.
J Acoust Soc Am
November 2024
Aix Marseille Université, CNRS, LPL, 13100 Aix-en-Provence, France.
Accentuation is encoded by both durational and pitch cues in French. While previous research agrees that the sole presence of pitch cues is sufficient to encode accentuation in French, the role of durational cues is less clear. In four cue-weighting accent perception experiments, we examined the role of pitch and durational cues in French listeners' perception of accentuation.
Sci Rep
November 2024
Laboratoire des systèmes perceptifs, Département d'études cognitives, École normale supérieure, PSL University, CNRS, 75005, Paris, France.