Elizabeth Spelke's work is a scholarly presentation of core knowledge theory and a masterful compendium of the empirical evidence that supports it. Unfortunately, Spelke's principal theoretical assumption is that core knowledge is simply the innate product of cognitive evolution. As such, her theory fails to explicate the developmental mechanisms underlying the emergence of the cognitive systems on which that knowledge depends.
We presented 28 Spanish monolingual and 28 Catalan-Spanish close-language bilingual 5-year-old children with a video of a talker speaking in the children's native language and a nonnative language and examined the temporal dynamics of their selective attention to the talker's eyes and mouth. When the talker spoke in the children's native language, monolinguals attended equally to the eyes and mouth throughout the trial, whereas close-language bilinguals first attended more to the mouth and then distributed attention equally between the eyes and mouth. In contrast, when the talker spoke in a nonnative language (English), both monolinguals and bilinguals initially attended more to the mouth and then gradually shifted to a pattern of equal attention to the eyes and mouth.
J Exp Child Psychol
August 2023
The timing of the developmental emergence of holistic face processing and its sensitivity to experience in early childhood are somewhat controversial topics. To investigate holistic face perception in early childhood, we used an online testing platform and administered a two-alternative forced-choice task to 4-, 5-, and 6-year-old children. The children saw pairs of composite faces and needed to decide whether the faces were the same or different.
Extraction of meaningful information from multiple talkers relies on perceptual segregation. The temporal synchrony statistics inherent in everyday audiovisual (AV) speech offer a powerful basis for perceptual segregation. We investigated the developmental emergence of synchrony-based perceptual segregation of multiple talkers in 3-7-year-old children.
Infants start tracking auditory-only non-adjacent dependencies (NAD) between 15 and 18 months of age. Given that audiovisual speech, normally available in a talker's mouth, is perceptually more salient than auditory speech and that it facilitates speech processing and language acquisition, we investigated whether 15-month-old infants' NAD learning is modulated by attention to a talker's mouth. Infants performed an audiovisual NAD learning task while we recorded their selective attention to the eyes, mouth, and face of an actress while she spoke an artificial language that followed an AXB structure (tis-X-bun; nal-X-gor) during familiarization.
Looking to the mouth of a talker early in life predicts expressive communication. We hypothesized that looking at a talker's mouth may signal that infants are ready for increased supported joint engagement and that it subsequently facilitates prelinguistic vocal development and translates to broader gains in expressive communication. We tested this hypothesis in 50 infants aged 6-18 months with heightened and general population-level likelihood of autism diagnosis (Sibs-autism and Sibs-NA, respectively).
Background: Due to familial liability, siblings of children with ASD exhibit elevated risk for language delays. The processes contributing to language delays in this population remain unclear.
Methods: Considering well-established links between attention to dynamic audiovisual cues inherent in a speaker's face and speech processing, we investigated whether attention to a speaker's face and mouth differs in 12-month-old infants at high familial risk for ASD but without an ASD diagnosis (hr-sib; n = 91) and in infants at low familial risk for ASD (lr-sib; n = 62), and whether attention at 12 months predicts language outcomes at 18 months.
Social interactions often involve a cluttered multisensory scene consisting of multiple talking faces. We investigated whether audiovisual temporal synchrony can facilitate perceptual segregation of talking faces. Participants saw either four identical or four different talking faces producing temporally jittered versions of the same visible speech utterance and heard the audible version of the same speech utterance.
Little is known about the effects of olfaction on visual processing during infancy. We investigated whether and how an infant's own mother's body odor or another mother's body odor affects 4-month-old infants' looking at their mother's face when it is paired with a stranger's face. In Experiment 1, infants were exposed to their mother's body odor or to a control odor; in Experiment 2, infants were exposed to a stranger mother's body odor. In both experiments, infants' visual preferences were recorded.
Children with autism spectrum disorder (ASD) display differences in multisensory function as quantified by several different measures. This study estimated the stability of variables derived from commonly used measures of multisensory function in school-aged children with ASD. Participants completed a simultaneity judgment task for audiovisual speech, tasks designed to elicit the McGurk effect, listening-in-noise tasks, electroencephalographic recordings, and eye-tracking tasks.
We investigated whether attention to a talker's eyes in 12-month-old infants is related to their communication and social abilities. We measured infant attention to a talker's eyes and mouth with a Tobii eye-tracker and examined the correlation between attention to the talker's eyes and scores on the Adaptive Behavior Questionnaire from the Bayley Scales of Infant and Toddler Development (BSID-III). Results indicated a positive relationship between eye gaze and scores on the Social and Communication subscales of the BSID-III.
Previous findings indicate that bilingual Catalan/Spanish-learning infants attend more to the highly salient audiovisual redundancy cues normally available in a talker's mouth than do monolingual infants. Presumably, greater attention to such cues renders the challenge of learning two languages easier. Spanish and Catalan are, however, rhythmically and phonologically close languages.
Dev Cogn Neurosci
November 2018
Classic views of multisensory processing suggest that cortical sensory regions are specialized. More recent views argue that cortical sensory regions are inherently multisensory. To date, there are no published neuroimaging data that directly test these claims in infancy.
Previous studies have found that when monolingual infants are exposed to a talking face speaking in a native language, 8- and 10-month-olds attend more to the talker's mouth, whereas 12-month-olds no longer do so. It has been hypothesized that the attentional focus on the talker's mouth at 8 and 10 months of age reflects reliance on the highly salient audiovisual (AV) speech cues for the acquisition of basic speech forms and that the subsequent decline of attention to the mouth by 12 months of age reflects the emergence of basic native speech expertise. Here, we investigated whether infants may redeploy their attention to the mouth once they fully enter the word-learning phase.
Recursive, hierarchically organized serial patterns provide the underlying structure in many cognitive and motor domains including speech, language, music, social interaction, and motor action. We investigated whether learning of hierarchical patterns emerges in infancy by habituating 204 infants to different hierarchical serial patterns and then testing for discrimination and generalization of such patterns. Results indicated that 8- to 10-month-old and 12- to 14-month-old infants exhibited sensitivity to the difference between hierarchical and non-hierarchical structure but that 4- to 6-month-old infants did not.
We tested 4-6- and 10-12-month-old infants to investigate whether the often-reported decline in infant sensitivity to other-race faces may reflect responsiveness to static or dynamic/silent faces rather than a general process of perceptual narrowing. Across three experiments, we tested discrimination of either dynamic own-race or other-race faces which were accompanied by a speech syllable, no sound, or a non-speech sound. Results indicated that 4-6- and 10-12-month-old infants discriminated own-race as well as other-race faces accompanied by a speech syllable, that only the 10-12-month-olds discriminated silent own-race faces, and that 4-6-month-old infants discriminated own-race and other-race faces accompanied by a non-speech sound whereas 10-12-month-old infants discriminated only own-race faces accompanied by a non-speech sound.
Early multisensory perceptual experiences shape the abilities of infants to perform socially relevant visual categorization, such as the extraction of gender, age, and emotion from faces. Here, we investigated whether multisensory perception of gender is influenced by infant-directed (IDS) or adult-directed (ADS) speech. Six-, 9-, and 12-month-old infants saw side-by-side silent video-clips of talking faces (a male and a female) and heard either a soundtrack of a female or a male voice telling a story in IDS or ADS.
Multisensory processes are fundamental in scaffolding perception, cognition, learning, and behavior. How and when stimuli from different sensory modalities are integrated rather than treated as separate entities is poorly understood. We review how the relative reliance on stimulus characteristics versus learned associations dynamically shapes multisensory processes.
Previous studies have found that infants shift their attention from the eyes to the mouth of a talker when they enter the canonical babbling phase after 6 months of age. Here, we investigated whether this increased attentional focus on the mouth is mediated by audiovisual synchrony and linguistic experience. To do so, we tracked eye gaze in 4-, 6-, 8-, 10-, and 12-month-old infants while they were exposed either to desynchronized native or desynchronized non-native audiovisual fluent speech.
We investigated whether the audiovisual speech cues available in a talker's mouth elicit greater attention when adults have to process speech in an unfamiliar language vs. a familiar language. Participants performed a speech-encoding task while watching and listening to videos of a talker in a familiar language (English) or an unfamiliar language (Spanish or Icelandic).
One of the most salient social categories conveyed by human faces and voices is gender. We investigated the developmental emergence of the ability to perceive the coherence of auditory and visual attributes of gender in 6- and 9-month-old infants. Infants viewed two side-by-side video clips of a man and a woman singing a nursery rhyme and heard a synchronous male or female soundtrack.
Infants growing up in bilingual environments succeed at learning two languages. What adaptive processes enable them to master the more complex nature of bilingual input? One possibility is that bilingual infants take greater advantage of the redundancy of the audiovisual speech that they usually experience during social interactions. Thus, we investigated whether bilingual infants' need to keep languages apart increases their attention to the mouth as a source of redundant and reliable speech cues.