In order to become proficient native speakers, children have to learn the morpho-syntactic relations between distant elements in a sentence, so-called non-adjacent dependencies (NADs). Previous research suggests that NAD learning in children proceeds in distinct developmental stages: until about 2 years of age, children are able to learn NADs associatively under passive listening conditions, whereas from around 3-4 years of age they fail to learn NADs during passive listening. To test whether the transition between these developmental stages occurs gradually, we tested children's NAD learning in a foreign language using event-related potentials (ERPs).
Non-adjacent dependencies (NADs) are important building blocks for language and extracting them from the input is a fundamental part of language acquisition. Prior event-related potential (ERP) studies revealed changes in the neural signature of NAD learning between infancy and adulthood, suggesting a developmental shift in the learning route for NADs. The present study aimed to specify which brain regions are involved in this developmental shift and whether this shift extends to NAD learning in the non-linguistic domain.
Learning and processing natural language requires the ability to track syntactic relationships between words and phrases in a sentence, which are often separated by intervening material. These nonadjacent dependencies can be studied using artificial grammar learning paradigms and structured sequence processing tasks. These approaches have been used to demonstrate that human adults, infants and some nonhuman animals are able to detect and learn dependencies between nonadjacent elements within a sequence.
Both social perception and temperament in young infants have been related to social functioning later in life. Previous functional Near-Infrared Spectroscopy (fNIRS) data (Lloyd-Fox et al., 2009) showed larger blood-oxygenation changes for social compared to non-social stimuli in the posterior temporal cortex of five-month-old infants.
The neurobiology of birdsong, as a model for human speech, is a prominent area of research in behavioral neuroscience. Whereas electrophysiology and molecular approaches allow the investigation of either the effects of different stimuli on a few neurons, or of one stimulus in large parts of the brain, blood oxygenation level dependent (BOLD) functional Magnetic Resonance Imaging (fMRI) allows combining both advantages, i.e.
Vocal learning in songbirds and humans occurs by imitation of adult vocalizations. In both groups, vocal learning includes a perceptual phase during which juvenile birds and infants memorize adult vocalizations. Despite intensive research, the neural mechanisms supporting this auditory memory are still poorly understood.
Like humans, oscine songbirds exhibit vocal learning. They learn their song by imitating conspecifics, mainly adults. Among them, the zebra finch (Taeniopygia guttata) has been widely used as a model species to study the behavioral, cellular and molecular substrates of vocal learning.
Songbirds provide an excellent model system exhibiting vocal learning associated with extreme brain plasticity linked to quantifiable behavioral changes. This animal model has thus far been intensively studied using electrophysiological, histological and molecular mapping techniques. However, these approaches do not provide a global view of the brain and/or do not allow repeated measures, which are necessary to establish correlations between alterations in neural substrate and behavior.