Infants' learning about words and sounds in relation to objects.

Child Dev

Department of Psychology, University of Chicago, IL 60637, USA.

Published: June 1999

In acquiring language, babies learn not only that people can communicate about objects and events, but also that they typically use a particular kind of act as the communicative signal. The current studies asked whether 1-year-olds' learning of names during joint attention is guided by the expectation that names will be in the form of spoken words. In the first study, 13-month-olds were introduced to either a novel word or a novel sound-producing action (using a small noisemaker). Both the word and the sound were produced by a researcher as she showed the baby a new toy during a joint attention episode. The baby's memory for the link between the word or sound and the object was tested in a multiple-choice procedure. Thirteen-month-olds learned both the word-object and sound-object correspondences, as evidenced by their reliably choosing the target in response to the word or sound on test trials, but not on control trials when no word or sound was presented. In the second study, 13-month-olds, but not 20-month-olds, learned a new sound-object correspondence. These results indicate that infants initially accept a broad range of signals in communicative contexts and narrow the range with development.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3908446
DOI: http://dx.doi.org/10.1111/1467-8624.00006

Publication Analysis

Top Keywords

word sound (16); joint attention (8); study 13-month-olds (8); word (5); infants' learning (4); learning sounds (4); sounds relation (4); relation objects (4); objects acquiring (4); acquiring language (4)

Similar Publications

Chronic exposure to traffic noise is associated with increased stress and sleep disruption. Research on the health consequences of environmental noise, particularly traffic noise, has been conducted primarily in high-income countries (HICs), and those findings have guided the development of noise regulations. Their relevance to policy frameworks in low- and middle-income countries (LMICs) remains uncertain.


Speech comprehension involves the dynamic interplay of multiple cognitive processes, from basic sound perception to linguistic encoding and, finally, complex semantic-conceptual interpretation. How the brain coordinates these diverse streams of information processing remains poorly understood. Applying hidden Markov modeling to fMRI data obtained during spoken narrative comprehension, we reveal that whole-brain networks predominantly oscillate within a tripartite latent state space.


Introduction: Apraxia of speech (AOS) is a motor speech disorder characterized by sound distortions, substitutions, deletions, and additions; slow speech rate; abnormal prosody; and/or segmentation between words and syllables. AOS can result from neurodegeneration, in which case it can be accompanied by primary agrammatic aphasia (PAA); when the two present together, the combination is called AOS+PAA. AOS can also be the sole manifestation of neurodegeneration, termed primary progressive AOS (PPAOS).


The impact of talker variability and individual differences on word learning in adults.

Brain Res

January 2025

Department of Communicative Sciences and Disorders, New York University, New York, NY, USA.

Studies have shown that exposure to multiple talkers during learning is beneficial in a variety of spoken language tasks, such as learning speech sounds in a second language and learning novel words in a lab context. However, not all studies find this multiple-talker benefit. Some have found that the processing benefits of multiple-talker exposure depend on the linguistic profile of the listeners and on the cognitive demands during learning (blocked versus randomized talkers).


Listeners can use both lexical context (i.e., lexical knowledge activated by the word itself) and lexical predictions based on the content of a preceding sentence to adjust their phonetic categories to speaker idiosyncrasies.

