The quantity, quality, and complexity of language input are important for children's language development. This study examined how the fine-grained timing of this input relates to children's vocabulary at 3 years of age in 64 mother-child dyads (28 male, 36 female; 69% White, 31% Black). Acoustic analysis of turn-taking in mother-child dialogue showed that more consistently timed maternal responses (lower response latency variability) were associated with higher vocabulary scores on the Peabody Picture Vocabulary Test, third edition (r = .42, p < .001). Among mothers with consistently timed responses, the complexity (mean length of utterance) of their child-directed speech significantly predicted their children's vocabulary (β = .53, p = .002). These findings suggest that predictably timed, contingent maternal responses provide an important learning cue that supports language development beyond the content of the language input itself. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
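
A minimal sketch of the kind of analysis the abstract describes, not the authors' pipeline: computing each mother's response-latency variability from turn-taking timestamps and relating it to vocabulary scores. The data layout, function names, and toy values below are assumptions made for illustration.

```python
# Hypothetical illustration: per-dyad turns are (speaker, onset_s, offset_s) tuples,
# and PPVT-III scores are available per dyad. None of this is the authors' data.
import numpy as np
from scipy.stats import pearsonr

def latency_variability(turns):
    """SD of maternal response latencies: mother turn onset minus preceding child turn offset."""
    latencies = []
    for prev, curr in zip(turns, turns[1:]):
        prev_speaker, _, prev_offset = prev
        curr_speaker, curr_onset, _ = curr
        if prev_speaker == "child" and curr_speaker == "mother":
            latencies.append(curr_onset - prev_offset)
    return np.std(latencies) if latencies else np.nan

# Toy data: three dyads with timestamped turns and (invented) PPVT-III scores.
dyads = {
    "d01": [("child", 0.0, 1.2), ("mother", 1.5, 3.0), ("child", 3.4, 4.1), ("mother", 4.5, 6.0)],
    "d02": [("child", 0.0, 0.9), ("mother", 1.1, 2.4), ("child", 2.9, 3.8), ("mother", 5.6, 7.0)],
    "d03": [("child", 0.0, 1.0), ("mother", 1.5, 2.8), ("child", 3.2, 4.0), ("mother", 4.6, 5.9)],
}
ppvt = {"d01": 108, "d02": 94, "d03": 101}

variability = np.array([latency_variability(t) for t in dyads.values()])
scores = np.array([ppvt[k] for k in dyads])
r, p = pearsonr(variability, scores)
print(f"r = {r:.2f}, p = {p:.3f}")
```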

Source: http://dx.doi.org/10.1037/dev0001819

Publication Analysis

Top Keywords: language input (12), language development (8), children's vocabulary (8), consistently timed (8), maternal responses (8), language (5), vocabulary (5), "what" "when" (4), "when" language (4), input (4)

Similar Publications

Generative artificial intelligence (AI) technologies have the potential to revolutionise healthcare delivery, but their patient safety risks require classification and monitoring. To address this need, we developed and evaluated a preliminary classification system for categorising generative AI patient safety errors. Our classification system is organised around two AI system stages (input and output), with specific error types by stage.
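
One way such a two-stage taxonomy could be encoded, offered only as an illustrative sketch: the input/output stages come from the abstract, while the specific error types and descriptions below are hypothetical placeholders, not the published classification.

```python
# Hypothetical encoding of a two-stage (input/output) patient-safety error taxonomy.
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):
    INPUT = "input"     # errors in what the AI system receives
    OUTPUT = "output"   # errors in what the AI system produces

@dataclass
class SafetyError:
    stage: Stage
    error_type: str
    description: str

# Invented example records, grouped by stage for review.
report = [
    SafetyError(Stage.INPUT, "missing_context", "Relevant history absent from the prompt"),
    SafetyError(Stage.OUTPUT, "hallucinated_fact", "Generated note cites a test never performed"),
]

by_stage = {}
for err in report:
    by_stage.setdefault(err.stage.value, []).append(err.error_type)
print(by_stage)
```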

Background: Large language models have shown remarkable efficacy in various medical research and clinical applications. However, their skills in medical image recognition and subsequent report generation or question answering (QA) remain limited.

Objective: We aim to fine-tune a multimodal, transformer-based model for generating medical reports from slit lamp images and develop a QA system using Llama2.
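
As a rough sketch of the report-generation step only, and under clear assumptions: the snippet below uses an off-the-shelf captioning model from Hugging Face transformers (the BLIP checkpoint name, prompt, and image path are assumptions), not the paper's fine-tuned model, and it does not reproduce the Llama2-based QA component.

```python
# Not the authors' system: draft description of a slit-lamp image with a generic
# image-captioning model. Checkpoint, prompt, and file path are assumptions.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("slit_lamp_example.jpg")  # hypothetical local image
inputs = processor(images=image, text="an anterior segment photograph showing", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=60)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```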

Basic Science and Pathogenesis. Alzheimers Dement, December 2024. Miin Wu School of Computing, National Cheng Kung University, Tainan, Taiwan.

Background: Alzheimer's disease (AD) has been associated with speech and language impairment. Recent progress in the field has led to the development of automated audio-based AD detection, which has great potential for cross-linguistic detection. In this investigation, we utilised a pretrained deep learning model to automatically detect AD, leveraging acoustic data derived from Chinese speech.
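
For orientation only, a basic acoustic-classification sketch under stated assumptions: it summarises each recording with MFCC statistics and fits a simple classifier, whereas the study itself used a pretrained deep learning model. File names and labels below are hypothetical.

```python
# Illustrative acoustic pipeline (not the paper's model): MFCC summary features
# per recording, then a simple AD-vs-control classifier.
import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression

def mfcc_summary(path, sr=16000, n_mfcc=13):
    """Mean and SD of MFCCs over the recording as a fixed-length feature vector."""
    audio, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

recordings = ["spk01.wav", "spk02.wav", "spk03.wav", "spk04.wav"]  # hypothetical files
labels = np.array([1, 0, 1, 0])                                    # 1 = AD, 0 = control

X = np.stack([mfcc_summary(p) for p in recordings])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))
```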

Dense Paraphrasing for multimodal dialogue interpretation. Front Artif Intell, December 2024. Computer Science Department, Brandeis University, Waltham, MA, United States.

Multimodal dialogue involving multiple participants presents complex computational challenges, primarily due to the rich interplay of diverse communicative modalities including speech, gesture, action, and gaze. These modalities interact in complex ways that traditional dialogue systems often struggle to accurately track and interpret. To address these challenges, we extend the textual enrichment strategy of Dense Paraphrasing (DP) by translating each nonverbal modality into linguistic expressions.
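
A toy illustration of the general idea, not the authors' Dense Paraphrasing implementation: nonverbal annotations on a dialogue turn are rewritten as text and appended to the utterance. The annotation schema and templates below are invented for the example.

```python
# Hypothetical templates for rendering nonverbal annotations as text.
TEMPLATES = {
    "gesture": "[{speaker} points at {target}]",
    "gaze": "[{speaker} looks at {target}]",
    "action": "[{speaker} {target}]",
}

def densify(turn):
    """Append textual paraphrases of a turn's nonverbal annotations to its utterance."""
    paraphrases = [
        TEMPLATES[kind].format(speaker=turn["speaker"], target=target)
        for kind, target in turn.get("nonverbal", [])
    ]
    return " ".join([turn["text"]] + paraphrases)

turn = {
    "speaker": "A",
    "text": "Put that one over here.",
    "nonverbal": [("gesture", "the red block"), ("gaze", "the tray")],
}
print(densify(turn))
# -> Put that one over here. [A points at the red block] [A looks at the tray]
```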

Background: Up to 13% of adolescents suffer from depressive disorders. Despite the high psychological burden, adolescents rarely decide to contact child and adolescent psychiatric services. To provide a low-barrier alternative, our long-term goal is to develop a chatbot for early identification of depressive symptoms.
