Audiovisual synchrony detection for fluent speech in early childhood: An eye-tracking study.

Psych J

Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China.

Published: June 2022

During childhood, the ability to detect audiovisual synchrony gradually sharpens for simple stimuli such as flash-beep pairs and single syllables. However, little is known about how children perceive synchrony in natural, continuous speech. This study investigated young children's gaze patterns while they watched movies of two identical speakers telling stories side by side. Only one speaker's lip movements matched the voice; the other's either led or lagged behind the soundtrack by 600 ms. Children aged 3-6 years (n = 94, 52.13% males) showed an overall preference for the synchronous speaker, with no age-related changes in synchrony-detection sensitivity, as indicated by similar gaze patterns across ages. However, viewing time for the synchronous speech was significantly longer in the auditory-leading (AL) condition than in the visual-leading (VL) condition, suggesting that asymmetric sensitivities to AL versus VL asynchrony are already established in early childhood. When further examining gaze patterns on the dynamic faces, we found that greater attention to the mouth region served as an adaptive strategy for reading visual speech signals and was associated with longer viewing of the synchronous videos. Attention to detail, a dimension of autistic traits characterized by local processing, was correlated with poorer speech-synchrony processing. These findings extend previous research by charting the development of speech-synchrony perception in young children, and may have implications for clinical populations (e.g., autism) with impaired multisensory integration.
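The preferential-looking measure described above can be illustrated with a minimal sketch, not the authors' analysis code: gaze samples are labeled by the area of interest (AOI) they fell in, and preference for the synchronous speaker is the proportion of face-directed looking time spent on that speaker. The AOI labels and sample data below are hypothetical; only the synchronous-versus-asynchronous contrast follows the study design.

```python
def preference_score(samples):
    """Proportion of face-directed looking time on the synchronous speaker.

    samples: list of AOI labels, one per gaze sample at a fixed sampling
             rate, e.g. "sync" (synchronous talker), "async" (asynchronous
             talker), or "away" (neither face).
    """
    sync = samples.count("sync")
    asyn = samples.count("async")
    total = sync + asyn  # "away" samples are excluded from the denominator
    return sync / total if total else float("nan")

# Hypothetical trials for the auditory-leading (AL) and visual-leading (VL)
# conditions; scores above 0.5 indicate a preference for the synchronous talker.
al_trial = ["sync"] * 70 + ["async"] * 30 + ["away"] * 10
vl_trial = ["sync"] * 55 + ["async"] * 45 + ["away"] * 10

print(preference_score(al_trial))  # AL condition
print(preference_score(vl_trial))  # VL condition
```

A score of 0.5 marks chance-level looking; the AL > VL pattern reported in the abstract would appear here as a higher score on AL trials.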

Source
http://dx.doi.org/10.1002/pchj.538

Publication Analysis

Top Keywords

gaze patterns (12); audiovisual synchrony (8); early childhood (8); viewing time (8); time synchronous (8); speech synchrony (8); speech (6); synchrony detection (4); detection fluent (4); fluent speech (4)

Similar Publications

Diagnosis of Parkinson's disease by eliciting trait-specific eye movements in multi-visual tasks.

J Transl Med

January 2025

School of Information and Communication Engineering, Dalian University of Technology, No. 2 Linggong Road, 116024, Dalian, China.

Background: Parkinson's disease (PD) is a neurodegenerative disorder, and eye-movement abnormalities are a significant symptom for its diagnosis. In this paper, we developed an eye-movement-driven set of multi-visual tasks in a virtual reality (VR) environment to elicit PD-specific eye-movement abnormalities. The abnormal features were then modeled with the proposed deep learning algorithm to provide an auxiliary diagnosis of PD.

Speechreading, gathering speech information from talkers' faces, supports speech perception when speech acoustics are degraded. Benefitting from speechreading, however, requires listeners to visually fixate talkers during face-to-face interactions. The purpose of this study is to test the hypothesis that preschool-aged children allocate their eye gaze to a talker when speech acoustics are degraded.

Increased attention towards progress information near a goal state.

Psychon Bull Rev

January 2025

Department of Psychology, McGill University, 2001 Av. McGill College, Montréal, QC, H3A 1G1, Canada.

A growing body of evidence across psychology suggests that (cognitive) effort exertion increases in proximity to a goal state. For instance, previous work has shown that participants respond more quickly, but not less accurately, when they near a goal, as indicated by a filling progress bar. Yet it remains unclear when, over the course of a cognitively demanding task, people monitor progress information: do they continuously monitor their goal progress, or attend to it more frequently as they near their goal? To answer this question, we used eye-tracking to examine trial-by-trial changes in progress monitoring as participants completed blocks of an attentionally demanding oddball task.

How are arbitrary sequences of verbal information retained and manipulated in working memory? Increasing evidence suggests that serial order in verbal working memory (WM) is spatially coded and that spatial attention is involved in access and retrieval. Based on the idea that the brain areas controlling spatial attention are also involved in oculomotor control, we used eye tracking to reveal how the spatial structure of serial-order information is accessed in verbal working memory. In two experiments, participants memorized a sequence of auditory words in the correct order.

Multi-item retro-cueing effects refer to better working memory performance for multiple items when they are cued after their offset, compared with a neutral condition in which all items are cued. However, several studies have reported boundary conditions, and findings have also sometimes failed to replicate. We hypothesized that a strategy of focusing on only one of the cued items could produce these inconsistent patterns.
