Although infant speech perception is often studied in isolated modalities, infants' experience with speech is largely multimodal (i.e., the speech sounds they hear are accompanied by articulating faces). Across two experiments, we tested infants' sensitivity to the relationship between the auditory and visual components of audiovisual speech in their native (English) and non-native (Spanish) language. In Experiment 1, infants' looking times were measured during a preferential looking task in which they saw two simultaneous visual speech streams articulating a story, one in English and the other in Spanish, while they heard either the English or the Spanish version of the story. In Experiment 2, looking times from another group of infants were measured as they watched single displays of congruent and incongruent combinations of English and Spanish audio and visual speech streams. Findings demonstrated an age-related increase in looking towards the native relative to the non-native visual speech stream when accompanied by the corresponding (native) auditory speech. This increase in native-language preference did not appear to be driven by a difference in preference for native vs. non-native audiovisual congruence, as we observed no difference in looking times at the audiovisual streams in Experiment 2.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4415951 | PMC |
| http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0126059 | PLOS |
Atten Percept Psychophys
January 2025
School of Allied Health and Communicative Disorders, Northern Illinois University, DeKalb, IL, USA.
Speechreading (gathering speech information from talkers' faces) supports speech perception when speech acoustics are degraded. Benefitting from speechreading, however, requires listeners to visually fixate talkers during face-to-face interactions. The purpose of this study is to test the hypothesis that preschool-aged children allocate their eye gaze to a talker when speech acoustics are degraded.
Trends Hear
January 2025
Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Head and Neck Surgery, University of Cologne, Cologne, Germany.
Speech-on-speech masking is a common and challenging situation in everyday verbal communication. The ability to segregate competing auditory streams is a necessary requirement for focusing attention on the target speech. The Visual World Paradigm (VWP) provides insight into speech processing by capturing gaze fixations on visually presented icons that reflect the speech signal.
Commun Biol
January 2025
School of Psychology, Shenzhen University, Shenzhen, China.
Speech processing involves a complex interplay between sensory and motor systems in the brain, essential for early language development. Recent studies have extended this sensory-motor interaction to visual word processing, emphasizing the connection between reading and handwriting during literacy acquisition. Here we show how language-motor areas encode motoric and sensory features of language stimuli during auditory and visual perception, using functional magnetic resonance imaging (fMRI) combined with representational similarity analysis.
Int J Lang Commun Disord
January 2025
Department of Language and Cognition, University College London, London, UK.
Background: Global aphasia is a severe communication disorder affecting all language modalities, most commonly caused by stroke. Evidence as to whether the functional communication of people with global aphasia (PwGA) can improve after speech and language therapy (SLT) is limited and conflicting. This is partly because cognition, which is relevant to participation in therapy and implicated in successful functional communication, can be severely impaired in global aphasia.
Diagnostics (Basel)
December 2024
GITA Lab., Faculty of Engineering, University of Antioquia, Medellín 050010, Colombia.
Background/objectives: Parkinson's disease (PD) affects more than 6 million people worldwide. Accurate diagnosis and monitoring are key to reducing its economic burden. Typical approaches consider either speech signals or video recordings of the face to automatically model abnormal patterns in PD patients.