Can children with autism spectrum disorders "hear" a speaking face?

Child Dev

Haskins Laboratories, New Haven, CT 06511, USA.

Published: February 2012

This study used eye-tracking methodology to assess audiovisual speech perception in 26 children ranging in age from 5 to 15 years, half with autism spectrum disorders (ASD) and half with typical development. Given the characteristic reduction in gaze to the faces of others in children with ASD, it was hypothesized that they would show reduced influence of visual information on heard speech. Responses were compared on a set of auditory, visual, and audiovisual speech perception tasks. Even when fixated on the face of the speaker, children with ASD were less visually influenced than typically developing controls. These results indicate fundamental differences in the processing of audiovisual speech in children with ASD, which may contribute to their language and communication impairments.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3169706
DOI: http://dx.doi.org/10.1111/j.1467-8624.2011.01619.x

Publication Analysis

Top Keywords
audiovisual speech: 12
children asd: 12
autism spectrum: 8
spectrum disorders: 8
speech perception: 8
typical development: 8
children: 5
children autism: 4
disorders "hear": 4
"hear" speaking: 4

Similar Publications

Significance: Decoding naturalistic content from brain activity has important neuroscience and clinical implications. Information about visual scenes and intelligible speech has been decoded from cortical activity using functional magnetic resonance imaging (fMRI) and electrocorticography, but widespread applications are limited by the logistics of these technologies.

Aim: High-density diffuse optical tomography (HD-DOT) offers image quality approaching that of fMRI but with the silent, open scanning environment afforded by optical methods, thus opening the door to more naturalistic research and applications.


Communication comprises a wealth of multimodal signals (e.g., gestures, eye gaze, intonation) in addition to speech, and there is growing interest in the study of multimodal language among psychologists, linguists, neuroscientists, and computer scientists.


Observing the lip movements of a speaker facilitates speech understanding, especially in challenging listening situations. Converging evidence from neuroscientific studies shows stronger neural responses to audiovisual stimuli than to audio-only stimuli. However, the interindividual variability in this contribution of lip-movement information, and its consequences for behavior, remains unknown.


When listening to speech under adverse conditions, listeners compensate using neurocognitive resources. A clinically relevant form of adverse listening is listening through a cochlear implant (CI), which provides a spectrally degraded signal. CI listening is often simulated through noise-vocoding.


Objectives: To investigate the influence of frequency-specific audibility on audiovisual benefit in children, this study examined the impact of high- and low-pass acoustic filtering on auditory-only and audiovisual word and sentence recognition in children with typical hearing. Previous studies show that visual speech provides greater access to consonant place of articulation than to other consonant features, and that low-pass filtering has a strong impact on the perception of acoustic cues to consonant place of articulation. This suggests that visual speech may be particularly useful when acoustic speech is low-pass filtered, because it provides complementary information about consonant place of articulation.

