Audiovisual speech perception involves the simultaneous processing of auditory and visual speech. Deficits in audiovisual speech perception have been reported in autistic individuals; however, less is known about audiovisual speech perception within the broader autism phenotype (BAP), which includes individuals with elevated, yet subclinical, levels of autistic traits. We investigate the neural indices of audiovisual speech perception in adults exhibiting a range of autism-like traits using event-related potentials (ERPs) in a phonemic restoration paradigm. In this paradigm, we consider conditions where the speech articulators (mouth and jaw) are visible (AV condition) or obscured by a pixelated mask (PX condition). These two face conditions were included in both passive (simply viewing a speaking face) and active (pressing a button for a specific consonant-vowel stimulus) experiments. The results revealed an N100 ERP component that was present across all listening contexts and conditions; however, it was attenuated in the active AV condition, where participants could view the speaker's face, including the mouth and jaw. The P300 ERP component was present in the active experiment only and was significantly greater in the AV condition than in the PX condition, suggesting increased neural effort for detecting deviant stimuli when visible articulation was present, consistent with a visual influence on perception. Finally, the P300 response was negatively correlated with autism-like traits: higher autistic traits were associated with generally smaller P300 responses in the active AV and PX conditions. These findings support the view that atypical audiovisual processing may be characteristic of the BAP in adults.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10377560
DOI: http://dx.doi.org/10.3390/brainsci13071011
eNeuro
January 2025
Neurophysiology of Everyday Life Group, Department of Psychology, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
A comprehensive analysis of everyday sound perception can be achieved using electroencephalography (EEG) with the concurrent acquisition of information about the environment. While extensive research has been dedicated to speech perception, the complexities of auditory perception within everyday environments — specifically, which types of information and which key features to extract — remain less explored. Our study aims to systematically investigate the relevance of different feature categories: discrete sound-identity markers, general cognitive state information, and acoustic representations, including discrete sound onsets, the envelope, and the mel-spectrogram.
J Speech Lang Hear Res
December 2024
University of California, San Francisco.
Purpose: We investigate the extent to which automated audiovisual metrics extracted during an affect production task show statistically significant differences between a cohort of children diagnosed with autism spectrum disorder (ASD) and typically developing controls.
Method: Forty children with ASD and 21 neurotypical controls interacted with a multimodal conversational platform with a virtual agent, Tina, who guided them through tasks prompting facial and vocal communication of four emotions-happy, angry, sad, and afraid-under conditions of high and low verbal and social cognitive task demands.
Results: Individuals with ASD exhibited greater standard deviation of the fundamental frequency of the voice, with the minima and maxima of the pitch contour occurring at an earlier time point compared to controls.
Ear Hear
December 2024
Department of Psychology, University of Western Ontario, London, Ontario, Canada.
Objectives: Speech intelligibility is supported by the sound of a talker's voice and visual cues related to articulatory movements. The relative contribution of auditory and visual cues to an integrated audiovisual percept varies depending on a listener's environment and sensory acuity. Cochlear implant users rely more on visual cues than those with acoustic hearing to help compensate for the fact that the auditory signal produced by their implant is poorly resolved relative to that of the typically developed cochlea.
Digit Health
December 2024
Ostbayerische Technische Hochschule (OTH) Regensburg, Faculty of Health and Social Sciences; Nursing Science, Germany.
Hear Res
January 2025
Department of ENT - Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, 3010 Bern, Switzerland.
Objectives: Understanding brain processing of auditory and visual speech is essential for advancing speech perception research and improving clinical interventions for individuals with hearing impairment. Functional near-infrared spectroscopy (fNIRS) is deemed to be highly suitable for measuring brain activity during language tasks. However, accurate data interpretation also requires validated stimuli and behavioral measures.