Developmental dyslexia is a specific deficit in reading and spelling that often persists into adulthood. In the present study, we used slow event-related fMRI and independent component analysis (ICA) to identify brain networks involved in the perception of audiovisual speech in a group of adult readers with dyslexia (RD) and a group of fluent readers (FR). Participants saw a video of a female speaker saying a disyllabic word. In the congruent condition, the auditory and visual inputs matched, whereas in the incongruent condition they differed. Participants had to respond to occasionally occurring animal names. The ICA identified several components that were differentially modulated in FR and RD. Two of these components, encompassing the fusiform and occipital gyri, showed less activation in RD than in FR, possibly indicating a deficit in extracting the facial information needed to integrate auditory and visual input during natural speech perception. A further component centered on the superior temporal sulcus (STS) also exhibited less activation in RD than in FR. This finding was corroborated by the univariate analysis, which likewise showed reduced STS activation in RD relative to FR. Together, these findings suggest a general impairment in the recruitment of audiovisual processing areas in dyslexia during the perception of natural speech.
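As a rough illustration of the analytical approach described above, the sketch below applies spatial ICA to a single fMRI run using scikit-learn's FastICA. It is a minimal sketch under stated assumptions: the data are taken to be already preprocessed, masked, and reshaped into a voxels-by-time matrix (here a placeholder array named `func_data`), and the study's actual group-ICA pipeline, preprocessing steps, and component selection are not specified here.

```python
# Minimal sketch of spatial ICA on one subject's fMRI run (illustrative only).
# `func_data` is a placeholder for a preprocessed, masked array of shape
# (n_voxels, n_timepoints); the original study's group-ICA pipeline is not reproduced.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_voxels, n_timepoints, n_components = 5000, 120, 20

# Placeholder data standing in for a masked, detrended fMRI time series.
func_data = rng.standard_normal((n_voxels, n_timepoints))

# Spatial ICA: treat voxels as samples, so each estimated source is a spatial map
# and the mixing matrix holds the corresponding component time courses.
ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
spatial_maps = ica.fit_transform(func_data)   # shape: (n_voxels, n_components)
time_courses = ica.mixing_                    # shape: (n_timepoints, n_components)

# The component time courses could then be modeled against the slow event-related
# design and compared between groups (e.g., RD vs. FR).
print(spatial_maps.shape, time_courses.shape)
```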
DOI: http://dx.doi.org/10.1007/s11682-017-9694-y
Infant Behav Dev, January 2025. Department of Basic Psychology, Faculty of Psychology, Universidad Autónoma de Madrid, Ciudad Universitaria de Cantoblanco, Iván Pavlov, 6, Madrid 28049, Spain.
Detecting temporal synchrony in audiovisual speech in infancy is fundamental for socio-communicative development, especially for language acquisition. Autism is an early-onset and highly heritable neurodevelopmental condition often associated with language difficulties, which usually extend to infants with an elevated likelihood of autism. Early susceptibilities in basic mechanisms that remain unclear may underlie these difficulties.
Neural Netw, January 2025. School of Automotive Studies, Tongji University, Shanghai 201804, China.
Integrating visual features has been proven effective for deep learning-based speech quality enhancement, particularly in highly noisy environments. However, these models may suffer from redundant information, resulting in performance deterioration when the signal-to-noise ratio (SNR) is relatively high. Real-world noisy scenarios typically exhibit widely varying noise levels.
eNeuro, January 2025. Neurophysiology of Everyday Life Group, Department of Psychology, Carl von Ossietzky Universität Oldenburg, Oldenburg 26129, Germany.
A comprehensive analysis of everyday sound perception can be achieved using electroencephalography (EEG) with the concurrent acquisition of information about the environment. While extensive research has been dedicated to speech perception, the complexities of auditory perception within everyday environments, specifically the types of information and the key features to extract, remain less explored. Our study aims to systematically investigate the relevance of different feature categories: discrete sound-identity markers, general cognitive state information, and acoustic representations, including discrete sound onset, the envelope, and the mel-spectrogram.
Digit Health, December 2024. Ostbayerische Technische Hochschule (OTH) Regensburg, Faculty of Health and Social Sciences, Nursing Science, Germany.
J Psycholinguist Res, November 2024. Department of Psychology, University of Milan-Bicocca, Piazza Dell'Ateneo Nuovo, 1, 20126 Milan, Italy.
To avoid misunderstandings, ironic speakers may accompany their ironic remarks with a particular intonation and specific facial expressions that signal that the message should not be taken at face value. The acoustic realization of the ironic tone of voice differs from language to language, whereas the ironic face manifests the speaker's negative stance and might thus have a universal basis. We conducted a study on 574 participants speaking six different languages (French, German, Dutch, English, Mandarin, and Italian, the control group) to verify whether they could recognize ironic remarks uttered in Italian in three different modalities: watching muted videos, listening to audio tracks, and with both cues present.