During a typical communication exchange, both auditory and visual cues contribute to speech comprehension. The influence of vision on speech perception can be measured behaviorally using a task in which incongruent auditory and visual speech stimuli are paired to induce perception of a novel token reflective of multisensory integration (i.e., the McGurk effect). This effect is temporally constrained in adults, with illusion perception decreasing as the temporal offset between the auditory and visual stimuli increases. Here, we used the McGurk effect to investigate the development of the temporal characteristics of audiovisual speech binding in 7- to 24-year-olds. Surprisingly, results indicated that although older participants perceived the McGurk illusion more frequently, no age-dependent change in the temporal boundaries of audiovisual speech binding was observed.
DOI: http://dx.doi.org/10.1016/j.neuropsychologia.2016.02.017
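The temporal constraint described in this abstract is commonly summarized by fitting a window-shaped function to illusion rates across audiovisual stimulus-onset asynchronies (SOAs). As an illustration only, and not the authors' analysis, here is a minimal sketch that fits a Gaussian-shaped binding window to hypothetical McGurk response rates; all data values and names are invented for demonstration.

```python
# Illustrative only: fit a Gaussian-shaped temporal binding window to
# hypothetical McGurk illusion rates measured at different audiovisual
# offsets (SOAs). All numbers below are invented for demonstration.
import numpy as np
from scipy.optimize import curve_fit

def binding_window(soa_ms, peak, center, width):
    """Gaussian-shaped window: illusion rate as a function of AV offset."""
    return peak * np.exp(-((soa_ms - center) ** 2) / (2.0 * width ** 2))

soa_ms = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], dtype=float)
illusion_rate = np.array([0.10, 0.22, 0.45, 0.68, 0.75, 0.70, 0.50, 0.28, 0.12])

params, _ = curve_fit(binding_window, soa_ms, illusion_rate, p0=[0.7, 0.0, 150.0])
peak, center, width = params
print(f"peak rate {peak:.2f}, center {center:.0f} ms, window SD {width:.0f} ms")
```

The fitted width parameter gives one summary of how tolerant binding is to audiovisual offsets, the kind of temporal boundary that the study above found to be stable across age.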
Neurophotonics
January 2025
Washington University School of Medicine, Mallinckrodt Institute of Radiology, St. Louis, Missouri, United States.
Significance: Decoding naturalistic content from brain activity has important neuroscience and clinical implications. Information about visual scenes and intelligible speech has been decoded from cortical activity using functional magnetic resonance imaging (fMRI) and electrocorticography, but widespread applications are limited by the logistics of these technologies.
Aim: High-density diffuse optical tomography (HD-DOT) offers image quality approaching that of fMRI but with the silent, open scanning environment afforded by optical methods, thus opening the door to more naturalistic research and applications.
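Decoding analyses of this kind ultimately reduce to learning a mapping from recorded brain signals to stimulus content. The study's actual pipeline is not described here, so the following is only a generic, hedged sketch of a cross-validated decoder run on synthetic data; every variable, dimension, and label is hypothetical.

```python
# Illustrative only: a generic cross-validated decoding analysis on synthetic
# "HD-DOT-like" features. This is NOT the study's pipeline; real analyses
# involve optical image reconstruction, preprocessing, and richer stimulus models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels = 200, 64            # hypothetical trial and channel counts
labels = rng.integers(0, 2, n_trials)     # two stimulus classes (e.g., two scene types)
features = labels[:, None] * 0.5 + rng.normal(size=(n_trials, n_channels))

decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, features, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```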
Sci Data
January 2025
Experimental Psychology, University College London, London, United Kingdom.
Communication comprises a wealth of multimodal signals (e.g., gestures, eye gaze, intonation) in addition to speech, and there is growing interest in the study of multimodal language among psychologists, linguists, neuroscientists, and computer scientists.
eNeuro
January 2025
Paris-Lodron-University of Salzburg, Department of Psychology, Centre for Cognitive Neuroscience, Salzburg, Austria.
Observing lip movements of a speaker facilitates speech understanding, especially in challenging listening situations. Converging evidence from neuroscientific studies shows stronger neural responses to audiovisual stimuli compared to audio-only stimuli. However, the interindividual variability of this contribution of lip movement information, and its consequences for behavior, are unknown.
Trends Hear
January 2025
Department of Otolaryngology - Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA.
When listening to speech under adverse conditions, listeners compensate using neurocognitive resources. A clinically relevant form of adverse listening is listening through a cochlear implant (CI), which provides a spectrally degraded signal. CI listening is often simulated through noise-vocoding.
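Noise-vocoding, mentioned above, divides speech into frequency bands, extracts each band's amplitude envelope, and uses those envelopes to modulate band-limited noise, discarding spectral fine structure. The sketch below is a minimal illustration assuming a NumPy/SciPy environment; the function name, band count, and frequency edges are illustrative rather than the stimuli used in the study.

```python
# Illustrative sketch of a simple noise vocoder (not the exact stimuli used above).
# Assumes `speech` is a 1-D mono signal sampled at `fs` Hz, with f_hi < fs / 2.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_bands=8, f_lo=100.0, f_hi=7000.0):
    """Replace spectral fine structure with band-limited noise carrying each band's envelope."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)    # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros(len(speech), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        envelope = np.abs(hilbert(band))             # amplitude envelope of this band
        noise = sosfiltfilt(sos, rng.standard_normal(len(speech)))
        out += envelope * noise                      # envelope-modulated noise carrier
    return out / np.max(np.abs(out))                 # normalize to avoid clipping
```

Varying n_bands (e.g., 4 versus 16) changes the spectral resolution of the vocoded speech, which is the standard way such simulations manipulate the degree of degradation.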
Ear Hear
December 2024
Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, USA.
Objectives: To investigate the influence of frequency-specific audibility on audiovisual benefit in children, this study examined the impact of high- and low-pass acoustic filtering on auditory-only and audiovisual word and sentence recognition in children with typical hearing. Previous studies show that visual speech provides greater access to consonant place of articulation than to other consonant features, and that low-pass filtering has a strong impact on the perception of consonant place of articulation. This suggests visual speech may be particularly useful when acoustic speech is low-pass filtered because it provides complementary information about consonant place of articulation.
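High- and low-pass filtering of the acoustic signal, as described above, is a simple signal-processing manipulation. The sketch below shows one way to apply zero-phase Butterworth filters in a NumPy/SciPy environment; the cutoff frequency and demo signal are placeholders, not the study's stimuli or parameters.

```python
# Illustrative sketch: low-pass vs. high-pass filtering of an acoustic signal.
# Cutoff frequency and demo signal are placeholders, not the study's parameters.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def filter_speech(signal, fs, cutoff_hz, kind="lowpass"):
    """Zero-phase Butterworth filter; kind is 'lowpass' or 'highpass'."""
    sos = butter(6, cutoff_hz, btype=kind, fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

fs = 16000                                           # hypothetical sampling rate
t = np.arange(fs) / fs                               # one second of samples
demo = np.sin(2 * np.pi * 300 * t) + np.sin(2 * np.pi * 3000 * t)  # stand-in for speech
low_passed = filter_speech(demo, fs, cutoff_hz=1500, kind="lowpass")
high_passed = filter_speech(demo, fs, cutoff_hz=1500, kind="highpass")
```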