Both multisensory and lexical information are known to influence the perception of speech. However, an open question remains: is either source more fundamental to perceiving speech? In this perspective, we review the literature and argue that multisensory information plays a more fundamental role in speech perception than lexical information. Three sets of findings support this conclusion. First, reaction times and electroencephalographic signal latencies suggest that the effects of multisensory information on speech processing occur earlier than the effects of lexical information. Second, non-auditory sensory input influences the perception of features that differentiate phonetic categories; thus, multisensory information determines which lexical information is ultimately processed. Finally, there is evidence that multisensory information helps form some lexical information through a phenomenon known as sound symbolism. These findings support a framework of speech perception that, while acknowledging the influential roles of both multisensory and lexical information, holds that multisensory information is more fundamental to the process.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10800662 | PMC |
| http://dx.doi.org/10.3389/fnhum.2023.1331129 | DOI Listing |
Res Dev Disabil
October 2024
Univ. Lille, CNRS, UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, F-59000 Lille, France.
Developmental dyslexia is characterized by difficulties in learning to read, affecting cognition and leading to failure at school. Interventions for children with developmental dyslexia have focused on improving linguistic capabilities (phonics, orthographic, and morphological instruction), but developmental dyslexia is also accompanied by a wide variety of sensorimotor impairments. The goal of this study was to examine the effects of a proprioceptive intervention on reading performance and eye movements in children with developmental dyslexia.
Sci Rep
October 2023
NTU Psychology, Nottingham Trent University, Nottingham, UK.
Studies using simple low-level stimuli show that multisensory stimuli lead to greater improvements in processing speed for older adults than for young adults. However, there is insufficient evidence on whether these benefits extend to more complex processes such as judgement and memory tasks. This study examined how presenting stimuli in multiple sensory modalities (audio-visual) rather than one (audio-only or visual-only) may help older adults improve their memory and cognitive processing compared with young adults.
Front Hum Neurosci
May 2023
Université Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France.
Introduction: Early exposure to a rich linguistic environment is essential as soon as deafness is diagnosed. Cochlear implantation (CI) gives children access to speech perception in their early years. However, it provides only partial acoustic information, which can lead to difficulties in perceiving some phonetic contrasts.
Front Psychol
May 2023
CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China.
Introduction: In natural communication, multisensory information from gesture and speech is intrinsically integrated to enable coherent comprehension. This cross-modal semantic integration is temporally misaligned, with the onset of a gesture preceding the relevant speech segment. It has been proposed that gestures prime subsequent speech.