Identifying a spoken word in a referential context requires both the ability to integrate multimodal input and the ability to reason under uncertainty. How do these tasks interact with one another? We study how adults identify novel words under joint uncertainty in the auditory and visual modalities, and we propose an ideal observer model of how cues in these modalities are combined optimally. Model predictions are tested in four experiments in which recognition occurs under various sources of uncertainty. We found that participants use both auditory and visual cues to recognize novel words. When the signal is not distorted with environmental noise, participants weight the auditory and visual cues optimally, that is, according to the relative reliability of each modality. In contrast, when one modality has noise added to it, human perceivers systematically prefer the unperturbed modality to a greater extent than the optimal model does. This work extends the literature on perceptual cue combination to the case of word recognition in a referential context. In addition, this context offers a link to the study of multimodal information in word meaning learning.
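The optimal weighting the abstract describes (each cue weighted by its relative reliability) corresponds to the standard inverse-variance rule from the cue-combination literature. A minimal sketch of that rule, with illustrative numbers that are not drawn from the paper:

```python
def combine_cues(estimates, variances):
    """Optimally combine independent Gaussian cue estimates.

    Each cue is weighted in proportion to its reliability (1/variance),
    which minimizes the variance of the combined estimate -- the
    ideal-observer benchmark against which human weighting is compared.
    """
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    combined = sum(w * e for w, e in zip(weights, estimates))
    combined_variance = 1.0 / total
    return combined, combined_variance

# Illustrative example: a noisy auditory cue and a more reliable visual cue.
# With variances 0.04 and 0.01, the visual cue receives 4x the weight.
estimate, variance = combine_cues(estimates=[0.8, 0.2], variances=[0.04, 0.01])
```

Note that this sketch assumes independent Gaussian cues; the paper's finding is that humans match this rule only when no environmental noise is added, and otherwise down-weight the noisy modality more than the rule prescribes.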
DOI: http://dx.doi.org/10.1016/j.cognition.2019.104092
PLoS One
January 2025
Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, NY, United States of America.
Objective: What we hear may influence postural control, particularly in people with vestibular hypofunction. Would hearing a moving subway destabilize people similarly to seeing the train move? We investigated how people with unilateral vestibular hypofunction and healthy controls incorporated broadband and real-recorded sounds with visual load for balance in an immersive contextual scene.
Design: Participants stood on foam placed on a force-platform, wore the HTC Vive headset, and observed an immersive subway environment.
Disabil Rehabil Assist Technol
January 2025
School of Rehabilitation Therapy, Queen's University, Kingston, Ontario, Canada.
This article explores the existing research evidence on the potential effectiveness of lipreading as a communication strategy to enhance speech recognition in individuals with hearing impairment. A scoping review was conducted, involving a search of six electronic databases (MEDLINE, Embase, Web of Science, Engineering Village, CINAHL, and PsycINFO) for research papers published between January 2013 and June 2023. This study included original research papers with full texts available in English, covering all study designs: qualitative, quantitative, and mixed methods.
Brain Sci
January 2025
Department of Surgery, Section of Neurosurgery, University of Otago, Dunedin 9016, New Zealand.
The International Classification of Diseases (ICD) has been developed and edited by the World Health Organisation and represents the global standard for recording health information and causes of death. The ICD-11 is the eleventh revision and came into effect on 1 January 2022. Perceptual disturbances refer to abnormalities in the way sensory information is interpreted by the brain, leading to distortions in the perception of reality.
Neurophotonics
January 2025
Washington University School of Medicine, Mallinckrodt Institute of Radiology, St. Louis, Missouri, United States.
Significance: Decoding naturalistic content from brain activity has important neuroscience and clinical implications. Information about visual scenes and intelligible speech has been decoded from cortical activity using functional magnetic resonance imaging (fMRI) and electrocorticography, but widespread applications are limited by the logistics of these technologies.
Aim: High-density diffuse optical tomography (HD-DOT) offers image quality approaching that of fMRI but with the silent, open scanning environment afforded by optical methods, thus opening the door to more naturalistic research and applications.
Mol Brain
January 2025
Research Centre for Idling Brain Science, University of Toyama, Toyama, 930-0194, Japan.
Cognitive processes such as action planning and decision-making require the integration of multiple sensory modalities in response to temporal cues, yet the underlying mechanism is not fully understood. Sleep plays a crucial role in memory consolidation and in promoting cognitive flexibility. Our aim is to identify the role of sleep in integrating different modalities to enhance cognitive flexibility and temporal task execution, while identifying the specific brain regions that mediate this process.