Face viewing behavior predicts multisensory gain during speech perception.

Psychon Bull Rev

Department of Neurosurgery and Core for Advanced MRI, Baylor College of Medicine, 1 Baylor Plaza Suite S104, Houston, TX, 77030, USA.

Published: February 2020

Visual information from the face of an interlocutor complements auditory information from their voice, enhancing intelligibility. However, there are large individual differences in the ability to comprehend noisy audiovisual speech. Another axis of individual variability is the extent to which humans fixate the mouth or the eyes of a viewed face. We speculated that across a lifetime of face viewing, individuals who prefer to fixate the mouth of a viewed face might accumulate stronger associations between visual and auditory speech, resulting in improved comprehension of noisy audiovisual speech. To test this idea, we assessed interindividual variability in two tasks. Participants (n = 102) varied greatly in their ability to understand noisy audiovisual sentences (accuracy from 2-58%) and in the time they spent fixating the mouth of a talker enunciating clear audiovisual syllables (3-98% of total time). These two variables were positively correlated: a 10% increase in time spent fixating the mouth corresponded to a 5.6% increase in multisensory gain. This finding demonstrates an unexpected link, mediated by histories of visual exposure, between two fundamental human abilities: processing faces and understanding speech.
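To make the size of this relationship concrete, the sketch below simulates data with the reported slope (about 5.6 percentage points of gain per 10 percentage points of mouth-fixation time, i.e., roughly 0.56) and recovers it with an ordinary least-squares fit. This is an illustration only, not the authors' analysis code; the definition of multisensory gain as audiovisual minus auditory-only accuracy, and all variable names, are assumptions made for the sketch.

    # Hypothetical illustration of the reported fixation-gain relationship.
    # Assumption: multisensory gain = audiovisual accuracy minus auditory-only
    # accuracy, in percentage points; variable names are invented for the sketch.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_participants = 102                  # sample size reported in the abstract

    # Percentage of viewing time spent fixating the talker's mouth (3-98%).
    mouth_time = rng.uniform(3, 98, n_participants)

    # Simulate gains that rise ~5.6 points per 10-point increase in
    # mouth-fixation time (slope ~0.56), plus individual variability.
    gain = 0.56 * mouth_time + rng.normal(0, 8, n_participants)

    # An ordinary least-squares fit recovers the slope and positive correlation.
    fit = stats.linregress(mouth_time, gain)
    print(f"slope = {fit.slope:.2f} gain points per fixation-time point")
    print(f"r = {fit.rvalue:.2f}, p = {fit.pvalue:.3g}")

With real data, mouth_time and gain would come from the eye-tracking and noisy-sentence tasks described above rather than from simulation.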

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7004844
DOI: http://dx.doi.org/10.3758/s13423-019-01665-y

Publication Analysis

Top Keywords

noisy audiovisual (12), face viewing (8), multisensory gain (8), audiovisual speech (8), fixate mouth (8), viewed face (8), time spent (8), spent fixating (8), fixating mouth (8), face (5)

Similar Publications

Objectives: Speech intelligibility is supported by the sound of a talker's voice and visual cues related to articulatory movements. The relative contribution of auditory and visual cues to an integrated audiovisual percept varies depending on a listener's environment and sensory acuity. Cochlear implant users rely more on visual cues than those with acoustic hearing to help compensate for the fact that the auditory signal produced by their implant is poorly resolved relative to that of the typically developed cochlea.

Purpose: This study aims to evaluate the effect of auditory neuropathy spectrum disorder (ANSD) on postoperative auditory perception and listening difficulties in pediatric cochlear implant (CI) recipients.

Method: The Children's Auditory Perception Test (CAPT) assesses auditory perception skills, and the Children's Home Inventory of Listening Difficulties (CHILD) Scale evaluates daily listening difficulties. The study involved pediatric CI recipients (n = 40) aged between 5 and 7 years, with and without a diagnosis of ANSD.

Many real-life situations can be extremely noisy, which makes it difficult to understand what people say. Here, we introduce a novel audiovisual virtual reality experimental platform to study the behavioral and neurophysiological consequences of background noise on processing continuous speech in highly realistic environments. We focus on a context where the ability to understand speech is particularly important: the classroom.

Objectives: A recent study has provided empirical support for the use of remote microphone (RM) systems to improve listening-in-noise performance of autistic youth. It has been proposed that RM system effects might be achieved by boosting engagement in this population. The present study used behavioral coding to test this hypothesis in autistic and nonautistic youth listening in an ecologically valid, noisy environment.

A comparison of EEG encoding models using audiovisual stimuli and their unimodal counterparts.

PLoS Comput Biol

September 2024

Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, Texas, United States of America.

Communication in the real world is inherently multimodal. When having a conversation, typically sighted and hearing people use both auditory and visual cues to understand one another. For example, objects may make sounds as they move in space, or we may use the movement of a person's mouth to better understand what they are saying in a noisy environment.
