Neural processes underlying perceptual enhancement by visual speech gestures.

Neuroreport

Human Information Science Laboratories, ATR International, Kyoto, Japan.

Published: December 2003

This fMRI study explores the brain regions involved in the perceptual enhancement afforded by observation of visual speech gestures. Subjects passively identified words presented in five conditions: audio-only, audiovisual, audio-only with noise, audiovisual with noise, and visual-only. The brain may enhance perception by integrating concordant auditory and visual information at converging multisensory sites. Consistent with the response properties of multisensory integration sites, enhanced activity in the middle and superior temporal gyrus/sulcus was greatest when concordant audiovisual stimuli were presented with acoustic noise. Activity in brain regions involved in the planning and execution of speech production, observed in response to visual speech presented with degraded or absent auditory stimulation, is consistent with an additional pathway through which speech perception is facilitated by internally simulating the intended speech act of the observed speaker.


Source
DOI: http://dx.doi.org/10.1097/00001756-200312020-00016

Publication Analysis

Top Keywords

visual speech: 12
perceptual enhancement: 8
brain regions: 8
regions involved: 8
speech: 6
visual: 5
neural processes: 4
processes underlying: 4
underlying perceptual: 4
enhancement visual: 4

Similar Publications

Age-related hearing loss (ARHL) is considered one of the most common neurodegenerative disorders in the elderly; however, how it contributes to cognitive decline is poorly understood. Using resting-state functional magnetic resonance imaging data from 66 individuals with ARHL and 54 healthy controls, group spatial independent component analyses, sliding-window analyses, graph-theory methods, multilayer networks, and correlation analyses were used to identify ARHL-induced disturbances in static and dynamic functional network connectivity (sFNC/dFNC), alterations in global network switching, and their links to cognitive performance. ARHL was associated with decreased sFNC/dFNC within the default mode network (DMN) and increased sFNC/dFNC between the DMN and the central executive, salience (SN), and visual networks.
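The sliding-window dFNC analysis mentioned above can be illustrated in a few lines: within each window, correlations between component time courses yield one connectivity matrix, and the sequence of matrices captures connectivity dynamics. This is a minimal generic sketch, not the authors' actual pipeline; the window length and step size here are arbitrary illustrative values.

```python
import numpy as np

def sliding_window_fnc(ts, win_len=30, step=1):
    """Dynamic functional network connectivity via sliding-window correlation.

    ts: (timepoints, components) array of component time courses.
    Returns an array of correlation matrices, one per window.
    """
    n_t, _ = ts.shape
    mats = []
    for start in range(0, n_t - win_len + 1, step):
        window = ts[start:start + win_len]
        mats.append(np.corrcoef(window, rowvar=False))
    return np.array(mats)

# Toy example: 200 timepoints, 5 simulated components
rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 5))
dfnc = sliding_window_fnc(ts, win_len=30, step=5)
print(dfnc.shape)  # (35, 5, 5): 35 windows of 5x5 correlation matrices
```

In practice the window statistics (e.g., connectivity states found by clustering the matrices) are what get compared across groups.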


Perception and production of music and speech rely on auditory-motor coupling, a mechanism that has been linked to temporally precise oscillatory coupling between auditory and motor regions of the human brain, particularly in the beta frequency band. Recently, brain imaging studies using magnetoencephalography (MEG) have also shown that accurate auditory temporal predictions specifically depend on phase coherence between auditory and motor cortical regions. However, it is not yet clear whether this tight oscillatory phase coupling is an intrinsic feature of the auditory-motor loop, or whether it is only elicited by task demands.


Purpose: Research on vestibular function tests has advanced significantly over the past century. This study aims to evaluate research productivity, identify top contributors, and assess global collaboration to provide a comprehensive overview of trends and advancements in the field.

Method: A scientometric analysis was conducted using publications from the Scopus database, retrieved on January 5, 2024.


Salient Voice Symptoms in Primary Muscle Tension Dysphonia.

J Voice

January 2025

School of Behavioral and Brain Sciences, Department of Speech, Language, and Hearing, Callier Center for Communication Disorders, University of Texas at Dallas, Richardson, TX; Department of Otolaryngology - Head and Neck Surgery, University of Texas Southwestern Medical Center, Dallas, TX.

Introduction: Patients with primary muscle tension dysphonia (pMTD) commonly report symptoms of vocal effort, fatigue, discomfort, odynophonia, and aberrant vocal quality (eg, vocal strain, hoarseness). However, the voice symptoms most salient to pMTD have not been identified. Furthermore, it is unclear how standard vocal fatigue and vocal tract discomfort indices that capture persistent symptoms, such as the Vocal Fatigue Index (VFI) and the Vocal Tract Discomfort Scale (VTDS), relate to acute symptoms experienced at the time of the voice evaluation.


A comprehensive analysis of everyday sound perception can be achieved using electroencephalography (EEG) with the concurrent acquisition of information about the environment. While extensive research has been dedicated to speech perception, the complexities of auditory perception within everyday environments, specifically the types of information and the key features to extract, remain less explored. Our study aims to systematically investigate the relevance of different feature categories: discrete sound-identity markers, general cognitive state information, and acoustic representations, including discrete sound onset, the envelope, and the mel-spectrogram.
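Of the acoustic representations listed above, the amplitude envelope is the simplest to compute: rectify the waveform and smooth it. The sketch below is a generic illustration under assumed parameters (a 10 ms moving-average window, a synthetic amplitude-modulated tone), not the study's actual feature-extraction pipeline.

```python
import numpy as np

def amplitude_envelope(signal, fs, smooth_ms=10):
    """Rough amplitude envelope: full-wave rectify, then moving-average smooth.

    signal: 1-D waveform; fs: sampling rate in Hz;
    smooth_ms: smoothing window length in milliseconds (illustrative choice).
    """
    win = max(1, int(fs * smooth_ms / 1000))
    kernel = np.ones(win) / win
    return np.convolve(np.abs(signal), kernel, mode="same")

fs = 1000  # Hz
t = np.arange(0, 1, 1 / fs)
carrier = np.sin(2 * np.pi * 100 * t)               # 100 Hz carrier
modulator = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))   # 4 Hz amplitude modulation
env = amplitude_envelope(carrier * modulator, fs)
print(env.shape)  # (1000,): one envelope sample per input sample
```

The resulting slow envelope, rather than the raw waveform, is what is typically regressed against the EEG signal in continuous-listening analyses.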

