Lipreading and audio-visual speech perception.

Philos Trans R Soc Lond B Biol Sci

MRC Institute of Hearing Research, University Park, Nottingham, U.K.

Published: January 1992

This paper reviews progress in understanding the psychology of lipreading and audio-visual speech perception. It considers four questions. What distinguishes better from poorer lipreaders? What are the effects of introducing a delay between the acoustical and optical speech signals? What have attempts to produce computer animations of talking faces contributed to our understanding of the visual cues that distinguish consonants and vowels? Finally, how should the process of audio-visual integration in speech perception be described; that is, how are the sights and sounds of talking faces represented at their conflux?


Source: http://dx.doi.org/10.1098/rstb.1992.0009


Similar Publications

Purpose: This study investigated the influence of vowel quality on loudness perception and stress judgment in Mongolian, an agglutinative language with free word stress. We aimed to explore the effects of intrinsic vowel features, presentation order, and intensity conditions on loudness perception and stress assignment.

Method: Eight Mongolian short vowel phonemes (/ɐ/, /ə/, /i/, /ɪ/, /ɔ/, /o/, /ʊ/, and /u/) were recorded by a native Mongolian speaker of the Urad subdialect (the Chahar dialect group) in Inner Mongolia.


Purpose: Neurotypical individuals show a robust "global precedence effect" (GPE) when processing hierarchically structured visual information, but the auditory domain remains understudied. The current research fills this knowledge gap by examining auditory global-local processing across the broader autism phenotype in a tonal-language background.


Different measures of fundamental frequency and vocal satisfaction among transgender men and women.

Codas

January 2025

Departamento de Saúde Interdisciplinaridade e Reabilitação, Faculdade de Ciências Médicas, Universidade Estadual de Campinas - UNICAMP - Campinas (SP), Brasil.

Purpose: To verify possible correlations between fundamental frequency (fo) and voice satisfaction among Brazilian transgender people.

Methods: An observational, cross-sectional quantitative study was conducted with the Trans Woman Voice Questionnaire (TWVQ), voice recording (sustained vowel and automatic speech) and extraction of seven acoustic measurements related to fo position and variability in transgender people. Participants were divided into two groups according to gender.


Pitch perception in school-aged children: Pure tones, resolved and unresolved harmonics.

JASA Express Lett

January 2025

Department of Otolaryngology-Head and Neck Surgery, University of Washington, Seattle, Washington 98103, USA.

Pitch perception affects children's ability to perceive speech, appreciate music, and learn in noisy environments such as their classrooms. Here, we investigated pitch perception for pure tones as well as for resolved and unresolved complex tones with a fundamental frequency of 400 Hz in 8- to 11-year-old children and adults. As in adults, pitch perception in children was better for resolved than for unresolved complex tones.


Beta oscillations predict the envelope sharpness in a rhythmic beat sequence.

Sci Rep

January 2025

RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Forskningsveien 3A, Oslo, 0373, Norway.

Periodic sensory inputs entrain oscillatory brain activity, reflecting a neural mechanism that may be fundamental to temporal prediction and perception. Most environmental rhythms and patterns in human behavior, such as walking, dancing, and speech, are not strictly isochronous but quasi-periodic. Research has shown that neural tracking of speech is driven by modulations of the amplitude envelope, especially via sharp acoustic edges, which serve as prominent temporal landmarks.

