Speech Perception as a Multimodal Phenomenon.

Curr Dir Psychol Sci

University of California, Riverside.

Published: December 2008

Speech perception is inherently multimodal. Visual speech (lip-reading) information is used by all perceivers and readily integrates with auditory speech. Imaging research suggests that the brain treats auditory and visual speech similarly. These findings have led some researchers to consider that speech perception works by extracting amodal information that takes the same form across modalities. From this perspective, speech integration is a property of the input information itself. Amodal speech information could explain the reported automaticity, immediacy, and completeness of audiovisual speech integration. However, recent findings suggest that speech integration can be influenced by higher cognitive properties such as lexical status and semantic context. Proponents of amodal accounts will need to explain these results.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3732050
DOI: http://dx.doi.org/10.1111/j.1467-8721.2008.00615.x

Publication Analysis

Top Keywords

speech perception (12)
speech integration (12)
speech (10)
visual speech (8)
perception multimodal (4)
multimodal phenomenon (4)
phenomenon speech (4)
perception inherently (4)
inherently multimodal (4)
multimodal visual (4)

Similar Publications

Objectives: An improvement in speech perception is a major, well-documented benefit of cochlear implantation (CI) and is commonly discussed with CI candidates to set expectations. However, speech perception outcomes vary widely. We evaluated the accuracy of clinical predictions of post-CI speech perception scores.

Listeners with hearing loss have trouble following a conversation in multitalker environments. While modern hearing aids can generally amplify speech, they cannot tune in to a target speaker without first knowing which speaker the user intends to attend to. Brain-controlled hearing aids using auditory attention decoding (AAD) have been proposed, but current methods use the same model to compare the speech stimulus and the neural response regardless of the dynamic overlap between talkers, which is known to influence neural encoding.
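
AAD is often implemented as stimulus reconstruction: a linear "backward" model maps the EEG to an estimate of the attended speech envelope, and the talker whose actual envelope correlates best with that reconstruction is taken to be the attended one. The sketch below illustrates this general idea in Python; the function names, ridge parameter, and array shapes are illustrative assumptions (random arrays stand in for real recordings), and the time-lagged EEG features used in practice are omitted for brevity.

```python
# Minimal, illustrative sketch of stimulus-reconstruction AAD.
# All data and parameters here are hypothetical placeholders.
import numpy as np

def train_backward_model(eeg, attended_env, ridge=1e3):
    """Fit a linear map from EEG (time x channels) to the attended
    speech envelope (time,) via ridge regression."""
    X, y = eeg, attended_env
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)

def decode_attention(eeg, env_a, env_b, w):
    """Reconstruct the envelope from EEG, then pick the talker whose
    actual envelope correlates better with the reconstruction."""
    recon = eeg @ w
    r_a = np.corrcoef(recon, env_a)[0, 1]
    r_b = np.corrcoef(recon, env_b)[0, 1]
    return ("talker_a", r_a) if r_a > r_b else ("talker_b", r_b)

# Hypothetical usage with random data standing in for real recordings:
rng = np.random.default_rng(0)
eeg = rng.standard_normal((5000, 64))   # 5000 samples, 64 EEG channels
env_a = rng.standard_normal(5000)       # talker A speech envelope
env_b = rng.standard_normal(5000)       # talker B speech envelope
w = train_backward_model(eeg, env_a)
print(decode_attention(eeg, env_a, env_b, w))
```

A single fixed decoder like this is precisely the "same model regardless of talker overlap" baseline that the abstract argues against.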

Interventions That Failed: Factors Associated with the Continuation of Bullying After a Targeted Intervention.

Int J Bullying Prev

April 2023

INVEST Flagship Research Center/Department of Psychology and Speech-Language Pathology, University of Turku, 20014 Turku, Finland.

We examined how often teachers' targeted interventions fail to stop bullying and to what extent this varies between schools versus between the students involved. In addition, we investigated which student-level factors were associated with intervention failure.

Wide dynamic range compression (WDRC) and noise reduction both play important roles in hearing aids. WDRC provides level-dependent amplification so that the sound produced by the hearing aid falls between the listener's hearing threshold and their highest comfortable level, while noise reduction attenuates ambient noise with the goal of improving intelligibility and listening comfort and reducing listening effort. In most current hearing aids, noise reduction and WDRC are implemented sequentially, but this may distort the amplitude modulation patterns of both the speech and the noise.
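
To make the level-dependent gain concrete, here is a minimal, illustrative WDRC sketch in Python. The threshold, compression ratio, makeup gain, and frame length are invented example values rather than any prescription from the cited work, and real devices add attack/release smoothing and per-band processing that are omitted here.

```python
# Minimal, illustrative frame-based WDRC sketch; parameters are
# hypothetical example values, not a clinical fitting.
import numpy as np

def wdrc(signal, fs, threshold_db=-40.0, ratio=3.0, makeup_db=20.0,
         frame_ms=8.0):
    """Above `threshold_db`, every `ratio` dB of input level change
    yields only 1 dB of output level change; linear gain below it."""
    frame = max(1, int(fs * frame_ms / 1000))
    out = np.copy(signal).astype(float)
    for start in range(0, len(out), frame):
        seg = out[start:start + frame]
        rms = np.sqrt(np.mean(seg ** 2)) + 1e-12   # avoid log of zero
        level_db = 20 * np.log10(rms)
        if level_db > threshold_db:
            target_db = threshold_db + (level_db - threshold_db) / ratio
        else:
            target_db = level_db                    # linear below the knee
        gain_db = target_db - level_db + makeup_db
        out[start:start + frame] = seg * 10 ** (gain_db / 20)
    return out

# Hypothetical usage: a quiet and a loud tone burst end up closer in level.
fs = 16000
t = np.arange(fs) / fs
quiet = 0.01 * np.sin(2 * np.pi * 440 * t)
loud = 0.5 * np.sin(2 * np.pi * 440 * t)
compressed = wdrc(np.concatenate([quiet, loud]), fs)
```

Placing a separate noise-reduction stage before or after a compressor like this is the sequential arrangement the abstract notes can distort the amplitude modulation patterns of speech and noise.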
