Learning to use an artificial visual cue in speech identification.

J Acoust Soc Am

Department of Psychology and Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA.

Published: October 2010

Visual information from a speaker's face profoundly influences auditory perception of speech. However, relatively little is known about the extent to which visual influences may depend on experience, and the extent to which new sources of visual speech information can be incorporated into speech perception. In the current study, participants were trained on completely novel visual cues for phonetic categories. Participants learned to identify phonetic categories accurately based on these novel visual cues. The newly learned visual cues influenced identification responses to auditory speech stimuli, but not to the same extent as visual cues from a speaker's face. The novel methods and results of the current study raise theoretical questions about the nature of information integration in speech perception, and open up possibilities for further research on learning in multimodal perception, which may have applications in improving speech comprehension among the hearing-impaired.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2981124
DOI: http://dx.doi.org/10.1121/1.3479537

