An experiment was performed to test whether cross-modal speaker matches can be made using isolated visible speech movement information. Visible speech movements were isolated using a point-light technique. In five conditions, subjects were asked to match a voice to one of two (unimodal) speaking point-light faces on the basis of speaker identity. Two of these conditions were designed to maintain the idiosyncratic speech dynamics of the speakers, whereas the other three deleted or distorted the dynamics in various ways. Some of the conditions also equated the video frames across the dynamically correct and distorted movements. The results revealed generally better matching performance in the conditions that maintained the correct speech dynamics than in those that did not, even when the conditions contained exactly the same video frames. The results suggest that visible speech movements themselves can support cross-modal speaker matching.


Source
http://dx.doi.org/10.3758/bf03193658

Publication Analysis

Top Keywords

Keyword                 Frequency
visible speech          16
cross-modal speaker     12
speaker matching        8
isolated visible        8
speech movements        8
speech dynamics         8
video frames            8
speech                  6
conditions              6
hearing face            4

