Action Unit Models of Facial Expression of Emotion in the Presence of Speech.

Int Conf Affect Comput Intell Interact Workshops

Section of Biomedical Image Analysis, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States.

Published: September 2013

Automatic recognition of emotion from facial expressions in the presence of speech poses a unique challenge: talking reveals clues to the affective state of the speaker but distorts the canonical expression of emotion on the face. We introduce a corpus of acted emotion expression in which speech is either present (talking) or absent (silent), making it uniquely suited for analyzing the interplay between the two conditions. We use a multimodal decision-level fusion classifier to combine models of emotion from talking and silent faces, as well as from audio, to recognize five basic emotions: anger, disgust, fear, happiness, and sadness. Our results strongly indicate that emotion prediction from action unit facial features is less accurate when the person is talking. Modeling talking and silent expressions separately and fusing the two models greatly improves prediction accuracy in the talking setting. The advantages are most pronounced when the silent- and talking-face models are fused with predictions from audio features. In this multimodal prediction, both the combination of modalities and the separate models of talking and silent facial expressions of emotion contribute to the improvement.
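The abstract describes the classifier only at a high level; as a minimal sketch, decision-level (late) fusion is commonly implemented as a weighted average of each model's class posteriors, and the code below illustrates that general pattern. The function name fuse_decisions, the equal weights, and the example probabilities are all illustrative assumptions, not details taken from the paper.

    import numpy as np

    EMOTIONS = ["anger", "disgust", "fear", "happy", "sad"]

    def fuse_decisions(p_talking, p_silent, p_audio, weights=(1/3, 1/3, 1/3)):
        """Decision-level fusion: weighted average of per-model class posteriors.

        Each argument is a length-5 array of class probabilities over EMOTIONS,
        produced by an independently trained model (AU features from talking
        faces, AU features from silent faces, and audio features). The weights
        are hypothetical; the abstract does not state the fusion rule used.
        """
        probs = np.stack([p_talking, p_silent, p_audio])          # shape (3, 5)
        fused = (np.asarray(weights)[:, None] * probs).sum(axis=0)
        fused /= fused.sum()                                      # renormalize to a distribution
        return EMOTIONS[int(np.argmax(fused))], fused

    # Toy example: the three models partially disagree; fusion arbitrates.
    talking = np.array([0.10, 0.15, 0.20, 0.35, 0.20])
    silent  = np.array([0.05, 0.10, 0.15, 0.50, 0.20])
    audio   = np.array([0.20, 0.10, 0.10, 0.40, 0.20])
    label, posterior = fuse_decisions(talking, silent, audio)
    print(label, posterior.round(3))   # prints: happy [0.117 0.117 0.15 0.417 0.2]

A design appeal of late fusion in this setting is that each modality's model can be trained on its own feature space (silent-face AUs, talking-face AUs, audio), and fusion needs only their output posteriors, which is what allows the separate talking and silent models the abstract describes.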

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4267560
DOI: http://dx.doi.org/10.1109/ACII.2013.15

Publication Analysis

Top Keywords

Keyword              Frequency
expression emotion   12
presence speech      12
talking silent       12
action unit          8
facial expression    8
talking              8
emotion              7
models               5
silent               5
unit models          4
