Tracking motion, deformation, and texture using conditionally Gaussian processes.

IEEE Trans Pattern Anal Mach Intell

Mitsubishi Electric Research Laboratories, 201 Broadway, Cambridge, MA 02139, USA.

Published: February 2010

We present a generative model and inference algorithm for 3D nonrigid object tracking. The model, which we call G-flow, enables the joint inference of 3D position, orientation, and nonrigid deformations, as well as object texture and background texture. Optimal inference under G-flow reduces to a conditionally Gaussian stochastic filtering problem. The optimal solution to this problem reveals a new space of computer vision algorithms, of which classic approaches such as optic flow and template matching are special cases that are optimal only under special circumstances. We evaluate G-flow on the problem of tracking facial expressions and head motion in 3D from single-camera video. Previously, the lack of realistic video data with ground truth nonrigid position information has hampered the rigorous evaluation of nonrigid tracking. We introduce a practical method of obtaining such ground truth data and present a new face video data set that was created using this technique. Results on this data set show that G-flow is much more robust and accurate than current deterministic optic-flow-based approaches.
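The abstract's reduction of optimal inference to conditionally Gaussian stochastic filtering can be illustrated with a generic Rao-Blackwellized particle filter: nonlinear variables (pose) are sampled, while the remaining state (texture) stays Gaussian given each pose sample and is updated in closed form with a Kalman step. The sketch below is not the authors' G-flow implementation; the observation model, dimensions, noise levels, and all names are assumptions made only to keep the example self-contained and runnable.

```python
import numpy as np

# Illustrative conditionally Gaussian (Rao-Blackwellized) filter:
# poses are tracked with particles, and the texture state attached to
# each particle remains Gaussian and is updated with a Kalman step.
rng = np.random.default_rng(0)

N_PARTICLES = 100   # pose hypotheses
DIM_TEXTURE = 16    # size of the conditionally Gaussian texture state
OBS_NOISE = 0.1     # assumed observation noise std
TEX_DRIFT = 0.01    # assumed texture process noise std


def render(pose, texture):
    """Hypothetical linear observation model: a pose-dependent matrix maps
    texture to a predicted image patch. G-flow proper uses a 3D deformable
    mesh; this stand-in only keeps the example self-contained."""
    A = np.outer(np.sin(pose + np.arange(DIM_TEXTURE)),
                 np.ones(DIM_TEXTURE)) / DIM_TEXTURE
    return A @ texture, A


def step(particles, observation):
    """One filtering step: propagate each pose hypothesis, run a Kalman
    update on its Gaussian texture belief, and reweight by the marginal
    likelihood of the observation under that hypothesis."""
    log_w = np.empty(len(particles))
    for i, p in enumerate(particles):
        # Random-walk proposal for the pose hypothesis.
        p["pose"] += rng.normal(0.0, 0.05)

        # Kalman prediction for the conditionally Gaussian texture.
        mu = p["mu"]
        P = p["P"] + (TEX_DRIFT ** 2) * np.eye(DIM_TEXTURE)

        # Kalman update given the patch rendered under this pose.
        pred, A = render(p["pose"], mu)
        S = A @ P @ A.T + (OBS_NOISE ** 2) * np.eye(DIM_TEXTURE)
        K = P @ A.T @ np.linalg.inv(S)
        innov = observation - pred
        p["mu"] = mu + K @ innov
        p["P"] = P - K @ A @ P

        # Particle weight: Gaussian marginal likelihood of the innovation.
        _, logdet = np.linalg.slogdet(S)
        log_w[i] = -0.5 * (innov @ np.linalg.solve(S, innov) + logdet)

    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    # Resample pose hypotheses in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return [dict(pose=particles[j]["pose"], mu=particles[j]["mu"].copy(),
                 P=particles[j]["P"].copy()) for j in idx]


# Toy usage on placeholder observations.
particles = [dict(pose=rng.normal(), mu=np.zeros(DIM_TEXTURE),
                  P=np.eye(DIM_TEXTURE)) for _ in range(N_PARTICLES)]
for t in range(5):
    obs = rng.normal(0.0, 1.0, size=DIM_TEXTURE)
    particles = step(particles, obs)
```

Under this kind of structure, deterministic optic flow and template matching correspond to degenerate choices of the texture uncertainty, which is one way to read the abstract's claim that they arise as special cases.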


Source
http://dx.doi.org/10.1109/TPAMI.2008.278

Publication Analysis

Top Keywords

conditionally gaussian (8), video data (8), ground truth (8), data set (8), tracking (4), tracking motion (4), motion deformation (4), deformation texture (4), texture conditionally (4), gaussian processes (4)
