Discovering the visual dynamics of human actions is a challenging problem in human action recognition. To address it, we propose the multi-task conditional random fields model and explore its application to human action recognition. For visual representation, we propose the part-induced spatiotemporal action-unit sequence, which represents each action sample with multiple partwise sequential feature subspaces.
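The abstract above only names the conditional-random-fields component; as a minimal illustrative sketch (not the paper's actual model), the snippet below implements the forward algorithm for a linear-chain CRF over a label sequence, the standard way to compute the log-partition function over all labelings of a sequence such as a partwise action-unit sequence. The emission and transition scores are random placeholders, and all shapes and names are assumptions.

```python
import numpy as np

def crf_log_partition(emissions, transitions):
    """Log-partition log Z of a linear-chain CRF.

    emissions:   (T, K) per-step score for each of K labels
    transitions: (K, K) score for moving from label j to label k
    """
    alpha = emissions[0]  # log-forward scores at t = 0
    for t in range(1, len(emissions)):
        # scores[j, k] = alpha[j] + transitions[j, k] + emissions[t, k];
        # log-sum-exp over the previous label j, done stably.
        scores = alpha[:, None] + transitions + emissions[t][None, :]
        m = scores.max(axis=0)
        alpha = m + np.log(np.exp(scores - m).sum(axis=0))
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())

# Placeholder scores for a 5-step sequence with 3 candidate labels.
rng = np.random.default_rng(0)
T, K = 5, 3
logZ = crf_log_partition(rng.normal(size=(T, K)), rng.normal(size=(K, K)))
```

Dividing the exponentiated score of any single label sequence by exp(logZ) gives its CRF probability; a multi-task variant would couple the parameters of several such chains, one per part or task.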
This paper proposes a unified framework for multiple/single-view human action recognition. First, we propose the hierarchical partwise bag-of-words representation, which encodes both local and global visual saliency based on the body-structure cue. Then, we formulate multiple/single-view human action recognition as a part-regularized multitask structural learning (MTSL) problem, which has two advantages for both model learning and feature selection: 1) it preserves the consistency between body-based and part-based action classification by exploiting the complementary information among different action categories and multiple views and 2) it discovers both action-specific and action-shared feature subspaces to strengthen the generalization ability of the learned model.
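The shared/specific feature-subspace idea in the second advantage can be sketched with a common multi-task decomposition: each task's weights are a shared component plus a task-specific one, jointly trained with separate regularizers. This is a generic illustration under stated assumptions, not the paper's MTSL formulation; the data, losses, and hyperparameters below are all synthetic placeholders.

```python
import numpy as np

# Assumed setup: each "task" t (e.g. a view or a body part) gets linear
# weights W[t] = w_shared + w_spec[t].  The shared component models
# action-shared structure; the specific components model
# action-specific structure.  A least-squares surrogate loss is used
# purely for simplicity.
rng = np.random.default_rng(0)
n_tasks, n_feat, n_samples = 3, 8, 50

# Synthetic tasks that share one underlying discriminative direction.
w_true = rng.normal(size=n_feat)
X = [rng.normal(size=(n_samples, n_feat)) for _ in range(n_tasks)]
y = [np.sign(X[t] @ (w_true + 0.1 * rng.normal(size=n_feat)))
     for t in range(n_tasks)]

lam_shared, lam_spec, lr, steps = 0.1, 1.0, 0.01, 200
w_shared = np.zeros(n_feat)
w_spec = np.zeros((n_tasks, n_feat))

for _ in range(steps):
    g_shared = lam_shared * w_shared          # ridge penalty on shared part
    for t in range(n_tasks):
        resid = X[t] @ (w_shared + w_spec[t]) - y[t]
        g_t = X[t].T @ resid / n_samples      # per-task loss gradient
        g_shared += g_t                       # shared part sees every task
        w_spec[t] -= lr * (g_t + lam_spec * w_spec[t])
    w_shared -= lr * g_shared

# Training sign-accuracy of the jointly learned per-task predictors.
acc = np.mean([np.mean(np.sign(X[t] @ (w_shared + w_spec[t])) == y[t])
               for t in range(n_tasks)])
```

The heavier penalty on `w_spec` pushes common structure into `w_shared`, which is the mechanism by which such decompositions improve generalization when tasks are related.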