Modeling Image Patches with a Generic Dictionary of Mini-Epitomes.

Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit

University of California, Los Angeles

Published: June 2014

The goal of this paper is to question the necessity of features like SIFT in categorical visual recognition tasks. As an alternative, we develop a generative model for the raw intensity of image patches and show that it can support image classification performance on par with optimized SIFT-based techniques in a bag-of-visual-words setting. The key ingredient of the proposed model is a compact dictionary of mini-epitomes, learned in an unsupervised fashion on a large collection of images. The use of epitomes allows us to explicitly account for photometric and position variability in image appearance. We show that this flexibility considerably increases the capacity of the dictionary to accurately approximate the appearance of image patches and support recognition tasks. For image classification, we develop histogram-based image encoding methods tailored to the epitomic representation, as well as an "epitomic footprint" encoding which is easy to visualize and highlights the generative nature of our model. We discuss computational aspects in detail and develop efficient algorithms that make the model scalable to large tasks. The proposed techniques are evaluated with experiments on the challenging PASCAL VOC 2007 image classification benchmark.
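The listing itself contains no code; the sketch below is a minimal illustration of the two ideas the abstract describes: matching a patch against every sub-window of each mini-epitome after photometric normalization, and building a bag-of-visual-words histogram from the resulting assignments. It assumes square grayscale patches and a brute-force position search, whereas the paper develops far more efficient algorithms. Function names such as `match_patch` and `encode_image` are illustrative, not the authors'.

```python
# Minimal sketch (not the authors' code) of epitomic patch matching.
# Assumptions: each mini-epitome is a small grayscale array larger than the
# patch; matching maximizes normalized cross-correlation, which absorbs the
# additive (mean) and multiplicative (contrast) photometric factors the
# abstract refers to.
import numpy as np

def _normalize(v, eps=1e-8):
    """Subtract the mean and scale to unit L2 norm (photometric normalization)."""
    v = v - v.mean()
    return v / (np.linalg.norm(v) + eps)

def match_patch(patch, epitomes):
    """Return (best_epitome_index, best_offset, score) for one square patch.

    `patch` is a (p, p) array; `epitomes` is a list of (h, w) arrays with
    h >= p and w >= p.  Every p x p sub-window of every mini-epitome is a
    candidate, which is what gives the dictionary its position invariance.
    """
    p = patch.shape[0]
    x = _normalize(patch.ravel())
    best = (-1, (0, 0), -np.inf)
    for k, E in enumerate(epitomes):
        h, w = E.shape
        for i in range(h - p + 1):
            for j in range(w - p + 1):
                z = _normalize(E[i:i + p, j:j + p].ravel())
                score = float(x @ z)  # normalized cross-correlation
                if score > best[2]:
                    best = (k, (i, j), score)
    return best

def encode_image(patches, epitomes):
    """Bag-of-visual-words style encoding: histogram of mini-epitome assignments."""
    hist = np.zeros(len(epitomes))
    for patch in patches:
        k, _, _ = match_patch(patch, epitomes)
        hist[k] += 1
    return hist / max(hist.sum(), 1)
```

The exhaustive double loop over epitome positions is written for clarity only; the point it illustrates is that the position search lets a small dictionary cover translated versions of the same local structure, which is the flexibility the abstract credits for the increased dictionary capacity.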


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4550088
DOI: http://dx.doi.org/10.1109/CVPR.2014.264

Publication Analysis

Top Keywords

image patches: 12
image classification: 12
dictionary mini-epitomes: 8
recognition tasks: 8
patches support: 8
image: 7
modeling image: 4
patches generic: 4
generic dictionary: 4
mini-epitomes goal: 4

