SUN: Top-down saliency using natural statistics.

Visual Cognition

Department of Computer Science and Engineering, University of California San Diego, La Jolla, CA, USA.

Published: August 2009

When people try to find particular objects in natural scenes, they make extensive use of knowledge about how and where objects tend to appear in a scene. Although many forms of such "top-down" knowledge have been incorporated into saliency map models of visual search, the role of object appearance has, surprisingly, been investigated only infrequently. Here we present an appearance-based saliency model derived in a Bayesian framework. We compare our approach with bottom-up saliency algorithms as well as with the state-of-the-art Contextual Guidance model of Torralba et al. (2006) at predicting human fixations. Although the two top-down approaches use very different types of information, they achieve similar performance, each substantially outperforming the purely bottom-up models. Our experiments reveal that a simple model of object appearance can predict human fixations quite well, even making the same mistakes as people.
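As a rough sketch of what "an appearance-based saliency model derived in a Bayesian framework" looks like (this follows the companion SUN formulation; the notation below is a reconstruction, not taken from this abstract), saliency at an image point z can be written as the posterior probability of the target class C given the local features F = f_z and the location L = l_z. Assuming features and location are independent and conditionally independent given the target, the log saliency decomposes as

\[
\log s_z \;=\; -\log p(F = f_z) \;+\; \log p(F = f_z \mid C = 1) \;+\; \log p(C = 1 \mid L = l_z),
\qquad
s_z = p(C = 1 \mid F = f_z, L = l_z).
\]

The first term is bottom-up self-information (rare features are salient), the second is the appearance likelihood emphasized in this paper, and the third is a location prior, broadly the kind of information the Contextual Guidance model exploits.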

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2967792
DOI: http://dx.doi.org/10.1080/13506280902771138
