IEEE Trans Image Process
November 2019
Top-down saliency detection aims to highlight the regions of a specific object category, and typically relies on pixel-wise annotated training data. In this paper, we address the high cost of collecting such training data with a weakly supervised approach to object saliency detection, where only image-level labels, indicating the presence or absence of a target object in an image, are available. The proposed framework is composed of two collaborative CNN modules: an image-level classifier and a pixel-level map generator.
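The two-module coupling can be illustrated with a minimal PyTorch sketch. The module names and the specific interaction shown here (the generator's map gating the classifier's features so that only the image-level classification loss drives learning) are illustrative assumptions, not the authors' published architecture.

```python
# Minimal sketch of a weakly supervised saliency framework with two
# collaborative CNN modules. Names and the map-gated coupling are
# illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class MapGenerator(nn.Module):
    """Predicts a pixel-level saliency map from an image."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1), nn.Sigmoid(),   # map values in [0, 1]
        )

    def forward(self, x):
        return self.body(x)

class ImageClassifier(nn.Module):
    """Predicts image-level presence of the target category."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x, saliency_map):
        f = self.features(x) * saliency_map   # gate features by the map
        pooled = f.mean(dim=(2, 3))           # global average pooling
        return self.head(pooled)              # image-level logit

# Training uses only image-level labels: the classification loss
# back-propagates through the generated map, so salient regions can
# emerge without any pixel-wise annotation.
gen, cls = MapGenerator(), ImageClassifier()
img = torch.randn(2, 3, 64, 64)
label = torch.tensor([[1.0], [0.0]])
logit = cls(img, gen(img))
loss = nn.functional.binary_cross_entropy_with_logits(logit, label)
loss.backward()
```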
IEEE Trans Image Process
January 2019
We present a novel computational model for simultaneous image co-saliency detection and co-segmentation that concurrently explores the concepts of saliency and objectness in multiple images. It has been shown that co-saliency detection that aggregates multiple saliency proposals derived from diverse visual cues can better highlight salient objects; however, the optimal proposals are typically region-dependent, and the fusion process often leads to blurred results. Co-segmentation can help preserve object boundaries, but it may suffer in complex scenes.
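The region-dependent nature of proposal fusion can be sketched as follows. The agreement criterion (per-region closeness to the cross-proposal consensus) is an illustrative assumption, not the fusion rule used in the paper.

```python
# Minimal sketch of region-dependent fusion of saliency proposals: for each
# region, keep the proposal that agrees best with the cross-proposal
# consensus. The criterion is an assumption for illustration only.
import numpy as np

def fuse_proposals(proposals, regions):
    """proposals: (K, H, W) saliency maps in [0, 1];
    regions: (H, W) integer region labels (e.g. superpixels)."""
    consensus = proposals.mean(axis=0)            # pixel-wise average proposal
    fused = np.zeros_like(consensus)
    for r in np.unique(regions):
        mask = regions == r
        # pick the proposal closest to the consensus inside this region
        errs = [np.abs(p[mask] - consensus[mask]).mean() for p in proposals]
        fused[mask] = proposals[int(np.argmin(errs))][mask]
    return fused

# Toy usage: three proposals on a 4x4 image split into two regions.
props = np.random.rand(3, 4, 4)
regs = np.repeat(np.array([[0, 0, 1, 1]]), 4, axis=0)
print(fuse_proposals(props, regs).shape)   # (4, 4)
```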
IEEE Trans Image Process
December 2015
With the aim of improving feature-matching performance, we present an unsupervised approach for adaptive descriptor selection in the space of homographies. Inspired by the observation that the homographies of correct feature correspondences vary smoothly over the spatial domain, our approach exploits the unsupervised nature of feature matching and can choose a good descriptor locally for matching each feature point, instead of using a single global descriptor. To this end, the homography space serves as the domain for selecting among heterogeneous descriptors.
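A simplified view of selecting a descriptor per keypoint in the homography domain is sketched below. The inputs (per-keypoint local homographies and candidate matches from several descriptors) are assumed to come from an earlier matching stage, and the transfer-error criterion is an illustration, not the paper's full algorithm.

```python
# Minimal sketch of per-keypoint descriptor selection guided by a local
# homography: keep the descriptor whose tentative match best agrees with
# the homography estimated around that keypoint. Inputs are hypothetical.
import numpy as np

def transfer_error(H, p, q):
    """Reprojection error of correspondence p -> q under homography H."""
    ph = H @ np.array([p[0], p[1], 1.0])
    return np.linalg.norm(ph[:2] / ph[2] - q)

def select_descriptors(points, candidates, local_H):
    """points: (N, 2) keypoints in image 1;
    candidates: dict descriptor_name -> (N, 2) matched points in image 2;
    local_H: (N, 3, 3) homography estimated around each keypoint.
    Returns the name of the best descriptor for each keypoint."""
    choices = []
    for i, p in enumerate(points):
        errs = {name: transfer_error(local_H[i], p, q[i])
                for name, q in candidates.items()}
        choices.append(min(errs, key=errs.get))
    return choices

# Toy usage with two hypothetical descriptors and identity homographies.
pts = np.random.rand(5, 2)
cands = {"desc_a": pts + 0.01 * np.random.randn(5, 2),
         "desc_b": np.random.rand(5, 2)}
Hs = np.tile(np.eye(3), (5, 1, 1))
print(select_descriptors(pts, cands, Hs))
```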
IEEE Trans Image Process
April 2014
In this paper, we address the problem of the high annotation cost of acquiring training data for semantic segmentation. Most modern approaches to semantic segmentation are based on graphical models, such as conditional random fields, and rely on sufficient training data in the form of object contours. To reduce the manual effort of pixel-wise contour annotation, we consider the setting in which the training data set for semantic segmentation is a mixture of a few object contours and an abundant set of object bounding boxes.
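One common way to make bounding-box annotations usable alongside full contours is to convert each box into a weak training mask with an ambiguous band around the box boundary. The labeling scheme below is an illustrative assumption, not the paper's CRF formulation.

```python
# Minimal sketch of turning a bounding-box annotation into a weak training
# mask: outside the box is background, the interior carries the object
# class, and a thin band around the box is ignored by the loss. This
# scheme is assumed for illustration, not taken from the paper.
import numpy as np

BACKGROUND, IGNORE = 0, 255   # 255 = "do not penalize" label

def box_to_mask(h, w, box, class_id, margin=2):
    """box = (x0, y0, x1, y1) in pixel coordinates."""
    mask = np.full((h, w), BACKGROUND, dtype=np.uint8)
    x0, y0, x1, y1 = box
    # a thin band around the box is left ambiguous (ignored in training)
    mask[max(0, y0 - margin):y1 + margin,
         max(0, x0 - margin):x1 + margin] = IGNORE
    mask[y0:y1, x0:x1] = class_id                 # interior: object class
    return mask

# Toy usage: a 10x10 image with one box of class 1.
m = box_to_mask(10, 10, (2, 2, 6, 6), class_id=1)
print(np.unique(m))   # [0, 1, 255]
```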