This paper presents a semantic labeling framework with geodesic propagation (GP). Under this framework, three algorithms are proposed: GP, supervised GP (SGP) for images, and hybrid GP (HGP) for video. In these algorithms, we resort to the recognition proposal map and select confident pixels with maximum probability as the initial propagation seeds. From these seeds, the GP algorithm iteratively updates the weights of geodesic distances until the semantic labels are propagated to all pixels. In contrast, the SGP algorithm further exploits contextual information to guide the direction of propagation, yielding better performance than GP at higher computational cost. For video labeling, we further propose the HGP algorithm, in which the geodesic metric is applied in both the spatial and temporal domains. Experiments on four public data sets show that our algorithms outperform several state-of-the-art methods. With the GP framework, convincing results for both image and video semantic labeling can be obtained.
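As a rough illustration of the seed-and-propagate idea described above (not the authors' implementation), the sketch below assumes a per-pixel class probability map from a recognition proposal, selects high-confidence pixels as seeds, and spreads their labels with a Dijkstra-style geodesic sweep; the edge cost based on the probability gap between neighbors, and the `seed_threshold` and `beta` parameters, are illustrative assumptions only.

```python
import heapq
import numpy as np

def geodesic_propagation(prob_map, seed_threshold=0.9, beta=10.0):
    """Minimal sketch of seed-based geodesic label propagation.

    prob_map: (H, W, C) per-pixel class probabilities (the "recognition
    proposal map"). The paper's learned weights and iterative update
    schedule are not reproduced here.
    """
    H, W, C = prob_map.shape
    labels = np.argmax(prob_map, axis=2)
    confidence = np.max(prob_map, axis=2)

    dist = np.full((H, W), np.inf)
    out = np.full((H, W), -1, dtype=int)

    # Confident pixels (maximum probability above a threshold) serve as seeds.
    heap = []
    for y in range(H):
        for x in range(W):
            if confidence[y, x] >= seed_threshold:
                dist[y, x] = 0.0
                out[y, x] = labels[y, x]
                heapq.heappush(heap, (0.0, y, x))

    # Dijkstra-style sweep: each pixel inherits the label reached along the
    # shortest geodesic path, where the step cost grows with the probability
    # gap between neighboring pixels (a stand-in for the paper's weights).
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y, x]:
            continue
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W:
                step = 1.0 + beta * np.abs(prob_map[y, x] - prob_map[ny, nx]).sum()
                nd = d + step
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    out[ny, nx] = out[y, x]
                    heapq.heappush(heap, (nd, ny, nx))
    return out
```

A supervised or hybrid variant would modify the step cost (e.g., using contextual cues or temporal neighbors), but the basic seed-then-propagate loop remains the same.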
DOI: http://dx.doi.org/10.1109/TIP.2014.2358193