Sparsely annotated image segmentation has attracted increasing attention due to its low labeling cost. However, existing weakly-supervised shadow detection methods require complex training procedures, and a significant performance gap remains compared to fully-supervised methods. This paper identifies two key challenges in sparsely annotated shadow detection, i.e., weak supervision diffusion and poor structure recovery, and attempts to alleviate them. To this end, we propose a one-stage weakly-supervised learning framework for sparsely annotated shadow detection. Specifically, we first design a simple yet effective semantic affinity module (SAM) that adaptively propagates scribble supervision to unlabeled regions using a gradient diffusion scheme. Then, to better recover shadow structures, we introduce a feature-guided edge-aware loss, which leverages higher-level semantic relations to perceive shadow boundaries while avoiding interference from ambiguous regions. Finally, we present an intensity-guided structure consistency loss that encourages the same image under different brightness levels to be predicted with a consistent shadow mask, which can be regarded as a self-consistency mechanism that improves the model's generalization ability. Experimental results on three benchmark datasets demonstrate that our approach significantly outperforms previous weakly-supervised methods and achieves competitive performance compared with recent state-of-the-art fully-supervised methods.
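To illustrate the self-consistency idea behind the intensity-guided structure consistency loss, the following is a minimal sketch, assuming a PyTorch segmentation model that maps an image to shadow logits. The gamma-based brightness perturbation, the L1 distance between the two predicted masks, and the function name `intensity_consistency_loss` are assumptions made for illustration; the abstract does not specify the exact formulation.

```python
import torch
import torch.nn.functional as F

def intensity_consistency_loss(model, image, gamma_range=(0.5, 1.5)):
    """Hypothetical sketch: encourage the model to predict consistent
    shadow masks for the same image at a perturbed brightness level."""
    # Shadow mask predicted on the original image.
    pred_orig = torch.sigmoid(model(image))

    # Brightness perturbation via a random gamma correction (assumed form
    # of the intensity change; inputs are assumed to lie in [0, 1]).
    gamma = torch.empty(1).uniform_(*gamma_range).item()
    image_aug = image.clamp(min=0.0, max=1.0) ** gamma

    # Shadow mask predicted on the brightness-perturbed image.
    pred_aug = torch.sigmoid(model(image_aug))

    # Penalize disagreement between the two predicted shadow masks.
    return F.l1_loss(pred_aug, pred_orig)
```

In training, such a term would typically be added with a weighting factor to the scribble-supervised loss, acting as an unsupervised regularizer on unlabeled regions.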
DOI: http://dx.doi.org/10.1016/j.neunet.2024.106827