Blind image inpainting involves two critical aspects: "where to inpaint" and "how to inpaint". Knowing "where to inpaint" eliminates the interference arising from corrupted pixel values, while a good "how to inpaint" strategy yields high-quality inpainted results that are robust to various corruptions. Existing methods usually do not consider these two aspects explicitly and separately. This paper fully explores both aspects and proposes a self-prior guided inpainting network (SIN). The self-priors are obtained by detecting semantic-discontinuous regions and by predicting the global semantic structure of the input image. On the one hand, the self-priors are incorporated into SIN, enabling it to perceive valid context information from uncorrupted regions and to synthesize semantic-aware textures for corrupted regions. On the other hand, the self-priors are reformulated to provide pixel-wise adversarial feedback and high-level semantic-structure feedback, which promote the semantic continuity of inpainted images. Experimental results demonstrate that our method achieves state-of-the-art performance in both quantitative metrics and visual quality, and it compares favorably with many existing methods that assume "where to inpaint" is known in advance. Extensive experiments on a series of related image restoration tasks further validate the effectiveness of our method in producing high-quality results.
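The approach described above can be read as a two-step flow: first estimate self-priors (a corruption/discontinuity mask and a global structure map) from the corrupted input alone, then condition the inpainting network on them. The PyTorch sketch below illustrates only this data flow; the module names, layer configurations, and the prior-fusion scheme are illustrative assumptions and do not reproduce the authors' SIN architecture.

```python
# A minimal PyTorch sketch of the data flow described above: estimate
# self-priors from the corrupted input, then condition the inpainter on them.
# Module names, channel widths, and the fusion scheme are assumptions for
# illustration only; they do not reproduce the authors' SIN architecture.
import torch
import torch.nn as nn


def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True))


class SelfPriorEstimator(nn.Module):
    """Predicts two self-priors from the corrupted image alone:
    (1) a soft mask of semantic-discontinuous regions ("where to inpaint"),
    (2) a coarse global semantic-structure map."""

    def __init__(self, ch=32):
        super().__init__()
        self.backbone = nn.Sequential(conv_block(3, ch), conv_block(ch, ch))
        self.mask_head = nn.Sequential(nn.Conv2d(ch, 1, 1), nn.Sigmoid())
        self.structure_head = nn.Conv2d(ch, 3, 1)  # e.g. a smoothed structure image

    def forward(self, x):
        feat = self.backbone(x)
        return self.mask_head(feat), self.structure_head(feat)


class PriorGuidedInpainter(nn.Module):
    """Synthesizes content conditioned on the self-priors ("how to inpaint")."""

    def __init__(self, ch=48):
        super().__init__()
        # Input: corrupted image (3) + mask prior (1) + structure prior (3).
        self.net = nn.Sequential(conv_block(7, ch), conv_block(ch, ch), nn.Conv2d(ch, 3, 1))

    def forward(self, x, mask, structure):
        out = self.net(torch.cat([x, mask, structure], dim=1))
        # Keep pixels the mask prior marks as valid; synthesize the rest.
        return x * (1 - mask) + out * mask


estimator, inpainter = SelfPriorEstimator(), PriorGuidedInpainter()
corrupted = torch.rand(1, 3, 64, 64)           # corrupted input, no mask given
mask_prior, structure_prior = estimator(corrupted)
restored = inpainter(corrupted, mask_prior, structure_prior)
print(restored.shape)                          # torch.Size([1, 3, 64, 64])
```

The abstract also describes reformulating the self-priors into pixel-wise adversarial and high-level semantic-structure feedback; in a sketch like this, that would correspond to additional training losses computed against the predicted priors, which are omitted here.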

Source: http://dx.doi.org/10.1109/TPAMI.2023.3284431
