For flash guided non-flash image denoising, the main challenge is to exploit the consistency prior between the two modalities. Most existing methods attempt to model the flash/non-flash consistency at the pixel level, which easily leads to blurred edges. Different from these methods, this paper presents an important finding: the modality gap between flash and non-flash images conforms to a Laplacian distribution in the gradient domain. Based on this finding, we establish a Laplacian gradient consistency (LGC) model for flash guided non-flash image denoising. This model is demonstrated to achieve faster convergence and higher denoising accuracy than the traditional pixel consistency model. By solving the LGC model, we further design a deep network named LGCNet. Different from existing image denoising networks, each component of LGCNet strictly matches a step of the LGC solution, giving the network good interpretability. The performance of the proposed LGCNet is evaluated on three flash/non-flash image datasets, where it demonstrates superior denoising performance over many state-of-the-art methods both quantitatively and qualitatively. The intermediate features are also visualized to verify the effectiveness of the Laplacian gradient consistency prior. The source code is available at https://github.com/JingyiXu404/LGCNet.
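To make the core idea concrete, below is a minimal sketch, not the authors' implementation, of how a Laplacian prior on the flash/non-flash gap in the gradient domain reduces to an L1 penalty that a denoiser could minimize alongside a data-fidelity term. The function names (`image_gradients`, `lgc_penalty`) and the weight `lam` are hypothetical and chosen only for illustration.

```python
import numpy as np

def image_gradients(img):
    # Forward-difference gradients along x and y (last row/column replicated).
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return gx, gy

def lgc_penalty(estimate, flash, lam=0.1):
    """L1 penalty on the gradient-domain gap between a denoised estimate and
    the flash guide. A Laplacian prior on this gap corresponds, via its
    negative log-likelihood, to an L1 norm, which is what makes the prior
    edge-preserving compared with a pixel-level (quadratic) consistency term."""
    gx_e, gy_e = image_gradients(estimate)
    gx_f, gy_f = image_gradients(flash)
    return lam * (np.abs(gx_e - gx_f).sum() + np.abs(gy_e - gy_f).sum())

# A solver would minimize a data-fidelity term such as ||estimate - noisy||^2
# plus lgc_penalty(estimate, flash) over the estimate.
```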

Source: http://dx.doi.org/10.1109/TIP.2024.3489275
