Burst image restoration methods offer the possibility of recovering faithful scene details from multiple low-quality snapshots captured by hand-held devices in adverse scenarios, and have therefore attracted increasing attention in recent years. However, individual frames in a burst typically suffer from inter-frame misalignments, leading to ghosting artifacts. Moreover, existing methods handle all burst frames indiscriminately and thus struggle to cleanly remove corrupted information, because they neglect the spatio-temporally varying degradation across frames. To alleviate these limitations, we propose a general semantic-guided model named SeBIR for burst image restoration that incorporates the semantic prior knowledge of the Segment Anything Model (SAM) to enable adaptive recovery. Specifically, instead of relying solely on a single alignment scheme, we develop a joint implicit and explicit strategy that fully leverages semantic knowledge as guidance to achieve inter-frame alignment. To further adaptively modulate and aggregate aligned features with spatio-temporal disparity, we design a semantic-guided fusion module that uses the intermediate semantic features of SAM as an explicit guide to suppress inherent degradation and strengthen valuable complementary information across frames. Additionally, a semantic-guided local loss is designed to boost local consistency and image quality. Extensive experiments on synthetic and real-world datasets demonstrate the superiority of our method in both quantitative and qualitative evaluations for burst super-resolution, burst denoising, and burst low-light image enhancement tasks.
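To make the fusion idea concrete, the sketch below illustrates one plausible form of a semantic-guided fusion module, assuming a PyTorch implementation. The module name, tensor shapes, and the specific softmax gating scheme are hypothetical illustrations inferred from the abstract, not the authors' released code: SAM's intermediate semantic features are used to produce per-frame, per-pixel weights over the aligned burst features before temporal aggregation.

```python
# A minimal sketch of semantic-guided burst fusion, assuming aligned burst
# features of shape (B, T, C, H, W) and SAM intermediate semantic features
# of shape (B, S, H, W). All names and shapes here are illustrative.
import torch
import torch.nn as nn


class SemanticGuidedFusion(nn.Module):
    """Weights aligned burst frames with semantic guidance, then aggregates
    them into a single fused feature map."""

    def __init__(self, feat_ch: int, sem_ch: int):
        super().__init__()
        # Maps concatenated (frame, semantic) features to a scalar
        # per-pixel score for each frame.
        self.gate = nn.Sequential(
            nn.Conv2d(feat_ch + sem_ch, feat_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, 1, kernel_size=1),
        )

    def forward(self, aligned: torch.Tensor, semantic: torch.Tensor) -> torch.Tensor:
        # aligned:  (B, T, C, H, W) burst features after alignment
        # semantic: (B, S, H, W) SAM features for the base frame
        b, t, c, h, w = aligned.shape
        sem = semantic.unsqueeze(1).expand(b, t, -1, h, w)
        scores = self.gate(
            torch.cat([aligned, sem], dim=2).view(b * t, -1, h, w)
        ).view(b, t, 1, h, w)
        # Softmax over the temporal axis: frames whose content agrees with
        # the semantic prior contribute more; degraded regions are suppressed.
        weights = scores.softmax(dim=1)
        return (weights * aligned).sum(dim=1)  # (B, C, H, W) fused features
```

In the full model described by the abstract, such a module would sit after the joint implicit-explicit alignment stage, with the semantic features extracted once from SAM and reused as the explicit guide for both modulation and aggregation.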
DOI: http://dx.doi.org/10.1016/j.neunet.2024.106834