IEEE Trans Image Process
Published: December 2024
Motion deblurring is a highly ill-posed problem due to the significant loss of motion information in the blurring process. Complementary informative features from auxiliary sensors such as event cameras can be exploited to guide motion deblurring. The event camera captures rich motion information asynchronously with microsecond accuracy. In this paper, a novel frame-event fusion framework is proposed for event-driven motion deblurring (FEF-Deblur), which can sufficiently exploit long-range cross-modal information interactions. First, different modalities are usually complementary but also redundant. Cross-modal fusion is modeled as complementary-unique feature separation-and-aggregation, avoiding modality redundancy. Unique and complementary features are first inferred with parallel intra-modal self-attention and inter-modal cross-attention, respectively. After that, a correlation-based constraint is imposed between the unique and complementary features to facilitate their differentiation, which assists in suppressing cross-modal redundancy. Additionally, spatio-temporal dependencies among neighboring inputs are crucial for motion deblurring. A recurrent cross-attention is introduced to preserve inter-input attention information, in which the current spatial features and the aggregated temporal features attend to each other through long-range interaction. Extensive experiments on both synthetic and real-world motion deblurring datasets demonstrate that our method outperforms state-of-the-art event-based and image/video-based methods. The code will be made publicly available.
DOI: http://dx.doi.org/10.1109/TIP.2024.3512362
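The separation-and-aggregation idea described in the abstract can be sketched with plain scaled dot-product attention. This is an illustrative toy, not the paper's implementation: the feature shapes, the `corr_penalty` helper, and the use of NumPy are assumptions made for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention over the token dimension.
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

rng = np.random.default_rng(0)
frame = rng.standard_normal((16, 32))  # 16 tokens of 32-dim frame features
event = rng.standard_normal((16, 32))  # event features, same layout

# Unique features: intra-modal self-attention within each modality.
unique_f = attention(frame, frame, frame)
unique_e = attention(event, event, event)

# Complementary features: inter-modal cross-attention
# (frame queries event, and vice versa).
comp_f = attention(frame, event, event)
comp_e = attention(event, frame, frame)

# Correlation-based constraint: penalize per-channel correlation between
# unique and complementary features so the two streams stay decorrelated.
def corr_penalty(a, b):
    a = (a - a.mean(0)) / (a.std(0) + 1e-6)
    b = (b - b.mean(0)) / (b.std(0) + 1e-6)
    return float(((a * b).mean(0) ** 2).mean())

loss_sep = corr_penalty(unique_f, comp_f) + corr_penalty(unique_e, comp_e)

# Aggregation: concatenate the separated streams for a downstream decoder.
fused = np.concatenate([unique_f, comp_f, unique_e, comp_e], axis=-1)
```

In training, `loss_sep` would be added to the reconstruction objective to push the unique and complementary branches apart; here it only demonstrates that the constraint is a nonnegative scalar computed from the two streams.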
IEEE Trans Image Process
December 2024
Image degradation caused by noise and blur remains a persistent challenge in imaging systems, stemming from limitations in both hardware and methodology. Single-image solutions face an inherent tradeoff between noise reduction and motion blur. While short exposures can capture clear motion, they suffer from noise amplification.
IEEE Trans Pattern Anal Mach Intell
April 2025
Effective video frame interpolation hinges on adept handling of motion in the input scene. Prior work leverages asynchronous event information for this, but often overlooks whether motion induces blur in the video, limiting its scope to sharp-frame interpolation. We instead propose a unified framework for event-based frame interpolation that performs deblurring ad hoc and thus works on both sharp and blurry input videos.
Acad Radiol
January 2025
Department of Radiology, Washington University School of Medicine, 510 S. Kingshighway Blvd, St. Louis, MO 63110 (S.I., M.A.T., M.I., C.S., R.L., A.H., R.L.W., T.J.F.). Electronic address:
Rationale And Objective: Conventional positron emission tomography (PET) respiratory gating utilizes a fraction of acquired PET counts (i.e., optimal gate [OG]), whereas elastic motion correction with deblurring (EMCD) utilizes all PET counts to reconstruct motion-corrected images without increasing image noise.
Comput Vis ECCV
November 2024
University of Minnesota, Minneapolis.
Diffusion models have emerged as powerful generative techniques for solving inverse problems. Despite their success in a variety of inverse problems in imaging, these models require many steps to converge, leading to slow inference time. Recently, there has been a trend in diffusion models for employing sophisticated noise schedules that involve more frequent iterations of timesteps at lower noise levels, thereby improving image generation and convergence speed.
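The noise-schedule trend mentioned above can be made concrete with a toy schedule: relative to uniform spacing, a quadratic spacing of timesteps clusters sampling points near t = 0 (the low-noise end, under the common convention that t indexes noise level). The exponent, step count, and `t_max` below are illustrative assumptions, not any specific model's schedule.

```python
import numpy as np

def quadratic_schedule(num_steps, t_max=1000):
    # Map uniform points in [0, 1] through u**2 so that consecutive
    # timesteps are packed tightly near t = 0 and spread out near t_max.
    u = np.linspace(0.0, 1.0, num_steps)
    return np.round(u ** 2 * t_max).astype(int)

steps = quadratic_schedule(10)
gaps = np.diff(steps)  # gaps grow with t: more iterations at low noise
```

Because the gaps grow monotonically with t, a fixed step budget spends proportionally more iterations at low noise levels, which is the behavior the abstract attributes to these sophisticated schedules.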
© LitMetric 2025. All rights reserved.