Image motion blur results from a combination of object motion and camera shake, and the resulting blur is generally directional and non-uniform. Previous research has attempted to remove non-uniform blur using self-recurrent multi-scale, multi-patch, or multi-temporal architectures with self-attention, achieving decent results. However, self-recurrent frameworks typically lead to longer inference times, while inter-pixel or inter-channel self-attention may cause excessive memory usage. This paper proposes a Blur-aware Attention Network (BANet) that accomplishes accurate and efficient deblurring in a single forward pass. BANet utilizes region-based self-attention with multi-kernel strip pooling to disentangle blur patterns of different magnitudes and orientations, and cascaded parallel dilated convolutions to aggregate multi-scale content features. Extensive experimental results on the GoPro and RealBlur benchmarks demonstrate that BANet performs favorably against the state of the art in blurred image restoration and can deliver deblurred results in real time.
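The abstract names BANet's two key building blocks but not their exact design, so below is a minimal PyTorch sketch of how they might be realized. The class names StripPoolAttention and CascadedDilatedBlock, the strip sizes (1, 3, 5), and the dilation rates (1, 2, 4, 8) are illustrative assumptions, not the authors' implementation; the paper should be consulted for the actual architecture.

```python
# Illustrative sketch (not the authors' code): a blur-aware attention block
# combining multi-kernel strip pooling with parallel dilated convolutions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StripPoolAttention(nn.Module):
    """Hypothetical region-based attention: horizontal and vertical strip
    pooling at several kernel sizes approximates directional blur statistics."""
    def __init__(self, channels, strip_sizes=(1, 3, 5)):
        super().__init__()
        self.strip_sizes = strip_sizes
        # Each strip size contributes a horizontal and a vertical feature map.
        self.fuse = nn.Conv2d(channels * 2 * len(strip_sizes), channels, 1)

    def forward(self, x):
        _, _, h, w = x.shape
        feats = []
        for k in self.strip_sizes:
            # Horizontal strips: pool width down to k, keep full height.
            hp = F.adaptive_avg_pool2d(x, (h, k))
            feats.append(F.interpolate(hp, size=(h, w), mode='bilinear',
                                       align_corners=False))
            # Vertical strips: pool height down to k, keep full width.
            vp = F.adaptive_avg_pool2d(x, (k, w))
            feats.append(F.interpolate(vp, size=(h, w), mode='bilinear',
                                       align_corners=False))
        attn = torch.sigmoid(self.fuse(torch.cat(feats, dim=1)))
        return x * attn  # re-weight features by blur-orientation attention

class CascadedDilatedBlock(nn.Module):
    """Parallel dilated convolutions aggregate multi-scale content features;
    stacking several such blocks would form a cascade."""
    def __init__(self, channels, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations)
        self.project = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x):
        out = torch.cat([F.relu(b(x)) for b in self.branches], dim=1)
        return x + self.project(out)  # residual connection

if __name__ == "__main__":
    x = torch.randn(1, 64, 128, 128)   # a feature map, not an RGB image
    y = CascadedDilatedBlock(64)(StripPoolAttention(64)(x))
    print(y.shape)                     # torch.Size([1, 64, 128, 128])
```

Strip pooling averages features along one spatial axis, which is a natural fit for directional blur, while the dilated branches gather context at multiple receptive-field sizes without downsampling; both run in a single forward pass, consistent with the efficiency claim in the abstract.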

Source: http://dx.doi.org/10.1109/TIP.2022.3216216

Similar Publications

Ultrasound is a promising imaging modality for scoliosis evaluation because it is radiation-free and provides real-time images. However, it cannot capture bony details, since ultrasound does not penetrate bone. Registering real-time ultrasound images with a previous X-ray radiograph can therefore help physicians understand a patient's spinal deformity.
