Raindrops adhered to a glass window or camera lens appear with varying degrees of blur and resolution, depending on how the raindrops aggregate. Removing raindrops from a rainy image remains challenging because of their density and diversity. Location and blur-level information provide strong priors for the raindrop removal task. However, existing methods locate and estimate raindrops with a binary mask, assigning 1 to pixels covered by raindrops and 0 otherwise, which ignores the diversity of raindrops. Meanwhile, we observe that different scale versions of a rainy image share similar raindrop patterns, which makes it possible to exploit such complementary information to represent raindrops. In this work, we first propose a soft mask with values in [-1,1] indicating the blur level of the raindrops on the background, and explore the positive effect of this blur-degree attribute on the task of raindrop removal. Secondly, we explore a multi-scale fusion representation for raindrops based on the deep features of the input multi-scale images. The framework is termed the uncertainty guided multi-scale attention network (UMAN). Specifically, we construct a multi-scale pyramid structure and introduce an iterative mechanism to extract blur-level information about raindrops to guide their removal at different scales. We further introduce an attention mechanism to fuse the input image with the blur-level information, which highlights raindrop information and reduces the effect of redundant noise. Our proposed method is extensively evaluated on several benchmark datasets and obtains convincing results.
DOI: http://dx.doi.org/10.1109/TIP.2021.3076283
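The abstract above pairs a soft blur-level mask in [-1, 1] with an attention mechanism that re-weights image features. A minimal numpy sketch of that idea follows; the function name, the sigmoid mapping from mask values to attention weights, and the toy data are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def soft_mask_attention(features, soft_mask):
    """Re-weight feature maps with a soft raindrop mask.

    features : (H, W, C) array of image features.
    soft_mask: (H, W) array in [-1, 1]; larger values indicate
               heavier raindrop blur, -1 means clean background.
    """
    # Map the soft mask from [-1, 1] to attention weights in (0, 1).
    attention = 1.0 / (1.0 + np.exp(-soft_mask))   # sigmoid
    # Highlight raindrop regions, suppress clean-background responses.
    return features * attention[..., None]

# Toy usage: a 4x4 feature map with one "raindrop" pixel.
feats = np.ones((4, 4, 2))
mask = -np.ones((4, 4))
mask[1, 1] = 1.0                                   # blurred raindrop
out = soft_mask_attention(feats, mask)
```

The attention map keeps the output continuous: a heavily blurred raindrop pixel gets a larger weight than clean background, rather than the hard 0/1 split of a binary mask.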
Radiat Prot Dosimetry
November 2024
Institute of Radiation Emergency Medicine, Department of Radiochemistry and Radioecology, Hirosaki University, 66-1 Hon-cho, Hirosaki, Aomori 036-8564, Japan.
Removing raindrops from images is a significant task for various computer vision applications. In this paper, we propose the first method that uses a dual-pixel (DP) sensor to better address raindrop removal. Our key observation is that raindrops attached to a glass window yield noticeable disparities between the DP sensor's left-half and right-half images, while almost no disparity exists for in-focus backgrounds.
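The key observation above, that defocused raindrops shift between the two DP half-images while in-focus background does not, can be sketched as a tiny block-matching disparity check in numpy. The function, its parameters, and the synthetic data are hypothetical illustrations of the principle, not the paper's method.

```python
import numpy as np

def raindrop_mask_from_dp(left, right, patch=5, max_disp=3, thresh=1):
    """Flag pixels whose left/right DP half-images disagree by a shift.

    In-focus background shows ~zero disparity between the two DP
    half-images, while defocused raindrops shift noticeably.
    """
    h, w = left.shape
    half = patch // 2
    mask = np.zeros((h, w), dtype=bool)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half - max_disp):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            # Find the horizontal shift that best aligns the patches.
            errs = [np.abs(ref - right[y - half:y + half + 1,
                                       x - half + d:x + half + 1 + d]).sum()
                    for d in range(-max_disp, max_disp + 1)]
            best = np.argmin(errs) - max_disp
            mask[y, x] = abs(best) >= thresh
    return mask

# Synthetic example: a horizontal ramp with one band shifted by 2 pixels,
# mimicking a defocused raindrop region.
left = np.tile(np.arange(30.0), (10, 1))
right = left.copy()
right[:, 10:20] = left[:, 12:22]
m = raindrop_mask_from_dp(left, right)
```

Pixels inside the shifted band are flagged as raindrop candidates; pixels in the aligned background are not.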
PLoS One
May 2024
Chinese Academy of Forestry, Research Institute of Forestry Policy and Information, Beijing, China.
Single-image raindrop removal aims to recover high-quality images from degraded ones. However, existing methods primarily employ pixel-level supervision between image pairs to learn spatial features, ignoring the more discriminative frequency information. This drawback results in the loss of high-frequency structures and the generation of diverse artifacts in the restored image.
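A frequency-domain supervision term of the kind the abstract above contrasts with pixel-level losses can be sketched as an L1 distance between amplitude spectra. This is a generic illustration under assumed details (FFT amplitude, mean-L1 reduction), not the paper's specific loss.

```python
import numpy as np

def frequency_loss(restored, target):
    """L1 distance between the amplitude spectra of two images.

    Supervising in the frequency domain penalises missing
    high-frequency structure that pixel-wise losses can overlook.
    """
    fr = np.fft.fft2(restored)
    ft = np.fft.fft2(target)
    return np.mean(np.abs(np.abs(fr) - np.abs(ft)))

# Usage: a low-pass-filtered image loses high-frequency content,
# so its spectrum no longer matches the original.
rng = np.random.default_rng(0)
a = rng.random((8, 8))
b = 0.5 * (a + np.roll(a, 1, axis=0))   # average adjacent rows (low-pass)
```

`frequency_loss(a, a)` is zero, while the smoothed `b` incurs a positive penalty, which is exactly the high-frequency loss a pixel-level metric can understate.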
Micromachines (Basel)
January 2024
School of Electrical and Electronic Engineering, Changchun University of Technology, Changchun 130012, China.
Existing machine learning techniques for raindrop removal have not been capable of removing raindrops completely, and they fail to account for the constraints of edge devices with limited resources. To address this, a novel software-hardware co-designed method with a memristor for raindrop removal, named the memristive attention recurrent residual generative adversarial network (MARR-GAN), is introduced in this research. A novel raindrop-removal network is specifically designed based on attention gate connections and recurrent residual convolutional blocks. By replacing the basic convolution unit with a recurrent residual convolution unit, the network better captures changes in raindrop appearance over time while preserving position and shape information in the image.
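The recurrent residual convolution unit described above re-applies the same convolution over several time steps with a skip connection back to the input. A minimal single-channel numpy sketch follows; the kernel, step count, and ReLU placement are illustrative assumptions rather than the MARR-GAN implementation.

```python
import numpy as np

def conv3x3(x, k):
    """3x3 cross-correlation with zero padding (single channel)."""
    p = np.pad(x, 1)
    h, w = x.shape
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + h, j:j + w]
    return out

def recurrent_residual_unit(x, k, steps=3):
    """Refine features over `steps` recurrent passes with a skip to x.

    Each pass re-applies the same kernel to the running state plus the
    input, mimicking a recurrent residual convolution block.
    """
    h = x
    for _ in range(steps):
        h = np.maximum(conv3x3(h + x, k), 0.0)   # ReLU activation
    return x + h                                  # residual connection

# Usage with an illustrative smoothing kernel.
x = np.arange(16.0).reshape(4, 4)
k = np.full((3, 3), 0.05)
y = recurrent_residual_unit(x, k)
```

Sharing one kernel across recurrent steps keeps the parameter count small, which matches the edge-device motivation: the unit refines its estimate over time without adding weights.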
Neural Netw
September 2023
School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China. Electronic address:
Stereo image deraining has recently attracted considerable attention because of the abundant complementary information available across views. Exploiting interaction information across stereo views is the key to improving stereo deraining performance. In this paper, we design a general coarse-to-fine deraining framework for stereo rain streak and raindrop removal, called CDINet, comprising a stereo rain removal subnet and a stereo detail recovery subnet to restore images progressively.