In low-light environments, handheld photography suffers from severe camera shake under long-exposure settings. Although existing deblurring algorithms have shown promising performance on well-exposed blurry images, they still cannot cope with low-light snapshots. Complex noise and saturated regions are the two dominant challenges in practical low-light deblurring: the former violates the Gaussian or Poisson assumption widely adopted in most existing algorithms and thus severely degrades their performance, while the latter introduces non-linearity into the classical convolution-based blurring model and makes the deblurring task even more challenging. In this work, we propose a novel non-blind deblurring method, dubbed image and feature space Wiener deconvolution network (INFWIDE), to tackle these problems systematically. In terms of algorithm design, INFWIDE adopts a two-branch architecture that explicitly removes noise and hallucinates saturated regions in the image space, suppresses ringing artifacts in the feature space, and integrates the two complementary outputs with a carefully designed multi-scale fusion network for high-quality night photograph deblurring. For effective network training, we design a set of loss functions integrating a forward imaging model and backward reconstruction to form a closed-loop regularization that ensures good convergence of the deep neural network. Further, to improve INFWIDE's applicability in real low-light conditions, a physical-process-based low-light noise model is employed to synthesize realistic noisy night photographs for model training. Taking advantage of the traditional Wiener deconvolution algorithm's physically driven characteristics and the deep neural network's representation ability, INFWIDE can recover fine details while suppressing unpleasant artifacts during deblurring. Extensive experiments on synthetic and real data demonstrate the superior performance of the proposed approach.
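To give a concrete sense of the Wiener deconvolution step that both branches build on, the sketch below implements the classical frequency-domain Wiener filter in PyTorch. This is a minimal illustration, not INFWIDE's actual branches: the paper applies this operation to learned feature maps with learned noise handling, whereas the fixed scalar `snr` parameter and the function name here are illustrative assumptions.

```python
import torch
import torch.fft as fft

def wiener_deconv(y, kernel, snr=100.0):
    """Classical frequency-domain Wiener deconvolution (non-blind setting).

    y      : (B, C, H, W) blurry observation (an image- or feature-space map)
    kernel : (h, w) known blur kernel
    snr    : scalar signal-to-noise ratio; 1/snr acts as the regularizer
    """
    B, C, H, W = y.shape
    # Embed the kernel in an H x W canvas and shift its center to the origin
    # so frequency-domain multiplication matches circular convolution.
    k = torch.zeros(H, W, dtype=y.dtype, device=y.device)
    kh, kw = kernel.shape
    k[:kh, :kw] = kernel
    k = torch.roll(k, shifts=(-(kh // 2), -(kw // 2)), dims=(0, 1))

    K = fft.fft2(k)                      # kernel spectrum
    Y = fft.fft2(y)                      # observation spectrum
    wiener = K.conj() / (K.abs() ** 2 + 1.0 / snr)
    return fft.ifft2(wiener * Y).real    # deblurred estimate

# Illustrative usage with placeholder data and a uniform 9x9 blur kernel.
y = torch.rand(1, 1, 128, 128)
kernel = torch.ones(9, 9) / 81.0
x_hat = wiener_deconv(y, kernel, snr=200.0)
```

Likewise, a physical-process-based low-light noise model of the kind mentioned above is typically composed of Poisson shot noise, Gaussian read noise, and sensor saturation. The following sketch shows one such composition for training-data synthesis; parameter names and values are illustrative assumptions rather than the paper's calibrated settings.

```python
def synthesize_low_light(clean_blurry, photon_level=50.0, read_sigma=2.0):
    """Sketch of a physics-based low-light noise model for data synthesis."""
    # Shot noise: photon arrivals follow a Poisson distribution whose rate
    # scales with scene brightness; fewer photons mean stronger relative noise.
    photons = torch.poisson(clean_blurry.clamp(min=0.0) * photon_level)
    noisy = photons / photon_level
    # Read noise: approximately Gaussian, added by the sensor electronics.
    noisy = noisy + torch.randn_like(noisy) * (read_sigma / 255.0)
    # Saturation: bright sources (street lamps, headlights) clip at the
    # sensor's full-well capacity, producing the saturated regions above.
    return noisy.clamp(0.0, 1.0)
```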

Source: http://dx.doi.org/10.1109/TIP.2023.3244417

Publication Analysis

Top Keywords

feature space (12)
wiener deconvolution (12)
image feature (8)
space wiener (8)
deconvolution network (8)
low-light conditions (8)
deep neural (8)
deblurring (7)
low-light (6)
network (5)

Similar Publications

Short linear peptide motifs play important roles in cell signaling. They can act as modification sites for enzymes and as recognition sites for peptide binding domains. SH2 domains bind specifically to tyrosine-phosphorylated proteins, with the affinity of the interaction depending strongly on the flanking sequence.

Developing populations of connected neurons often share spatial and/or temporal features that anticipate their assembly. A unifying spatiotemporal motif might link sensory, central, and motor populations that comprise an entire circuit. In the sensorimotor reflex circuit that stabilizes vertebrate gaze, central and motor partners are paired in time (birthdate) and space (dorso-ventral).

Endocytic recycling of transmembrane proteins is essential to cell signaling, ligand uptake, protein traffic and degradation. The intracellular domains of many transmembrane proteins are ubiquitylated, which promotes their internalization by clathrin-mediated endocytosis. How might this enhanced internalization impact endocytic uptake of transmembrane proteins that lack ubiquitylation? Recent work demonstrates that diverse transmembrane proteins compete for space within highly crowded endocytic structures, suggesting that enhanced internalization of one group of transmembrane proteins may come at the expense of other groups.

Multidimensional 3D-rendered objects are an important component of vision research and video-gaming applications, but it has remained challenging to parametrically control and efficiently generate such objects. Here, we describe a toolbox for controlling and efficiently generating 3D-rendered objects composed of ten separate visual feature dimensions that can be fine-adjusted using Python scripts. The toolbox defines objects as multi-dimensional feature vectors with primary dimensions (object-body-related features), secondary dimensions (head-related features), and accessory dimensions (including arms, ears, or beaks).

Diverse retinal ganglion cells (RGCs) transmit distinct visual features from the eye to the brain. Recent studies have categorized RGCs into 45 types in mice based on transcriptomic profiles, showing strong alignment with morphological and electrophysiological properties. However, little is known about how these types are spatially arranged on the two-dimensional retinal surface, an organization that influences visual encoding, and how their local microenvironments impact development and neurodegenerative responses.
