Two-Stage CNN Model for Joint Demosaicing and Denoising of Burst Bayer Images.

Comput Intell Neurosci

College of System Engineering, National University of Defense Technology, Changsha 410073, China.

Published: April 2022

In the classical image processing pipeline, demosaicing and denoising are separate steps that may interfere with each other. Joint demosaicing and denoising uses shared image priors to guide the recovery process, so jointly optimizing the two problems is expected to yield better performance. In addition, recovering images from a burst (continuously exposed frames) can further improve image detail. This article proposes a two-stage convolutional neural network (CNN) model for joint demosaicing and denoising of burst Bayer images. The proposed model consists of a single-frame joint demosaicing and denoising module, a multiframe denoising module, and an optional noise estimation module, and it requires a two-stage training scheme to converge to a good solution. Experiments on multiframe Bayer images with simulated Gaussian noise show that the proposed method has clear accuracy and speed advantages over comparable approaches, and experiments on real multiframe Bayer images confirm its denoising effect and detail retention.
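
To make the described structure concrete, the sketch below shows one plausible way to wire the two stages in PyTorch. The layer sizes, the 2x2 Bayer packing, the noise-map input, and the class names are all assumptions for illustration; the article's exact architecture is not reproduced here.

    import torch
    import torch.nn as nn

    class SingleFrameJDD(nn.Module):
        """Stage 1: joint demosaicing and denoising of one packed Bayer frame."""
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(5, 64, 3, padding=1), nn.ReLU(inplace=True),   # 4 packed Bayer channels + 1 noise map
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 3 * 4, 3, padding=1),                      # 3 RGB channels x 2x2 sub-positions
                nn.PixelShuffle(2),                                      # back to full-resolution RGB
            )

        def forward(self, bayer4, noise_map):
            return self.body(torch.cat([bayer4, noise_map], dim=1))

    class MultiFrameDenoiser(nn.Module):
        """Stage 2: fuse the per-frame RGB estimates from a burst of N frames."""
        def __init__(self, num_frames=4):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3 * num_frames, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 3, 3, padding=1),
            )

        def forward(self, rgb_frames):            # list of (B, 3, H, W) tensors
            return self.body(torch.cat(rgb_frames, dim=1))

Under the two-stage training scheme described in the abstract, stage 1 would be trained on single frames first, after which the fusion module is trained on bursts with stage 1 frozen or fine-tuned.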


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9001136
DOI: http://dx.doi.org/10.1155/2022/6200931

Publication Analysis

Top Keywords
demosaicing denoising: 20
joint demosaicing: 16
bayer images: 16
cnn model: 8
model joint: 8
denoising burst: 8
burst bayer: 8
denoising module: 8
multiframe bayer: 8
proposed method: 8

Similar Publications

The widespread use of high-definition screens on edge devices creates strong demand for efficient image restoration algorithms. Caching deep learning models in a look-up table (LUT) has recently been introduced to meet this demand. However, the size of a single LUT grows exponentially with its indexing capacity, which restricts its receptive field and therefore its performance.
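
As a back-of-the-envelope illustration of that exponential growth (my own arithmetic, not figures from the article), a dense LUT indexed by n input pixels, each sampled to k levels, needs k^n entries:

    def lut_entries(levels_per_pixel: int, indexing_pixels: int) -> int:
        """Entries in a dense LUT indexed by `indexing_pixels` sampled inputs."""
        return levels_per_pixel ** indexing_pixels

    # With 4-bit sampling (17 levels, a common choice in LUT-based restoration):
    for n in (2, 3, 4, 5):
        print(n, "indexing pixels ->", lut_entries(17, n), "entries")
    # 2 -> 289, 3 -> 4913, 4 -> 83521, 5 -> 1419857: each extra index multiplies the size by 17.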


Impact of ISP Tuning on Object Detection.

J Imaging

November 2023

School of Engineering, University of Galway, University Road, H91 TK33 Galway, Ireland.

In advanced driver assistance systems (ADAS) and autonomous vehicle research, acquiring semantic information about the surrounding environment relies heavily on camera-based object detection. Image signal processors (ISPs) in cameras are generally tuned for human perception. In most cases, ISP parameters are selected subjectively, and the resulting image differs depending on the individual who tuned them.


Event cameras are novel bio-inspired sensors that measure per-pixel brightness differences asynchronously. Recovering brightness from events is appealing since the reconstructed images inherit the high dynamic range (HDR) and high-speed properties of events; hence they can be used in many robotic vision applications and to generate slow-motion HDR videos. However, state-of-the-art methods tackle this problem by training an event-to-image Recurrent Neural Network (RNN), which lacks explainability and is difficult to tune.
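
For contrast with the RNN approach, a minimal sketch of the classical, more explainable alternative is direct event integration: each event signals a log-brightness change of a fixed contrast threshold C. The event tuple layout and the value of C below are assumptions for illustration.

    import numpy as np

    def integrate_events(events, height, width, C=0.2, log_L0=None):
        """events: iterable of (x, y, polarity) with polarity in {+1, -1}."""
        log_L = np.zeros((height, width)) if log_L0 is None else log_L0.copy()
        for x, y, p in events:
            log_L[y, x] += p * C      # each event adds or subtracts one contrast step
        return np.exp(log_L)          # convert log-brightness back to linear brightness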


Restoring high-quality images from raw data in low light is challenging due to various noise sources caused by the limited photon count and the complicated image signal processing (ISP) pipeline. Although several restoration and enhancement approaches have been proposed, they may fail in extreme conditions, such as imaging from short-exposure raw data. A pioneering attempt exploits the connection between pairs of short- and long-exposure raw data and outputs RGB images as the final result.
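
A sketch of that short/long-exposure pairing idea (assumed names, shapes, and loss, in the spirit of raw-to-RGB low-light pipelines rather than any specific paper's code) looks like this:

    import torch
    import torch.nn.functional as F

    def pack_bayer(raw):                              # (B, 1, H, W) Bayer -> (B, 4, H/2, W/2)
        return torch.cat([raw[:, :, 0::2, 0::2], raw[:, :, 0::2, 1::2],
                          raw[:, :, 1::2, 0::2], raw[:, :, 1::2, 1::2]], dim=1)

    def training_step(model, short_raw, long_rgb, exposure_ratio, optimizer):
        x = pack_bayer(short_raw) * exposure_ratio    # amplify the short exposure to the target brightness
        pred = model(x)                               # the network maps packed raw to full-resolution RGB
        loss = F.l1_loss(pred, long_rgb)              # supervise with the long-exposure reference
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()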


Modern machine learning has enhanced image quality for consumer and mobile photography through low-light denoising, high dynamic range (HDR) imaging, and improved demosaicing, among other applications. While most of these advances have been made for conventional lens-based cameras, there is an emerging body of research on improved photography with lensless cameras that use thin optics such as amplitude or phase masks, diffraction gratings, or diffusion layers. These lensless cameras suit size- and cost-constrained applications, such as tiny robotics and microscopy, that prohibit the use of a large lens.

