Multiscale image blind denoising.

IEEE Trans Image Process

Published: October 2015

Arguably, several thousand papers have been dedicated to image denoising. Most assume a fixed noise model, mainly white Gaussian or Poissonian. This assumption is only valid for raw images. Yet in most images handled by the public, and even by scientists, the noise model is imperfectly known or unknown: end users only have access to the result of a complex image processing chain performed by uncontrolled hardware and software (and sometimes by chemical means). For such images, recent progress in noise estimation makes it possible to estimate, from a single image, a noise model that is simultaneously signal and frequency dependent. We propose here a multiscale denoising algorithm adapted to this broad noise model. The result is a blind denoising algorithm, which we demonstrate on real JPEG images and on scans of old photographs whose formation model is unknown. The consistency of the algorithm is also verified on simulated distorted images. Finally, the algorithm is compared with the only previous state-of-the-art blind denoising method.
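The paper's exact noise model and denoising engine are not reproduced here, but the coarse-to-fine strategy the abstract describes can be sketched roughly as follows. This is a minimal illustrative sketch assuming a single-channel image, a dyadic pyramid, a MAD-based per-scale noise estimate, and a plain Gaussian filter standing in for a real patch-based denoiser such as NL-Bayes.

```python
# Illustrative multiscale blind denoising sketch (not the authors' exact algorithm).
import numpy as np
from scipy import ndimage


def estimate_sigma(img):
    """Robust noise estimate (Donoho-style) from diagonal Haar details:
    for i.i.d. noise of standard deviation sigma, d below also has std sigma."""
    h, w = img.shape
    a = img[:h - h % 2, :w - w % 2]
    d = (a[0::2, 0::2] - a[1::2, 0::2] - a[0::2, 1::2] + a[1::2, 1::2]) / 2.0
    return np.median(np.abs(d)) / 0.6745


def denoise_scale(img, sigma):
    """Placeholder per-scale denoiser: Gaussian smoothing whose strength grows
    with the estimated noise level (a real pipeline would use a patch-based method)."""
    return ndimage.gaussian_filter(img, sigma=max(0.5, sigma / 20.0))


def multiscale_denoise(img, n_scales=3):
    # Dyadic pyramid, finest scale first.
    pyr = [np.asarray(img, dtype=np.float64)]
    for _ in range(n_scales - 1):
        pyr.append(ndimage.zoom(pyr[-1], 0.5, order=1))
    # Coarse-to-fine: denoise each scale, then re-inject the denoised coarse
    # image into the low frequencies of the next finer scale.
    out = denoise_scale(pyr[-1], estimate_sigma(pyr[-1]))
    for fine in reversed(pyr[:-1]):
        den = denoise_scale(fine, estimate_sigma(fine))
        up = ndimage.zoom(out, 2.0, order=1)[:fine.shape[0], :fine.shape[1]]
        pad = ((0, fine.shape[0] - up.shape[0]), (0, fine.shape[1] - up.shape[1]))
        up = np.pad(up, pad, mode="edge")
        low = ndimage.gaussian_filter(den, sigma=1.0)
        out = den - low + ndimage.gaussian_filter(up, sigma=1.0)
    return out


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.tile(np.linspace(0.0, 255.0, 256), (256, 1))
    noisy = clean + rng.normal(0.0, 10.0, clean.shape)
    print("estimated sigma:", round(float(estimate_sigma(noisy)), 2))
    denoised = multiscale_denoise(noisy)
    print("residual std:", round(float(np.std(denoised - clean)), 2))
```

Re-injecting the denoised coarse estimate into the low frequencies of each finer scale is what lets a multiscale scheme suppress low-frequency noise that a single-scale denoiser tends to leave behind.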


Source
http://dx.doi.org/10.1109/TIP.2015.2439041

Publication Analysis

Top Keywords

Keyword               Count
noise model              16
blind denoising          12
denoising algorithm       8
denoising                 5
noise                     5
model                     5
images                    5
multiscale image          4
image blind               4
denoising arguably        4

Similar Publications

Aims: Exposure to air pollution including diesel engine exhaust (DEE) is associated with increased risk of acute myocardial infarction (AMI). Few studies have investigated the risk of AMI according to occupational exposure to DEE. The aim of this study was to evaluate the association between occupational exposure to DEE and the risk of first-time AMI.


Purpose: To develop a cascaded diffusion-based super-resolution model for low-resolution (LR) MR tagging acquisitions, integrated with parallel imaging to achieve highly accelerated MR tagging while enhancing the tag grid quality of low-resolution images.

Methods: We introduced TagGen, a diffusion-based conditional generative model that uses low-resolution MR tagging images as guidance to generate corresponding high-resolution tagging images. The model was developed on 50 patients with long-axis-view, high-resolution tagging acquisitions.
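As a rough illustration of the conditional-diffusion idea described above (not the published TagGen network), the following PyTorch sketch concatenates an upsampled low-resolution image as guidance and trains a small network to predict the injected noise. The noise schedule, architecture, and tensor shapes are assumptions for demonstration only.

```python
# Hedged sketch of conditional diffusion-based super-resolution with LR guidance.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)


class CondDenoiser(nn.Module):
    """Predicts the noise added to the HR image, conditioned on the LR image
    (concatenated as an extra channel) and on the timestep."""

    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x_t, lr_up, t):
        # Broadcast a normalized timestep map as a third input channel.
        t_map = t.float().view(-1, 1, 1, 1).expand_as(x_t) / T
        return self.net(torch.cat([x_t, lr_up, t_map], dim=1))


def training_step(model, hr, lr_up):
    """One denoising-score-matching step: noise the HR target at a random
    timestep and regress the noise, with upsampled LR guidance."""
    b = hr.shape[0]
    t = torch.randint(0, T, (b,))
    a = alphas_bar[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(hr)
    x_t = a.sqrt() * hr + (1 - a).sqrt() * eps
    return nn.functional.mse_loss(model(x_t, lr_up, t), eps)


# Toy usage with random tensors standing in for tagging images.
model = CondDenoiser()
hr = torch.randn(2, 1, 64, 64)      # "high-resolution" targets
lr_up = torch.randn(2, 1, 64, 64)   # upsampled low-resolution guidance
loss = training_step(model, hr, lr_up)
loss.backward()
```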


To enhance high-frequency perceptual information and texture details in remote sensing images, and to address the loss of fine detail that challenges super-resolution reconstruction algorithms during training, this paper proposes an improved remote sensing image super-resolution reconstruction model. The generator network of the model employs multi-scale convolutional kernels to extract image features and uses a multi-head self-attention mechanism to dynamically fuse these features, significantly improving the ability to capture both fine details and global information in remote sensing images. Additionally, the model introduces a multi-stage Hybrid Transformer structure, which processes features progressively from low resolution to high resolution, substantially enhancing reconstruction quality and detail recovery.
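The multi-scale extraction and attention-based fusion described above can be sketched as follows; kernel sizes, channel counts, and the per-pixel token layout are assumptions, not the paper's exact architecture.

```python
# Hedged sketch: parallel multi-scale convolutions fused by multi-head self-attention.
import torch
import torch.nn as nn


class MultiScaleAttentionFusion(nn.Module):
    def __init__(self, in_ch=3, ch=32, heads=4):
        super().__init__()
        # Parallel branches with different receptive fields.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, ch, k, padding=k // 2) for k in (3, 5, 7)
        ])
        # Self-attention over the three branch outputs at every spatial site.
        self.attn = nn.MultiheadAttention(embed_dim=ch, num_heads=heads,
                                          batch_first=True)
        self.proj = nn.Conv2d(ch, ch, 1)

    def forward(self, x):
        feats = [b(x) for b in self.branches]        # 3 x (B, C, H, W)
        b, c, h, w = feats[0].shape
        # One token per (pixel, scale): (B*H*W, 3, C).
        tokens = torch.stack(feats, dim=-1)          # (B, C, H, W, 3)
        tokens = tokens.permute(0, 2, 3, 4, 1).reshape(b * h * w, 3, c)
        fused, _ = self.attn(tokens, tokens, tokens)
        fused = fused.mean(dim=1).reshape(b, h, w, c).permute(0, 3, 1, 2)
        return self.proj(fused)


# Toy usage on a small remote-sensing-like patch.
x = torch.randn(1, 3, 32, 32)
print(MultiScaleAttentionFusion()(x).shape)  # torch.Size([1, 32, 32, 32])
```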


Near-Field Clutter Mitigation in Speckle Tracking Echocardiography.

Ultrasound Med Biol

January 2025

Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong; Biomedical Engineering Programme, The University of Hong Kong, Hong Kong.

Objective: Near-field (NF) clutter filters are critical for unveiling true myocardial structure and dynamics. Randomized singular value decomposition (rSVD) stands out for its proven computational efficiency and robustness. This study investigates the effect of rSVD-based NF clutter filtering on myocardial motion estimation.
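A hedged sketch of the general rSVD-based clutter-filtering idea (not the study's specific pipeline): stack frames into a Casorati matrix, compute a randomized SVD, and subtract the dominant low-rank components that model slowly varying near-field clutter. The rank choices and synthetic data below are assumptions.

```python
# Sketch of randomized-SVD near-field clutter suppression on a frame stack.
import numpy as np


def randomized_svd(A, k, oversample=10, rng=None):
    """Basic randomized SVD (Halko-style range finder)."""
    rng = np.random.default_rng(rng)
    omega = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ omega)              # approximate range of A
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]


def remove_clutter(frames, clutter_rank=1):
    """frames: (n_frames, H, W). Clutter is modeled as the high-energy,
    slowly varying low-rank part of the Casorati matrix and subtracted."""
    n, h, w = frames.shape
    casorati = frames.reshape(n, h * w).T       # (pixels, frames)
    U, s, Vt = randomized_svd(casorati, k=min(n, 8))
    clutter = U[:, :clutter_rank] @ np.diag(s[:clutter_rank]) @ Vt[:clutter_rank]
    return (casorati - clutter).T.reshape(n, h, w)


# Toy demo: static bright "clutter" plus weak frame-to-frame variation.
rng = np.random.default_rng(0)
static = 5.0 * rng.random((64, 64))
frames = np.stack([static + 0.2 * rng.random((64, 64)) for _ in range(20)])
filtered = remove_clutter(frames)
print(frames.std(), filtered.std())
```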


Objective: The study aims to systematically characterize the effect of CT parameter variations on images and lung radiomic and deep features, and to evaluate the ability of different image harmonization methods to mitigate the observed variations.

Approach: A retrospective in-house sinogram dataset of 100 low-dose chest CT scans was reconstructed by varying the radiation dose (100%, 25%, 10%) and reconstruction kernel (smooth, medium, sharp). A set of image processing, convolutional neural network (CNN)-based, and generative adversarial network (GAN)-based methods was trained to harmonize all image conditions to a reference condition (100% dose, medium kernel).
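A minimal sketch of the paired harmonization setup described above, assuming a small residual CNN trained with an L1 loss to map a varied-condition slice to its reference-condition counterpart; the network and data here are placeholders, not the study's models.

```python
# Hedged sketch: CNN harmonization of paired reconstructions to a reference condition.
import torch
import torch.nn as nn

harmonizer = nn.Sequential(          # tiny residual-style mapping network
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(harmonizer.parameters(), lr=1e-4)


def train_step(varied, reference):
    """varied: slice at e.g. 10% dose / sharp kernel; reference: the same
    slice at 100% dose / medium kernel. L1 keeps intensity errors interpretable."""
    opt.zero_grad()
    pred = varied + harmonizer(varied)       # learn the residual correction
    loss = nn.functional.l1_loss(pred, reference)
    loss.backward()
    opt.step()
    return loss.item()


# Toy paired batch standing in for (varied condition, reference condition).
varied = torch.randn(4, 1, 64, 64)
reference = varied + 0.1 * torch.randn_like(varied)
print(train_step(varied, reference))
```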

