Image denoising algorithms often assume an additive white Gaussian noise (AWGN) process that is independent of the actual RGB values. Such approaches are not fully automatic and cannot effectively remove the color noise produced by today's CCD digital cameras. In this paper, we propose a unified framework for two tasks: the automatic estimation and removal of color noise from a single image using piecewise smooth image models. We introduce the noise level function (NLF), a continuous function describing the noise level as a function of image brightness. We then estimate an upper bound of the real noise level function by fitting a lower envelope to the standard deviations of per-segment image variances. For denoising, the chrominance of the color noise is significantly reduced by projecting pixel values onto a line fit to the RGB values in each segment. A Gaussian conditional random field (GCRF) is then constructed to recover the underlying clean image from the noisy input. Extensive experiments show that the proposed algorithm outperforms state-of-the-art denoising algorithms.
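As a rough illustration of the per-segment chrominance step described above (a minimal sketch, not the authors' implementation; the segmentation input, the function names, and the NumPy/SVD choices are assumptions), the following fits a line to each segment's RGB values and projects its pixels onto that line, discarding the orthogonal color-noise component.

```python
import numpy as np

def project_segment_to_rgb_line(pixels):
    """Project one segment's RGB samples onto the line fit to them.

    pixels: (N, 3) array of RGB values from a single image segment.
    The component orthogonal to the segment's dominant color direction
    (mostly chrominance noise) is discarded.
    """
    mean = pixels.mean(axis=0)
    centered = pixels - mean
    # First right-singular vector = direction of the best-fit RGB line.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    coords = centered @ direction          # position of each pixel along the line
    return mean + np.outer(coords, direction)

def suppress_chrominance_noise(image, segment_labels):
    """Apply the per-segment line projection over a labeled segmentation."""
    out = image.astype(np.float64)
    flat = out.reshape(-1, 3)
    labels = segment_labels.reshape(-1)
    for seg in np.unique(labels):
        idx = np.where(labels == seg)[0]
        if idx.size >= 3:                  # need a few samples to fit a line
            flat[idx] = project_segment_to_rgb_line(flat[idx])
    return out
```

The NLF estimation and GCRF inference stages of the paper are not sketched here; the input would first be segmented with any off-the-shelf over-segmentation.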
DOI: http://dx.doi.org/10.1109/TPAMI.2007.1176
Arq Bras Oftalmol
January 2025
Research Nucleus in Neuroscience and Behavior and Applied Neuroscience, Universidade de São Paulo, São Paulo, SP, Brazil.
Purpose: Amblyopia is a cortical neurological disorder caused by abnormal visual experience during the critical period of visual development. Recent work has shown that, in addition to the well-known visual alterations such as changes in visual acuity, several perceptual aspects of vision are affected. This study aims to analyze and compare the effects of different types of amblyopia on visual color processing and to determine whether these effects are correlated with visual acuity.
Quantitative phase imaging (QPI) has become a valuable tool in biomedical research due to its ability to quantify refractive index variations of live cells and tissues. For example, three-dimensional differential phase contrast (3D DPC) imaging uses through-focus images captured under different illumination patterns, deconvolved with a computed 3D phase transfer function (PTF), to reconstruct the 3D refractive index. In conventional 3D DPC with semi-circular illumination, the partially spatially coherent illumination often diminishes phase contrast and exacerbates inherent noise, and it can leave a large number of zero values in the 3D PTF, resulting in strong low-frequency artifacts and degraded imaging resolution.
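To make the role of those near-zero PTF values concrete, here is a generic Tikhonov-regularized (weighted least-squares) inversion of the kind commonly used for DPC reconstruction; it is only a sketch under assumed inputs (background-normalized intensity stacks and precomputed 3D PTFs), not the cited paper's method.

```python
import numpy as np

def dpc_phase_reconstruction(stacks, ptfs, reg=1e-2):
    """Tikhonov-regularized (least-squares) inversion of DPC measurements.

    stacks: list of background-normalized 3D intensity stacks,
            one per illumination pattern.
    ptfs:   list of matching 3D phase transfer functions in Fourier space.
    reg:    regularization weight that keeps near-zero PTF values
            from blowing up the inverse filter.
    """
    num = np.zeros(ptfs[0].shape, dtype=np.complex128)
    den = np.zeros(ptfs[0].shape, dtype=np.float64)
    for stack, ptf in zip(stacks, ptfs):
        spectrum = np.fft.fftn(stack)
        num += np.conj(ptf) * spectrum
        den += np.abs(ptf) ** 2
    phase_spectrum = num / (den + reg)     # reg avoids division by ~zero
    return np.real(np.fft.ifftn(phase_spectrum))
```

Where the summed |PTF|^2 is close to zero, the regularizer dominates and those frequencies are suppressed rather than amplified, which is exactly where the low-frequency artifacts mentioned above originate.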
PLoS One
January 2025
Institute of Robotics and Cybernetics, Faculty of Electrical Engineering and Information Technology, Slovak University of Technology in Bratislava, Bratislava, Slovakia.
This paper introduces a novel approach for the offline estimation of stationary moving average processes and extends it to efficient online estimation of non-stationary processes. The novelty lies in a technique for solving the autocorrelation-function matching problem, leveraging the fact that the autocorrelation function of the colored noise equals the autocorrelation function of the coefficients of the moving average process. This yields a system of nonlinear equations that is solved to estimate the model parameters.
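A rough sketch of that autocorrelation-matching idea (not the paper's offline or online algorithm; the sample-ACF estimator, the least-squares solver choice, and the function names are assumptions): the MA coefficients are chosen so that their own autocorrelation reproduces the sample autocorrelation of the observed colored noise.

```python
import numpy as np
from scipy.optimize import least_squares

def sample_autocovariance(x, max_lag):
    """Biased sample autocovariance of x at lags 0..max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])

def estimate_ma_coefficients(x, order):
    """Estimate MA(order) coefficients b_0..b_order by autocorrelation matching.

    Solves sum_k b_k * b_{k+tau} = r(tau), tau = 0..order, in the
    least-squares sense (driving-noise variance absorbed into the b_k).
    """
    r = sample_autocovariance(x, order)

    def residuals(b):
        return [np.dot(b[:len(b) - tau], b[tau:]) - r[tau]
                for tau in range(order + 1)]

    # Initial guess: impulse-like coefficients scaled to match r(0).
    b0 = np.zeros(order + 1)
    b0[0] = np.sqrt(max(r[0], 1e-12))
    return least_squares(residuals, b0).x
```

This covers only the stationary, offline case; the abstract's online extension to non-stationary processes is not sketched.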
Front Neurorobot
January 2025
School of Business, Lingnan University, Hong Kong, China.
With the rapid development of tourism, the concentration of visitor flows poses significant challenges for public safety management, especially in low-light and highly occluded environments, where existing pedestrian detection technologies often struggle to achieve satisfactory accuracy. Although infrared images perform well under low-light conditions, they lack color and detail, making them susceptible to background noise interference, particularly in complex outdoor environments where the similarity between heat sources and pedestrian features further reduces detection accuracy. To address these issues, this paper proposes the FusionU10 model, which combines information from both infrared and visible light images.
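The excerpt does not describe FusionU10's architecture, so the snippet below is only a hypothetical pixel-level blend (the function name and weighting scheme are invented for illustration) showing the basic complementarity such detectors exploit: the visible image contributes color and detail, the infrared image contributes low-light contrast.

```python
import numpy as np

def fuse_ir_visible(ir, visible, ir_weight=0.5):
    """Naive pixel-level fusion of an infrared and a visible-light image.

    ir:      (H, W) thermal image, values in [0, 1].
    visible: (H, W, 3) RGB image, values in [0, 1].
    Blends the IR intensity into the luminance of the visible image so the
    fused result keeps color and detail while highlighting warm regions.
    """
    luminance = visible.mean(axis=2)
    fused_luma = (1.0 - ir_weight) * luminance + ir_weight * ir
    # Rescale each RGB channel to carry the fused luminance.
    scale = fused_luma / np.maximum(luminance, 1e-6)
    return np.clip(visible * scale[..., None], 0.0, 1.0)
```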
Sensors (Basel)
January 2025
Department of Civil Engineering and Engineering Management, National Quemoy University, Kinmen 89250, Taiwan.
Ground-based LiDAR technology has been widely applied in various fields for acquiring 3D point cloud data, including spatial coordinates, digital color information, and laser reflectance intensities (I-values). These datasets preserve the digital information of scanned objects, supporting value-added applications. However, raw point cloud data visually represent spatial features but lack attribute information, posing challenges for automated object classification and effective management.
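The excerpt does not state how classification is performed, so as a purely illustrative sketch (the feature layout, the random-forest choice, and the function names are assumptions), per-point coordinates, RGB colors, and I-values can be stacked into one feature matrix and fed to a generic classifier.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def build_point_features(points_xyz, rgb, intensity):
    """Stack coordinates, colors, and I-values into an (N, 7) feature matrix."""
    return np.column_stack([points_xyz, rgb, intensity])

def train_point_classifier(points_xyz, rgb, intensity, labels):
    """Fit a generic per-point classifier on labeled points."""
    features = build_point_features(points_xyz, rgb, intensity)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(features, labels)
    return model

# Usage (hypothetical arrays):
# model = train_point_classifier(xyz_train, rgb_train, i_train, y_train)
# predicted = model.predict(build_point_features(xyz_new, rgb_new, i_new))
```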