Many state-of-the-art works address extending the camera depth of field (DoF) through the joint optimization of an optical component (typically a phase mask) and a digital processing step, the latter being either a deconvolution with an infinite support or a neural network. An extended DoF can be used either to image objects sharply over a greater range of distances or to reduce manufacturing costs thanks to a relaxed tolerance on the sensor position. Here, we study the case of embedded processing restricted to a single convolution with a finite kernel size.
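As a minimal 1-D sketch of restoration constrained to a single finite-support convolution (an illustrative least-squares design, not the paper's method; the helper name is hypothetical): a fixed-size kernel is chosen so that its convolution with the blur approximates an impulse.

```python
import numpy as np

def finite_deconv_kernel(h, L, lam=1e-3):
    """Design a finite-support deconvolution kernel g of length L such
    that h * g approximates a centred impulse, via regularized least
    squares (hypothetical helper for illustration only)."""
    n = len(h) + L - 1
    # Convolution matrix: column j is h shifted down by j samples.
    A = np.zeros((n, L))
    for j in range(L):
        A[j:j + len(h), j] = h
    d = np.zeros(n)
    d[n // 2] = 1.0  # target response: centred impulse
    # Regularized normal equations keep the kernel well-behaved where
    # the blur's frequency response is small.
    return np.linalg.solve(A.T @ A + lam * np.eye(L), A.T @ d)

h = np.array([0.25, 0.5, 0.25])   # toy defocus blur
g = finite_deconv_kernel(h, L=11)
residual = np.convolve(h, g)      # should concentrate at the centre
```

The regularization weight `lam` trades sharpness of the restored impulse against noise amplification, mirroring the usual deconvolution trade-off under a hard constraint on kernel support.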

We present a novel, to the best of our knowledge, patch-based approach for depth regression from defocus blur. Most state-of-the-art methods for depth from defocus (DFD) classify each patch among a discrete set of candidate defocus blurs, each associated with a depth, which induces errors because depth actually varies continuously. Here, we propose to adapt a simple classification model using a soft-assignment encoding of the true depth into a membership probability vector during training and a regression scale to predict intermediate depth values.
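The soft-assignment idea can be sketched as follows (a common linear-interpolation encoding between the two nearest depth bins; the paper's exact encoding may differ, and all names here are illustrative):

```python
import numpy as np

def soft_assign(depth, bins):
    """Encode a continuous depth as a membership probability vector
    over discrete depth bins, splitting mass between the two nearest
    bins by linear interpolation."""
    bins = np.asarray(bins, dtype=float)
    p = np.zeros(len(bins))
    if depth <= bins[0]:
        p[0] = 1.0
        return p
    if depth >= bins[-1]:
        p[-1] = 1.0
        return p
    i = np.searchsorted(bins, depth) - 1          # left neighbour bin
    w = (depth - bins[i]) / (bins[i + 1] - bins[i])
    p[i], p[i + 1] = 1.0 - w, w
    return p

def decode(p, bins):
    """Regress an intermediate depth as the expectation over bins."""
    return float(np.dot(p, bins))

bins = [1.0, 2.0, 3.0, 4.0]        # calibrated candidate depths
p = soft_assign(2.25, bins)        # mass shared between 2 m and 3 m bins
```

Training a classifier against such soft targets, then decoding by expectation, lets a discrete model output depths between its calibrated bins.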

In this paper, we propose what we believe is a new monocular depth estimation algorithm based on local estimation of defocus blur, an approach referred to as depth from defocus (DFD). Using a limited set of calibration images, we directly learn the image covariance, which encodes both scene and blur (i.e.

We present an ultracompact infrared cryogenic camera, with a 120° field of view, integrated inside a standard Sofradir detector dewar cooler assembly (DDCA). The multichannel optical architecture produces four nonredundant images on a single SCORPIO detector with a pixel pitch of 15 μm. This ultraminiaturized optical system adds very little optical and mechanical mass to be cooled in the DDCA: the cool-down time is comparable to that of an equivalent DDCA without an imaging function.

J Opt Soc Am A Opt Image Sci Vis, December 2014
In this paper we present a performance model for depth estimation using single image depth from defocus (SIDFD). Our model is based on an original expression of the Cramér-Rao bound (CRB) in this context. We show that this model is consistent with the expected behavior of SIDFD.
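The paper's specific bound is not reproduced here, but in its generic scalar form the Cramér-Rao bound limits the variance of any unbiased estimator \(\hat z\) of the depth \(z\) from an observed image \(\mathbf{y}\):

```latex
\sigma_{\hat z}^{2} \;\ge\; \mathrm{CRB}(z) \;=\; I(z)^{-1},
\qquad
I(z) \;=\; \mathbb{E}\!\left[\left(\frac{\partial \ln p(\mathbf{y};z)}{\partial z}\right)^{\!2}\right],
```

where \(I(z)\) is the Fisher information of the defocused observation with respect to depth; the SIDFD performance model amounts to evaluating such a bound for the camera's depth-dependent blur.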

In this paper, we propose a new method for passive depth estimation based on the combination of a camera with longitudinal chromatic aberration and an original depth from defocus (DFD) algorithm. Indeed, a chromatic lens combined with an RGB sensor produces three images with spectrally variable in-focus planes, which eases the task of depth extraction with DFD. We first propose an original DFD algorithm dedicated to color images with spectrally varying defocus blur.

This paper deals with point target detection in nonstationary backgrounds such as cloud scenes in aerial or satellite imaging. We propose an original spatial detection method based on first- and second-order modeling (i.e.

J Opt Soc Am A Opt Image Sci Vis, July 2009
We address performance modeling of super-resolution (SR) techniques. Super-resolution combines several images of the same scene to produce an image with better resolution and contrast. We propose a discrete-data, continuous-reconstruction framework to conduct SR performance analysis and derive a theoretical expression of the reconstruction mean squared error (MSE) as a compact, computationally tractable function of the signal-to-noise ratio (SNR), scene model, sensor transfer function, number of frames, interframe translation motion, and SR reconstruction filter.
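The paper's compact expression is not reproduced here, but for a linear reconstruction filter \(\hat F\) a generic Fourier-domain MSE decomposition of this kind, assuming a sensor transfer function \(H\), scene power spectral density \(S_x\), and noise variance \(\sigma^2\), takes the form:

```latex
\mathrm{MSE}
\;=\;
\underbrace{\int \bigl|1 - \hat F(\nu)\,H(\nu)\bigr|^{2}\, S_x(\nu)\, d\nu}_{\text{reconstruction bias}}
\;+\;
\underbrace{\sigma^{2} \int \bigl|\hat F(\nu)\bigr|^{2}\, d\nu}_{\text{amplified noise}} ,
```

making explicit the trade-off between residual blur/aliasing (first term) and noise amplification by the reconstruction filter (second term).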

IEEE Trans Image Process, November 2006
Super-resolution (SR) techniques make use of subpixel shifts between frames in an image sequence to yield higher resolution images. We propose an original observation model devoted to the case of nonisometric inter-frame motion as required, for instance, in the context of airborne imaging sensors. First, we describe how the main observation models used in the SR literature deal with motion, and we explain why they are not suited for nonisometric motion.
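As background, a generic SR observation model from the literature (not the paper's specific nonisometric model) writes each low-resolution frame \(\mathbf{y}_k\) as a warped, blurred, decimated version of the high-resolution scene \(\mathbf{x}\):

```latex
\mathbf{y}_k \;=\; \mathbf{D}\,\mathbf{B}\,\mathbf{M}_k\,\mathbf{x} \;+\; \mathbf{n}_k,
\qquad k = 1,\dots,K,
```

where \(\mathbf{M}_k\) is the geometric inter-frame motion, \(\mathbf{B}\) the optical and sensor blur, \(\mathbf{D}\) the decimation, and \(\mathbf{n}_k\) noise; the question the paper addresses is how to formulate \(\mathbf{M}_k\) when the motion is not an isometry.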

IEEE Trans Image Process, October 2006
Robust estimation of the optical flow is addressed through a multiresolution energy minimization. It involves repeated evaluation of spatial and temporal gradients of image intensity, which usually relies on bilinear interpolation and image filtering. We propose to base both computations on a single pyramidal cubic B-spline model of the image intensity.
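The benefit of a single continuous intensity model is that subpixel interpolation and spatial gradients come from the same representation. A minimal sketch using SciPy's cubic bivariate spline (a stand-in for the paper's pyramidal model):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Toy image: a smooth ramp whose analytic gradients are known.
h, w = 32, 48
ys, xs = np.arange(h), np.arange(w)
img = np.add.outer(2.0 * ys, 3.0 * xs)   # I(y, x) = 2*y + 3*x

# One cubic B-spline model of image intensity.
spline = RectBivariateSpline(ys, xs, img, kx=3, ky=3)

# Both the interpolated intensity and its spatial gradients are
# evaluated from the same model at a subpixel position.
y0, x0 = 10.3, 20.7
I  = float(spline.ev(y0, x0))            # intensity
Iy = float(spline.ev(y0, x0, dx=1))      # dI/dy, analytically 2
Ix = float(spline.ev(y0, x0, dy=1))      # dI/dx, analytically 3
```

For a linear ramp the cubic spline reproduces the image exactly, so the evaluated derivatives match the analytic gradients; on real images the same calls give consistent, filter-free gradient estimates at any subpixel location.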

We address the issue of distinguishing point objects from a cluttered background and estimating their position by image processing. We are interested in the specific context in which the object's signature varies significantly with its random subpixel location because of aliasing. The conventional matched filter neglects this phenomenon, which causes a consistent degradation of detection performance.
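For context, the conventional matched filter the paper improves upon correlates one fixed target signature with the image; a minimal sketch (toy signature and data, illustrative only) shows why it is blind to signature variation, since a single template is used at every position:

```python
import numpy as np

def matched_filter_score(image, template):
    """Correlate a zero-mean template with the image (valid region).
    Conventional matched filtering: a single fixed signature, which
    ignores the subpixel-location-dependent aliasing the paper targets."""
    t = template - template.mean()
    th, tw = t.shape
    H, W = image.shape
    out = np.empty((H - th + 1, W - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + th, j:j + tw] * t)
    return out

rng = np.random.default_rng(0)
bg = rng.normal(0.0, 0.1, (64, 64))                    # cluttered background
psf = np.outer([0.25, 0.5, 0.25], [0.25, 0.5, 0.25])   # toy 3x3 signature
bg[30:33, 40:43] += 10.0 * psf                         # inject a point target
score = matched_filter_score(bg, psf)
peak = np.unravel_index(np.argmax(score), score.shape) # detection map peak
```

When the true signature shifts subpixel and aliases, the fixed template above mismatches it, degrading the score peak; modeling that variation is the paper's contribution.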