Many state-of-the-art works address extending the camera depth of field (DoF) via the joint optimization of an optical component (typically a phase mask) and a digital processing step, the latter being either a deconvolution with an infinite support or a neural network. An extended DoF can serve either to keep objects sharp over a wider range of distances or to reduce manufacturing costs thanks to a relaxed tolerance on the sensor position. Here, we study the case of embedded processing restricted to a single convolution with a finite kernel size.
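To make the finite-kernel constraint concrete, here is a minimal NumPy sketch in which a Wiener deconvolution filter is truncated to a small spatial support. The Gaussian PSF, the SNR value, and the truncation strategy are illustrative assumptions, not the kernel optimized in the paper.

```python
import numpy as np

def finite_wiener_kernel(psf, snr=100.0, support=7):
    """Truncate a Wiener deconvolution filter to a finite support
    (an illustrative way to obtain a single finite-size kernel)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))           # PSF centered at the origin
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)    # Wiener inverse filter
    w = np.real(np.fft.fftshift(np.fft.ifft2(W)))    # back to the spatial domain
    cy, cx = w.shape[0] // 2, w.shape[1] // 2
    r = support // 2
    k = w[cy - r:cy + r + 1, cx - r:cx + r + 1]      # keep only a finite window
    return k / k.sum()                               # preserve the mean image level

# Toy defocus PSF: an isotropic Gaussian on a 64x64 grid (assumption).
y, x = np.mgrid[-32:32, -32:32]
psf = np.exp(-(x ** 2 + y ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
kernel = finite_wiener_kernel(psf, support=7)        # single 7x7 restoration kernel
```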
Co-design methods have been introduced to jointly optimize various optical systems along with neural network processing. In the literature, the aperture is generally kept fixed, although it controls an important trade-off between the depth of focus, the dynamic range, and the noise level of an image. In contrast, we include the aperture in the co-design by using a differentiable image formation pipeline that models the effect of the aperture on the image noise, dynamic range, and blur.
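A minimal PyTorch sketch of such a differentiable pipeline follows. The specific scalings of blur, signal, and noise with the f-number, and all names (image_formation, full_well, depth_blur), are simplified assumptions for illustration, not the paper's model.

```python
import torch

def image_formation(scene, depth_blur, f_number, full_well=1.0):
    """Toy differentiable sensor model: the aperture (f-number) scales
    the defocus blur, the collected flux, and hence the photon noise."""
    # Defocus blur grows when the aperture opens (f-number drops).
    sigma = depth_blur / f_number
    radius = 7
    coords = torch.arange(-radius, radius + 1, dtype=scene.dtype)
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    g = g / g.sum()
    kernel = torch.outer(g, g)[None, None]           # separable Gaussian PSF
    blurred = torch.nn.functional.conv2d(
        scene[None, None], kernel, padding=radius)[0, 0]
    # Collected flux scales with aperture area, i.e. 1 / f_number**2.
    signal = blurred / f_number ** 2
    # Soft saturation stands in for the dynamic-range limit.
    signal = full_well * torch.tanh(signal / full_well)
    # Gaussian approximation of photon (shot) noise.
    noise = torch.sqrt(signal.clamp(min=1e-6)) * 0.01 * torch.randn_like(signal)
    return signal + noise

f_number = torch.tensor(4.0, requires_grad=True)     # aperture as a trainable parameter
scene = torch.rand(64, 64)
out = image_formation(scene, depth_blur=torch.tensor(3.0), f_number=f_number)
out.mean().backward()                                # gradients flow back to the aperture
print(f_number.grad)
```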
We present what is, to the best of our knowledge, a novel patch-based approach for depth regression from defocus blur. Most state-of-the-art methods for depth from defocus (DFD) use patch classification over a set of candidate defocus blurs, each related to a depth, which induces errors because depth actually varies continuously. Here, we propose to adapt a simple classification model using a soft-assignment encoding of the true depth into a membership probability vector during training, together with a regression scale to predict intermediate depth values.
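The PyTorch sketch below shows one plausible reading of this soft-assignment encoding, with intermediate depths recovered as a probability-weighted average over the bins; the bin range, temperature, and function names are assumptions, not the paper's exact formulation.

```python
import torch

def soft_assign(depth, bin_centers, temperature=0.1):
    """Encode a continuous depth as a membership probability vector
    over discrete depth bins (soft assignment, assumed form)."""
    d2 = (depth[:, None] - bin_centers[None, :]) ** 2
    return torch.softmax(-d2 / temperature, dim=1)

def expected_depth(probs, bin_centers):
    """Regress an intermediate depth as the probability-weighted mean."""
    return (probs * bin_centers[None, :]).sum(dim=1)

bins = torch.linspace(1.0, 5.0, 9)                 # candidate depths (assumed range)
target = soft_assign(torch.tensor([2.3]), bins)    # soft training label
pred = expected_depth(target, bins)                # ~2.3, despite the discrete bins
```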
Opt Express, October 2021
In this paper, we propose a new method to jointly design a sensor and its neural-network-based processing. Using a differential ray tracing (DRT) model, we simulate the sensor point spread function (PSF) and its partial derivatives with respect to any of the sensor lens parameters. The proposed ray tracing model makes neither a thin-lens nor a paraxial approximation, and it is valid for any field of view and point source position.
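As a toy illustration of differentiating a PSF with respect to a lens parameter, the PyTorch sketch below splats ray landing positions onto a sensor grid with a Gaussian kernel so that autograd can propagate gradients through the rendering; the one-parameter ray model is a deliberate simplification standing in for the paper's full DRT model.

```python
import torch

def toy_psf(focus_param, grid, n_rays=512):
    """Toy differentiable 'ray tracing': rays leave the pupil at heights h,
    land on the sensor at x = h * focus_param, and are splatted onto the
    grid with a Gaussian kernel so the PSF is differentiable in focus_param."""
    h = torch.linspace(-1.0, 1.0, n_rays)
    x = h * focus_param                            # landing positions on the sensor
    # Soft histogram: kernel density estimate on the sensor grid.
    w = torch.exp(-(grid[None, :] - x[:, None]) ** 2 / (2 * 0.05 ** 2))
    psf = w.sum(dim=0)
    return psf / psf.sum()

focus = torch.tensor(0.3, requires_grad=True)      # stand-in lens parameter
grid = torch.linspace(-1.0, 1.0, 65)
psf = toy_psf(focus, grid)
spread = (psf * grid ** 2).sum()                   # second moment of the PSF
spread.backward()                                  # d(spread)/d(focus_param)
print(focus.grad)
```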
In this paper, we propose what we believe is a new monocular depth estimation algorithm based on local estimation of defocus blur, an approach referred to as depth from defocus (DFD). Using a limited set of calibration images, we directly learn the image covariance, which encodes both scene and blur (i.e., depth) information.
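A minimal NumPy sketch of covariance-based patch classification follows: one empirical covariance matrix is learned per calibration depth, and a test patch is assigned to the depth whose covariance gives it the highest Gaussian log-likelihood. The data layout and the log-likelihood criterion are assumptions standing in for the paper's exact learning scheme.

```python
import numpy as np

def learn_patch_covariances(calib_patches_by_depth, eps=1e-6):
    """From calibration patches grouped by known depth, learn one
    empirical covariance matrix per depth (layout is an assumption)."""
    covs = {}
    for depth, patches in calib_patches_by_depth.items():
        X = np.stack([p.ravel() for p in patches])   # (n_patches, patch_dim)
        X = X - X.mean(axis=0, keepdims=True)
        covs[depth] = X.T @ X / len(X) + eps * np.eye(X.shape[1])
    return covs

def classify_depth(patch, covs):
    """Assign the patch to the depth whose covariance maximizes the
    zero-mean Gaussian log-likelihood (standard criterion, assumed)."""
    v = patch.ravel() - patch.mean()
    def loglik(S):
        sign, logdet = np.linalg.slogdet(S)
        return -0.5 * (logdet + v @ np.linalg.solve(S, v))
    return max(covs, key=lambda d: loglik(covs[d]))

# Usage (synthetic): two depths with differently correlated patches.
rng = np.random.default_rng(0)
calib = {1.0: [rng.normal(size=(8, 8)) for _ in range(200)],
         2.0: [np.cumsum(rng.normal(size=(8, 8)), axis=1) for _ in range(200)]}
covs = learn_patch_covariances(calib)
print(classify_depth(calib[2.0][0], covs))           # expected: 2.0
```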