Many state-of-the-art works address increasing the camera depth of field (DoF) via the joint optimization of an optical component (typically a phase mask) and a digital processing step, the latter being either a deconvolution with infinite support or a neural network. An extended DoF can be used either to see objects sharply from a greater distance or to reduce manufacturing costs thanks to a relaxed tolerance on the sensor position. Here, we study the case of embedded processing restricted to a single convolution with a finite kernel size.
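As a rough illustration of this constraint, the sketch below fits only a finite restoration kernel to undo a fixed toy defocus blur. The kernel size, blur model, and optimizer settings are all assumptions, and a real co-design would also optimize the phase-mask parameters:

```python
# Minimal sketch (not the authors' code): restoring a defocused image with a
# single finite-size convolution, the embedded-processing constraint above.
import torch
import torch.nn.functional as F

def restore(blurred, kernel):
    # One convolution with a finite kernel: the entire digital processing step.
    k = kernel.view(1, 1, *kernel.shape)
    return F.conv2d(blurred.view(1, 1, *blurred.shape), k,
                    padding=kernel.shape[-1] // 2).squeeze()

sharp = torch.rand(64, 64)
psf = torch.ones(1, 1, 5, 5) / 25.0                       # toy defocus blur
blurred = F.conv2d(sharp.view(1, 1, 64, 64), psf, padding=2).squeeze()

kernel = torch.zeros(11, 11, requires_grad=True)          # finite support
opt = torch.optim.Adam([kernel], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = F.mse_loss(restore(blurred, kernel), sharp)
    loss.backward()
    opt.step()
```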
Co-design methods have been introduced to jointly optimize various optical systems along with neural network processing. In the literature, the aperture is generally a fixed parameter, although it controls an important trade-off between the depth of focus, the dynamic range, and the noise level in an image. In contrast, we include the aperture in co-design by using a differentiable image formation pipeline that models the effect of the aperture on image noise, dynamic range, and blur.
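A minimal sketch of what such a differentiable pipeline can look like, under assumed models (Gaussian defocus blur growing with aperture, Poisson-like photon noise, saturation for the dynamic range); this is an illustration, not the paper's actual pipeline:

```python
# The aperture radius scales blur, photon noise, and saturation, and the
# whole chain is differentiable, so the aperture can be co-designed by
# gradient descent. All models and constants here are assumptions.
import torch

def simulate(scene, aperture, defocus=1.0, full_well=1.0):
    # Defocus blur width grows with aperture (geometric-optics approximation).
    sigma = defocus * aperture
    x = torch.arange(-7, 8, dtype=torch.float32)
    g = torch.exp(-x**2 / (2 * sigma**2))
    g = g / g.sum()
    kernel = torch.outer(g, g).view(1, 1, 15, 15)
    blurred = torch.nn.functional.conv2d(
        scene.view(1, 1, *scene.shape), kernel, padding=7).squeeze()
    # Photon flux scales with aperture area; clipping models dynamic range.
    flux = blurred * aperture**2
    noisy = flux + torch.sqrt(flux.clamp(min=1e-6)) * torch.randn_like(flux)
    return noisy.clamp(max=full_well)

aperture = torch.tensor(0.5, requires_grad=True)   # co-designed parameter
image = simulate(torch.rand(64, 64), aperture)
image.mean().backward()                            # gradients reach the aperture
print(aperture.grad)
```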
We present a novel, to the best of our knowledge, patch-based approach for depth regression from defocus blur. Most state-of-the-art methods for depth from defocus (DFD) use a patch classification approach over a set of potential defocus blurs, each related to a depth, which induces errors due to the continuous variation of depth. Here, we propose to adapt a simple classification model using a soft-assignment encoding of the true depth into a membership probability vector during training and a regression scale to predict intermediate depth values.
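A sketch of such a soft-assignment encoding, as we read the abstract (the bin layout and kernel width are assumptions): instead of a one-hot class label, the true depth becomes a membership probability vector over the discrete blur classes, and an expectation over the bin depths recovers a continuous prediction.

```python
import numpy as np

depth_bins = np.linspace(1.0, 5.0, 9)     # depths tied to candidate blurs (m)

def soft_assign(depth, bins, tau=0.25):
    # Membership decreases smoothly with distance to each bin's depth.
    w = np.exp(-(bins - depth) ** 2 / (2 * tau ** 2))
    return w / w.sum()

def decode(probs, bins):
    # Regression over the class scale: expectation of depth under the
    # predicted membership vector, allowing intermediate values.
    return float(probs @ bins)

target = soft_assign(2.3, depth_bins)      # training target for depth 2.3 m
print(decode(target, depth_bins))          # ~2.3, not snapped to a bin
```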
Opt Express, October 2021
In this paper, we propose a new method to jointly design a sensor and its neural-network-based processing. Using a differential ray tracing (DRT) model, we simulate the sensor point spread function (PSF) and its partial derivative with respect to any of the sensor lens parameters. The proposed ray tracing model makes neither a thin-lens nor a paraxial approximation and is valid for any field of view and point source position.
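The sketch below illustrates the principle on a deliberately tiny example: a single refracting spherical surface traced in 2D with exact Snell refraction (no thin-lens or paraxial assumption), where automatic differentiation returns the derivative of the sensor hit point with respect to a lens parameter, here the surface radius. The geometry and all parameter values are invented:

```python
import torch

def trace(R, y0=0.2, z_src=-10.0, z_img=15.0, n1=1.0, n2=1.5):
    p0 = torch.stack([torch.as_tensor(y0), torch.as_tensor(z_src)])
    d = -p0 / p0.norm()                    # ray aimed at the surface vertex
    c = torch.stack([torch.zeros(()), R])  # sphere center on the optical axis
    oc = p0 - c                            # ray-sphere intersection (quadratic)
    b = 2 * (oc @ d)
    t = (-b - torch.sqrt(b * b - 4 * (oc @ oc - R * R))) / 2
    p = p0 + t * d                         # hit point on the surface
    n = (p - c) / R                        # outward unit normal
    eta, cos_i = n1 / n2, -(n @ d)         # vector form of Snell's law:
    t_dir = eta * d + (eta * cos_i - torch.sqrt(1 - eta**2 * (1 - cos_i**2))) * n
    return p[0] + (z_img - p[1]) / t_dir[1] * t_dir[0]   # height at sensor

R = torch.tensor(8.0, requires_grad=True)
y = trace(R)
y.backward()
print(y.item(), R.grad.item())   # spot position and its derivative dy/dR
```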
In this paper, we propose what we believe is a new monocular depth estimation algorithm based on local estimation of defocus blur, an approach referred to as depth from defocus (DFD). Using a limited set of calibration images, we directly learn the image covariance, which encodes both scene and blur information.
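A rough sketch of learning a patch covariance from calibration data (our illustration of the idea; the patch size and the calibration set are assumptions). Such a covariance jointly captures scene statistics and the depth-dependent defocus blur:

```python
import numpy as np

def patch_covariance(images, size=5):
    # Collect all size x size patches from the calibration images.
    patches = []
    for img in images:
        h, w = img.shape
        for i in range(h - size + 1):
            for j in range(w - size + 1):
                patches.append(img[i:i + size, j:j + size].ravel())
    x = np.asarray(patches)
    x = x - x.mean(axis=0)
    return x.T @ x / (len(x) - 1)     # (size^2, size^2) covariance matrix

calib = [np.random.rand(32, 32) for _ in range(4)]   # stand-in calibration set
cov = patch_covariance(calib)
print(cov.shape)                                     # (25, 25)
```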
J Opt Soc Am A Opt Image Sci Vis, October 2021
In this paper, we present a generic performance model able to evaluate the accuracy of depth estimation using depth from defocus (DFD). The model requires only the sensor point spread function at a given depth to evaluate the theoretical accuracy of depth estimation. Hence, it can be used for any conventional or unconventional system, using either one or several images.
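A hedged numeric sketch of this kind of PSF-only accuracy bound (not the paper's exact model): a Cramér-Rao style lower bound on the depth standard deviation, computed from a toy Gaussian PSF family under additive white Gaussian noise; the blur-versus-depth law, noise model, and finite-difference step are assumptions.

```python
import numpy as np

def psf(depth, x):
    sigma = 0.5 + 0.3 * abs(depth - 2.0)     # toy defocus blur vs. depth (m)
    h = np.exp(-x**2 / (2 * sigma**2))
    return h / h.sum()

x = np.linspace(-5, 5, 201)
d, eps, noise_var = 2.5, 1e-4, 1e-4
dh = (psf(d + eps, x) - psf(d - eps, x)) / (2 * eps)   # dPSF / d depth
fisher = np.sum(dh**2) / noise_var      # Fisher information, AWGN model
print("depth std lower bound:", 1 / np.sqrt(fisher))
```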
In the context of underwater robotics, the visual degradation induced by the properties of the medium makes it difficult to rely exclusively on cameras for localization. Hence, many underwater localization methods are based on expensive navigation sensors combined with acoustic positioning. On the other hand, purely visual localization methods have shown great potential underwater, but challenging conditions, such as turbidity and scene dynamics, remain complex to tackle.
We propose to add an optical component in front of a conventional camera to improve the depth estimation performance of depth from defocus (DFD), an approach based on the relation between defocus blur and depth. The add-on overcomes ambiguity and the dead zone, the two fundamental limitations of DFD with a conventional camera, by introducing an optical aberration into the whole system that makes the blur unambiguous and measurable at each depth. We investigate two optical components: the first adds astigmatism and the second chromatic aberration.
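A quick numeric illustration (with an assumed toy geometry, not the authors' design) of the ambiguity and how astigmatism removes it: conventional defocus blur is symmetric about the in-focus depth in diopters, so two distinct depths can share one blur size, whereas two focal planes (one per astigmatic axis) yield a distinct blur pair at every depth.

```python
def blur(depth, focus):
    return abs(1.0 / focus - 1.0 / depth)      # toy defocus blur radius

d1, d2 = 1.5, 3.0                              # two different depths (m)
# Conventional camera focused at 2 m: same blur for both -> ambiguous.
print(blur(d1, 2.0), blur(d2, 2.0))

# Astigmatic add-on: sagittal and tangential planes focus at different
# depths, so the blur pair differs at each depth -> ambiguity resolved.
for d in (d1, d2):
    print((blur(d, 1.8), blur(d, 2.2)))
```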
J Opt Soc Am A Opt Image Sci Vis, December 2014
In this paper, we present a performance model for depth estimation using single image depth from defocus (SIDFD). Our model is based on an original expression of the Cramér-Rao bound (CRB) in this context. We show that this model is consistent with the expected behavior of SIDFD.
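For reference, the generic form of the bound (the paper derives a SIDFD-specific expression that is not reproduced here):

```latex
% Cramér-Rao bound: for any unbiased estimator \hat{\theta} of a parameter
% \theta (here, depth) observed through data x with likelihood p(x;\theta),
% the variance is bounded by the inverse Fisher information.
\[
  \operatorname{Var}(\hat{\theta}) \;\ge\; \mathrm{CRB}(\theta)
  \;=\; I(\theta)^{-1},
  \qquad
  I(\theta) \;=\; \mathbb{E}\!\left[
    \left(\frac{\partial \ln p(x;\theta)}{\partial \theta}\right)^{\!2}
  \right].
\]
```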