The two-point-source longitudinal resolution of three-dimensional integral imaging depends on several factors, including the number of sensors, the sensor pixel size, the pitch between sensors, and the lens point spread function. We assume the two point sources to be resolved if their point spread functions can be resolved in any one of the sensors. Previous studies of integral imaging longitudinal resolution either rely on a geometrical optics formulation or assume the point spread function to be of sub-pixel size, thus neglecting the effect of the lens.
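As a rough illustration of the above criterion, the sketch below uses a toy pinhole-projection model to check whether two on-axis point sources at different depths are resolved in at least one sensor of an array; the function name, camera pitch, focal length, pixel size, and PSF width are all illustrative assumptions and not the paper's formulation.

# Hedged sketch: per-sensor check of the "resolved in any one sensor" criterion
# for two on-axis point sources at depths z1 and z2. The pinhole-plus-blur
# model and all parameter values are illustrative assumptions.
import numpy as np

def resolved_in_any_sensor(z1, z2, cam_positions, focal_length, pixel_size, psf_fwhm):
    """Return True if the two depths are distinguishable in at least one sensor."""
    blur = max(psf_fwhm, pixel_size)          # effective resolvable spot size
    for p in cam_positions:                   # lateral position of each camera
        # Pinhole projection of an on-axis point seen from a camera offset p:
        # depth maps to a lateral image-plane shift of roughly f * (-p) / z.
        x1 = focal_length * (-p) / z1
        x2 = focal_length * (-p) / z2
        if abs(x1 - x2) >= blur:              # Rayleigh-like separation test
            return True
    return False

# Example: 5 cameras with 10 mm pitch, 50 mm focal length, 5 um pixels, 8 um PSF
cams = np.arange(-2, 3) * 10e-3
print(resolved_in_any_sensor(1.00, 1.02, cams, 50e-3, 5e-6, 8e-6))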
Integral imaging has proven useful for three-dimensional (3D) object visualization in adverse environmental conditions such as partial occlusion and low light. This paper considers the problem of 3D object tracking. Two-dimensional (2D) object tracking within a scene is an active research area.
In many areas ranging from medical imaging to visual entertainment, 3D information acquisition and display is a key task. In this regard, in multifocus computational imaging, stacks of images of a 3D scene are acquired under different focus configurations and are later combined by post-capture algorithms based on an image formation model in order to synthesize images with novel viewpoints of the scene. Stereoscopic augmented reality devices, through which it is possible to simultaneously visualize the three-dimensional real world along with an overlaid digital stereoscopic image pair, could benefit from the binocular content enabled by multifocus computational imaging.
In this paper, we assess the noise susceptibility of coherent macroscopic single random phase encoding (SRPE) lensless imaging by analyzing how much information is lost due to the presence of camera noise. We have used numerical simulation to first obtain the noise-free point spread function (PSF) of a diffuser-based SRPE system. Afterwards, we generated a noisy PSF by introducing shot noise, read noise, and quantization noise as seen in a real-world camera.
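A minimal sketch of this kind of camera-noise model is given below, assuming Poisson shot noise, additive Gaussian read noise, and uniform quantization; the function name add_camera_noise and the parameter values (peak photon count, read-noise sigma, bit depth) are illustrative assumptions rather than the paper's exact simulation.

# Hedged sketch: corrupt a noise-free PSF intensity with shot, read, and
# quantization noise, as one might model a real camera.
import numpy as np

rng = np.random.default_rng(0)

def add_camera_noise(psf, peak_photons=5e3, read_sigma=2.0, bit_depth=12):
    """psf: noise-free intensity pattern (any nonnegative 2D array)."""
    photons = psf / psf.max() * peak_photons              # expected photon counts
    shot = rng.poisson(photons).astype(float)             # shot (photon) noise
    read = shot + rng.normal(0.0, read_sigma, psf.shape)  # additive read noise
    levels = 2 ** bit_depth - 1
    quantized = np.round(np.clip(read, 0, peak_photons) / peak_photons * levels)
    return quantized / levels                              # quantization noise included

noisy_psf = add_camera_noise(np.random.rand(64, 64))  # stand-in for a simulated SRPE PSF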
Image restoration and denoising have been challenging problems in optics and computer vision. There has been active research in the optics and imaging communities to develop a robust, data-efficient system for image restoration tasks. Recently, physics-informed deep learning has received wide interest for scientific problems.
Underwater scattering caused by suspended particles severely degrades signal detection performance and poses significant challenges for object detection. This paper introduces an integrated, dual-function, deep-learning-based algorithm for underwater object detection and classification and for temporal signal detection using three-dimensional (3D) integral imaging (InIm) under degraded conditions. The proposed system performs efficient object classification and temporal signal detection in degraded environments such as turbidity and partial occlusion, and also provides the object range in the scene.
We propose a diffuser-based lensless underwater optical signal detection system. The system consists of a lensless one-dimensional (1D) camera array equipped with random phase modulators for signal acquisition and a one-dimensional integral imaging convolutional neural network (1DInImCNN) for signal classification. During acquisition, the encoded signal transmitted by a light-emitting diode passes through a turbid medium as well as partial occlusion.
The two-point-source resolution criterion is widely used to quantify the performance of imaging systems. The two main approaches for computing the two-point-source resolution are detection-theoretic and visual analyses. The first assumes a shift-invariant system and lacks the ability to incorporate two different point spread functions (PSFs), which may be required in certain situations such as computing axial resolution.
Integral imaging (InIm) is useful for passive ranging and 3D visualization of partially occluded objects. We consider 3D object localization within a scene and under occlusion. 2D localization can be achieved using machine-learning and non-machine-learning-based techniques.
This Feature Issue of Optics Express is organized in conjunction with the 2022 Optica conference on 3D Image Acquisition and Display: Technology, Perception and Applications, which was held in hybrid format from 11 to 15 July 2022 as part of the Imaging and Applied Optics Congress and the Optical Sensors and Sensing Congress 2022 in Vancouver, Canada. This Feature Issue presents 31 articles that cover the topics and scope of the 2022 3D Image Acquisition and Display conference. This Introduction provides a summary of these published articles that appear in this Feature Issue.
In this paper, we have used the angular spectrum propagation method and numerical simulations of a single random phase encoding (SRPE) based lensless imaging system, with the goal of quantifying the spatial resolution of the system and assessing its dependence on the system's physical parameters. Our compact SRPE imaging system consists of a laser diode that illuminates a sample placed on a microscope glass slide, a diffuser that spatially modulates the optical field transmitted through the input object, and an image sensor that captures the intensity of the modulated field. We have considered two-point-source apertures as the input object and analyzed the propagated optical field captured by the image sensor.
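For reference, a compact sketch of angular spectrum propagation is shown below. It is a generic implementation under standard assumptions (uniform sampling, evanescent components suppressed); the wavelength, pixel pitch, and propagation distance in the example are illustrative, not the paper's parameters.

# Hedged sketch of angular spectrum propagation of a complex field over a distance z.
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Propagate a sampled complex field by distance z (meters)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)                # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Propagating-wave transfer function; evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: two-point-source aperture propagated 5 mm at 633 nm with 2 um sampling
aperture = np.zeros((512, 512), dtype=complex)
aperture[256, 250] = aperture[256, 262] = 1.0
sensor_field = angular_spectrum_propagate(aperture, 633e-9, 2e-6, 5e-3)
intensity = np.abs(sensor_field) ** 2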
Underwater optical signal detection performance suffers from occlusion and turbidity in degraded environments. To tackle these challenges, three-dimensional (3D) integral imaging (InIm) with 4D correlation-based and deep-learning-based signal detection approaches has been proposed previously. Integral imaging is a 3D technique that utilizes multiple cameras to capture multiple perspectives of a scene and uses dedicated algorithms to reconstruct 3D images.
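A minimal sketch of the commonly used shift-and-sum reconstruction is given below, assuming a regular camera grid and the standard pitch·f/(pixel·z) shift model; the function name and data layout are illustrative and not taken from the paper.

# Hedged sketch of shift-and-sum integral imaging reconstruction that
# back-projects elemental images to a chosen depth plane.
import numpy as np

def reconstruct_depth_plane(elemental_images, pitch, focal_length, pixel_size, z):
    """elemental_images: dict mapping (row, col) camera index -> 2D array."""
    recon = None
    for (i, j), img in elemental_images.items():
        # Pixel shift of camera (i, j) for objects at depth z.
        shift = pitch * focal_length / (pixel_size * z)
        shifted = np.roll(img, (int(round(i * shift)), int(round(j * shift))), axis=(0, 1))
        recon = shifted if recon is None else recon + shifted
    return recon / len(elemental_images)            # average the overlapping contributions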
In this paper, we address the problem of object recognition in degraded environments including fog and partial occlusion. Both long wave infrared (LWIR) imaging systems and LiDAR (time-of-flight) imaging systems using the Azure Kinect, which combines conventional visible and LiDAR sensing information, have previously been demonstrated for object recognition under ideal conditions. However, the object detection performance of Azure Kinect depth imaging systems may decrease significantly in adverse weather conditions such as fog, rain, and snow.
Integral imaging (InIm) has proved useful for three-dimensional (3D) object sensing, visualization, and classification of partially occluded objects. This paper presents an information-theoretic approach for simulating and evaluating the integral imaging capture and reconstruction process. We utilize mutual information (MI) as a metric for evaluating the fidelity of the reconstructed 3D scene.
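As an illustration, mutual information between a reference scene and its reconstruction can be estimated from a joint histogram as sketched below; the bin count and function name are assumptions, and the paper's estimator may differ.

# Hedged sketch: histogram-based mutual information between two images (in bits).
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                       # joint distribution
    px = pxy.sum(axis=1, keepdims=True)             # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                    # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))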
We present an automated method for COVID-19 screening using the intra-patient population distributions of bio-optical attributes extracted from red blood cells reconstructed by digital holographic microscopy. Whereas previous approaches have aimed to identify infection by classifying individual cells, here we propose an approach that incorporates the attribute distribution information from the population of a given subject's cells into our classification scheme and directly classifies subjects at the patient level. To capture the intra-patient distribution information in a generalized way, we propose an approach based on the Bag-of-Features (BoF) methodology to transform histograms of bio-optical attribute distributions into feature vectors for classification by a linear support vector machine.
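A minimal sketch of such a patient-level Bag-of-Features pipeline is given below, assuming per-cell attribute vectors are already extracted; the codebook size, scikit-learn components (KMeans, LinearSVC), and data layout are illustrative choices, not the paper's exact implementation.

# Hedged sketch: cluster per-cell attribute vectors into a codebook, encode each
# patient as a normalized histogram of codeword assignments, classify with a linear SVM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def encode_patient(cell_features, codebook):
    """cell_features: (n_cells, n_attributes) array for one patient."""
    words = codebook.predict(cell_features)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()                        # normalized BoF vector

def train_bof_svm(all_cells, patients, n_words=32):
    """all_cells: stacked per-cell attributes; patients: list of (cell_features, label)."""
    codebook = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(all_cells)
    X = np.array([encode_patient(feats, codebook) for feats, _ in patients])
    y = np.array([label for _, label in patients])
    clf = LinearSVC().fit(X, y)
    return codebook, clf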
In this manuscript, we describe the development of a single-shot, self-referencing, wavefront-division multiplexing digital holographic microscope employing LED sources for large field-of-view quantitative phase imaging of biological samples. To address the difficulties of performing interferometry with sources of low temporal coherence, an optical arrangement utilizing multiple Fresnel biprisms is used for hologram multiplexing, enhancing the field of view and increasing the signal-to-noise ratio. Biprisms offer the ease of obtaining interference patterns by automatically matching the path length between the two off-axis beams.
We present a compact, field-portable, lensless, single random phase encoding biosensor for automated classification between healthy and sickle cell disease human red blood cells. Microscope slides containing 3 µl wet mounts of whole blood samples from healthy donors and donors afflicted with sickle cell disease are input into a lensless single random phase encoding (SRPE) system for disease identification. A partially coherent laser source (laser diode) illuminates the cells under inspection; the object's complex amplitude propagates to, and is pseudorandomly encoded by, a diffuser, and the intensity of the diffracted complex waveform is then captured by a CMOS image sensor.
This Feature Issue of Optics Express is organized in conjunction with the 2021 Optica (OSA) conference on 3D Image Acquisition and Display: Technology, Perception and Applications, which was held virtually from 19 to 23 July 2021 as part of the Imaging and Sensing Congress 2021. This Feature Issue presents 29 articles that cover the topics and scope of the 2021 3D conference. This Introduction provides a summary of these articles.
We present an automated method for COVID-19 screening based on reconstructed phase profiles of red blood cells (RBCs) and a highly comparative time-series analysis (HCTSA). Video digital holographic data was obtained using a compact, field-portable shearing microscope to capture the temporal fluctuations and spatio-temporal dynamics of live RBCs. After numerical reconstruction of the digital holographic data, the optical volume is calculated at each timeframe of the reconstructed data to produce a time-series signal for each cell in our dataset.
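A minimal sketch of turning reconstructed phase frames into a per-cell optical-volume time series is shown below; the optical-volume definition used here (integrated optical path length over the cell area) and the function name are assumptions that may differ from the paper's exact computation.

# Hedged sketch: per-cell optical-volume time series from a stack of unwrapped phase frames.
import numpy as np

def optical_volume_series(phase_stack, wavelength, pixel_area, cell_mask):
    """phase_stack: (n_frames, H, W) unwrapped phase; cell_mask: boolean (H, W)."""
    opl_stack = phase_stack * wavelength / (2 * np.pi)        # optical path length per pixel
    return np.array([np.sum(opl[cell_mask]) * pixel_area for opl in opl_stack])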
Traditionally, long wave infrared imaging has been used in photon-starved conditions for object detection and classification. We investigate passive three-dimensional (3D) integral imaging (InIm) in the visible spectrum for object classification using deep neural networks in photon-starved conditions and under partial occlusion. We compare the proposed passive 3D InIm operating in the visible domain with long wave infrared sensing in both the 2D and 3D imaging cases for object classification in degraded conditions.
Optical signal detection in turbid and occluded environments is a challenging task due to the light scattering and beam attenuation inside the medium. Three-dimensional (3D) integral imaging is an imaging approach which integrates two-dimensional images from multiple perspectives and has proved to be useful for challenging conditions such as occlusion and turbidity. In this manuscript, we present an approach for the detection of optical signals in turbid water and occluded environments using multidimensional integral imaging employing temporal encoding with deep learning.
This Roadmap article provides an overview of a vast array of research activities in the field of digital holography. The paper consists of a series of 25 sections from prominent experts in digital holography presenting various aspects of the field, including sensing, 3D imaging and displays, virtual and augmented reality, microscopy, cell identification, tomography, label-free live cell imaging, and other applications. Each section represents the vision of its author in describing the significant progress, potential impact, important developments, and challenging issues in the field of digital holography.
Polarimetric imaging can become challenging in degraded environments such as low-light illumination conditions or partial occlusion. In this paper, we propose the denoising convolutional neural network (DnCNN) model with three-dimensional (3D) integral imaging to enhance the reconstructed image quality of polarimetric imaging in such degraded environments. The DnCNN is trained on the physical model of image capture in degraded environments to enhance the visualization of polarimetric imaging, with simulated low-light polarimetric images used in the training process.
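For context, a DnCNN-style residual denoiser can be sketched in a few lines of PyTorch as shown below; the depth, channel width, and usage shown are generic assumptions, not the trained network or physical capture model described in the paper.

# Hedged sketch of a DnCNN-style residual denoiser in PyTorch.
import torch
import torch.nn as nn

class DnCNN(nn.Module):
    def __init__(self, channels=1, depth=17, features=64):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1, bias=False),
                       nn.BatchNorm2d(features), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(features, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x - self.body(x)      # the network predicts the noise residual

# Usage: denoised = DnCNN()(noisy_polarimetric_image)  # (N, 1, H, W) tensor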
In this paper, we introduce a deep-learning-based spatio-temporal continuous human gesture recognition algorithm for degraded conditions using three-dimensional (3D) integral imaging. The proposed system is shown to be an efficient continuous human gesture recognition approach for degraded environments such as partial occlusion. In addition, we compare the performance of 3D integral-imaging-based sensing with RGB-D sensing for continuous gesture recognition under degraded environments.
This study presents a novel approach to automatically perform instant phenotypic assessment of red blood cell (RBC) storage lesion in phase images obtained by digital holographic microscopy. The proposed model combines a generative adversarial network (GAN) with a marker-controlled watershed segmentation scheme. The GAN model performed RBC segmentation and classification to develop ageing markers, and the watershed segmentation was used to completely separate overlapping RBCs.
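A minimal sketch of marker-controlled watershed separation of touching cells from a binary segmentation mask (for example, one produced by the GAN) is given below; the distance-transform marker strategy, scikit-image functions, and min_distance value are illustrative assumptions, not the paper's exact scheme.

# Hedged sketch: separate overlapping cells in a binary mask with marker-controlled watershed.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def separate_overlapping_cells(binary_mask, min_distance=7):
    distance = ndi.distance_transform_edt(binary_mask)
    peaks = peak_local_max(distance, min_distance=min_distance, labels=binary_mask)
    markers = np.zeros_like(binary_mask, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)   # one marker per cell
    return watershed(-distance, markers, mask=binary_mask)   # labeled cell regions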