Publications by authors named "Dinesh Rajan"

One of the challenges of using Time-of-Flight (ToF) sensors for dimensioning objects is that the depth information suffers from issues such as low resolution, self-occlusions, noise, and multipath interference, which distort the shape and size of objects. In this work, we successfully apply a superquadric fitting framework for dimensioning cuboid and cylindrical objects from point cloud data generated using a ToF sensor. Our work demonstrates that an average error of less than 1 cm is possible for a box with the largest dimension of about 30 cm and a cylinder with the largest dimension of about 20 cm that are each placed 1.
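
To make the fitting step concrete, the sketch below fits the size and shape parameters of an axis-aligned superquadric to a point cloud that is assumed already segmented and centered, using least squares on the standard inside-outside function. It is a minimal illustration under those assumptions, not the authors' implementation; pose estimation and ToF-specific noise handling are omitted, and all function names and numbers are illustrative.

```python
# Minimal sketch (not the authors' code): least-squares fit of an axis-aligned
# superquadric to a point cloud assumed already segmented and centered.
import numpy as np
from scipy.optimize import least_squares

def inside_outside(points, a1, a2, a3, e1, e2):
    """Standard superquadric inside-outside function F(x, y, z)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    f_xy = (np.abs(x / a1) ** (2.0 / e2) + np.abs(y / a2) ** (2.0 / e2)) ** (e2 / e1)
    return f_xy + np.abs(z / a3) ** (2.0 / e1)

def residuals(params, points):
    a1, a2, a3, e1, e2 = params
    # Surface points satisfy F = 1; raising F to e1/2 improves conditioning.
    return inside_outside(points, a1, a2, a3, e1, e2) ** (e1 / 2.0) - 1.0

def fit_superquadric(points):
    # Initial size guess from the bounding box; e1 = e2 = 0.1 starts near a cuboid.
    half_extents = 0.5 * (points.max(axis=0) - points.min(axis=0))
    x0 = np.concatenate([half_extents, [0.1, 0.1]])
    bounds = ([1e-3, 1e-3, 1e-3, 0.05, 0.05], [np.inf, np.inf, np.inf, 2.0, 2.0])
    fit = least_squares(residuals, x0, bounds=bounds, args=(points,))
    return fit.x  # (a1, a2, a3, e1, e2); object dimensions are 2*a1, 2*a2, 2*a3

# Toy example: noisy surface samples of a 30 cm x 20 cm x 10 cm box.
rng = np.random.default_rng(0)
half = np.array([0.15, 0.10, 0.05])
pts = rng.uniform(-1.0, 1.0, size=(3000, 3)) * half
face = rng.integers(0, 3, size=3000)
rows = np.arange(3000)
pts[rows, face] = np.sign(pts[rows, face] + 1e-12) * half[face]  # push onto a face
print(fit_superquadric(pts + rng.normal(0.0, 0.002, pts.shape)))
```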

The behavior of multicamera interference in three-dimensional (3D) images (e.g., depth maps) acquired with infrared (IR)-based sensors is not well understood.

Synthetically creating motion blur in two-dimensional (2D) images is a well-understood process and has been used in image processing to develop deblurring systems. There are no well-established techniques for synthetically generating arbitrary motion blur within three-dimensional (3D) images, such as depth maps and point clouds, since their behavior is not as well understood. As a prerequisite, we have previously developed a method for generating synthetic motion blur in a plane that is parallel to the sensor detector plane.
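
For the sensor-parallel case mentioned above, a minimal way to sketch synthetic blur generation is to average copies of a depth frame shifted along the motion path. The snippet below illustrates only that simplified in-plane idea and is not the authors' generator; the wrap-around shifts and the toy depth map are assumptions.

```python
# Minimal sketch: synthesize lateral (sensor-parallel) motion blur in a depth
# map by averaging copies of the frame shifted along the motion path.  The
# wrap-around of np.roll and the ideal averaging are simplifications; occlusion
# and invalid-pixel handling in the published method are not reproduced here.
import numpy as np

def lateral_motion_blur(depth, shift_px, steps=16):
    """Average `steps` copies of `depth`, shifted from 0 to `shift_px` pixels."""
    acc = np.zeros_like(depth, dtype=np.float64)
    for t in np.linspace(0.0, 1.0, steps):
        dx = int(round(t * shift_px[0]))
        dy = int(round(t * shift_px[1]))
        acc += np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
    return acc / steps

# Toy depth map: a 1.0 m background with a 0.5 m square object, blurred 8 px.
depth = np.full((120, 160), 1.0)
depth[40:80, 60:100] = 0.5
blurred = lateral_motion_blur(depth, shift_px=(8, 0))
print(blurred[60, 55:70].round(3))  # depth values smear across the object edge
```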

Accurate three-dimensional displacement measurements of bridges and other structures have received significant attention in recent years. The main challenges of such measurements include the cost and the need for a scalable array of instrumentation. This paper presents a novel Hybrid Inertial Vision-Based Displacement Measurement (HIVBDM) system that can measure three-dimensional structural displacements by using a monocular charge-coupled device (CCD) camera, a stationary calibration target, and an attached tilt sensor.
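
As a rough illustration of the bookkeeping such a system needs, the sketch below converts the tracked pixel motion of a calibration target into metric displacement and rotates it by a tilt-sensor reading. The scale model, function names, and numbers are assumptions for illustration; this is not the HIVBDM formulation.

```python
# Illustrative sketch only (not the HIVBDM implementation): convert tracked
# pixel motion of a calibration target into metric in-plane displacement, then
# rotate it by the tilt-sensor reading so the components are gravity-aligned.
import numpy as np

def pixel_to_metric(d_pixels, target_size_m, target_size_px):
    """Scale a pixel displacement by the known physical size of the target."""
    return np.asarray(d_pixels, dtype=float) * (target_size_m / target_size_px)

def tilt_corrected(displacement_xy_m, tilt_deg):
    """Rotate the in-plane displacement by the measured tilt about the optical axis."""
    a = np.radians(tilt_deg)
    rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])
    return rot @ displacement_xy_m

# Hypothetical numbers: the target moved 12 px right and 3 px up, is 0.20 m wide,
# spans 400 px in the image, and the tilt sensor reports a 1.5 degree roll.
d_m = pixel_to_metric([12.0, 3.0], target_size_m=0.20, target_size_px=400.0)
print(tilt_corrected(d_m, tilt_deg=1.5))
```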

Super resolution (SR) for real-life video sequences is a challenging problem due to the complex nature of the motion fields. In this paper, a novel blind SR method is proposed to improve the spatial resolution of video sequences while the overall point spread function of the imaging system, the motion fields, and the noise statistics are unknown. To estimate the blurs, a nonuniform interpolation SR method is first utilized to upsample the frames, and then the blurs are estimated through a multi-scale process.
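
The nonuniform-interpolation upsampling step can be sketched as scattering registered low-resolution samples onto a high-resolution grid and interpolating. The code below is a generic illustration that assumes the sub-pixel shifts are already known; it is not the paper's estimator.

```python
# Minimal sketch: nonuniform-interpolation super-resolution.  Registered
# low-resolution frames (sub-pixel shifts assumed known here) are scattered
# onto a common high-resolution grid and interpolated.
import numpy as np
from scipy.interpolate import griddata

def nonuniform_interp_sr(frames, shifts, scale):
    """frames: list of HxW arrays; shifts: per-frame (dy, dx) in LR pixels."""
    pts, vals = [], []
    for frame, (dy, dx) in zip(frames, shifts):
        h, w = frame.shape
        yy, xx = np.mgrid[0:h, 0:w]
        coords = np.column_stack([(yy + dy).ravel(), (xx + dx).ravel()]) * scale
        pts.append(coords)
        vals.append(frame.ravel())
    pts, vals = np.vstack(pts), np.concatenate(vals)
    h, w = frames[0].shape
    hr_y, hr_x = np.mgrid[0:h * scale, 0:w * scale]
    hr = griddata(pts, vals, (hr_y, hr_x), method="linear")
    return np.nan_to_num(hr, nan=float(vals.mean()))   # fill uncovered border pixels

# Toy example: four half-pixel-shifted copies of a ramp image, upsampled 2x.
base = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
frames = [base] * 4
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
print(nonuniform_interp_sr(frames, shifts, scale=2).shape)
```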

A limitation of traditional molecular dynamics (MD) is that reaction rates are difficult to compute. This is due to the rarity of observing transitions between metastable states, since high energy barriers trap the system in these states. Recently, the weighted ensemble (WE) family of methods has emerged that can flexibly and efficiently sample conformational space without becoming trapped and allows calculation of unbiased rates.
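
The split-and-merge resampling at the core of WE methods can be shown with a one-dimensional toy system. In the sketch below, the double-well dynamics, bin layout, and walker counts are placeholders, and the resampling rule is a simplified weight-preserving variant rather than the specific scheme studied in the paper.

```python
# Toy sketch of weighted-ensemble (WE) resampling on a 1-D double-well system:
# walkers carry statistical weights, and each iteration keeps roughly
# `target_per_bin` walkers per occupied bin while conserving total weight.
import numpy as np

rng = np.random.default_rng(1)
bin_edges = np.linspace(-2.0, 2.0, 9)
target_per_bin = 4

def propagate(x):
    # Placeholder dynamics: overdamped Brownian motion in a double-well potential.
    force = -4.0 * x * (x * x - 1.0)
    x = x + 0.01 * force + np.sqrt(0.02) * rng.normal(size=x.shape)
    return np.clip(x, bin_edges[0], bin_edges[-1] - 1e-9)  # keep walkers binnable

positions = np.full(20, -1.0)             # start every walker in the left well
weights = np.full(20, 1.0 / 20)

for _ in range(200):
    positions = propagate(positions)
    new_pos, new_w = [], []
    for b in range(len(bin_edges) - 1):
        in_bin = (positions >= bin_edges[b]) & (positions < bin_edges[b + 1])
        if not in_bin.any():
            continue
        pos, w = positions[in_bin], weights[in_bin]
        total = w.sum()
        # Resample walkers in proportion to weight and share the bin's total
        # weight equally among them (a simplified split/merge step).
        keep = rng.choice(len(pos), size=target_per_bin, p=w / total)
        new_pos.extend(pos[keep])
        new_w.extend([total / target_per_bin] * target_per_bin)
    positions, weights = np.array(new_pos), np.array(new_w)

print("weight that has reached the right-hand well:", weights[positions > 0.0].sum())
```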

This paper presents, for the first time, a unified blind method for multi-image super-resolution (MISR or SR), single-image blur deconvolution (SIBD), and multi-image blur deconvolution (MIBD) of low-resolution (LR) images degraded by linear space-invariant (LSI) blur, aliasing, and additive white Gaussian noise (AWGN). The proposed approach is based on alternating minimization (AM) of a new cost function with respect to the unknown high-resolution (HR) image and blurs. The regularization term for the HR image is based upon the Huber-Markov random field (HMRF) model, which is a type of variational integral that exploits the piecewise smooth nature of the HR image.
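
The alternating-minimization structure can be outlined in a simplified single-image setting: alternately solve for the image with the blur fixed, then for the blur with the image fixed. The sketch below uses circular convolution and quadratic regularizers in place of the HMRF prior, so it only illustrates the AM idea, not the paper's cost function.

```python
# Simplified sketch of alternating minimization (AM) for blind deconvolution,
# assuming circular convolution and quadratic (Tikhonov) regularizers rather
# than the Huber-Markov prior: alternately solve for the image spectrum X given
# the blur H, then for H given X, each in closed form in the Fourier domain.
import numpy as np

def am_blind_deconv(y, ksize=7, iters=30, lam=1e-2, mu=1e-2):
    Y = np.fft.fft2(y)
    h = np.zeros_like(y, dtype=float)
    h[0, 0] = 1.0                                     # start from an identity blur
    # Support mask: a ksize x ksize window centered (with wraparound) at the origin.
    mask = np.zeros_like(h)
    mask[:ksize, :ksize] = 1.0
    mask = np.roll(np.roll(mask, -(ksize // 2), axis=0), -(ksize // 2), axis=1)
    for _ in range(iters):
        H = np.fft.fft2(h)
        X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)   # image update (blur fixed)
        H = np.conj(X) * Y / (np.abs(X) ** 2 + mu)    # blur update (image fixed)
        h = np.real(np.fft.ifft2(H))
        h = np.clip(h * mask, 0.0, None)              # project: small support, h >= 0
        h = h / (h.sum() + 1e-12)                     # project: sum to one
    H = np.fft.fft2(h)
    return np.real(np.fft.ifft2(np.conj(H) * Y / (np.abs(H) ** 2 + lam)))

# Toy example: blur a random image with a 5x5 box kernel and add noise.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
kern = np.zeros((64, 64))
kern[:5, :5] = 1.0 / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kern)))
restored = am_blind_deconv(blurred + 0.01 * rng.normal(size=img.shape))
print(restored.shape)
```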

Molecular modeling is a field that traditionally has large computational costs. Until recently, most simulation techniques relied on long trajectories, which inherently have poor scalability. A new class of methods is proposed that instead requires only a large number of short calculations and minimal communication between compute nodes.
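
The computational pattern described, many short independent calculations with minimal communication, can be illustrated with a generic map-style sketch. The one-dimensional dynamics below stands in for an actual simulation engine, and the pool size and step counts are arbitrary.

```python
# Generic sketch of the "many short, independent calculations" pattern: each
# worker runs a short trajectory from a given start point, and the only
# communication is collecting the endpoints.  The 1-D dynamics is a placeholder
# for a real molecular dynamics engine.
import numpy as np
from multiprocessing import Pool

def short_trajectory(args):
    seed, x0, n_steps = args
    rng = np.random.default_rng(seed)
    x = x0
    for _ in range(n_steps):
        x += -0.01 * 4.0 * x * (x * x - 1.0) + np.sqrt(0.02) * rng.normal()
    return x

if __name__ == "__main__":
    tasks = [(seed, -1.0, 500) for seed in range(64)]   # 64 independent segments
    with Pool() as pool:
        endpoints = pool.map(short_trajectory, tasks)
    print("fraction ending in the right-hand well:",
          float(np.mean(np.array(endpoints) > 0.0)))
```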

The design, development, and field-test results of a visible-band, folded, multiresolution, adaptive computational imaging system based on the Processing Arrays of Nyquist-limited Observations to Produce a Thin Electro-optic Sensor (PANOPTES) concept are presented. The architectural layout that enables this imager to be adaptive is described, and the control system that ensures reliable field-of-view steering for precision and accuracy in subpixel target registration is explained. A digital superresolution algorithm introduced to obtain high-resolution imagery from field tests conducted in both nighttime and daytime imaging conditions is discussed.
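
A digital superresolution step of the general shift-and-add kind can be sketched as accumulating registered low-resolution frames onto a finer grid. The snippet below assumes the sub-pixel shifts are known and is a generic illustration, not the PANOPTES reconstruction algorithm.

```python
# Generic sketch of shift-and-add digital superresolution: low-resolution
# frames with known (assumed) sub-pixel shifts are accumulated onto a finer
# grid and normalized.  Not the PANOPTES reconstruction algorithm.
import numpy as np

def shift_and_add(frames, shifts, scale):
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Place each LR sample at its shifted location on the HR grid.
        yy = (np.arange(h)[:, None] * scale + int(round(dy * scale))) % (h * scale)
        xx = (np.arange(w)[None, :] * scale + int(round(dx * scale))) % (w * scale)
        acc[yy, xx] += frame
        cnt[yy, xx] += 1.0
    out = np.zeros_like(acc)
    np.divide(acc, cnt, out=out, where=cnt > 0)   # average where samples landed
    return out

# Toy example: four half-pixel-shifted copies of a gradient image on a 2x grid.
base = np.tile(np.linspace(0.0, 1.0, 40), (40, 1))
frames = [base] * 4
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
print(shift_and_add(frames, shifts, scale=2).shape)
```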

The performance of uniform and nonuniform detector arrays for application to the PANOPTES (processing arrays of Nyquist-limited observations to produce a thin electro-optic sensor) flat camera design is analyzed for measurement noise environments including quantization noise and Gaussian and Poisson processes. Image data acquired from a commercial camera with 8 bit and 14 bit output options are analyzed, and estimated noise levels are computed. Noise variances estimated from the measurement values are used in the optimal linear estimators for superresolution image reconstruction.
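
Plugging an estimated noise variance into an optimal linear (LMMSE/Wiener-type) estimator can be illustrated with a small one-dimensional example. The forward model, prior covariance, and noise level below are stand-ins, not the detector-array model analyzed in the paper.

```python
# Small sketch: plug an estimated noise variance into the optimal linear
# (LMMSE / Wiener-type) estimator  x_hat = Cx H^T (H Cx H^T + Cn)^-1 y.
# The forward model H and the prior covariance Cx are illustrative stand-ins,
# not the detector-array model analyzed in the paper.
import numpy as np

rng = np.random.default_rng(0)
n_hr, n_lr = 16, 8

# Forward model: each LR sample averages two neighboring HR samples.
H = np.zeros((n_lr, n_hr))
for i in range(n_lr):
    H[i, 2 * i:2 * i + 2] = 0.5

# Smoothness prior on the HR signal: exponentially decaying correlations.
idx = np.arange(n_hr)
Cx = 0.9 ** np.abs(idx[:, None] - idx[None, :])

# Simulate repeated noisy frames and estimate the noise variance from them.
x_true = np.sin(2.0 * np.pi * idx / n_hr)
frames = np.stack([H @ x_true + 0.05 * rng.normal(size=n_lr) for _ in range(20)])
sigma2_hat = frames.var(axis=0, ddof=1).mean()       # estimated noise variance
y = frames.mean(axis=0)                              # averaged measurement
Cn = (sigma2_hat / len(frames)) * np.eye(n_lr)       # noise covariance of the mean

x_hat = Cx @ H.T @ np.linalg.solve(H @ Cx @ H.T + Cn, y)
print(np.round(x_hat, 2))
```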

A framework is proposed for the optimal joint design of the optical and reconstruction filters in a computational imaging system. First, a technique is proposed for the design of a physically unconstrained system whose performance serves as a universal bound on any realistic computational imaging system. Increasing levels of constraints are then imposed to emulate a physically realizable optical filter.
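
The flavor of the joint design can be sketched in the frequency domain: for any candidate optical response, the MSE-optimal reconstruction filter is the Wiener filter, so the residual MSE is available in closed form and an idealized all-pass optical response plays the role of a bound. The spectra and MTF shapes in the sketch below are assumptions for illustration only.

```python
# 1-D sketch of the joint-design idea: for any optical frequency response G(f),
# the MSE-optimal reconstruction filter is the Wiener filter, so the residual
# MSE has a closed form.  Comparing a constrained (low-pass) G against an
# idealized all-pass G illustrates the "unconstrained bound"; the spectra and
# MTF shapes are assumptions for illustration only.
import numpy as np

f = np.linspace(0.0, 0.5, 256)              # normalized spatial frequency
Sx = 1.0 / (1.0 + (f / 0.05) ** 2)          # assumed object power spectrum
Sn = 1e-3                                    # assumed (white) noise power

def wiener_mse(G2):
    """Per-frequency MSE when the reconstruction filter is the Wiener filter."""
    return Sx * Sn / (G2 * Sx + Sn)

G2_constrained = np.clip(1.0 - f / 0.4, 0.0, None) ** 2   # toy low-pass |G(f)|^2
G2_unconstrained = np.ones_like(f)                         # idealized all-pass bound

print("constrained MSE    :", round(float(np.trapz(wiener_mse(G2_constrained), f)), 5))
print("unconstrained bound:", round(float(np.trapz(wiener_mse(G2_unconstrained), f)), 5))
```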

A thin, agile, multiresolution, computational imaging sensor architecture, termed PANOPTES (processing arrays of Nyquist-limited observations to produce a thin electro-optic sensor), which utilizes arrays of microelectromechanical mirrors to adaptively redirect the fields of view of multiple low-resolution subimagers, is described. An information theory-based algorithm adapts the system and restores the image. The modulation transfer function (MTF) effects of utilizing micromirror arrays to steer imaging systems are analyzed, and computational methods for combining data collected from systems with differing MTFs are presented.
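
Combining data from subimagers with differing MTFs can be sketched as an MTF- and noise-weighted fusion in the frequency domain. The snippet below is a generic multichannel Wiener-style combination with assumed MTFs, noise levels, and signal spectrum; it is not the PANOPTES method.

```python
# Generic sketch: fuse measurements from imagers with differing MTFs using a
# multichannel Wiener-style combination in the frequency domain,
#   X_hat = (sum_i conj(H_i) Y_i / s_i) / (sum_i |H_i|^2 / s_i + 1 / Sx),
# where H_i is each channel's MTF, s_i its noise power, and Sx an assumed
# signal power spectrum.  Illustrative only, not the PANOPTES algorithm.
import numpy as np

rng = np.random.default_rng(0)
n = 128
x = np.cumsum(rng.normal(size=n))
x -= x.mean()                                   # toy 1-D scene
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(n)

# Two channels with different MTFs (Gaussian roll-offs) and noise levels.
H1 = np.exp(-(freqs / 0.10) ** 2)
H2 = np.exp(-(freqs / 0.25) ** 2)
sigma1, sigma2 = 0.5, 1.5                       # per-pixel noise std of each channel
Y1 = H1 * X + np.fft.rfft(sigma1 * rng.normal(size=n))
Y2 = H2 * X + np.fft.rfft(sigma2 * rng.normal(size=n))

s1, s2 = n * sigma1 ** 2, n * sigma2 ** 2       # per-frequency noise power
Sx = np.abs(X) ** 2 + 1e-6                      # assumed (oracle) signal spectrum

num = np.conj(H1) * Y1 / s1 + np.conj(H2) * Y2 / s2
den = np.abs(H1) ** 2 / s1 + np.abs(H2) ** 2 / s2 + 1.0 / Sx
x_hat = np.fft.irfft(num / den, n=n)
print("fusion RMSE:", round(float(np.sqrt(np.mean((x_hat - x) ** 2))), 3))
```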

Algorithms that use optical system diversity to improve multiplexed image reconstruction from multiple low-resolution images are analyzed and demonstrated. Compared with systems that use identical imagers, systems that add lower-resolution imagers can achieve improved accuracy and computational efficiency. The diverse system is not sensitive to boundary conditions and can take full advantage of improvements that decrease noise and allow an increased number of bits per pixel to represent spatial information in a scene.
