Publications by authors named "Kiriakos N Kutulakos"

We consider the problem of estimating the surface normals of a scene with spatially varying, general bidirectional reflectance distribution functions (BRDFs), observed by a static camera under varying distant illumination. Unlike previous approaches that rely on continuous optimization of surface normals, we cast the problem as a discrete search over a set of finely discretized candidate normals. In this setting, we show that the expensive computations can be precomputed in a scene-independent manner, accelerating inference.
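A minimal sketch of the discrete-search structure, under stated assumptions: the candidate set is a uniform hemisphere sampling, and the per-pixel consistency score is normalized correlation against a Lambertian shading profile, a stand-in for the paper's BRDF-general measure. The names `fibonacci_hemisphere` and `discrete_normal_search` are illustrative, not the paper's implementation.

```python
import numpy as np

def fibonacci_hemisphere(k):
    """k roughly uniform unit normals on the z > 0 hemisphere (the discrete candidate set)."""
    i = np.arange(k) + 0.5
    z = i / k                                        # restrict to the upper hemisphere
    phi = np.pi * (1 + 5 ** 0.5) * i
    r = np.sqrt(1.0 - z ** 2)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def discrete_normal_search(I, L, candidates):
    """
    I: (P, M) intensities of P pixels under M distant lightings.
    L: (M, 3) unit lighting directions.
    Returns the best candidate normal per pixel by exhaustive discrete search.
    """
    # Scene-independent precomputation: one shading profile per candidate normal.
    F = np.maximum(candidates @ L.T, 0.0)                           # (K, M)
    Fn = F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-12)
    In = I / (np.linalg.norm(I, axis=1, keepdims=True) + 1e-12)
    scores = In @ Fn.T                                              # (P, K) normalized correlation
    return candidates[np.argmax(scores, axis=1)]                    # (P, 3)

# Usage (illustrative): normals = discrete_normal_search(I, L, fibonacci_hemisphere(10000))
```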


Night-time illumination beats with the alternating current (AC) that powers it. By passively sensing this beat, we reveal new scene information, including the types of bulbs in the scene, the phases of the electric grid up to city scale, and the light transport matrix. This information enables unmixing of reflections and semi-reflections, nocturnal high-dynamic-range imaging, and rendering the scene under bulbs not observed during acquisition.
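A hedged sketch of the passive-sensing idea: fit a sinusoid at the bulb beat frequency to a high-frame-rate pixel trace and read off the beat's amplitude and phase. The assumption that bulbs beat at twice the grid frequency, the 60 Hz default, and the function name are all illustrative, not the paper's method.

```python
import numpy as np

def flicker_phase(trace, fps, grid_hz=60.0):
    """
    Estimate amplitude and phase of the intensity beat in one pixel's temporal trace.
    Assumes the bulb beats at twice the grid frequency (brightness peaks on both AC
    half-cycles); fps must comfortably exceed 4 * grid_hz to sample the beat.
    """
    t = np.arange(len(trace)) / fps
    w = 2.0 * np.pi * (2.0 * grid_hz)                       # beat angular frequency
    A = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    (c, s, dc), *_ = np.linalg.lstsq(A, trace, rcond=None)  # least-squares sinusoid fit
    amplitude = np.hypot(c, s)
    phase = np.arctan2(s, c)                                # trace ~ dc + amplitude*cos(w*t - phase)
    return amplitude, phase
```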


We consider the problem of deliberately manipulating the direct and indirect light flowing through a time-varying, general scene in order to simplify its visual analysis. Our approach rests on a crucial link between stereo geometry and light transport: while direct light always obeys the epipolar geometry of a projector-camera pair, indirect light overwhelmingly does not. We show that it is possible to turn this observation into an imaging method that analyzes light transport in real time in the optical domain, prior to acquisition.
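The paper performs this separation optically, before the image is acquired; purely as an offline illustration of the underlying epipolar argument, the sketch below splits an already-measured light transport matrix into its epipolar (direct-dominated) and non-epipolar (indirect) entries. It assumes a rectified projector-camera pair so that epipolar lines are matching image rows; the function name and pixel ordering are assumptions.

```python
import numpy as np

def epipolar_split(T, cam_shape, proj_shape):
    """
    T: (Hc*Wc, Hp*Wp) light transport matrix; camera pixel i receives T[i, j]
    from projector pixel j (row-major pixel ordering assumed).
    Returns (epipolar part, non-epipolar part) for a rectified pair.
    """
    Hc, Wc = cam_shape
    Hp, Wp = proj_shape
    assert Hc == Hp and T.shape == (Hc * Wc, Hp * Wp)
    cam_row = np.repeat(np.arange(Hc), Wc)        # row index of each camera pixel
    proj_row = np.repeat(np.arange(Hp), Wp)       # row index of each projector pixel
    mask = cam_row[:, None] == proj_row[None, :]  # entries obeying the epipolar geometry
    return T * mask, T * ~mask                    # direct-dominated, indirect
```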

Light-efficient photography.

IEEE Trans Pattern Anal Mach Intell

November 2011

In this paper, we consider the problem of imaging a scene with a given depth of field at a given exposure level in the shortest amount of time possible. We show that by 1) collecting a sequence of photos and 2) controlling the aperture, focus, and exposure time of each photo individually, we can span the given depth of field in less total time than it takes to expose a single narrower-aperture photo. Using this as a starting point, we obtain two key results.
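A first-order, back-of-the-envelope version of the claim, assuming exposure time at a fixed exposure level scales with the square of the f-number and per-photo depth of field scales linearly with the f-number (thin lens, fixed circle of confusion); the numbers and function name are illustrative only.

```python
def total_capture_time(k, t_single=1.0):
    """
    Time to span one narrow-aperture photo's depth of field using k wider-aperture photos,
    under the first-order model: exposure time ~ N^2 at fixed exposure level, per-photo DOF ~ N.
    Opening the aperture k-fold shrinks each photo's DOF k-fold (so k photos tile the original
    DOF) but cuts each exposure k^2-fold, for a total time of t_single / k.
    """
    t_per_photo = t_single / k ** 2
    return k * t_per_photo

for k in (1, 2, 4, 8):
    print(f"{k} photo(s): total time = {total_capture_time(k):.3f}")  # 1.000, 0.500, 0.250, 0.125
```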

Dynamic Refraction Stereo.

IEEE Trans Pattern Anal Mach Intell

August 2011

In this paper we consider the problem of reconstructing the 3D position and surface normal of points on an unknown, arbitrarily shaped refractive surface. We show that two viewpoints are sufficient to solve this problem in the general case, even if the refractive index is unknown. The key requirements are 1) knowledge of a function that maps each point on the two image planes to a known 3D point that refracts to it, and 2) that light is refracted only once.
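A sketch of one ingredient, under the stated requirements: for a candidate surface point, each view constrains the normal to lie in its plane of incidence (spanned by the incoming camera ray and the refracted ray toward the known 3D point), so two views pin the normal down to the intersection of two planes. The function name and the candidate-depth search around it are illustrative; degenerate configurations and the refractive-index consistency test are omitted.

```python
import numpy as np

def normal_from_two_views(p, c1, q1, c2, q2):
    """
    p      : candidate 3D surface point (e.g., swept along camera 1's ray during a depth search)
    c1, c2 : camera centers
    q1, q2 : known 3D points that the rays through p refract to, in each view
    Returns a unit normal lying in both planes of incidence (up to sign).
    """
    def incidence_plane_normal(c, q):
        d_in = p - c                      # ray from the camera to the surface point
        d_out = q - p                     # refracted ray toward the known target point
        n = np.cross(d_in, d_out)         # normal of this view's plane of incidence
        return n / np.linalg.norm(n)

    n1 = incidence_plane_normal(c1, q1)
    n2 = incidence_plane_normal(c2, q2)
    n = np.cross(n1, n2)                  # direction common to both planes
    return n / np.linalg.norm(n)
```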


In this paper, we consider the problem of estimating the spatiotemporal alignment between N unsynchronized video sequences of the same dynamic 3D scene, captured from distinct viewpoints. Unlike most existing methods, which work for N = 2 and rely on a computationally intensive search in the space of temporal alignments, we present a novel approach that reduces the problem for general N to the robust estimation of a single line in ℝ^N. This line captures all temporal relations between the sequences and can be computed without any prior knowledge of these relations.
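A minimal RANSAC-style sketch of the reduction: each putative multi-view correspondence yields a point in ℝ^N of frame indices, and the temporal alignment is the line those points lie on. The input format, tolerance, iteration count, and function name are assumptions, not the paper's estimator.

```python
import numpy as np

def fit_timeline(samples, iters=500, tol=1.0, seed=0):
    """
    Robustly fit a line t(s) = a + s*b in R^N to putative corresponding frame-index
    tuples (one coordinate per sequence); the line encodes every pairwise offset and
    frame-rate ratio.  samples: (M, N) array, possibly containing many outliers.
    """
    rng = np.random.default_rng(seed)
    best, best_inliers = None, np.zeros(len(samples), dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(len(samples), size=2, replace=False)
        a, b = samples[i], samples[j] - samples[i]
        if np.linalg.norm(b) < 1e-9:
            continue                                   # degenerate sample pair
        b = b / np.linalg.norm(b)
        d = samples - a
        resid = d - np.outer(d @ b, b)                 # component orthogonal to the line
        inliers = np.linalg.norm(resid, axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best, best_inliers = (a, b), inliers
    return best, best_inliers
```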


We describe a geometric-flow-based algorithm for computing a dense oversegmentation of an image, often referred to as superpixels. It produces segments that, on the one hand, respect local image boundaries and, on the other hand, limit undersegmentation through a compactness constraint. It is very fast, with complexity that is approximately linear in image size, and can be applied to megapixel-sized images with high superpixel densities in a matter of minutes.
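The paper's algorithm evolves seeds with a level-set geometric flow; a quick way to experiment with the same kind of dense, compactness-constrained oversegmentation is scikit-image's SLIC, shown below purely as a readily available stand-in (a different algorithm, not the one described here).

```python
from skimage import data, segmentation

img = data.astronaut()
# ~2000 compactness-constrained superpixels; a higher `compactness` value trades
# boundary adherence for more regular, less undersegmented regions.
labels = segmentation.slic(img, n_segments=2000, compactness=10.0, start_label=1)
overlay = segmentation.mark_boundaries(img, labels)
print(labels.max(), "superpixels")
```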


This paper considers the problem of reconstructing visually realistic 3D models of dynamic semitransparent scenes, such as fire, from a very small set of simultaneous views (even two). We show that this problem is equivalent to a severely underconstrained computerized tomography problem, for which traditional methods break down. Our approach is based on the observation that every pair of photographs of a semitransparent scene defines a unique density field, called a Density Sheet, that 1) concentrates all its density on one connected, semitransparent surface, 2) reproduces the two photos exactly, and 3) is the most spatially compact density field that does so.
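A toy illustration of properties 1 and 2 for a single pair of corresponding 1D scanlines: the classic monotone ("north-west corner") coupling builds a 2D density whose two 1D projections reproduce the scanlines exactly while concentrating all mass on one monotone curve. This is meant only to convey the flavor of a density concentrated on a sheet, not the paper's actual construction.

```python
import numpy as np

def monotone_coupling(p, q):
    """
    Given two nonnegative 1D profiles p and q with equal total mass, return a 2D
    density D whose row sums equal p and column sums equal q, with all mass placed
    along a single monotone curve.
    """
    p = np.asarray(p, float).copy()
    q = np.asarray(q, float).copy()
    assert abs(p.sum() - q.sum()) < 1e-9, "profiles must have equal total mass"
    D = np.zeros((len(p), len(q)))
    i = j = 0
    while i < len(p) and j < len(q):
        m = min(p[i], q[j])          # move as much mass as both cells allow
        D[i, j] += m
        p[i] -= m
        q[j] -= m
        if p[i] <= 1e-12:
            i += 1
        if q[j] <= 1e-12:
            j += 1
    return D
```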
