When a black room (a room painted black and filled with objects painted black) is viewed through a veiling luminance, how does it appear? Prior work on black rooms and white rooms suggests that the veiled black room will appear white: mutual illumination in a high-reflectance white room lowers image contrast, and a veil lowers image contrast in the same way. Other work, reporting high lightness constancy for three-dimensional scenes viewed through a veil, suggests instead that the veil will not make the room appear lighter. Because mutual illumination also modifies the pattern of luminance gradients across the room while a veil does not, presenting observers with a black room viewed through a veiling luminance let us tease apart local luminance gradients from overall luminance contrast.
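The shared mechanism is easy to state quantitatively: a veil adds the same luminance to every image point, which compresses Michelson contrast much as mutual illumination does. A minimal sketch, with the luminance values and veil strengths chosen purely for illustration:

```python
# Minimal sketch: how a uniform veiling luminance reduces Michelson contrast.
# The luminances and veil values are assumed for illustration, not the study's.

def michelson_contrast(l_max, l_min, veil=0.0):
    """Michelson contrast of a luminance pair after adding a uniform veil."""
    l_max, l_min = l_max + veil, l_min + veil
    return (l_max - l_min) / (l_max + l_min)

# A dark edge in a black room: contrast drops as the veil grows.
for veil in (0.0, 1.0, 5.0, 20.0):
    print(veil, michelson_contrast(l_max=2.0, l_min=0.5, veil=veil))
```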
We examined how well human observers can discriminate the density of surfaces in two halves of a rotating three-dimensional cluttered sphere. The observer's task was to compare the density of the front versus back half or the left versus right half. We measured how the bias and sensitivity in judging the denser half depended on the level of occlusion and on the area and density of the surfaces in the clutter.
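One standard way to quantify "bias and sensitivity" in such a two-alternative judgment is signal detection theory; the excerpt does not say whether the study used exactly this formulation, so the sketch below (with made-up counts) is illustrative only:

```python
# Hedged sketch of the signal-detection quantities behind "bias and
# sensitivity"; the hit/false-alarm counts are made-up illustrative numbers.
from scipy.stats import norm

def dprime_and_bias(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and criterion c from a two-category judgment."""
    h = hits / (hits + misses)                               # hit rate
    f = false_alarms / (false_alarms + correct_rejections)   # false-alarm rate
    d_prime = norm.ppf(h) - norm.ppf(f)
    criterion = -0.5 * (norm.ppf(h) + norm.ppf(f))           # response bias
    return d_prime, criterion

print(dprime_and_bias(hits=80, misses=20, false_alarms=30, correct_rejections=70))
```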
In three-dimensional (3-D) cluttered scenes such as foliage, deeper surfaces are often more shadowed and hence darker, so depth and luminance often have negative covariance. We examined whether the sign of depth-luminance covariance plays a role in depth perception in 3-D clutter. We compared scenes rendered with negative and positive depth-luminance covariance, where positive covariance means that deeper surfaces are brighter and negative covariance means that deeper surfaces are darker.
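As a concrete illustration of the manipulation, the sketch below assigns luminance as a linear function of depth with a chosen sign; the linear form and its constants are assumptions for illustration, not the study's rendering model:

```python
# Sketch of the covariance manipulation: deeper surfaces are made darker
# (negative covariance) or brighter (positive covariance). The linear
# depth-to-luminance mapping and its constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
depth = rng.uniform(1.0, 5.0, size=1000)   # depths within the clutter volume

def luminance(depth, sign):
    """Map depth to luminance with the chosen covariance sign (+1 or -1)."""
    z = (depth - depth.min()) / (depth.max() - depth.min())  # normalize to [0, 1]
    return 0.5 + sign * 0.4 * (z - 0.5)    # brighter or darker with depth

for sign in (-1, +1):
    lum = luminance(depth, sign)
    print(sign, np.corrcoef(depth, lum)[0, 1])   # check the covariance sign
```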
We perform two psychophysics experiments to investigate a viewer's ability to detect defocus in video, in particular the defocus that arises during motion in depth when the camera does not maintain sharp focus throughout the motion. The first experiment demonstrates that blur sensitivity during viewing is affected by the speed at which the target moves towards the camera. The second experiment measures a viewer's ability to notice momentary defocus and shows that the blur-detection threshold, in arc minutes, decreases significantly as the duration of the blur increases.
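The geometry behind this defocus is the thin-lens relation: a point at distance z, with the camera focused at z0, is imaged as a blur disc of angular diameter roughly A|1/z - 1/z0|, where A is the aperture diameter. A sketch with assumed parameter values:

```python
# Sketch of the thin-lens geometry that produces defocus during motion in
# depth. The aperture and distances below are assumed values, not the study's.
import math

def blur_arcmin(z, z0, aperture_m=0.005):
    """Angular defocus blur in arc minutes under the thin-lens approximation."""
    beta_rad = aperture_m * abs(1.0 / z - 1.0 / z0)
    return math.degrees(beta_rad) * 60.0

# A target approaching the camera while focus stays fixed at 2 m:
for z in (2.0, 1.5, 1.0, 0.5):
    print(f"z = {z} m -> blur = {blur_arcmin(z, z0=2.0):.2f} arcmin")
```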
Objects such as trees, shrubs, and tall grass consist of thousands of small surfaces that are distributed over a three-dimensional (3D) volume. To perceive the depth of surfaces within 3D clutter, a visual system can use binocular stereo and motion parallax. However, such parallax cues are less reliable in 3D clutter because surfaces tend to be partly occluded.
The image blur and binocular disparity of a 3D scene point both increase with distance in depth away from fixation. Perceived depth from disparity has been studied extensively and is known to be most precise near fixation. Perceived depth from blur is much less well understood.
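One way to see why the two cues pattern together: both grow in proportion to the dioptric offset from fixation, |1/z - 1/z0|, with disparity scaled by the interocular separation and blur by the pupil diameter. The parameter values in this sketch are textbook-style assumptions, not the paper's data:

```python
# Hedged sketch of the parallel geometry of disparity and defocus blur.
# Both scale with the dioptric distance from fixation; the interocular
# distance and pupil diameter below are assumed, typical values.
import math

IOD_M = 0.064      # assumed interocular distance (m)
PUPIL_M = 0.004    # assumed pupil diameter (m)

def arcmin(rad):
    return math.degrees(rad) * 60.0

def disparity_and_blur(z, z_fix):
    dioptric_offset = abs(1.0 / z - 1.0 / z_fix)
    return arcmin(IOD_M * dioptric_offset), arcmin(PUPIL_M * dioptric_offset)

for z in (0.5, 0.8, 1.0, 1.5):   # fixation held at 1 m
    d, b = disparity_and_blur(z, z_fix=1.0)
    print(f"z = {z} m: disparity ~ {d:.1f} arcmin, blur ~ {b:.2f} arcmin")
```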
The human visual system has a remarkable ability to perceive three-dimensional (3-D) surface shape from shading and specular reflections. This paper presents two experiments that examined the perception of local qualitative shape under various conditions. Surfaces were rendered using standard computer graphics models of matte, glossy, and mirror reflectance and were viewed from a small oblique angle to avoid occluding contour shape cues.
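For concreteness, the "matte" and "glossy" components of such standard models can be sketched with a Blinn-Phong-style shader; the vectors and coefficients below are illustrative assumptions, not the stimuli's actual parameters:

```python
# A minimal Blinn-Phong-style sketch of matte (diffuse) plus glossy (specular)
# shading; all coefficients and vectors are illustrative assumptions.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def shade(n, light_dir, view_dir, kd=0.7, ks=0.3, shininess=50.0):
    """Diffuse (matte) + specular (glossy) shading for a surface normal n."""
    n, l, v = normalize(n), normalize(light_dir), normalize(view_dir)
    h = normalize(l + v)                                  # half-vector
    diffuse = kd * max(np.dot(n, l), 0.0)                 # Lambertian (matte) term
    specular = ks * max(np.dot(n, h), 0.0) ** shininess   # glossy highlight
    return diffuse + specular

print(shade(n=np.array([0.0, 0.0, 1.0]),
            light_dir=np.array([0.3, 0.3, 1.0]),
            view_dir=np.array([0.0, 0.0, 1.0])))
```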
Three-dimensional (3D) cluttered scenes consist of a large number of small surfaces distributed randomly in a 3D view volume. The canonical example is the foliage of a tree or bush. 3D cluttered scenes are challenging for vision tasks such as object recognition and depth perception because most surfaces or objects are only partly visible.
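A common way to model this partial visibility, offered here only as an assumption-laden sketch, is a Poisson occlusion model in which the probability that the line of sight to depth z is clear decays exponentially with the clutter's density and occluder area:

```python
# Hedged sketch of why deeper surfaces in 3D clutter are mostly hidden: under
# a Poisson model of independently placed occluders, the chance that the line
# of sight to depth z is clear falls off exponentially. The density and area
# values are illustrative assumptions.
import math

def p_visible(z, density_per_m3, occluder_area_m2):
    """Probability that a surface at depth z into the clutter is unoccluded."""
    attenuation = density_per_m3 * occluder_area_m2   # expected occluders per meter
    return math.exp(-attenuation * z)

for z in (0.1, 0.5, 1.0, 2.0):
    print(f"z = {z} m: P(visible) = {p_visible(z, 200.0, 0.01):.3f}")
```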
J Opt Soc Am A Opt Image Sci Vis, September 2005
Previous methods for estimating observer motion in a rigid 3D scene assume that image velocities can be measured at isolated points. When the observer is moving through a cluttered 3D scene such as a forest, however, pointwise measurements of image velocity are more challenging to obtain because multiple depths, and hence multiple velocities, are present in most local image regions. We introduce a method for estimating egomotion that avoids pointwise image velocity estimation as a first step.
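For context, the pointwise model that such first-step methods invert is the classical rigid motion field of Longuet-Higgins and Prazdny, in which each image velocity mixes a depth-dependent translational term with a depth-independent rotational term; a sketch in normalized image coordinates, with all inputs illustrative:

```python
# The classical rigid-scene motion field that pointwise methods invert; the
# paper's own method avoids this pointwise step. Coordinates are normalized
# (focal length 1), and all inputs below are illustrative assumptions.
import numpy as np

def motion_field(x, y, inv_depth, T, Omega):
    """Image velocity (u, v) at (x, y) for translation T and rotation Omega."""
    Tx, Ty, Tz = T
    wx, wy, wz = Omega
    u = (-Tx + x * Tz) * inv_depth + wx * x * y - wy * (1 + x**2) + wz * y
    v = (-Ty + y * Tz) * inv_depth + wx * (1 + y**2) - wy * x * y - wz * x
    return u, v

# Two depths at the same image point give two different velocities, which is
# exactly what makes pointwise estimates unreliable in 3D clutter.
for z in (2.0, 10.0):
    print(z, motion_field(0.1, -0.2, 1.0 / z, T=(0.1, 0.0, 1.0), Omega=(0.0, 0.02, 0.0)))
```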