Static and dynamic observers provided binocular and monocular estimates of the depths between real objects lying well beyond interaction space. On each trial, pairs of LEDs were presented inside a dark railway tunnel. The nearest LED was always 40 m from the observer, with the depth separation between LED pairs ranging from 0 up to 248 m. Dynamic binocular viewing was found to produce the greatest (ie most veridical) estimates of depth magnitude, followed next by static binocular viewing, and then by dynamic monocular viewing. (No significant depth was seen with static monocular viewing.) We found evidence that both binocular and monocular dynamic estimates of depth were scaled for the observation distance when the ground plane and walls of the tunnel were visible up to the nearest LED. We conclude that both motion parallax and stereopsis provide useful long-distance depth information and that motion-parallax information can enhance the degree of stereoscopic depth seen.
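For context on why depth estimates must be scaled by observation distance, the standard small-angle geometry offers a minimal sketch (the symbols below, interocular separation I, viewing distance D, depth interval \Delta d, and relative disparity \delta, are conventional and are not taken from the article itself):

\[ \delta \approx \frac{I \, \Delta d}{D^{2}} \quad\Longrightarrow\quad \Delta d \approx \frac{\delta \, D^{2}}{I} . \]

Recovering a depth interval from a given disparity therefore requires an estimate of D squared, so the same disparity corresponds to a far larger depth at 40 m than within interaction space; the analogous motion-parallax relation replaces I with the magnitude of the observer's head translation.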


Source: http://dx.doi.org/10.1068/p6868

Publication Analysis

Top Keywords (term, frequency): motion parallax (8); interaction space (8); binocular monocular (8); nearest LED (8); binocular viewing (8); estimates depth (8); monocular viewing (8); depth (7); binocular (5); depth interval (4)

Similar Publications

Holographic displays have the potential to reconstruct natural light field information, making them highly promising for applications in augmented reality (AR), head-up displays (HUD), and new types of transparent three-dimensional (3D) displays. However, current spatial light modulators (SLMs) are constrained by pixel size and resolution, limiting display size. Additionally, existing holographic displays have narrow viewing angles due to device diffraction limits, algorithms, and optical configurations.

High-quality light-field generation of real scenes based on view synthesis remains a significant challenge in three-dimensional (3D) light-field displays. Recent advances in neural radiance fields have greatly enhanced light-field generation. However, challenges persist in synthesizing high-quality cylindrical viewpoints within a short time.

Objects project different images when viewed from varying locations, but the visual system can correct perspective distortions and identify objects across viewpoints. This study investigated the conditions under which the visual system allocates computational resources to construct view-invariant, extraretinal representations, focusing on planar symmetry. When a symmetrical pattern lies on a plane, its symmetry in the retinal image is degraded by perspective.

Relating visual and pictorial space: Integration of binocular disparity and motion parallax.

J Vis

December 2024

BioMotionLab, Centre for Vision Research and Department of Biology, York University, Toronto, Ontario, Canada.

Traditionally, perceptual spaces are defined by the medium through which the visual environment is conveyed (e.g., in a physical environment, through a picture, or on a screen).

Sensory neurons often encode multisensory or multimodal signals. For example, many medial superior temporal (MST) neurons are tuned to heading direction of self-motion based on visual (optic flow) signals and vestibular signals. Middle temporal (MT) cortical neurons are tuned to object depth from signals of two visual modalities: motion parallax and binocular disparity.
