Traditionally, perceptual spaces are defined by the medium through which the visual environment is conveyed (e.g., in a physical environment, through a picture, or on a screen). This approach overlooks the distinct contributions of different types of visual information, such as binocular disparity and motion parallax, that transform different visual environments to yield different perceptual spaces. The current study proposes a new approach to describe different perceptual spaces based on different visual information. A geometrical model was developed to delineate the transformations imposed by binocular disparity and motion parallax, including (a) a relief depth scaling along the observer's line of sight and (b) pictorial distortions that rotate the entire perceptual space, as well as the invariant properties after these transformations, including distance, three-dimensional shape, and allocentric direction. The model was fitted to the behavioral results from two experiments, wherein the participants rotated a human figure to point at different targets in virtual reality. The pointer was displayed on a virtual frame that could differentially manipulate the availability of binocular disparity and motion parallax. The model fitted the behavioral results well, and model comparisons validated the relief scaling in the form of depth expansion and the pictorial distortions in the form of an isotropic rotation. Fitted parameters showed that binocular disparity renders distance invariant but also introduces relief depth expansion to three-dimensional objects, whereas motion parallax keeps allocentric direction invariant. We discuss the implications of the mediating effects of binocular disparity and motion parallax when connecting different perceptual spaces.
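The two transformations the abstract names, (a) a relief depth scaling along the observer's line of sight and (b) a pictorial distortion modeled as a rotation of the whole perceptual space, can be illustrated with a minimal geometric sketch. All function and parameter names below are hypothetical illustrations, not the paper's fitted model:

```python
import numpy as np

def perceptual_transform(points, gaze_dir, depth_gain, yaw_deg):
    """Toy illustration of (a) relief depth scaling along the line of
    sight and (b) an isotropic rotation of the whole space.
    Parameter names are hypothetical, not from the paper."""
    gaze = np.asarray(gaze_dir, dtype=float)
    gaze /= np.linalg.norm(gaze)
    pts = np.asarray(points, dtype=float)
    # (a) expand/compress each point's component along the gaze axis
    along = pts @ gaze                      # signed depth of each point
    pts = pts + np.outer(along * (depth_gain - 1.0), gaze)
    # (b) rotate the entire space about the vertical (y) axis
    t = np.radians(yaw_deg)
    R = np.array([[np.cos(t), 0.0, np.sin(t)],
                  [0.0,       1.0, 0.0],
                  [-np.sin(t), 0.0, np.cos(t)]])
    return pts @ R.T

# depth_gain > 1 expands relief depth, as reported for binocular disparity
pts_out = perceptual_transform([[0.0, 0.0, 2.0], [0.1, 0.0, 2.5]],
                               gaze_dir=[0, 0, 1],
                               depth_gain=1.3, yaw_deg=0.0)
```

With `depth_gain = 1.3` and no rotation, only the gaze-aligned (z) component stretches, which is the "relief" character of the scaling: directions orthogonal to the line of sight are untouched.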
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11640909
DOI: http://dx.doi.org/10.1167/jov.24.13.7
Biomimetics (Basel)
December 2024
School of Mechanical Engineering and Automation, Harbin Institute of Technology Shenzhen, Shenzhen 518055, China.
Inspired by the biological eye movements of fish such as pipefish and sandlances, this paper presents a novel dynamic calibration method for active stereo vision systems that addresses the challenges of active cameras with varying fields of view (FOVs). The method combines static calibration based on camera rotation angles with dynamic updates of the extrinsic parameters, leveraging relative pose adjustments between the rotation axis and the cameras to keep the extrinsics current in real time. This enables epipolar rectification as the FOV changes, and in turn precise disparity computation and accurate depth acquisition.
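The payoff of keeping extrinsics current is that the rectified geometry stays valid, so depth follows directly from disparity via the standard pinhole relation Z = f·B/d. A minimal sketch with hypothetical numbers (the paper's calibration pipeline itself is not reproduced here):

```python
import numpy as np

# After epipolar rectification, depth follows from disparity:
#   Z = f * B / d
# with focal length f in pixels and baseline B in meters.
# The values below are hypothetical placeholders.
f_px = 800.0          # focal length in pixels
baseline_m = 0.12     # camera baseline in meters

def disparity_to_depth(disparity_px):
    d = np.asarray(disparity_px, dtype=float)
    # zero/negative disparity means no valid match -> infinite depth
    return np.where(d > 0, f_px * baseline_m / d, np.inf)

depths = disparity_to_depth([48.0, 96.0])   # larger disparity -> nearer
```

If the extrinsics drift as the cameras rotate, the rectification assumption behind this one-line formula breaks, which is why the continuous extrinsic update matters.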
Int J Comput Assist Radiol Surg
December 2024
Department of Cardiothoracic Surgery, Erasmus University Medical Center, Rotterdam, The Netherlands.
Purpose: In this feasibility study, we aimed to create a dedicated pulmonary augmented reality (AR) workflow to enable a semi-automated intraoperative overlay of the pulmonary anatomy during video-assisted thoracoscopic surgery (VATS) or robot-assisted thoracoscopic surgery (RATS).
Methods: Initially, the stereoscopic cameras were calibrated to obtain the intrinsic camera parameters. Intraoperatively, stereoscopic images were recorded and a 3D point cloud was generated from these images.
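Going from a rectified stereo pair to a 3D point cloud is a back-projection through the pinhole model. The sketch below assumes a dense disparity map and hypothetical intrinsics; a calibrated system such as the one described would substitute its own f, cx, cy, and baseline:

```python
import numpy as np

# Minimal back-projection sketch: rectified disparity map -> 3D points.
# Intrinsics are hypothetical placeholders, not the study's values.
f, cx, cy, B = 700.0, 320.0, 240.0, 0.05

def disparity_to_cloud(disp):
    h, w = disp.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    Z = np.where(disp > 0, f * B / np.maximum(disp, 1e-6), np.nan)
    X = (u - cx) * Z / f                  # pinhole back-projection
    Y = (v - cy) * Z / f
    pts = np.dstack([X, Y, Z]).reshape(-1, 3)
    return pts[~np.isnan(pts[:, 0])]      # drop invalid pixels

cloud = disparity_to_cloud(np.full((4, 4), 35.0))
```

Each valid pixel yields one 3D point; invalid (non-positive) disparities are discarded rather than back-projected.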
PLoS One
December 2024
Centre for Vision Research, York University, Toronto, ON, Canada.
During locomotion, the visual system can factor out the motion component caused by observer locomotion from the complex target flow vector to obtain the world-relative target motion. This process, which has been termed flow parsing, is known to be incomplete, but viewing with both eyes could potentially aid in this task. Binocular disparity and binocular summation could both improve performance when viewing with both eyes.
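In vector terms, flow parsing amounts to subtracting the predicted self-motion component from the retinal flow of the target; a gain below 1 captures the reported incompleteness. A toy sketch with hypothetical values:

```python
import numpy as np

def parse_flow(retinal_flow, self_motion_flow, gain=0.8):
    """Subtract the (scaled) self-motion flow component from the
    target's retinal flow to estimate world-relative motion.
    gain = 1 would be complete flow parsing; gain < 1 models the
    incompleteness described above (value is hypothetical)."""
    retinal = np.asarray(retinal_flow, dtype=float)
    ego = np.asarray(self_motion_flow, dtype=float)
    return retinal - gain * ego

# A target whose retinal flow is entirely due to locomotion: the
# residual should be zero under complete parsing, but some
# self-motion flow leaks through when gain < 1.
residual = parse_flow([2.0, 0.0], [2.0, 0.0], gain=0.8)
```

The leftover residual is exactly the kind of bias a binocular advantage (via disparity or summation) could reduce by improving the self-motion estimate.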
Cont Lens Anterior Eye
December 2024
Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai 200031, China; NHC Key Laboratory of Myopia and Related Eye Diseases, Key Laboratory of Myopia and Related Eye Diseases, Chinese Academy of Medical Sciences, Shanghai 200031, China; Shanghai Research Center of Ophthalmology and Optometry, Shanghai 200031, China; Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care (20DZ2255000), China.
Purpose: Based on ideal outcomes of corneal topography following orthokeratology (OK), an innovative machine learning algorithm for corneal refractive therapy (CRT) was developed to investigate the precision of artificial intelligence (AI)-assisted OK lens fitting.
Methods: A total of 797 eyes that had been fitted with CRT lenses and demonstrated good lens centration with an intact plus power ring on topography were retrospectively included. The comprehensive AI model incorporated spherical refraction, keratometry readings, eccentricity, corneal astigmatism, horizontal visible iris diameter, inferior-superior index, surface asymmetry index, surface regularity index, and the 8-mm chordal corneal height difference.
Sensors (Basel)
December 2024
Department of Information and Communication Engineering, Korea University of Technology and Education (KOREATECH), Cheonan-si 31253, Republic of Korea.
This paper presents a novel method to enhance ground truth disparity maps generated by Semi-Global Matching (SGM) using Maximum a Posteriori (MAP) estimation. Although SGM outputs are less visually polished than those of neural networks, SGM offers high disparity accuracy in valid regions and avoids the generalization issues often encountered with neural-network-based disparity estimation. However, SGM struggles with occlusions and textureless areas, leading to invalid disparity values.
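To make the idea of MAP-based repair concrete: under a Laplacian noise model with a flat prior, the MAP estimate for an invalid pixel given its valid neighbors is simply their median. The sketch below uses that crude stand-in; the paper's actual MAP formulation is more elaborate, and all names here are illustrative:

```python
import numpy as np

INVALID = -1.0   # hypothetical sentinel for invalid SGM disparities

def fill_invalid(disp, radius=1):
    """Replace each invalid disparity with the median of valid
    neighbors -- the MAP estimate under a Laplacian noise model with a
    flat prior. A crude stand-in for a fuller MAP refinement."""
    out = disp.astype(float).copy()
    h, w = disp.shape
    for y, x in zip(*np.where(disp == INVALID)):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        patch = disp[y0:y1, x0:x1]
        valid = patch[patch != INVALID]
        if valid.size:
            out[y, x] = np.median(valid)
    return out

dmap = np.array([[10.0, 10.0, 12.0],
                 [10.0, INVALID, 12.0],
                 [10.0, 10.0, 12.0]])
filled = fill_invalid(dmap)
```

The median is robust to the occasional wrong neighbor, which matters near occlusion boundaries where SGM's invalid pixels cluster.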