Purpose: To determine whether monocularly and binocularly induced spherical and meridional blur and aniseikonia had similar effects on stereopsis thresholds.
Methods: Twelve participants with normal binocular vision viewed McGill modified random-dot stereograms to determine stereoacuities in a four-alternative forced-choice procedure. Astigmatism was induced by placing trial lenses in front of the eyes. Twenty-three conditions were used, consisting of zero (no lens); +1 D and +2 D spheres and cylinders at axes 180, 45 and 90 in front of the right eye; and the following binocular combinations of both lens powers: R × 180/L × 180, R × 45/L × 45, R × 90/L × 90, R sphere/L sphere, R × 180/L × 90, R × 45/L × 135, R × 90/L × 180. Aniseikonia was induced by placing magnifying lenses in front of the eyes. Twenty-three conditions were used, consisting of zero; 6% and 12% overall magnification and both magnifications at axes 180, 45 and 90 in front of the right eye only; and the following binocular combinations using 3% and 6% lenses: R × 90/L × 90, R × 45/L × 45, R × 180/L × 180, R overall/L overall, R × 90/L × 180, R × 45/L × 135 and R × 180/L × 90.
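For clarity, here is a minimal Python sketch (not part of the study) that enumerates the two condition sets described above and checks that each contains 23 entries; the labels are informal shorthand rather than the authors' notation.

```python
# Minimal sketch of the two 23-condition sets described in the Methods.
# Labels are informal shorthand; "x180" means a cylindrical/meridional lens at axis 180.

def blur_conditions():
    """Trial-lens blur conditions: no lens, monocular (right eye) and binocular."""
    conditions = ["no lens"]
    for power in ["+1 D", "+2 D"]:
        # Monocular: sphere and cylinders at axes 180, 45 and 90 before the right eye.
        for form in ["sphere", "x180", "x45", "x90"]:
            conditions.append(f"R {power} {form}")
    binocular = [("x180", "x180"), ("x45", "x45"), ("x90", "x90"),
                 ("sphere", "sphere"), ("x180", "x90"), ("x45", "x135"), ("x90", "x180")]
    for power in ["+1 D", "+2 D"]:          # each combination at both lens powers
        for right, left in binocular:
            conditions.append(f"R {power} {right} / L {power} {left}")
    return conditions


def aniseikonia_conditions():
    """Magnifying-lens conditions: none, monocular (right eye) and binocular."""
    conditions = ["no magnification"]
    for mag in ["6%", "12%"]:
        # Monocular: overall and meridional magnification at axes 180, 45 and 90.
        for form in ["overall", "x180", "x45", "x90"]:
            conditions.append(f"R {mag} {form}")
    binocular = [("x90", "x90"), ("x45", "x45"), ("x180", "x180"),
                 ("overall", "overall"), ("x90", "x180"), ("x45", "x135"), ("x180", "x90")]
    for mag in ["3%", "6%"]:                # binocular combinations used 3% and 6% lenses
        for right, left in binocular:
            conditions.append(f"R {mag} {right} / L {mag} {left}")
    return conditions


assert len(blur_conditions()) == 23
assert len(aniseikonia_conditions()) == 23
```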
Results: Stereopsis losses for binocular blur with parallel axes (non-anisometropic) were the same as for monocular blur at the same axes, and both depended strongly on axis (spherical blur and ×90 produced the greatest losses). Binocular blur with orthogonal axes had greater effects than with parallel axes, and the particular orthogonal axis combination made no difference (e.g. R × 90/L × 180 was similar to R × 45/L × 135). For induced aniseikonia, splitting the magnification between the eyes improved stereopsis slightly, and the effects did not depend on axis.
Conclusion: Binocular blur affects stereopsis similarly to monocular meridional blur if axes in the two eyes are parallel, whereas the effect is greater if the axes are orthogonal. In meridional aniseikonia, splitting magnification between the right and left lenses produces a small improvement in stereopsis that is independent of axis direction and right/left combination.
DOI: http://dx.doi.org/10.1111/opo.12724
Invest Ophthalmol Vis Sci, October 2024. Spencer Center for Vision Research, Byers Eye Institute at Stanford University, Palo Alto, California, United States.
Purpose: Concussed adolescents often report visual symptoms, especially for moving targets, but the mechanisms resulting in oculomotor deficits remain unclear. We objectively measured accommodative and vergence responses to a moving target in concussed adolescents and controls.
Methods: Thirty-two symptomatic concussed participants (mean age, 14.
J Vis, October 2024. McGill Vision Research, Department of Ophthalmology and Visual Sciences, McGill University; McGill University Health Center, Montreal, QC, Canada.
Life (Basel), September 2024. Departament d'Òptica i Optometria (DOO), Universitat Politècnica de Catalunya (UPC), Campus de Terrassa, Edifici TR8, C. Violinista Vellsolà, 37, 08222 Terrassa, Spain.
Fusional vergence range tests are commonly used in optometric practice. The aim of this study was to investigate the possible contribution of CA/C, AC/A and proximal cues (PCT) to the magnitude and presence of blur and recovery during the measurement of fusional vergence ranges, and to determine whether the occurrence of blur is influenced by these vergence and accommodation cues. A total of 27 participants with normal binocular vision were included, and their AC/A, CA/C and PCT ratios were evaluated.
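As background for the ratios named above, the sketch below shows how gradient AC/A and CA/C values are conventionally computed from clinical measurements; it is a generic illustration under an assumed sign convention (eso positive, exo negative), not the analysis used in this study.

```python
# Generic illustration of the AC/A and CA/C ratios (not this study's analysis code).
# Sign convention assumed: eso deviations positive, exo negative, in prism dioptres.

def gradient_ac_a(phoria_no_add, phoria_with_add, add_power_d):
    """Gradient AC/A in prism dioptres per dioptre.

    A +1.00 D add lowers the accommodative stimulus by 1.00 D, so the change in
    accommodative stimulus is the negative of the added lens power.
    """
    return (phoria_with_add - phoria_no_add) / (-add_power_d)


def ca_c(accom_low_demand, accom_high_demand, vergence_change_pd):
    """CA/C in dioptres per prism dioptre: the change in accommodation measured at
    two vergence demands (with blur cues degraded), divided by the vergence change."""
    return (accom_high_demand - accom_low_demand) / vergence_change_pd


# Example: 1 eso at near without an add, 3 exo through a +1.00 D add
print(gradient_ac_a(phoria_no_add=+1, phoria_with_add=-3, add_power_d=+1.00))  # 4.0 prism D/D
# Example: accommodation rises from 1.50 D to 2.10 D over a 12 prism-dioptre base-out change
print(ca_c(1.50, 2.10, 12))  # 0.05 D per prism dioptre
```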
Sensors (Basel), August 2024. School of Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China.
In this paper, we propose Mix-VIO, a monocular and binocular visual-inertial odometry system, to address the failure of conventional visual front-end tracking under dynamic lighting and image blur. Mix-VIO adopts a hybrid tracking approach, combining traditional handcrafted tracking with deep neural network (DNN)-based feature extraction and matching pipelines. The system employs deep learning for rapid feature-point detection, while integrating traditional optical flow and DNN-based sparse feature matching to enhance front-end tracking under rapid camera motion and environmental illumination changes.
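As a speculative sketch of the hybrid front-end idea (not the Mix-VIO implementation), the code below tracks points with pyramidal Lucas-Kanade optical flow and falls back to a learned detector/matcher when too few tracks survive; dnn_detect_and_match and MIN_TRACKS are hypothetical placeholders.

```python
# Speculative sketch of a hybrid (handcrafted + learned) tracking front end.
# Not the Mix-VIO code; dnn_detect_and_match stands in for a learned
# detector/matcher (e.g. a SuperPoint/SuperGlue-style pipeline).

import cv2
import numpy as np

MIN_TRACKS = 80  # assumed threshold below which tracking is considered degraded


def dnn_detect_and_match(prev_img, cur_img):
    """Placeholder: return matched points as two float32 arrays of shape (N, 1, 2)."""
    raise NotImplementedError


def hybrid_track(prev_img, cur_img, prev_pts):
    """prev_pts: float32 array of shape (N, 1, 2) with point coordinates."""
    # Handcrafted path: pyramidal Lucas-Kanade optical flow.
    cur_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_img, cur_img, prev_pts, None, winSize=(21, 21), maxLevel=3)
    ok = status.reshape(-1) == 1
    prev_ok, cur_ok = prev_pts[ok], cur_pts[ok]

    # Learned path: if too few tracks survive (fast motion, blur, lighting change),
    # supplement the survivors with DNN-based sparse matches.
    if len(cur_ok) < MIN_TRACKS:
        p_extra, c_extra = dnn_detect_and_match(prev_img, cur_img)
        prev_ok = np.vstack([prev_ok, p_extra])
        cur_ok = np.vstack([cur_ok, c_extra])
    return prev_ok, cur_ok
```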