Purpose: To compare intermediate visual outcomes in patients previously implanted with bilateral Clareon monofocal IOLs versus bilateral Eyhance IOLs.
Methods: This was a non-interventional, single-center, examiner-masked, comparative study. Participants were cataract patients presenting at least 3 months after uncomplicated bilateral implantation of either Clareon or Eyhance non-toric and toric IOLs. Outcome measures included binocular distance-corrected intermediate visual acuity (DCIVA), binocular corrected distance visual acuity (CDVA), binocular best-corrected defocus curve, postoperative mean residual spherical equivalent (MRSE), and residual astigmatism.
Results: A total of 620 eyes of 310 subjects (155 subjects per group) were evaluated. The mean difference in DCIVA between the Eyhance and Clareon IOLs was 0.05 logMAR, which was statistically significant (p < 0.01) but within the 0.1 logMAR non-inferiority margin. Mean CDVA was 0.01 ± 0.03 logMAR in the Clareon group compared with 0.02 ± 0.03 logMAR in the Eyhance group (p > 0.05). Defocus curves from +1.0 D to -3.0 D were neither clinically nor statistically different between the Clareon and Eyhance groups (p > 0.05).
Conclusion: The results of this study show that bilateral implantation of Clareon monofocal IOLs and Eyhance monofocal IOLs leads to similar distance and intermediate visual outcomes.
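As a rough illustration of the non-inferiority logic described above, the Python sketch below compares the upper bound of a 95% confidence interval for the mean DCIVA difference against the 0.1 logMAR margin. Only the 0.05 logMAR mean difference, the 155-subject group size, and the 0.1 logMAR margin come from the abstract; the group standard deviations are hypothetical placeholders, and the study's actual statistical analysis may have differed.

```python
# Minimal sketch of a non-inferiority check on DCIVA (logMAR).
# Hypothetical SDs; n = 155 per group and the 0.1 logMAR margin are from the abstract.
import math
from scipy import stats

n1 = n2 = 155                 # subjects per group (reported)
mean_diff = 0.05              # Eyhance minus Clareon DCIVA (logMAR, reported)
sd1 = sd2 = 0.10              # hypothetical standard deviations
margin = 0.10                 # non-inferiority margin (logMAR, reported)

# Standard error of the difference in means (independent samples).
se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
df = n1 + n2 - 2

# Two-sided 95% CI for the mean difference; non-inferiority holds if the
# upper bound stays below the margin.
upper = mean_diff + stats.t.ppf(0.975, df) * se
print(f"95% CI upper bound: {upper:.3f} logMAR")
print("Non-inferior" if upper < margin else "Non-inferiority not shown")
```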
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10749565 | PMC
http://dx.doi.org/10.2147/OPTH.S444696 | DOI Listing
J Imaging
January 2025
Department of Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, IL 60616, USA.
The integration of artificial intelligence into daily life significantly enhances the autonomy and quality of life of visually impaired individuals. This paper introduces the Visual Impairment Spatial Awareness (VISA) system, designed to holistically assist visually impaired users in indoor activities through a structured, multi-level approach. At the foundational level, the system employs augmented reality (AR) markers for indoor positioning, neural networks for advanced object detection and tracking, and depth information for precise object localization.
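As one hedged illustration of how depth information can turn a 2D detection into a 3D object position (this is not the paper's pipeline), the Python sketch below back-projects a detected bounding-box centre through a pinhole camera model. The camera intrinsics, depth map, and detection are all assumed inputs that could come from any detector and depth sensor.

```python
# Hedged sketch: bounding box + depth map -> 3D position in the camera frame.
import numpy as np

def localize_object(bbox, depth_map, fx, fy, cx, cy):
    """Return (X, Y, Z) in metres for the centre pixel of a detection bbox."""
    x_min, y_min, x_max, y_max = bbox
    u = int((x_min + x_max) / 2)          # pixel column of the bbox centre
    v = int((y_min + y_max) / 2)          # pixel row of the bbox centre
    z = float(depth_map[v, u])            # metric depth at that pixel
    x = (u - cx) * z / fx                 # back-project with the pinhole model
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example with synthetic data (hypothetical intrinsics, flat 2 m depth map).
depth = np.full((480, 640), 2.0)
pos = localize_object((300, 200, 340, 260), depth,
                      fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(pos)                                # roughly [0.00, -0.04, 2.00]
```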
Neuroimage
January 2025
Department of Biological and Health Psychology, Faculty of Psychology, Universidad Autónoma de Madrid, Campus de Cantoblanco, 28049 Madrid, Spain.
Will our brains get to know a new face better if we look at its external features first? Here we offer neurophysiological evidence of the relevance of external versus internal facial features for constructing new face representations, by contrasting successful face processing with a prototypical case of face agnosia. A woman with acquired prosopagnosia (E.C.
Retin Cases Brief Rep
January 2025
Herbert Wertheim College of Medicine, Florida International University, Miami, Florida, USA.
Purpose: To report a case of drusen regression following pars plana vitrectomy with internal limiting membrane peel (ILMP) in a patient with a full-thickness macular hole and dry age-related macular degeneration (AMD).
Methods: A 67-year-old gentleman presented in April 2024 with a full-thickness macular hole in OS and intermediate dry AMD OU. The patient underwent pars plana vitrectomy, ILMP, and an injection of sulfur hexafluoride gas for macular hole repair in OS.
J Cataract Refract Surg
January 2025
University of Plymouth, Plymouth, UK.
Purpose: To evaluate visual outcomes following bilateral implantation of the RayOne EMV intraocular lens with targeted micro-monovision.
Setting: Southend Private Hospital, UK.
Design: Retrospective cohort.
Heliyon
January 2025
Centre for Tactile Internet with Human-in-the-Loop (CeTI), 6G Life, Technische Universität Dresden, Germany.
Recent research has highlighted a notable confidence bias in the haptic sense, yet its impact on learning relative to other senses remains unexplored. This online study investigated learning behaviour across visual, auditory, and haptic modalities using a probabilistic selection task on computers and mobile devices, employing dynamic and ecologically valid stimuli to enhance generalisability. We analysed reaction time as an indicator of confidence, alongside learning speed and task accuracy.
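For readers unfamiliar with the paradigm, the hedged Python sketch below simulates a generic probabilistic selection task with a simple Q-learning agent and reports overall accuracy. The study itself used human participants and dynamic, ecologically valid stimuli; the reward probabilities, learning rate, and update rule here are illustrative assumptions only.

```python
# Hedged sketch of a probabilistic selection task with a toy Q-learning agent.
import random

PAIRS = [(0.8, 0.2), (0.7, 0.3), (0.6, 0.4)]   # hypothetical reward probabilities per pair
ALPHA = 0.1                                    # learning rate (assumed)
EPSILON = 0.1                                  # exploration rate (assumed)

q = {}                                         # learned value per (pair, option)
correct = 0
n_trials = 300

for _ in range(n_trials):
    i = random.randrange(len(PAIRS))           # present a random stimulus pair
    values = [q.get((i, 0), 0.5), q.get((i, 1), 0.5)]
    if random.random() < EPSILON:
        choice = random.randrange(2)           # occasional exploratory choice
    else:
        choice = 0 if values[0] >= values[1] else 1
    reward = random.random() < PAIRS[i][choice]             # probabilistic feedback
    q[(i, choice)] = values[choice] + ALPHA * (reward - values[choice])
    correct += choice == 0                     # option 0 has the higher reward probability

print(f"overall accuracy: {correct / n_trials:.2f}")
```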