Successful goal-directed actions require constant fine-tuning of the motor system. This fine-tuning is thought to rely on an implicit adaptation process driven by sensory prediction errors (e.g., where you see your hand after reaching vs. where you expected it to be). Individuals with low vision experience challenges with visuomotor control, but whether low vision disrupts motor adaptation is unknown. To explore this question, we assessed individuals with low vision and matched controls with normal vision on a visuomotor task designed to isolate implicit adaptation. We found that low vision was associated with attenuated implicit adaptation for small visual errors, but not for large visual errors. This result highlights important constraints on how low-fidelity visual information is processed by the sensorimotor system to enable successful implicit adaptation.
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10512469 | PMC
http://dx.doi.org/10.1162/jocn_a_01969 | DOI Listing
J Intellect Disabil Res
January 2025
Institute of Public Health, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan.
Background: People with intellectual disabilities (IDs) require more vision care but encounter considerable challenges during eye examinations. Specialised clinics established specifically for people with IDs are generally limited. This study aims to evaluate primary family caregivers' willingness to pay (WTP) for specialised ophthalmology services designed for people with IDs.
J Clin Med
December 2024
The David J Apple Center for Vision Research, Department of Ophthalmology, Heidelberg University Eye Clinic, University Hospital Heidelberg, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany.
This laboratory study assesses the effects of misaligning different trifocal intraocular lenses (IOLs) under varying spectral and corneal spherical aberration (SA) conditions. The following models were studied with an IOL metrology device under monochromatic and polychromatic conditions: AT ELANA 841P, AT LISA Tri 839MP, FineVision HP POD F, Acrysof IQ PanOptix, and Tecnis Synergy ZFR00V. SA was simulated using an aberration-free cornea model and an average-SA cornea model.
J Clin Med
December 2024
Department of Neurosurgery, University Hospital Leipzig, 04103 Leipzig, Germany.
Sphenoid wing meningiomas (SWM) frequently compress structures of the optic pathway, resulting in significant visual dysfunction characterized by vision loss and visual field deficits, which profoundly impact patients' quality of life (QoL), daily activities, and independence. The objective of this study was to assess the impact of SWM surgery on patient-reported outcome measures (PROMs) regarding postoperative visual function. The Visual Function Questionnaire-25 (VFQ-25) is a validated tool designed to assess the impact of visual impairment on quality of life.
Sensors (Basel)
January 2025
Phillip M. Drayer Electrical Engineering Department, Lamar University, Beaumont, TX 77705, USA.
Automated ultrasonic testing (AUT) is a critical tool for infrastructure evaluation in industries such as oil and gas. While skilled operators manually analyze complex AUT data, artificial intelligence (AI)-based methods show promise for automating interpretation. However, improving the reliability and effectiveness of these methods remains a significant challenge. This study employs the Segment Anything Model (SAM), a vision foundation model, to design an AI-assisted tool for weld defect detection in real-world ultrasonic B-scan images.
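As a rough illustration of how a vision foundation model such as SAM can be applied to B-scan imagery, the hypothetical Python sketch below runs SAM's automatic mask generator over a single image and keeps larger masks as candidate defect regions. The checkpoint path, image filename, and area threshold are placeholders, and the study's actual prompting or fine-tuning strategy is not described in the abstract.

```python
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Load a SAM backbone from a local checkpoint (placeholder path).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

# Placeholder B-scan image; SAM expects an HxWx3 uint8 RGB array.
bscan = cv2.imread("bscan_example.png")
bscan = cv2.cvtColor(bscan, cv2.COLOR_BGR2RGB)

masks = mask_generator.generate(bscan)
# Each entry is a dict with keys such as "segmentation", "area", and "bbox";
# filtering small masks here is a crude stand-in for downstream defect screening.
candidates = [m for m in masks if m["area"] > 200]
print(f"{len(candidates)} candidate regions proposed")
```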
Sensors (Basel)
January 2025
Faculty of Science and Technology, Keio University, Yokohama 223-8522, Japan.
Person identification is a critical task in applications such as security and surveillance, requiring reliable systems that perform robustly under diverse conditions. This study evaluates the Vision Transformer (ViT) and ResNet34 models across three modalities (RGB, thermal, and depth), using datasets collected with infrared array sensors and LiDAR sensors in controlled scenarios and at varying resolutions (16 × 12 to 640 × 480), to explore their effectiveness in person identification. Preprocessing techniques, including YOLO-based cropping, were employed to improve subject isolation.
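As a rough illustration of the kind of model comparison described above, the hypothetical Python sketch below builds ViT-B/16 and ResNet34 classifiers with torchvision and swaps in a classification head sized to the number of enrolled identities. The number of identities, input size, and single-channel handling are assumptions, and the study's YOLO-based cropping is presumed to have been applied to the inputs beforehand.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_IDENTITIES = 20  # assumed number of enrolled subjects

def build_model(name: str) -> nn.Module:
    """Return an ImageNet-pretrained backbone with a new identity-classification head."""
    if name == "vit":
        m = models.vit_b_16(weights=models.ViT_B_16_Weights.DEFAULT)
        m.heads.head = nn.Linear(m.heads.head.in_features, NUM_IDENTITIES)
    elif name == "resnet34":
        m = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)
        m.fc = nn.Linear(m.fc.in_features, NUM_IDENTITIES)
    else:
        raise ValueError(f"unknown model: {name}")
    return m

# Single-channel thermal or depth frames would be replicated to three channels
# before being fed to either backbone; both expect 224x224 RGB-like crops here.
model = build_model("vit")
dummy_crop = torch.randn(1, 3, 224, 224)  # stands in for a YOLO-cropped subject
logits = model(dummy_crop)                # shape: (1, NUM_IDENTITIES)
print(logits.shape)
```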