Objective: The effect of chloroquine toxicity on color vision is unclear. The authors identified the color defects seen in chloroquine retinopathy and determined the sensitivity and specificity of clinical color vision tests for detecting the presence of previously diagnosed chloroquine retinopathy.
Design: Case-control study.
Participants: Chloroquine retinopathy was defined using previously published criteria. Data from 30 patients with retinopathy and 25 patients using chloroquine but with no evidence of retinal toxicity were collected.
Methods: All patients were tested with the following six clinical color vision tests: Ishihara, Farnsworth D-15, Adams Desaturated-15 (Dsat-15), City University 2nd Edition (CU), Standard Pseudoisochromatic Plates Part 2 (SPP-2), and American Optical Hardy Rand Rittler (AO HRR).
Main Outcome Measures: The number of failures was determined for each test. The types of color vision defects were classified as blue-yellow (BY), red-green (RG), or mixed RG and BY (mixed).
Results: Of the 30 patients with retinopathy, 28 (93.3%) failed at least 1 color vision test, demonstrating predominantly mixed defects. Five (20%) of the 25 control subjects failed at least 1 test, and these defects were predominantly BY. The sensitivity and specificity of the tests were as follows: SPP-2 (93.3%, 88%), AO HRR (76.7%, 88%), Ishihara (43.3%, 96%), Dsat-15 (33.3%, 84%), D-15 (16.7%, 96%), and CU (20%, 92%).
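For readers who want to trace the figures above, each sensitivity is the proportion of the 30 retinopathy patients who failed the test, and each specificity is the proportion of the 25 controls who passed it. The short Python sketch below reproduces the reported percentages; the per-test failure counts it uses are back-calculated from those percentages for illustration only and are not taken from the study's raw data.

```python
# Illustrative check of the reported sensitivity/specificity figures.
# Failure counts are back-calculated from the published percentages
# (e.g., SPP-2 sensitivity 93.3% of 30 patients -> 28 failures); they
# are assumptions for this sketch, not the study's raw data.

N_RETINOPATHY = 30   # patients with chloroquine retinopathy
N_CONTROLS = 25      # chloroquine users without retinopathy

# test name: (failures among retinopathy patients, failures among controls)
failures = {
    "SPP-2":    (28, 3),
    "AO HRR":   (23, 3),
    "Ishihara": (13, 1),
    "Dsat-15":  (10, 4),
    "D-15":     (5, 1),
    "CU":       (6, 2),
}

for test, (fail_cases, fail_controls) in failures.items():
    sensitivity = fail_cases / N_RETINOPATHY                  # true positives / all cases
    specificity = (N_CONTROLS - fail_controls) / N_CONTROLS   # true negatives / all controls
    print(f"{test:9s} sensitivity {sensitivity:5.1%}  specificity {specificity:5.1%}")
```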
Conclusions: Color vision can be affected by chloroquine and should be tested routinely with a color vision test designed to detect both mild BY and protan RG defects to maximize sensitivity for toxicity. The SPP-2 and AO HRR are two tests that meet these criteria. The Ishihara has a low sensitivity, as do the D-15 tests and CU. All of the tests have similar specificity for chloroquine toxicity. If color vision defects are detected in patients at risk of developing chloroquine retinopathy, additional testing is indicated to rule out toxicity.
DOI: http://dx.doi.org/10.1016/S0161-6420(99)90338-X