Significance: Clinicians who administer the Farnsworth-Munsell D-15 test need to pay attention to the quality and quantity of lighting and the time that they allow for completion of the test, and all repeat attempts need to be included in reports on compliance with color vision standards.
Purpose: The validity of the Farnsworth-Munsell D-15 has been questioned because practice may allow significantly color vision-deficient subjects to pass. In this article, we review the influence of practice and other factors that may affect performance, relating to both the design and the administration of the test.
Methods: We review the literature and present some calculations on limitations in the colorimetric design of the test, quantity and quality of lighting, time taken, and repeat attempts.
Results: In addition to the review of the literature, color differences and luminance differences under selected sources are calculated, and the increases in luminance clues under some sources and for protanopes are illustrated.
Conclusions: All these factors affect the outcome of the test and need specification and implementation if the test is to be applied consistently and equitably. We recommend the following: practitioners should never rely on a single color vision test regardless of the color vision standard; lighting should be Tcp ≈ 6500 K and Ra > 90; illuminance levels should be between 200 and 300 lux if detection of color vision deficiency is a priority or between 300 and 1000 lux if the need is to test at the level where illuminance has minimal influence on performance; illuminance should be reported; time limits should be set between 1 and 2 minutes; repeat testing (beyond the specified test and one retest) should be carried out only with authorization; and initial and repeated results should be reported. A set of test instructions to assist in the consistent application of the test is provided in the Appendix.
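The administration recommendations above can be expressed as a simple compliance check. The sketch below is illustrative only: the function name and the ±500 K tolerance on correlated color temperature are assumptions, not part of the paper; the numeric ranges (Ra > 90, 200-300 or 300-1000 lux, 1-2 minute limit) are taken from the conclusions as quoted.

```python
# Hypothetical helper (not from the paper): checks a D-15 test setup against
# the lighting and timing recommendations quoted in the Conclusions above.
def check_d15_setup(cct_k, ra, illuminance_lux, time_limit_s,
                    screening_priority=True):
    """Return a list of recommendation violations (empty list = compliant)."""
    issues = []
    if abs(cct_k - 6500) > 500:  # Tcp should be about 6500 K (tolerance assumed)
        issues.append(f"CCT {cct_k} K is not close to 6500 K")
    if ra <= 90:                 # CIE general colour rendering index
        issues.append(f"Ra {ra} should exceed 90")
    # 200-300 lux when detecting deficiency is the priority,
    # 300-1000 lux when illuminance should have minimal influence.
    lo, hi = (200, 300) if screening_priority else (300, 1000)
    if not lo <= illuminance_lux <= hi:
        issues.append(f"illuminance {illuminance_lux} lux outside {lo}-{hi} lux")
    if not 60 <= time_limit_s <= 120:  # 1-2 minute limit
        issues.append(f"time limit {time_limit_s} s outside 60-120 s")
    return issues
```

A compliant setup (6500 K, Ra 95, 250 lux, 90 s) returns an empty list; each departure from the recommendations adds one entry.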
DOI: http://dx.doi.org/10.1097/OPX.0000000000001420
Front Psychol
December 2024
Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, United States.
Introduction: While the fact that visual stimuli synthesized by Artificial Neural Networks (ANNs) may evoke emotional reactions is documented, the precise mechanisms that connect the strength and type of such reactions with how the ANNs are used to synthesize the stimuli are yet to be discovered. Understanding these mechanisms would allow the design of methods that synthesize images attenuating or enhancing selected emotional states, which may provide unobtrusive and widely applicable treatment of mental dysfunctions and disorders.
Methods: A Convolutional Neural Network (CNN), a type of ANN used in computer vision tasks that models how humans solve visual tasks, was applied to synthesize ("dream" or "hallucinate") images with no semantic content that maximize the activations of neurons in precisely selected layers of the CNN.
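The "dreaming" procedure described is activation maximization: gradient ascent on the input image rather than on the network weights. The toy sketch below, an assumption-laden stand-in for the study's CNN, uses a single random convolutional filter and plain NumPy to show the mechanism: each step nudges pixels in the direction that increases the filter's summed ReLU activation.

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 'valid' 2-D cross-correlation (a single CNN filter, no padding)."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def dream_step(x, k, lr=0.1):
    """One gradient-ascent step on the INPUT image to increase the
    summed ReLU activation of the filter k (activation maximization)."""
    a = conv2d_valid(x, k)
    mask = (a > 0).astype(float)           # derivative of ReLU
    grad = np.zeros_like(x)
    kh, kw = k.shape
    for i in range(mask.shape[0]):         # backpropagate through the conv
        for j in range(mask.shape[1]):
            grad[i:i + kh, j:j + kw] += mask[i, j] * k
    return x + lr * grad

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 16))              # start from a noise "image"
k = rng.normal(size=(3, 3))                # one hypothetical filter
before = np.maximum(conv2d_valid(x, k), 0).sum()
for _ in range(20):
    x = dream_step(x, k)
after = np.maximum(conv2d_valid(x, k), 0).sum()
```

In the study, the same ascent is run against neurons in chosen layers of a deep CNN, so the emerging textures reflect what those layers respond to; here the filter, image size, and step count are all arbitrary choices for illustration.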
J Otol
October 2024
Department of Public Health, Faculty of Medicine and Dentistry, Palacký University Olomouc, Czech Republic.
Background: Over 55 million people worldwide are living with dementia. The rate of cognitive decline increases with age, and loss of senses may be a contributing factor.
Objectives: This study aimed to analyze hearing, olfactory function, and color vision in patients with dementia.
Naturwissenschaften
January 2025
Institute for Animal Cell and Systems Biology, University of Hamburg, Martin-Luther-King Platz 3, Hamburg, 20146, Germany.
Physiological or genetic assays and computational modeling are valuable tools for understanding animals' visual discrimination capabilities. Yet sometimes, the results generated by these methods appear not to jibe with other aspects of an animal's appearance or natural history, and behavioral confirmatory tests are warranted. Here we examine the peculiar case of a male jumping spider that displays red, black, white, and UV color patches during courtship despite the fact that, according to microspectrophotometry and color vision modeling, it is unlikely to be able to discriminate red from black.
Animals (Basel)
December 2024
College of Electronic Information Engineering, Inner Mongolia University, Hohhot 010021, China.
This study proposes an image enhancement detection technique based on Adltformer (Adaptive Dynamic Learning Transformer) team-training with Detr (Detection Transformer) to improve model accuracy under suboptimal conditions, addressing the challenge of detecting cattle in real pastures under complex lighting, including backlighting, non-uniform lighting, and low light. Such conditions often cause loss of image detail and structural information, color distortion, and noise artifacts, degrading the visual quality of captured images and reducing model accuracy. To train the Adltformer enhancement model, the day-to-night image synthesis (DTN-Synthesis) algorithm generates low-light image pairs that are precisely aligned with normal-light images and include controlled noise levels.
Natl Sci Rev
January 2025
Key Laboratory of Precision and Intelligent Chemistry, University of Science and Technology of China, Hefei 230026, China.
Affordable high-resolution cameras and state-of-the-art computer vision techniques have led to the emergence of various vision-based tactile sensors. However, current vision-based tactile sensors mainly depend on geometric optics or marker tracking for tactile assessments, resulting in limited performance. To solve this dilemma, we introduce optical interference patterns as the visual representation of tactile information for flexible tactile sensors.
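The physics behind using interference patterns as a tactile readout is the standard two-beam interference law: intensity varies with the optical path difference, so contact deformation that changes the path shifts the fringes seen by the camera. A minimal sketch (wavelength and intensities are arbitrary example values, not from the paper):

```python
import numpy as np

def interference_intensity(path_diff_nm, wavelength_nm=633.0, i1=1.0, i2=1.0):
    """Two-beam interference: I = I1 + I2 + 2*sqrt(I1*I2)*cos(delta),
    where delta = 2*pi * (optical path difference) / wavelength.
    In a sensor of this kind, contact deformation changes the path
    difference, shifting the fringe pattern the camera reads out."""
    delta = 2.0 * np.pi * path_diff_nm / wavelength_nm
    return i1 + i2 + 2.0 * np.sqrt(i1 * i2) * np.cos(delta)
```

With equal beam intensities, a zero path difference gives the maximum (constructive) intensity and a half-wavelength difference gives zero (destructive), so even nanometer-scale deformations produce measurable fringe shifts.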