Although most arguments for the predominance of polymorphic color vision in platyrrhine monkeys invoke the advantage of trichromacy over dichromacy when foraging for ripe fruits, little is known about the relationship between nutritional reward and fruit-detection performance under different types of color vision. The principal reward of most fruits is sugar, so it is logical to ask whether fruit coloration provides primates with a long-distance sensory cue that correlates with sugar content. Here we test the hypothesis that trichromatic color vision phenotypes yield better information about sugar concentration during fruit detection than dichromatic phenotypes (i.e., is a color vision phenotype with sufficient red-green (RG) differentiation necessary to "reveal" the concentration of major sugars in fruits?). Accordingly, we studied the fruit foraging behavior of Ateles geoffroyi, measuring both the reflectance spectra and the concentrations of major sugars in the fruits consumed, and modeled detection performance for the different color vision phenotypes. Our results provide some support for the hypothesis. The yellow-blue (YB) color signal, the only chromatic signal available to dichromats, was not significantly related to sugar concentration. The RG signal, which is present only in trichromats, was significantly correlated with sugar content, but only when sugar content was measured as glucose. Sucrose concentration, in contrast, showed a consistent negative relationship with fruit detection performance, although this was not significant for the 430 nm and 550 nm phenotypes. The regular trichromatic phenotype (430 nm, 533 nm, and 565 nm) showed higher correlations between fruit detection performance and glucose concentration than the other two trichromatic phenotypes. Our study documents a trichromatic foraging advantage in terms of fruit quality and suggests that trichromatic color vision is advantageous over dichromatic color vision for detecting sugar-rich fruits.
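The abstract does not spell out the detection model, but fruit color signals in studies of this kind are typically computed from cone quantum catches combined into opponent channels. The Python sketch below illustrates one common convention; the wavelength grid, input spectra, and opponent formulas are assumptions chosen for illustration, not the study's actual implementation, which used measured fruit and leaf reflectances, a forest illuminant, and cone sensitivities for each Ateles geoffroyi phenotype.

```python
import numpy as np

# Minimal sketch of an opponent-channel model of fruit color signals.
# Assumption: all spectra are sampled on a common 1 nm grid, 400-700 nm.
WAVELENGTHS = np.arange(400, 701)  # nm

def quantum_catch(reflectance, illuminant, sensitivity):
    """Cone quantum catch: the stimulus spectrum (reflectance times
    illuminant) weighted by a cone's spectral sensitivity and summed
    over the 1 nm wavelength grid (a simple Riemann sum)."""
    return float(np.sum(reflectance * illuminant * sensitivity))

def opponent_signals(reflectance, illuminant, s_sens, m_sens, l_sens):
    """Return (RG, YB) chromatic signals for one stimulus.

    RG contrasts the two longer-wavelength cone classes, so it exists
    only for trichromats; YB contrasts the S cone against the pooled
    M/L catch and is available to dichromats as well.
    """
    S = quantum_catch(reflectance, illuminant, s_sens)
    M = quantum_catch(reflectance, illuminant, m_sens)
    L = quantum_catch(reflectance, illuminant, l_sens)
    rg = (L - M) / (L + M)
    yb = (S - 0.5 * (L + M)) / (S + 0.5 * (L + M))
    return rg, yb

# Relating color signals to nutritional reward: with one RG value per
# fruit and matching sugar measurements, the association reduces to a
# correlation (the study's analysis may have been more elaborate), e.g.:
# rg_values = np.array([...])   # one RG signal per sampled fruit
# glucose = np.array([...])     # glucose concentration per fruit
# r = np.corrcoef(rg_values, glucose)[0, 1]
```

A dichromat's version of this model simply omits the RG channel, which is why only the YB signal can be compared across all phenotypes.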
DOI: 10.1002/ajp.20196