The diversity of color vision systems found in extant vertebrates suggests that different evolutionary selection pressures have driven specializations in photoreceptor complement and visual pigment spectral tuning appropriate for an animal's behavior, habitat, and life history. Aquatic vertebrates in particular show high variability in chromatic vision and have become important models for understanding the role of color vision in prey detection, predator avoidance, and social interactions. In this study, we examined the capacity for chromatic vision in elasmobranch fishes, a group that has received relatively little attention to date. We used microspectrophotometry to measure the spectral absorbance of the visual pigments in the outer segments of individual photoreceptors from several ray and shark species, and we sequenced the opsin mRNAs obtained from the retinas of the same species, as well as from additional elasmobranch species. We reveal the phylogenetically widespread occurrence of dichromatic color vision in rays based on two cone opsins, RH2 and LWS. We also confirm that all shark species studied to date appear to be cone monochromats, but report that in different species the single cone opsin may be of either the LWS or the RH2 class. From this, we infer that cone monochromacy in sharks has evolved independently on multiple occasions. Together with earlier discoveries in secondarily aquatic marine mammals, this suggests that cone-based color vision may be of little use for large marine predators, such as sharks, pinnipeds, and cetaceans.
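Microspectrophotometric measurements of the kind described above are conventionally analyzed by fitting a visual-pigment absorbance template to each outer-segment spectrum to estimate the wavelength of maximum absorbance (λmax). As a minimal sketch of that step (not the authors' actual analysis pipeline), the widely used Govardovskii et al. (2000) alpha-band template for vitamin A1-based pigments can be computed as follows; the λmax values in the usage comments are purely illustrative, not measurements from this study:

```python
import math

def govardovskii_alpha(wavelength_nm, lmax_nm):
    """Normalized alpha-band absorbance of an A1 visual pigment
    (Govardovskii et al. 2000 template). Valid in the main absorbance
    band; at short wavelengths the beta band (omitted here) also
    contributes."""
    x = lmax_nm / wavelength_nm
    # Template constants from Govardovskii et al. (2000)
    A, B, C, D = 69.7, 28.0, -14.9, 0.674
    # 'a' varies slightly with lambda-max; 'b' and 'c' are fixed
    a = 0.8795 + 0.0459 * math.exp(-((lmax_nm - 300.0) ** 2) / 11940.0)
    b, c = 0.922, 1.104
    return 1.0 / (math.exp(A * (a - x))
                  + math.exp(B * (b - x))
                  + math.exp(C * (c - x))
                  + D)

# Illustrative comparison of two hypothetical cone pigments:
# a middle-wavelength (RH2-like) pigment near 500 nm and a
# long-wavelength (LWS-like) pigment near 560 nm.
for wl in (450, 500, 550, 600):
    print(wl, round(govardovskii_alpha(wl, 500), 3),
              round(govardovskii_alpha(wl, 560), 3))
```

In practice, λmax is recovered by least-squares fitting this template to a measured, baseline-corrected absorbance spectrum; the template's value is that a single free parameter (λmax) describes the full absorbance curve.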


Source
DOI: http://dx.doi.org/10.1093/molbev/msz269


