Tissue phenotyping is a fundamental task in computational pathology (CPath): learning objective characterizations of histopathologic biomarkers in anatomic pathology. However, whole-slide imaging poses a complex computer vision problem in which the high resolution of whole-slide images (WSIs) and the enormous diversity of morphological phenotypes preclude large-scale data annotation. Current efforts have proposed pretrained image encoders, built either by transfer learning from natural image datasets or by self-supervised pretraining on publicly available histopathology datasets, but these encoders have not been extensively developed and evaluated across diverse tissue types at scale. We introduce UNI, a general-purpose self-supervised model for pathology, pretrained on more than 100 million tissue patches from over 100,000 diagnostic haematoxylin and eosin-stained WSIs across 20 major tissue types, and evaluated on 33 representative CPath clinical tasks of varying diagnostic difficulty. In addition to outperforming previous state-of-the-art models, we demonstrate new modeling capabilities in CPath, such as resolution-agnostic tissue classification, slide classification using few-shot class prototypes, and disease-subtyping generalization in classifying up to 108 cancer types in the OncoTree code classification system. UNI advances unsupervised representation learning at scale in CPath in terms of both pretraining data and downstream evaluation, enabling data-efficient AI models that can generalize and transfer to a gamut of diagnostically challenging tasks and clinical workflows in anatomic pathology.
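
The "few-shot class prototypes" capability follows a nearest-centroid recipe: average the embeddings of the handful of labelled slides available per class, then assign each new slide to its closest prototype. Below is a minimal sketch, assuming slide-level embeddings have already been extracted with the pretrained encoder; the function names and the L2 normalization are illustrative choices, not necessarily the paper's exact protocol.

```python
import numpy as np

def build_prototypes(embeddings, labels):
    """Average the few labelled embeddings of each class into one
    prototype vector per class (the 'few-shot' support set)."""
    classes = np.unique(labels)
    prototypes = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    return classes, prototypes

def predict(queries, classes, prototypes):
    """Assign each query embedding to the nearest class prototype
    (Euclidean distance on L2-normalized vectors)."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    dists = np.linalg.norm(q[:, None, :] - p[None, :, :], axis=2)
    return classes[dists.argmin(axis=1)]

# e.g., with a few labelled slides per class:
# classes, protos = build_prototypes(support_emb, support_labels)
# preds = predict(query_emb, classes, protos)
```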

Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10491320 (PMC)

Publication Analysis

Top Keywords

general-purpose self-supervised (8)
self-supervised model (8)
computational pathology (8)
anatomic pathology (8)
tissue types (8)
pathology (5)
tissue (5)
cpath (5)
model computational (4)
pathology tissue (4)

Similar Publications

Cognitive scientists believe that adaptable intelligent agents like humans perform spatial reasoning tasks by learned causal mental simulation. The problem of learning these simulations is called predictive world modeling. We present the first framework for learning an open-vocabulary predictive world model (OV-PWM) from sensor observations.

A multimodal generative AI copilot for human pathology.

Nature

October 2024

Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.

Computational pathology has witnessed considerable progress in the development of both task-specific predictive models and task-agnostic self-supervised vision encoders. However, despite the explosive growth of generative artificial intelligence (AI), there have been few studies on building general-purpose multimodal AI assistants and copilots tailored to pathology. Here we present PathChat, a vision-language generalist AI assistant for human pathology.

Quantitative evaluation of tissue images is crucial for computational pathology (CPath) tasks, requiring the objective characterization of histopathological entities from whole-slide images (WSIs). The high resolution of WSIs and the variability of morphological features present significant challenges, complicating the large-scale annotation of data for high-performance applications. To address this challenge, current efforts have proposed the use of pretrained image encoders through transfer learning from natural image datasets or self-supervised learning on publicly available histopathology datasets, but these encoders have not been extensively developed and evaluated across diverse tissue types at scale.

Deep learning-based neuroimaging pipelines for acute stroke typically rely on image registration, which not only increases computation but also introduces a point of failure. In this paper, we propose a general-purpose contrastive self-supervised learning method that converts a convolutional deep neural network designed for registered images to work on a different input domain, i.e.

Biometric contrastive learning for data-efficient deep learning from electrocardiographic images.

J Am Med Inform Assoc

April 2024

Section of Cardiovascular Medicine, Department of Internal Medicine, Yale University, New Haven, CT, 06510, United States.

Objective: Artificial intelligence (AI) detects heart disease from images of electrocardiograms (ECGs). However, traditional supervised learning is limited by the need for large amounts of labeled data. We report the development of Biometric Contrastive Learning (BCL), a self-supervised pretraining approach for label-efficient deep learning on ECG images.
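
As a rough illustration of the contrastive idea, the sketch below implements an InfoNCE-style loss in which paired embeddings attract and all other samples in the batch repel; that the positive pairs are ECG images from the same individual is our reading of the "biometric" framing, and the function name and temperature value are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def info_nce(z_a, z_b, temperature=0.1):
    """InfoNCE-style contrastive loss. Row i of z_a and row i of z_b
    are embeddings of two ECG images assumed to come from the same
    individual (a positive pair); all other rows act as negatives."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature  # pairwise cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)
```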
