Self-supervised driven consistency training for annotation efficient histopathology image analysis.

Med Image Anal

Physical Sciences, Sunnybrook Research Institute, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Canada.

Published: January 2022

Training a neural network with a large labeled dataset is still the dominant paradigm in computational histopathology. However, obtaining such exhaustive manual annotations is often expensive, laborious, and prone to inter- and intra-observer variability. While recent self-supervised and semi-supervised methods can alleviate this need by learning unsupervised feature representations, they still struggle to generalize well to downstream tasks when the number of labeled instances is small. In this work, we overcome this challenge by leveraging both task-agnostic and task-specific unlabeled data based on two novel strategies: (i) a self-supervised pretext task that harnesses the underlying multi-resolution contextual cues in histology whole-slide images to learn a powerful supervisory signal for unsupervised representation learning; (ii) a new teacher-student semi-supervised consistency paradigm that learns to effectively transfer the pretrained representations to downstream tasks based on prediction consistency with the task-specific unlabeled data. We carry out extensive validation experiments on three histopathology benchmark datasets across two classification tasks and one regression task, i.e., tumor metastasis detection, tissue type classification, and tumor cellularity quantification. Under limited-label settings, the proposed method yields tangible improvements, coming close to or even outperforming state-of-the-art self-supervised and supervised baselines. Furthermore, we empirically show that bootstrapping the self-supervised pretrained features is an effective way to improve task-specific semi-supervised learning on standard benchmarks. Code and pretrained models are made available at: https://github.com/srinidhiPY/SSL_CR_Histo.
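To make the teacher-student consistency idea above concrete, below is a minimal PyTorch-style sketch of one semi-supervised training step. It is an illustrative reconstruction, not the authors' exact implementation (see the linked repository for that): the augmentation callables (weak_aug, strong_aug), the EMA decay, and the consistency weight lambda_u are all assumptions introduced here for illustration.

import copy
import torch
import torch.nn.functional as F

def make_teacher(student):
    # Teacher starts as a frozen copy of the (pretrained) student.
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

def ema_update(teacher, student, decay=0.99):
    # Teacher weights track the student as an exponential moving average.
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(decay).add_(s, alpha=1 - decay)

def train_step(student, teacher, optimizer,
               x_labeled, y_labeled, x_unlabeled,
               weak_aug, strong_aug, lambda_u=1.0):
    # Supervised cross-entropy on the few labeled patches.
    loss_sup = F.cross_entropy(student(weak_aug(x_labeled)), y_labeled)

    # Consistency term: the student's prediction on a strongly augmented
    # unlabeled patch should match the teacher's prediction on a weakly
    # augmented view of the same patch.
    with torch.no_grad():
        targets = F.softmax(teacher(weak_aug(x_unlabeled)), dim=1)
    log_probs = F.log_softmax(student(strong_aug(x_unlabeled)), dim=1)
    loss_con = F.kl_div(log_probs, targets, reduction='batchmean')

    loss = loss_sup + lambda_u * loss_con
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()

In line with the abstract's bootstrapping observation, both networks would be initialized from the self-supervised pretrained weights rather than from scratch, with the teacher then tracking the student via the exponential moving average.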

DOI: http://dx.doi.org/10.1016/j.media.2021.102256

Publication Analysis

Top Keywords

downstream tasks (8); task-specific unlabeled (8); unlabeled data (8); self-supervised (5); self-supervised driven (4); driven consistency (4); consistency training (4); training annotation (4); annotation efficient (4); efficient histopathology (4)

Similar Publications

Advances in deep learning have significantly aided protein engineering in addressing challenges in industrial production, healthcare, and environmental sustainability. This review frames frequently researched problems in protein understanding and engineering from the perspective of deep learning. It provides a thorough discussion of representation methods for protein sequences and structures, along with general encoding pipelines that support both pre-training and supervised learning tasks.

Siamese comparative transformer-based network for unsupervised landmark detection.

PLoS One

December 2024

National Key Laboratory of Optical Field Manipulation Science and Technology, Chinese Academy of Sciences, Chengdu, China.

Landmark detection is a common task that benefits downstream computer vision tasks. Current landmark detection algorithms often train a sophisticated image pose encoder by reconstructing the source image to identify landmarks. Although a well-trained encoder can effectively capture landmark information through image reconstruction, it overlooks the semantic relationships between landmarks.

Central to the development of universal learning systems is the ability to solve multiple tasks without retraining from scratch when new data arrives. This is crucial because each task requires significant training time. Addressing continual learning requires a variety of methods, owing to the complexity of the problem space.

Event co-occurrences for prompt-based generative event argument extraction.

Sci Rep

December 2024

School of Computer Science and Technology (School of Cyberspace Security), Xinjiang University, Urumqi, 830046, China.

Recent works have introduced prompt learning for Event Argument Extraction (EAE), since prompt-based approaches transform downstream tasks into a format more consistent with the pre-training task of a Pre-trained Language Model (PLM). This helps bridge the gap between downstream tasks and model training. However, these previous works overlooked the complexity of multiple co-occurring events and their relationships within sentences.

Unified Knowledge-Guided Molecular Graph Encoder with multimodal fusion and multi-task learning.

Neural Netw

December 2024

School of Computer Science, Wuhan University, Luojiashan Road, Wuchang District, Wuhan, 430072, Hubei Province, China; Hubei Key Laboratory of Digital Finance Innovation, Hubei University of Economics, No. 8, Yangqiaohu Avenue, Zanglong Island Development Zone, Jiangxia District, Wuhan, Hubei Province, China. Electronic address:

The remarkable success of Graph Neural Networks underscores their formidable capacity to assimilate multimodal inputs, markedly enhancing performance across a broad spectrum of domains. In the context of molecular modeling, considerable efforts have been made to enrich molecular representations by integrating data from diverse aspects. Nevertheless, current methodologies frequently compartmentalize geometric and semantic components, resulting in a fragmented approach that impairs the holistic integration of molecular attributes.
