Accurate polyp segmentation from colonoscopy images plays a critical role in the diagnosis and treatment of colorectal cancer. While deep learning-based polyp segmentation models have made significant progress, they often suffer from performance degradation when applied to unseen target-domain datasets collected from different imaging devices. To address this challenge, unsupervised domain adaptation (UDA) methods have gained attention by leveraging labeled source data and unlabeled target data to reduce the domain gap. However, existing UDA methods primarily focus on capturing class-wise representations while neglecting domain-wise representations. Additionally, uncertainty in pseudo-labels can hinder segmentation performance. To tackle these issues, we propose a novel Domain-interactive Contrastive Learning and Prototype-guided Self-training (DCL-PS) framework for cross-domain polyp segmentation. Specifically, domain-interactive contrastive learning (DCL) with a domain-mixed prototype updating strategy is proposed to discriminate class-wise feature representations across domains. Then, to enhance the feature extraction ability of the encoder, we present a contrastive learning-based cross-consistency training (CL-CCT) strategy, which is imposed on both the prototypes obtained from the outputs of the main decoder and the perturbed auxiliary outputs. Furthermore, we propose a prototype-guided self-training (PS) strategy, which dynamically assigns a weight to each pixel during self-training, filtering out unreliable pixels and improving the quality of pseudo-labels. Experimental results demonstrate the superiority of DCL-PS in improving polyp segmentation performance in the target domain. The code will be released at https://github.com/taozh2017/DCLPS.
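
The abstract describes prototype updating and prototype-guided self-training only at a high level. The following is a minimal, hypothetical PyTorch sketch of how class prototypes could be maintained with an exponential moving average and how per-pixel reliability weights could be derived from feature-to-prototype similarity to weight pseudo-label supervision. The function names, the momentum and temperature values, and the cosine-similarity weighting are illustrative assumptions, not the authors' released implementation (see the repository linked above for that).

```python
# Hypothetical sketch of prototype updating and prototype-guided pseudo-label
# weighting for self-training; not the authors' implementation.
import torch
import torch.nn.functional as F


def update_prototypes(prototypes, feats, labels, num_classes, momentum=0.99):
    """EMA update of class prototypes from source labels or target pseudo-labels.

    prototypes: (K, C) running class prototypes
    feats:      (B, C, H, W) features from the segmentation network
    labels:     (B, H, W) integer class labels
    """
    B, C, H, W = feats.shape
    feats_flat = feats.permute(0, 2, 3, 1).reshape(-1, C)   # (B*H*W, C)
    labels_flat = labels.reshape(-1)                         # (B*H*W,)
    for k in range(num_classes):
        mask = labels_flat == k
        if mask.any():
            class_mean = feats_flat[mask].mean(dim=0)
            prototypes[k] = momentum * prototypes[k] + (1 - momentum) * class_mean
    return prototypes


def prototype_weighted_pseudo_labels(feats, prototypes, temperature=0.1):
    """Assign pseudo-labels and per-pixel reliability weights from
    feature-to-prototype cosine similarity; uncertain pixels get small weights."""
    feats_n = F.normalize(feats, dim=1)                      # (B, C, H, W)
    protos_n = F.normalize(prototypes, dim=1)                # (K, C)
    sim = torch.einsum("bchw,kc->bkhw", feats_n, protos_n)   # (B, K, H, W)
    prob = torch.softmax(sim / temperature, dim=1)
    weight, pseudo = prob.max(dim=1)                         # (B, H, W) each
    return pseudo, weight


def weighted_self_training_loss(logits, pseudo, weight):
    """Pixel-wise weighted cross-entropy on prototype-derived pseudo-labels."""
    ce = F.cross_entropy(logits, pseudo, reduction="none")   # (B, H, W)
    return (weight * ce).mean()
```

Under a domain-mixed updating scheme of the kind the abstract mentions, update_prototypes would plausibly be called on both labeled source batches and pseudo-labeled target batches, so that the same prototype bank reflects class-wise structure from both domains.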

Source: http://dx.doi.org/10.1109/TMI.2024.3443262

Publication Analysis

Top Keywords: polyp segmentation (20), contrastive learning (12), prototype-guided self-training (12), domain-interactive contrastive (8), learning prototype-guided (8), cross-domain polyp (8), target domain (8), UDA methods (8), segmentation performance (8), segmentation (6)

Similar Publications

This dataset contains demographic, morphological, and pathological data, as well as endoscopic images and videos, of 191 patients with colorectal polyps. Morphological data follow the latest international gastroenterology classification references, such as the Paris, Pit, and JNET classifications. Pathological data include the polyp diagnosis (Tubular, Villous, Tubulovillous, Hyperplastic, Serrated, Inflammatory, and Adenocarcinoma) along with dysplasia grade and differentiation.

The optimal labelling method for artificial intelligence-assisted polyp detection in colonoscopy.

J Formos Med Assoc

December 2024

Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taiwan; Division of Gastroenterology, Taipei Veterans General Hospital, Taiwan; Institute of Brain Science, National Yang Ming Chiao Tung University School of Medicine, Taiwan.

Background: The methodology for labeling colon polyps when establishing databases for machine learning is not well described or standardized. We aimed to identify the annotation method that yields the most accurate polyp detection model.

Methods: 3542 colonoscopy polyp images were obtained from the endoscopy database of a tertiary medical center.

Deep learning models are used to minimize the number of polyps that go unnoticed by experts and to accurately segment the detected polyps during interventions. Although state-of-the-art models have been proposed, it remains a challenge to define representations that generalize well and that mediate between low-level features and higher-level semantic details without being redundant. Another challenge is that these models are computation- and memory-intensive, which can pose a problem for real-time applications.

Endometrial carcinomas arising in the isthmus are called lower uterine segment (LUS) cancers. This is a rare location among uterine cancers, and LUS cancers are known to be associated with Lynch syndrome and tend to occur at a young age. Preoperative diagnosis may be difficult due to the anatomical location, and the prognosis is poorer than that of uterine cancer in general.

Convolutional neural networks (CNNs) are well established for handling local features in visual tasks, yet they falter in managing the complex spatial relationships and long-range dependencies that are crucial for medical image segmentation, particularly in identifying pathological changes. While vision transformers (ViTs) excel at addressing long-range dependencies, their ability to leverage local features remains inadequate. Recent ViT variants have incorporated CNNs to improve feature representation and segmentation outcomes, yet challenges with limited receptive fields and precise feature representation persist.
