Towards annotation-efficient segmentation via image-to-image translation.

Med Image Anal

École Polytechnique de Montréal, 2500 Chem. de Polytechnique, Montréal, H3T 1J4, Canada; Centre hospitalier de l'Université de Montréal, 1051 Rue Sanguinet, Montréal, H2X 3E4, Canada.

Published: November 2022

An important challenge and limiting factor in deep learning methods for medical imaging segmentation is the lack of available annotated data to properly train models. For the specific task of tumor segmentation, the process entails clinicians labeling every slice of volumetric scans for every patient, which becomes prohibitive at the scale of datasets required to train neural networks to optimal performance. To address this, we propose a novel semi-supervised framework that allows training any segmentation (encoder-decoder) model using only information readily available in radiological data, namely the presence of a tumor in the image, in addition to a few annotated images. Specifically, we conjecture that a generative model performing domain translation on this weak label - healthy vs diseased scans - helps achieve tumor segmentation. The proposed GenSeg method first disentangles tumoral tissue from healthy "background" tissue. The latent representation is separated into (1) the common background information shared across both domains, and (2) the unique tumoral information. GenSeg then achieves diseased-to-healthy image translation by decoding a healthy version of the image from just the common representation, as well as a residual image that allows adding back the tumors. The same decoder that produces this residual tumor image also outputs a tumor segmentation. Implicit data augmentation is achieved by re-using the same framework for healthy-to-diseased image translation, where a residual tumor image is produced from a prior distribution. By performing both image translation and segmentation simultaneously, GenSeg allows training on only partially annotated datasets. To test the framework, we trained U-Net-like architectures using GenSeg and evaluated their performance on 3 variants of a synthetic task, as well as on 2 benchmark datasets: brain tumor segmentation in MRI (derived from BraTS) and liver metastasis segmentation in CT (derived from LiTS). Our method outperforms the baseline semi-supervised (autoencoder and mean teacher) and supervised segmentation methods, with improvements ranging between 8-14% Dice score on the brain task and 5-8% on the liver task, when only 1% of the training images were annotated. These results show the proposed framework is well suited to training deep segmentation models when a large portion of the available data is unlabeled and unpaired, a common issue in tumor segmentation.
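To make the architecture described in the abstract more concrete, below is a minimal PyTorch-style sketch of the diseased-to-healthy translation idea: an encoder splits the latent code into a common (background) part and a unique (tumor) part, and a decoder produces a healthy image from the common code plus a residual tumor image and a segmentation map from the unique code. All module names, channel sizes, and the simplified convolutional blocks are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch of the GenSeg decomposition described in the abstract.
# Module names, shapes, and layers are simplified assumptions, not the paper's code.
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Encodes an image and splits the latent into a 'common' (background)
    part and a 'unique' (tumor) part along the channel axis."""
    def __init__(self, in_ch=1, common_ch=16, unique_ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, common_ch + unique_ch, 3, padding=1),
        )
        self.common_ch = common_ch

    def forward(self, x):
        z = self.net(x)
        return z[:, :self.common_ch], z[:, self.common_ch:]

class TinyDecoder(nn.Module):
    """Decodes the common code into a healthy image, and the unique code into
    a residual tumor image plus a tumor segmentation map."""
    def __init__(self, common_ch=16, unique_ch=16, out_ch=1):
        super().__init__()
        self.healthy_head = nn.Conv2d(common_ch, out_ch, 3, padding=1)
        self.residual_head = nn.Conv2d(unique_ch, out_ch, 3, padding=1)
        self.seg_head = nn.Conv2d(unique_ch, 1, 3, padding=1)

    def forward(self, z_common, z_unique):
        healthy = self.healthy_head(z_common)          # diseased -> healthy translation
        residual = self.residual_head(z_unique)        # tumor residual to add back
        seg = torch.sigmoid(self.seg_head(z_unique))   # tumor segmentation
        return healthy, residual, seg

# Usage on a fake diseased scan (batch of 2 single-channel 64x64 slices).
enc, dec = TinyEncoder(), TinyDecoder()
x_diseased = torch.randn(2, 1, 64, 64)
z_c, z_u = enc(x_diseased)
healthy, residual, seg = dec(z_c, z_u)
x_reconstructed = healthy + residual  # adding the residual recovers the diseased image
# For the healthy-to-diseased direction, z_u would instead be sampled from a prior.
```

In this reading, the segmentation head and the residual head share the unique (tumor) code, which is how partially annotated data can still supervise segmentation: unannotated images constrain the translation and reconstruction paths, while the few annotated images supervise the segmentation output directly.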

Source
http://dx.doi.org/10.1016/j.media.2022.102624

Publication Analysis

Top Keywords

tumor segmentation (20)
tumor image (12)
image translation (12)
segmentation (11)
tumor (8)
allows training (8)
image (8)
residual tumor (8)
translation (5)
annotation-efficient segmentation (4)

Similar Publications

Loss of cervical lordosis (LOCL) is the most common postoperative cervical deformity. This study aimed to identify the predictors of LOCL by investigating the relationship between various factors and LOCL development after surgery for cervical spinal cord tumors. A retrospective analysis was conducted on 51 patients who underwent cervical spinal tumor resection at a single center.


Recent progress in deep learning (DL) is producing a new generation of tools across numerous clinical applications. Within the analysis of brain tumors in magnetic resonance imaging, DL finds applications in tumor segmentation, quantification, and classification. It facilitates objective and reproducible measurements crucial for diagnosis, treatment planning, and disease monitoring.


Detecting brain tumours (BT) early improves treatment possibilities and increases patient survival rates. Magnetic resonance imaging (MRI) scanning offers more comprehensive information, such as better contrast and clarity, than any alternative scanning process. Manually separating BTs from several MRI images gathered in medical practice for cancer analysis is challenging and time-consuming.


Objective: The objective of this study was to discuss the characteristics of intracranial extension in patients with juvenile nasopharyngeal angiofibroma (JNA) and propose an algorithm for its management.

Methods: A retrospective chart review of all patients with JNA who underwent operations between January 2013 and January 2023 was done, and those cases with intracranial extension categorized as stage IIIb, IVa, and IVb according to the Andrews modification of the Fisch staging classification were included in the study. Data were collected about age at presentation, symptoms, radiological findings, routes of intracranial extension, therapeutic management, and follow-up.


Accurate diagnosis of pancreatic cancer using CT scan images is critical for early detection and treatment, potentially saving numerous lives globally. Manual identification of pancreatic tumors by radiologists is challenging and time-consuming due to the complex nature of CT scan images, and variations in tumor shape, size, and location further complicate detecting and classifying different types of tumors. Thus, to address this challenge, we proposed a four-stage computer-aided diagnosis framework.

