An important challenge and limiting factor in deep learning methods for medical image segmentation is the lack of available annotated data to properly train models. For the specific task of tumor segmentation, the process entails clinicians labeling every slice of volumetric scans for every patient, which becomes prohibitive at the scale of datasets required to train neural networks to optimal performance. To address this, we propose a novel semi-supervised framework that allows training any segmentation (encoder-decoder) model using only information readily available in radiological data, namely the presence of a tumor in the image, in addition to a few annotated images. Specifically, we conjecture that a generative model performing domain translation on this weak label - healthy vs. diseased scans - helps achieve tumor segmentation. The proposed GenSeg method first disentangles tumoral tissue from healthy "background" tissue: the latent representation is separated into (1) the common background information shared across both domains and (2) the unique tumoral information. GenSeg then achieves diseased-to-healthy image translation by decoding a healthy version of the image from just the common representation, along with a residual image that allows adding the tumors back. The same decoder that produces this residual tumor image also outputs a tumor segmentation. Implicit data augmentation is achieved by re-using the same framework for healthy-to-diseased image translation, where the residual tumor image is produced from a prior distribution. By performing image translation and segmentation simultaneously, GenSeg allows training on only partially annotated datasets. To test the framework, we trained U-Net-like architectures with GenSeg and evaluated their performance on three variants of a synthetic task, as well as on two benchmark datasets: brain tumor segmentation in MRI (derived from BraTS) and liver metastasis segmentation in CT (derived from LiTS).
Our method outperforms the baseline semi-supervised (autoencoder and mean teacher) and supervised segmentation methods, with improvements of 8-14% in Dice score on the brain task and 5-8% on the liver task when only 1% of the training images were annotated. These results show that the proposed framework is well suited to training deep segmentation models when a large portion of the available data is unlabeled and unpaired, a common situation in tumor segmentation.
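The translation-plus-segmentation mechanism described above can be sketched in a few lines. This is a minimal toy illustration, not the authors' implementation: the linear "networks" (`W_enc`, `W_dec`, `W_res`, `W_seg`), the dimensions, and the function names are all stand-ins for the trained encoder and decoders, chosen only to show how the latent split, the residual addition, and the prior sampling fit together.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 32                 # flattened toy image size (assumption)
LATENT, COMMON = 16, 10  # latent size; size of the shared "background" part

def encode(x, W_enc):
    """Encode an image into a latent vector, then split it into the
    common (background) part and the unique (tumor) part."""
    z = np.tanh(W_enc @ x)
    return z[:COMMON], z[COMMON:]

def decode_healthy(z_common, W_dec):
    """Decode a healthy version of the image from the common part alone."""
    return W_dec @ z_common

def decode_residual(z_tumor, W_res, W_seg):
    """One decoder head yields both the residual tumor image and a soft
    tumor segmentation map from the tumor-specific part."""
    residual = W_res @ z_tumor
    seg = 1.0 / (1.0 + np.exp(-(W_seg @ z_tumor)))  # sigmoid -> mask in [0, 1]
    return residual, seg

# Random toy weights standing in for trained networks.
W_enc = rng.normal(size=(LATENT, D)) * 0.1
W_dec = rng.normal(size=(D, COMMON)) * 0.1
W_res = rng.normal(size=(D, LATENT - COMMON)) * 0.1
W_seg = rng.normal(size=(D, LATENT - COMMON)) * 0.1

# Diseased-to-healthy translation: decode a healthy image from the common
# code, and reconstruct the diseased input by adding the residual back.
x = rng.normal(size=D)
z_c, z_t = encode(x, W_enc)
healthy = decode_healthy(z_c, W_dec)
residual, seg = decode_residual(z_t, W_res, W_seg)
diseased_recon = healthy + residual

# Healthy-to-diseased translation (implicit data augmentation): sample the
# tumor code from a prior instead of encoding it from a real diseased scan.
z_t_prior = rng.normal(size=LATENT - COMMON)
fake_residual, fake_seg = decode_residual(z_t_prior, W_res, W_seg)
synthetic_diseased = healthy + fake_residual
```

Because the same residual head produces the segmentation, images without annotations still contribute to training through the translation losses, which is what lets the framework work with only partially annotated datasets.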
DOI: http://dx.doi.org/10.1016/j.media.2022.102624
Neurosurg Rev
January 2025
Department of Neurosurgery, Tianjin Medical University General Hospital, Tianjin, 300000, China.
Loss of cervical lordosis (LOCL) is the most common postoperative cervical deformity. This study aimed to identify the predictors of LOCL by investigating the relationship between various factors and LOCL development after surgery for cervical spinal cord tumors. A retrospective analysis was conducted on 51 patients who underwent cervical spinal tumor resection at a single center.
NPJ Precis Oncol
January 2025
Athinoula A. Martinos Center for Biomedical Imaging, 149 13th St, Charlestown, MA, 02129, USA.
Recent progress in deep learning (DL) is producing a new generation of tools across numerous clinical applications. Within the analysis of brain tumors in magnetic resonance imaging, DL finds applications in tumor segmentation, quantification, and classification. It facilitates objective and reproducible measurements crucial for diagnosis, treatment planning, and disease monitoring.
Sci Rep
January 2025
Department of Electronics, Information and Communication Engineering, Kangwon National University, Samcheok, Republic of Korea.
Detecting brain tumours (BT) early improves treatment possibilities and increases patient survival rates. Magnetic resonance imaging (MRI) scanning offers more comprehensive information, such as better contrast and clarity, than any alternative scanning process. Manually separating BTs from several MRI images gathered in medical practice for cancer analysis is challenging and time-consuming.
J Neurosurg Pediatr
January 2025
1Neurotology Unit, Department of Neurosurgery, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow; and.
Objective: The objective of this study was to discuss the characteristics of intracranial extension in patients with juvenile nasopharyngeal angiofibroma (JNA) and to propose an algorithm for its management.
Methods: A retrospective chart review of all patients with JNA who underwent operations between January 2013 and January 2023 was done, and those cases with intracranial extension categorized as stage IIIb, IVa, and IVb according to the Andrews modification of the Fisch staging classification were included in the study. Data were collected about age at presentation, symptoms, radiological findings, routes of intracranial extension, therapeutic management, and follow-up.
PLoS One
January 2025
Department of Computer Science, National Textile University, Faisalabad, Pakistan.
Accurate diagnosis of pancreatic cancer using CT scan images is critical for early detection and treatment, potentially saving numerous lives globally. Manual identification of pancreatic tumors by radiologists is challenging and time-consuming due to the complex nature of CT scan images, and variations in tumor shape, size, and location further complicate detecting and classifying different tumor types. To address this challenge, we propose a four-stage computer-aided diagnosis framework.