Purpose: State-of-the-art automated segmentation methods achieve exceptionally high performance on the Brain Tumor Segmentation (BraTS) challenge, a dataset of uniformly processed and standardized magnetic resonance images (MRIs) of gliomas. However, a reasonable concern is that these models may not perform well on clinical MRIs outside the specially curated BraTS dataset. Research using the previous generation of deep learning models indicates significant performance loss on cross-institutional predictions. Here, we evaluate the cross-institutional applicability and generalizability of state-of-the-art deep learning models on new clinical data.
Methods: We train a state-of-the-art 3D U-Net model on the conventional BraTS dataset, which comprises low- and high-grade gliomas. We then evaluate this model's performance for automatic segmentation of brain tumors on in-house clinical data, whose MRIs differ from those in the BraTS dataset in tumor type, resolution, and standardization. Ground truth segmentations used to validate the automated segmentations of the in-house clinical data were obtained from expert radiation oncologists.
Results: We report average Dice scores of 0.764, 0.648, and 0.61 for the whole tumor, tumor core, and enhancing tumor, respectively, on the clinical MRIs. These means exceed values previously reported for same-institution and cross-institution datasets of different origin using different methods. The Dice scores show no statistically significant difference from the inter-annotator variability between two expert clinical radiation oncologists. Although performance on the clinical data is lower than on the BraTS data, these numbers indicate that models trained on the BraTS dataset achieve impressive segmentation performance on previously unseen images obtained at a separate clinical institution, despite differences in imaging resolution, standardization pipeline, and tumor type from the BraTS data.
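The Dice scores above measure volumetric overlap between predicted and ground-truth masks: 2|A∩B| / (|A|+|B|). As a minimal illustration (not the paper's evaluation code), a NumPy sketch for binary masks could look like this; the `dice_score` function name and the empty-mask convention are assumptions for the example:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement by convention
    return 2.0 * intersection / denom

# Toy 2D masks: 2 overlapping voxels out of 3 predicted and 3 true
pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
truth = np.array([[1, 0, 0],
                  [0, 1, 1]])
print(round(dice_score(pred, truth), 3))  # 0.667
```

In BraTS-style evaluation, this would be computed separately on the binary masks for each nested region (whole tumor, tumor core, enhancing tumor) and averaged across subjects.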
Conclusions: State-of-the-art deep learning models demonstrate promising performance on cross-institutional predictions. They considerably improve on previous models and can transfer knowledge to new types of brain tumors without additional modeling.
DOI: http://dx.doi.org/10.1002/mp.16321
Comput Biol Med
December 2024
Department of Computer Science, University of Toronto, 40 St George St., Toronto, M5S 2E4, ON, Canada; Neurosciences & Mental Health Research Program, The Hospital for Sick Children, 686 Bay St., Toronto, M5G 0A4, ON, Canada; Department of Diagnostic and Interventional Radiology, The Hospital for Sick Children, 170 Elizabeth St., Toronto, M5G 1H3, ON, Canada; Institute of Medical Science, University of Toronto, 1 King's College Circle, Toronto, M5S 1A8, ON, Canada; Department of Medical Imaging, University of Toronto, 263 McCaul St., Toronto, M5T 1W7, ON, Canada; Department of Mechanical and Industrial Engineering, University of Toronto, 5 King's College Road, Toronto, M5S 3G8, ON, Canada. Electronic address:
Medical image analysis has significantly benefited from advancements in deep learning, particularly in the application of Generative Adversarial Networks (GANs) for generating realistic and diverse images that can augment training datasets. The common GAN-based approach is to generate entire image volumes, rather than the region of interest (ROI). Research on deep learning-based brain tumor classification using MRI has shown that it is easier to classify the tumor ROIs compared to the entire image volumes.
BMC Med Imaging
December 2024
Institute of Medical Science, 1 King's College Circle, Toronto, M5S 1A8, Ontario, Canada.
Purpose: Training machine learning models to segment tumors and other anomalies in medical images is an important step for developing diagnostic tools but generally requires manually annotated ground truth segmentations, which necessitates significant time and resources. We aim to develop a pipeline that can be trained using readily accessible binary image-level classification labels, to effectively segment regions of interest without requiring ground truth annotations.
Methods: This work proposes the use of a deep superpixel generation model and a deep superpixel clustering model trained simultaneously to output weakly supervised brain tumor segmentations.
Comput Biol Med
December 2024
Department of Chemical Engineering, IIT Delhi, India; Yardi School of Artificial Intelligence, IIT Delhi, India. Electronic address:
Unified translation of medical images from one-to-many distinct modalities is desirable in healthcare settings. A ubiquitous approach for bilateral medical scan translation is one-to-one mapping with GANs. However, its efficacy in encapsulating diversity in a pool of medical scans and performing one-to-many translation is questionable.
Comput Biol Med
January 2025
School of Computer Science and Informatics, Cardiff University, Cardiff, CF24 4AG, UK. Electronic address:
Early-stage 3D brain tumor segmentation from magnetic resonance imaging (MRI) scans is crucial for prompt and effective treatment. However, this process faces the challenge of precise delineation due to the tumors' complex heterogeneity. Moreover, energy sustainability targets and resource limitations, especially in developing countries, require efficient and accessible medical imaging solutions.