Purpose: State-of-the-art automated segmentation methods achieve exceptionally high performance on the Brain Tumor Segmentation (BraTS) challenge, a dataset of uniformly processed and standardized magnetic resonance images (MRIs) of gliomas. However, a reasonable concern is that these models may not fare well on clinical MRIs that do not belong to the specially curated BraTS dataset. Research using the previous generation of deep learning models indicates significant performance loss on cross-institutional predictions. Here, we evaluate the cross-institutional applicability and generalizability of state-of-the-art deep learning models on new clinical data.

Methods: We train a state-of-the-art 3D U-Net model on the conventional BraTS dataset comprising low- and high-grade gliomas. We then evaluate the performance of this model for automatic segmentation of brain tumors on in-house clinical data. This dataset contains MRIs that differ from those in the BraTS dataset in tumor types, resolutions, and standardization. Ground truth segmentations to validate the automated segmentation for the in-house clinical data were obtained from expert radiation oncologists.
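As an illustrative sketch (not the authors' code), the three evaluation regions named in this abstract can be derived from per-voxel labels under the standard BraTS label convention, in which 1 denotes the necrotic/non-enhancing tumor core, 2 the peritumoral edema, and 4 the GD-enhancing tumor (0 is background):

```python
# Derive the three BraTS evaluation regions from per-voxel labels, assuming
# the standard BraTS convention: 1 = necrotic/non-enhancing core, 2 = edema,
# 4 = enhancing tumor, 0 = background. Labels are given as a flat sequence.
def brats_regions(labels):
    whole_tumor = [int(v in (1, 2, 4)) for v in labels]  # union of all tumor labels
    tumor_core = [int(v in (1, 4)) for v in labels]      # core + enhancing
    enhancing = [int(v == 4) for v in labels]            # enhancing tumor only
    return whole_tumor, tumor_core, enhancing

# Toy example on four voxels, one of each label:
wt, tc, et = brats_regions([0, 1, 2, 4])
print(wt, tc, et)  # [0, 1, 1, 1] [0, 1, 0, 1] [0, 0, 0, 1]
```

In practice these masks would be 3D arrays over the full MRI volume, but the label-to-region mapping is the same.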

Results: We report average Dice scores of 0.764, 0.648, and 0.61 for the whole tumor, tumor core, and enhancing tumor, respectively, on the clinical MRIs. These means are higher than those previously reported for same-institution and cross-institution datasets of different origin using different methods. There is no statistically significant difference when comparing the Dice scores to the inter-annotator variability between two expert clinical radiation oncologists. Although performance on the clinical data is lower than on the BraTS data, these numbers indicate that models trained on the BraTS dataset have impressive segmentation performance on previously unseen images obtained at a separate clinical institution. These images differ from the BraTS data in imaging resolution, standardization pipeline, and tumor type.
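For readers unfamiliar with the metric, the Dice scores reported above measure voxel-wise overlap between a predicted mask and the ground truth. A minimal sketch (not the authors' evaluation code) on flat binary masks:

```python
# Dice similarity coefficient between two binary masks, given as flat
# sequences of 0/1 values (one entry per voxel).
def dice_score(pred, truth):
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: conventionally perfect agreement
    return 2.0 * intersection / total

# Toy example: prediction and truth agree on one of two tumor voxels.
print(dice_score([1, 1, 0, 0], [1, 0, 1, 0]))  # 0.5
```

A Dice score of 1.0 means perfect overlap and 0.0 means none, so the whole-tumor mean of 0.764 reported here indicates substantial, though imperfect, agreement with the expert contours.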

Conclusions: State-of-the-art deep learning models demonstrate promising performance on cross-institutional predictions. They considerably improve on previous models and can transfer knowledge to new types of brain tumors without additional modeling.


Source: http://dx.doi.org/10.1002/mp.16321

