Deep superpixel generation and clustering for weakly supervised segmentation of brain tumors in MR images.

BMC Med Imaging

Institute of Medical Science, 1 King's College Circle, Toronto, M5S 1A8, Ontario, Canada.

Published: December 2024

AI Article Synopsis

  • This study focuses on creating a machine learning pipeline that can segment brain tumors in medical images using only binary image-level classification labels, eliminating the need for expensive and time-consuming manual annotations.
  • The method combines a deep superpixel generation model and a clustering model that work together to produce weakly supervised tumor segmentations while utilizing a classifier to improve accuracy by focusing on undersegmented areas.
  • The new pipeline was evaluated using MRI scans from the BraTS 2020 and BraTS 2023 datasets, achieving promising results in segmentation performance compared to existing state-of-the-art methods.

Article Abstract

Purpose: Training machine learning models to segment tumors and other anomalies in medical images is an important step for developing diagnostic tools but generally requires manually annotated ground truth segmentations, which necessitates significant time and resources. We aim to develop a pipeline that can be trained using readily accessible binary image-level classification labels, to effectively segment regions of interest without requiring ground truth annotations.

Methods: This work proposes training a deep superpixel generation model and a deep superpixel clustering model simultaneously to produce weakly supervised brain tumor segmentations. The superpixel clustering model groups the superpixels produced by the generation model into tumor and background regions. Additionally, we train a classifier using binary image-level labels (i.e., labels indicating whether an image contains a tumor), which guides training by localizing undersegmented seed regions through an additional loss term. Training the superpixel generation and clustering models simultaneously, combined with this guided localization, allows the output weakly supervised tumor segmentations to capture contextual information that is propagated to both models during training, yielding superpixels that closely contour the tumors. We evaluate the pipeline using the Dice coefficient and the 95th-percentile Hausdorff distance (HD95), and compare against state-of-the-art baselines: CAM-S, a state-of-the-art weakly supervised segmentation method that uses both seeds and superpixels, and the Segment Anything Model (SAM).
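At inference time, the two model outputs described above can be combined by marginalizing each pixel's soft superpixel assignment over the cluster labels assigned to those superpixels. The following is a minimal NumPy sketch of that composition step; the function name, array shapes, and the 0.5 threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def compose_segmentation(superpixel_probs, cluster_probs, threshold=0.5):
    """Compose a binary tumor mask from the two model outputs.

    superpixel_probs: (H, W, K) soft assignment of each pixel to K superpixels
    cluster_probs:    (K, 2) soft assignment of each superpixel to the
                      {background, tumor} clusters
    """
    # Pixel-level tumor probability: marginalize over the K superpixels
    pixel_tumor = superpixel_probs @ cluster_probs[:, 1]  # (H, W)
    return (pixel_tumor > threshold).astype(np.uint8)

# Toy example: a 4x4 image with K = 2 superpixels
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 4, 2))
sp = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)  # softmax
cl = np.array([[0.9, 0.1],   # superpixel 0 -> mostly background
               [0.2, 0.8]])  # superpixel 1 -> mostly tumor
mask = compose_segmentation(sp, cl)
```

Because both assignments stay soft during training, gradients from a segmentation-level loss can flow back through this composition into both models, which is what lets the contextual information propagate to each.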

Results: We used 2D slices of magnetic resonance brain scans from the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 dataset and labels indicating the presence of tumors to train and evaluate the pipeline. On an external test cohort from the BraTS 2023 dataset, our method achieved a mean Dice coefficient of 0.745 and a mean HD95 of 20.8, outperforming all baselines, including CAM-S and SAM, which resulted in mean Dice coefficients of 0.646 and 0.641, and mean HD95 of 21.2 and 27.3, respectively.
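The two reported metrics can be computed directly from binary masks. Below is a simplified NumPy sketch: Dice measures overlap, and HD95 here is taken over all foreground pixels of each mask, whereas BraTS-style evaluations typically use extracted surface voxels and physical spacing.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice overlap between two binary masks (1.0 = perfect overlap)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

def hd95(pred, target):
    """95th-percentile symmetric Hausdorff distance (pixel units)."""
    a = np.argwhere(pred)   # foreground coordinates of prediction
    b = np.argwhere(target) # foreground coordinates of ground truth
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)    # each pred point -> nearest target point
    d_ba = d.min(axis=0)    # each target point -> nearest pred point
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))

# Toy example: the prediction misses a one-pixel-wide strip of the tumor
pred = np.zeros((8, 8), dtype=np.uint8); pred[2:6, 2:6] = 1
gt = np.zeros((8, 8), dtype=np.uint8);   gt[2:6, 2:7] = 1
```

On this toy pair, Dice is 2*16/(16+20) ≈ 0.889 and HD95 is 1.0 pixel, since every missed ground-truth pixel is one pixel from the prediction.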

Conclusion: The proposed combination of deep superpixel generation, deep superpixel clustering, and the incorporation of undersegmented seeds as a loss term improves weakly supervised segmentation.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11657002
DOI: http://dx.doi.org/10.1186/s12880-024-01523-x

Publication Analysis

Top Keywords

deep superpixel (20)
superpixel generation (20)
weakly supervised (20)
supervised segmentation (12)
superpixel clustering (12)
generation clustering (8)
ground truth (8)
binary image-level (8)
clustering model (8)
output weakly (8)

Similar Publications


Fire is a dangerous disaster with human, ecological, and financial ramifications. Forest fires have increased significantly in recent years due to natural and artificial climatic factors. Therefore, accurate and early prediction of fires is essential.


Deep convolutional neural networks (CNNs) have been widely used for fundus image classification and have achieved very impressive performance. However, the explainability of CNNs is poor because of their black-box nature, which limits their application in clinical practice. In this paper, we propose a novel method to search for discriminative regions to increase the confidence of CNNs in the classification of features in a specific category, thereby helping users understand which regions in an image are important for a CNN to make a particular prediction.


Background/aim: In this study, we introduce an innovative deep-learning model architecture aimed at enhancing the accuracy of detecting and classifying organizing pneumonia (OP), a condition characterized by the presence of Masson bodies within the alveolar spaces due to lung injury. The variable morphology of Masson bodies and their resemblance to adjacent pulmonary structures pose significant diagnostic challenges, necessitating a model capable of discerning subtle textural and structural differences. Our model incorporates a novel architecture that integrates advancements in three key areas: Semantic segmentation, texture analysis, and structural feature recognition.


The museum system is exposed to a high risk of seismic hazards. However, it is difficult to carry out seismic hazard prevention to protect cultural relics in collections due to the lack of real data and the diverse types of seismic hazards. To address this problem, we developed a deep-learning-based multi-source feature-fusion method to assess seismic damage to cultural relics in collections.

