Improving CT Image Tumor Segmentation Through Deep Supervision and Attentional Gates.

Front Robot AI

Intelligent and Interactive Systems, Department of Computer Science, University of Innsbruck, Innsbruck, Austria.

Published: August 2020

Computed Tomography (CT) is an imaging procedure that combines many X-ray measurements taken from different angles. The segmentation of areas in CT images provides valuable aid to physicians and radiologists in reaching a patient diagnosis. CT scans of a torso usually include several neighboring internal organs. Deep learning has become the state of the art in medical image segmentation. For such techniques to perform a successful segmentation, it is of great importance that the network learns to focus on the organ of interest and its surrounding structures, and that it can detect target regions of different sizes. In this paper, we propose extending a popular deep learning architecture, the Convolutional Neural Network (CNN), with deep supervision and attention gates. Our experimental evaluation shows that the inclusion of attention and deep supervision yields consistent improvements in tumor prediction accuracy across different datasets and training-set sizes, while adding minimal computational overhead.
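The attention gates the abstract refers to can be illustrated with a minimal NumPy sketch of an additive attention gate: a gating signal from a coarser decoder stage produces a per-pixel coefficient in (0, 1) that scales the skip-connection features, letting the network focus on the organ of interest. All shapes, weights, and function names below are illustrative assumptions, not the paper's implementation (which would use learned 1x1 convolutions inside a CNN).

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_gate(x, g, W_x, W_g, w_psi):
    """Additive attention gate, simplified: the 1x1 convolutions of a real
    CNN gate are written as matrix products over the channel axis.
    x: skip-connection features, shape (H, W, C)
    g: gating signal from a coarser stage, upsampled to (H, W, C_g)
    Returns x scaled per pixel by an attention coefficient in (0, 1)."""
    q = np.maximum(x @ W_x + g @ W_g, 0.0)     # ReLU(W_x x + W_g g)
    psi = 1.0 / (1.0 + np.exp(-(q @ w_psi)))   # sigmoid -> (H, W, 1)
    return x * psi                             # gate the skip features

# Toy example: 8x8 feature maps, 4 skip channels, 6 gating channels,
# 16 intermediate channels; random weights stand in for learned ones.
H, W, C, Cg, F = 8, 8, 4, 6, 16
x = rng.standard_normal((H, W, C))
g = rng.standard_normal((H, W, Cg))
W_x = rng.standard_normal((C, F)) * 0.1
W_g = rng.standard_normal((Cg, F)) * 0.1
w_psi = rng.standard_normal((F, 1)) * 0.1

out = attention_gate(x, g, W_x, W_g, w_psi)
```

Because the coefficient is a sigmoid output, the gated features never exceed the original skip features in magnitude; deep supervision would then attach auxiliary losses to such intermediate decoder outputs during training.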

Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7805665 (PMC)
http://dx.doi.org/10.3389/frobt.2020.00106 (DOI)


Similar Publications

KMeansGraphMIL: A Weakly Supervised Multiple Instance Learning Model for Predicting Colorectal Cancer Tumor Mutational Burden.

Am J Pathol

January 2025

The Seventh Affiliated Hospital, Sun Yat-Sen University, 628 Zhenyuan Road, Xinhu Street, Guangming New District, Shenzhen, 518107, Guangdong, China.

Colorectal cancer (CRC) is one of the top three most lethal malignancies worldwide, posing a significant threat to human health. Recently proposed immunotherapy checkpoint blockade treatments have proven effective for CRC, but their use depends on measuring specific biomarkers in patients. Among these biomarkers, Tumor Mutational Burden (TMB) has emerged as a novel indicator, traditionally requiring Next-Generation Sequencing (NGS) for measurement, which is time-consuming, labor-intensive, and costly.


Motivation: Ensuring connectivity and preventing fractures in tubular object segmentation are critical for downstream analyses. Despite advancements in deep neural networks (DNNs) that have significantly improved tubular object segmentation, existing methods still face limitations. They often rely heavily on precise annotations, hindering their scalability to large-scale unlabeled image datasets.

Article Synopsis
  • Deep learning methods show strong potential for predicting lung cancer risk from CT scans, but there's a need for more comprehensive comparisons and validations of these models in real-world settings.
  • The study reviews 21 state-of-the-art deep learning models, analyzing their performance using CT scans from a subset of the National Lung Screening Trial, with a focus on malignant versus benign classification.
  • Results reveal that 3D deep learning models generally outperformed 2D models, with the best 3D model achieving an AUROC of 0.86 compared to 0.79 for the best 2D model, emphasizing the need to choose appropriate pretrained datasets and model types for effective lung cancer risk prediction.

Automatic medical imaging segmentation via self-supervising large-scale convolutional neural networks.

Radiother Oncol

January 2025

Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA; Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA 30308, USA.

Purpose: This study aims to develop a robust, large-scale deep learning model for medical image segmentation, leveraging self-supervised learning to overcome the limitations of supervised learning and data variability in clinical settings.

Methods And Materials: We curated a substantial multi-center CT dataset for self-supervised pre-training using masked image modeling with sparse submanifold convolution. We designed a series of Sparse Submanifold U-Nets (SS-UNets) of varying sizes and performed self-supervised pre-training.
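The masked image modeling objective used for pre-training above can be sketched in a toy form: hide a random subset of non-overlapping patches and train a model to reconstruct only the hidden pixels. This NumPy sketch is a generic illustration of the objective under assumed patch sizes and masking ratios; it does not reproduce the paper's sparse submanifold convolutions or SS-UNet architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def mask_patches(img, patch=4, ratio=0.6, rng=rng):
    """Hide a random subset of non-overlapping patches.
    img: (H, W) array with H and W divisible by `patch`.
    Returns the corrupted image and a boolean patch-level mask
    (True = patch was hidden)."""
    H, W = img.shape
    ph, pw = H // patch, W // patch
    mask = rng.random((ph, pw)) < ratio
    corrupted = img.copy()
    for i in range(ph):
        for j in range(pw):
            if mask[i, j]:
                corrupted[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = 0.0
    return corrupted, mask

def masked_recon_loss(pred, target, mask, patch=4):
    """MSE computed only on the masked pixels, as in masked pre-training."""
    pix = np.kron(mask, np.ones((patch, patch), dtype=bool))
    return float(np.mean((pred[pix] - target[pix]) ** 2))

img = rng.standard_normal((16, 16))
corrupted, mask = mask_patches(img)
# A trivial "model" that predicts zeros everywhere; a real encoder-decoder
# would be trained to minimize this loss on the hidden patches.
loss = masked_recon_loss(np.zeros_like(img), img, mask)
```

Because the loss is restricted to hidden patches, the model must infer anatomy from visible context, which is what makes the pretext task useful without any annotations.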


Self-interactive learning: Fusion and evolution of multi-scale histomorphology features for molecular traits prediction in computational pathology.

Med Image Anal

January 2025

Nuffield Department of Medicine, University of Oxford, Oxford, UK; Department of Engineering Science, University of Oxford, Oxford, UK; Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, UK; Ludwig Institute for Cancer Research, Nuffield Department of Clinical Medicine, University of Oxford, Oxford, UK; Oxford National Institute for Health Research (NIHR) Biomedical Research Centre, Oxford, UK.

Predicting disease-related molecular traits from histomorphology brings great opportunities for precision medicine. Despite the rich information present in histopathological images, extracting fine-grained molecular features from standard whole slide images (WSI) is non-trivial. The task is further complicated by the lack of annotations for subtyping and contextual histomorphological features that might span multiple scales.

