Computed Tomography (CT) is an imaging procedure that combines many X-ray measurements taken from different angles. Segmentation of regions in CT images is a valuable aid to physicians and radiologists in reaching a more accurate patient diagnosis. CT scans of the torso typically include several neighboring internal organs. Deep learning has become the state of the art in medical image segmentation. For such techniques to perform a successful segmentation, it is important that the network learns to focus on the organ of interest and its surrounding structures, and that it can detect target regions of different sizes. In this paper, we propose extending a popular deep learning architecture, the Convolutional Neural Network (CNN), with deep supervision and attention gates. Our experimental evaluation shows that including attention and deep supervision yields consistent improvements in tumor prediction accuracy across different datasets and training-set sizes, while adding minimal computational overhead.
Full text: PMC http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7805665 | DOI http://dx.doi.org/10.3389/frobt.2020.00106
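As a concrete illustration of the two components named in the abstract above, the following is a minimal PyTorch sketch of an additive attention gate on a skip connection and an auxiliary deep-supervision head. The class names, channel arguments, and 2D layout are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    """Additive attention gate: skip features are re-weighted by a mask
    computed from the coarser gating signal, so the decoder focuses on the
    organ/tumor region instead of the whole field of view."""
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, skip, gate):
        # Bring the gating signal to the skip connection's spatial size.
        g = F.interpolate(self.phi(gate), size=skip.shape[2:],
                          mode="bilinear", align_corners=False)
        attn = torch.sigmoid(self.psi(F.relu(self.theta(skip) + g)))  # (N,1,H,W)
        return skip * attn  # suppress irrelevant background activations


class DeepSupervisionHead(nn.Module):
    """Auxiliary 1x1 prediction head attached to an intermediate decoder
    stage; its loss (upsampled to full resolution) is added to the main
    loss so gradients reach earlier layers directly."""
    def __init__(self, in_ch, num_classes, out_size):
        super().__init__()
        self.pred = nn.Conv2d(in_ch, num_classes, kernel_size=1)
        self.out_size = out_size

    def forward(self, x):
        return F.interpolate(self.pred(x), size=self.out_size,
                             mode="bilinear", align_corners=False)
```

In such a setup the total training objective is typically the full-resolution segmentation loss plus a small weighted sum of the auxiliary head losses, which is what lets the deeper decoder stages receive a direct supervisory signal.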
Am J Pathol
January 2025
The Seventh Affiliated Hospital, Sun Yat-Sen University, 628 Zhenyuan Road, Xinhu Street, Guangming New District, Shenzhen, 518107, Guangdong, China.
Colorectal cancer (CRC) is one of the three most lethal malignancies worldwide, posing a significant threat to human health. Recently introduced immune checkpoint blockade therapies have proven effective for CRC, but their use depends on measuring specific biomarkers in patients. Among these biomarkers, tumor mutational burden (TMB) has emerged as a novel indicator; it has traditionally required next-generation sequencing (NGS) for measurement, which is time-consuming, labor-intensive, and costly.
Bioinformatics
January 2025
School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China.
Motivation: Ensuring connectivity and preventing fractures in tubular object segmentation are critical for downstream analyses. Despite advancements in deep neural networks (DNNs) that have significantly improved tubular object segmentation, existing methods still face limitations. They often rely heavily on precise annotations, hindering their scalability to large-scale unlabeled image datasets.
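The excerpt above does not describe the authors' own method, but a widely used way to encourage connectivity in tubular structures is a centerline-aware term such as the soft clDice loss (Shit et al.), sketched below in PyTorch under that assumption; the helper names and iteration count are illustrative.

```python
import torch
import torch.nn.functional as F

def soft_erode(img):
    # Morphological erosion approximated with min-pooling (via negated max-pool).
    return -F.max_pool2d(-img, kernel_size=3, stride=1, padding=1)

def soft_dilate(img):
    return F.max_pool2d(img, kernel_size=3, stride=1, padding=1)

def soft_skeleton(img, iters=10):
    """Differentiable skeletonization: repeatedly erode and keep the parts
    removed by morphological opening, which lie on the centerline."""
    skel = F.relu(img - soft_dilate(soft_erode(img)))
    for _ in range(iters):
        img = soft_erode(img)
        delta = F.relu(img - soft_dilate(soft_erode(img)))
        skel = skel + F.relu(delta - skel * delta)
    return skel

def cl_dice_loss(pred, target, iters=10, eps=1e-6):
    """Penalizes breaks along the centerline of the predicted tube."""
    skel_pred = soft_skeleton(pred, iters)
    skel_true = soft_skeleton(target, iters)
    tprec = ((skel_pred * target).sum() + eps) / (skel_pred.sum() + eps)
    tsens = ((skel_true * pred).sum() + eps) / (skel_true.sum() + eps)
    return 1.0 - 2.0 * tprec * tsens / (tprec + tsens)
```

In practice a term like this is combined with a standard Dice or cross-entropy loss rather than used alone, so overall overlap accuracy is preserved while fractures along the vessel or airway are penalized.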
Radiother Oncol
January 2025
Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA; Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA 30308, USA.
Purpose: This study aims to develop a robust, large-scale deep learning model for medical image segmentation, leveraging self-supervised learning to overcome the limitations of supervised learning and data variability in clinical settings.
Methods And Materials: We curated a substantial multi-center CT dataset for self-supervised pre-training using masked image modeling with sparse submanifold convolution. We designed a series of Sparse Submanifold U-Nets (SS-UNets) of varying sizes and performed self-supervised pre-training.
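As a rough sketch of the masked-image-modeling objective used for such pre-training, the snippet below hides random patches of an input slice and reconstructs only the hidden regions. It uses a generic dense stand-in network for brevity; the sparse submanifold convolutions and SS-UNet architecture described above are not reproduced, and random_patch_mask / mim_step are hypothetical helper names.

```python
import torch

def random_patch_mask(x, patch=16, mask_ratio=0.6):
    """Zero out a random subset of non-overlapping patches; return the masked
    input and a binary mask (1 = masked) at pixel resolution."""
    n, c, h, w = x.shape
    ph, pw = h // patch, w // patch
    keep = torch.rand(n, 1, ph, pw, device=x.device) >= mask_ratio
    mask = (1.0 - keep.float())
    mask = mask.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return x * (1.0 - mask), mask

def mim_step(encoder_decoder, x, optimizer, patch=16, mask_ratio=0.6):
    """One masked-image-modeling step: reconstruct only the masked regions."""
    x_masked, mask = random_patch_mask(x, patch, mask_ratio)
    recon = encoder_decoder(x_masked)  # assumed to output the input's shape
    loss = ((recon - x) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Here encoder_decoder stands for any network mapping an image to a reconstruction of the same shape; after pre-training on unlabeled scans, its encoder weights would be fine-tuned for the downstream segmentation task.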
Med Image Anal
January 2025
Nuffield Department of Medicine, University of Oxford, Oxford, UK; Department of Engineering Science, University of Oxford, Oxford, UK; Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, UK; Ludwig Institute for Cancer Research, Nuffield Department of Clinical Medicine, University of Oxford, Oxford, UK; Oxford National Institute for Health Research (NIHR) Biomedical Research Centre, Oxford, UK.
Predicting disease-related molecular traits from histomorphology brings great opportunities for precision medicine. Despite the rich information present in histopathological images, extracting fine-grained molecular features from standard whole slide images (WSI) is non-trivial. The task is further complicated by the lack of annotations for subtyping and contextual histomorphological features that might span multiple scales.