Multi-organ segmentation, which identifies and separates different organs in medical images, is a fundamental task in medical image analysis. Recently, the immense success of deep learning has motivated its wide adoption for multi-organ segmentation. However, because multi-organ annotations demand expensive labor and expertise, annotated data are usually scarce, making it difficult to obtain sufficient training data for deep learning-based methods. In this paper, we address this issue by combining off-the-shelf single-organ segmentation models into a multi-organ segmentation model for the target dataset, which removes the dependence on annotated multi-organ data. To this end, we propose a novel dual-stage method consisting of a Model Adaptation stage and a Model Ensemble stage. The first stage improves the generalization of each off-the-shelf single-organ model on the target domain, while the second stage distills and integrates knowledge from the multiple adapted single-organ models. Extensive experiments on four abdominal datasets demonstrate that our method effectively leverages off-the-shelf single-organ segmentation models to obtain a tailored multi-organ segmentation model with high accuracy.
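The abstract's Model Ensemble idea, fusing the outputs of several single-organ models into one multi-organ label map, can be illustrated with a minimal sketch. This is not the paper's actual distillation procedure; the function name, the dict-of-probability-maps input, and the confidence-based tie-breaking rule are all assumptions made for illustration.

```python
def ensemble_single_organ_masks(prob_maps, threshold=0.5):
    """Fuse per-organ foreground probability maps (each produced by a
    separate single-organ model) into a single multi-organ label map.

    prob_maps: dict mapping organ label (int >= 1) to a 2D list of
    foreground probabilities; label 0 is background. Where several
    organs exceed the threshold at a pixel, the most confident wins.
    """
    labels = sorted(prob_maps)
    rows = len(prob_maps[labels[0]])
    cols = len(prob_maps[labels[0]][0])
    fused = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            best_label, best_prob = 0, threshold
            for k in labels:
                p = prob_maps[k][i][j]
                if p >= best_prob:
                    best_label, best_prob = k, p
            fused[i][j] = best_label
    return fused
```

For example, fusing a liver map and a kidney map (labels 1 and 2) assigns each pixel to whichever organ's model is more confident, or to background where neither clears the threshold.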
DOI: 10.1016/j.compbiomed.2023.107467
Comput Biol Med
December 2024
Aerospace Hi-tech Holding Group Co., LTD, Harbin, Heilongjiang, 150060, China.
CNN-based techniques have achieved impressive outcomes in medical image segmentation but struggle to capture long-range dependencies between pixels. The Transformer, with its strong feature extraction and representation learning abilities, performs exceptionally well in the domain of medical image segmentation. However, shortcomings remain in bridging local and global connections, resulting in occasional loss of positional information.
J Cell Mol Med
December 2024
School of Computer Science and Software Engineering, University of Science and Technology Liaoning, Anshan, China.
Convolutional neural networks (CNNs) are well established at handling local features in visual tasks; yet they falter in modeling the complex spatial relationships and long-range dependencies that are crucial for medical image segmentation, particularly for identifying pathological changes. While vision transformers (ViTs) excel at capturing long-range dependencies, their ability to leverage local features remains inadequate. Recent ViT variants have incorporated CNN components to improve feature representation and segmentation outcomes, yet challenges with limited receptive fields and precise feature representation persist.
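The complementary local-versus-global behavior these hybrid CNN-ViT abstracts describe can be sketched in one dimension: a convolution mixes only a small neighbourhood, while self-attention mixes every position. This is a toy illustration, not any of the cited architectures; the function names and the simple summation of the two branches are assumptions for clarity.

```python
import math

def local_conv1d(x, kernel):
    """Local branch: each output depends only on a k-wide neighbourhood
    (zero-padded), mirroring a CNN's limited receptive field."""
    k, pad = len(kernel), len(kernel) // 2
    xp = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(xp[i + j] * kernel[j] for j in range(k)) for i in range(len(x))]

def self_attention(x):
    """Global branch: every output is a softmax-weighted mix of ALL
    positions (scaled dot-product attention with dimension 1)."""
    n = len(x)
    out = []
    for i in range(n):
        scores = [x[i] * x[j] for j in range(n)]
        m = max(scores)
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        out.append(sum(w[j] / z * x[j] for j in range(n)))
    return out

def hybrid_block(x, kernel):
    """Sum the local and global branches, the basic move of hybrid
    CNN-ViT designs that fuse both kinds of context."""
    local = local_conv1d(x, kernel)
    glob = self_attention(x)
    return [a + b for a, b in zip(local, glob)]
```

A change at one end of the input shifts every attention output but only the nearby convolution outputs, which is exactly the gap in long-range dependency modeling that these hybrids aim to close.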
Quant Imaging Med Surg
December 2024
TikTok Inc., San Jose, CA, USA.
Background: Medical image segmentation is crucial for clinical diagnostics and treatment planning. Recent hybrid models often neglect the local modeling capabilities of Transformers, despite the complementary nature of the local and global information captured by convolutional neural networks (CNNs) and Transformers. This limitation is particularly problematic in multi-organ segmentation, where organs adhere closely to one another and accurate delineation is essential.
Quant Imaging Med Surg
December 2024
The College of Computer and Information Science, Southwest University, Chongqing, China.
Background: Medical image segmentation is crucial for improving healthcare outcomes. Convolutional neural networks (CNNs) have been widely applied in medical image analysis; however, their inherent inductive biases limit their ability to capture global contextual information. Vision transformer (ViT) architectures address this limitation by leveraging attention mechanisms to model global relationships; however, they typically require large-scale datasets for effective training, which is challenging in the field of medical imaging due to limited data availability.
Acad Radiol
December 2024
Department of Radiology, The First Hospital of Jilin University, No.1, Xinmin Street, Changchun 130021, China (Y.W., M.L., Z.M., J.W., K.H., Q.Y., L.Z., L.M., H.Z.).
Rationale And Objectives: Effective trauma care in emergency departments necessitates rapid diagnosis by interdisciplinary teams using various medical data. This study constructed a multimodal diagnostic model for abdominal trauma using deep learning on non-contrast computed tomography (CT) and unstructured text data, enhancing the speed and accuracy of solid organ assessments.
Materials And Methods: Data were collected from patients undergoing abdominal CT scans.