Accurate segmentation of gastric tumors from computed tomography (CT) images provides useful information for guiding the diagnosis and treatment of gastric cancer. Researchers typically collect datasets from multiple medical centers to increase sample size and representativeness, but this raises the issue of data heterogeneity. To this end, we propose a new cross-center 3D tumor segmentation method named unsupervised scale-aware and boundary-aware domain adaptive network (USBDAN). It includes a new 3D neural network that efficiently bridges an anisotropic neural network and a Transformer (AsTr) to extract multi-scale features from CT images with anisotropic resolution, and a scale-aware and boundary-aware domain alignment (SaBaDA) module that adaptively aligns multi-scale features between the two domains and sharpens tumor boundary delineation using location-related information drawn from each sample across all domains. We evaluate the proposed method on an in-house CT image dataset collected from four medical centers. Our results demonstrate that the proposed method outperforms several state-of-the-art methods.
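The abstract motivates AsTr by the anisotropic resolution of CT volumes, whose through-plane (z) spacing is typically much coarser than the in-plane spacing. A quick parameter count illustrates why anisotropic 3D kernels can be more efficient than isotropic ones in that setting. This is only an illustrative sketch: the actual kernel shapes and channel widths used in AsTr are not given in the abstract, so the values below are assumptions.

```python
# Illustrative sketch (not the paper's architecture): comparing the weight
# count of an isotropic 3D convolution against an anisotropic one that does
# not mix information along the coarsely sampled z-axis.
# Channel sizes (32 -> 64) and kernel shapes are hypothetical.

def conv3d_params(in_ch, out_ch, kernel):
    """Number of learnable parameters in a dense 3D convolution layer."""
    kd, kh, kw = kernel
    return in_ch * out_ch * kd * kh * kw + out_ch  # weights + biases

iso = conv3d_params(32, 64, (3, 3, 3))    # isotropic kernel
aniso = conv3d_params(32, 64, (1, 3, 3))  # anisotropic: in-plane only

print(iso, aniso)  # the anisotropic layer needs roughly a third of the weights
```

Stacking such in-plane kernels (with occasional z-mixing layers) is a common way to handle anisotropic medical volumes; whether AsTr uses this exact scheme is not stated in the abstract.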

Source
DOI: http://dx.doi.org/10.1109/EMBC40787.2023.10340877

Publication Analysis

Top Keywords

- scale-aware boundary-aware (12)
- boundary-aware domain (12)
- unsupervised scale-aware (8)
- domain adaptive (8)
- adaptive network (8)
- tumor segmentation (8)
- medical centers (8)
- neural network (8)
- multi-scale features (8)
- proposed method (8)

