The two major challenges in deep-learning-based medical image segmentation are multi-modality and a lack of expert annotations. Existing semi-supervised segmentation models can mitigate the problem of insufficient annotations by utilizing a small amount of labeled data. However, most of these models are limited to single-modal data and cannot exploit the complementary information in multi-modal medical images. A few semi-supervised multi-modal models have been proposed recently, but they have rigid structures and require additional training steps for each modality. In this work, we propose a novel, flexible method, semi-supervised multi-modal medical image segmentation with unified translation (SMSUT), together with a semi-supervised training procedure that leverages multi-modal information to improve segmentation performance. Our architecture uses unified translation to extract complementary information from multi-modal data, compelling the network to focus on the disparities and salient features of each modality. Furthermore, we impose constraints on the model at both the pixel and feature levels to cope with the lack of annotations and the diverse representations within semi-supervised multi-modal data. We also introduce a training procedure tailored to semi-supervised multi-modal medical image analysis that integrates the concept of conditional translation. Our method adapts seamlessly to varying numbers of distinct modalities in the training data. Experiments show that our model outperforms semi-supervised segmentation counterparts on public datasets, demonstrating its high performance and the transferability of the proposed method. The code will be openly available at https://github.com/Sue1347/SMSUT-MedicalImgSegmentation.
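The abstract does not spell out how the pixel- and feature-level constraints are formulated. As a rough illustration only (not the SMSUT implementation; all function names and loss weights below are hypothetical), a semi-supervised multi-modal objective of this general shape typically combines a supervised loss on the labeled subset with cross-modal consistency terms on all images:

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def supervised_dice_loss(probs, one_hot, eps=1e-8):
    """Soft Dice loss on labeled images (0 = perfect overlap, 1 = none)."""
    inter = (probs * one_hot).sum()
    return 1.0 - 2.0 * inter / (probs.sum() + one_hot.sum() + eps)

def pixel_consistency(probs_a, probs_b):
    """Pixel-level constraint: mean squared difference between the
    per-pixel class probabilities predicted from two modalities."""
    return float(np.mean((probs_a - probs_b) ** 2))

def feature_consistency(feat_a, feat_b, eps=1e-8):
    """Feature-level constraint: cosine distance between pooled
    encoder features of the two modalities (~0 when aligned)."""
    a, b = feat_a.ravel(), feat_b.ravel()
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    return float(1.0 - cos)

def semi_supervised_loss(logits_a, logits_b, feat_a, feat_b,
                         one_hot=None, lam_pix=1.0, lam_feat=0.1):
    """Total loss: unsupervised cross-modal consistency on every image,
    plus a supervised term when a label is available."""
    p_a, p_b = softmax(logits_a), softmax(logits_b)
    loss = lam_pix * pixel_consistency(p_a, p_b)
    loss += lam_feat * feature_consistency(feat_a, feat_b)
    if one_hot is not None:  # labeled image
        loss += supervised_dice_loss(p_a, one_hot)
    return loss
```

Because the consistency terms need no ground truth, unlabeled image pairs still contribute gradient signal, which is the general mechanism by which such models exploit the unannotated portion of the data.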
Comput Biol Med, June 2024. DOI: http://dx.doi.org/10.1016/j.compbiomed.2024.108570
Comput Med Imaging Graph
December 2024
Zhejiang Cancer Hospital, Hangzhou, 331022, Zhejiang, China; Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310000, Zhejiang, China.
This paper provides an overview of the developments in the Segment Anything Model (SAM) for medical image segmentation over the past year. Although direct application of SAM to medical datasets has shown mixed results, it has achieved notable success when adapted to medical image segmentation tasks through fine-tuning on medical datasets, transitioning from 2D to 3D data, and optimized prompt engineering.
Sensors (Basel)
November 2024
The School of Software Technology, Dalian University of Technology, Dalian 116621, China.
Ship image classification identifies the type of ship in an input image, which plays a significant role in the marine field. To enhance classification performance, much research focuses on multi-modal ship classification, which combines the advantages of visible and infrared images to capture complementary information. However, current methods simply concatenate the features of different modalities to learn complementary information, neglecting the multi-level correlation between modalities.
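The snippet above contrasts naive feature concatenation with modeling the correlation between modalities at several network levels. As a purely hypothetical sketch (not this paper's architecture; all names are invented for illustration), a multi-level fusion might weight each level's visible/infrared feature pair by how strongly the two modalities agree at that level:

```python
import numpy as np

def correlation_weight(f_vis, f_ir, eps=1e-8):
    """Pearson-style correlation between a visible-image feature vector
    and the matching infrared feature vector at one network level,
    mapped from [-1, 1] to a [0, 1] fusion weight."""
    a = f_vis - f_vis.mean()
    b = f_ir - f_ir.mean()
    r = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    return 0.5 * (r + 1.0)

def naive_concat(levels):
    """Baseline criticized in the text: concatenate all features,
    ignoring how the modalities relate at each level."""
    return np.concatenate([np.concatenate(pair) for pair in levels])

def multi_level_fusion(levels):
    """Weight each level's concatenated visible/infrared pair by its
    cross-modal correlation before concatenating across levels."""
    fused = []
    for f_vis, f_ir in levels:
        w = correlation_weight(f_vis, f_ir)
        fused.append(w * np.concatenate([f_vis, f_ir]))
    return np.concatenate(fused)
```

Both fusions produce a vector of the same dimensionality, so the correlation-weighted variant can drop into the same downstream classifier as the concatenation baseline.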
BMC Med Inform Decis Mak
May 2024
Department of Computer Science, Colorado School of Mines, Golden, Colorado, 80401, USA.
Background: Alzheimer's Disease (AD) is a progressive memory disorder that causes irreversible cognitive decline. Given that there is currently no cure, it is critical to detect AD at an early stage of disease progression. Recently, many statistical learning methods have been proposed to identify cognitive decline from temporal data, but few of them integrate heterogeneous phenotype and genetic information to improve prediction accuracy.
Comput Biol Med
June 2024
Rochester Institute of Technology, Rochester, NY 14623, USA.
Bratisl Lek Listy
October 2023
Autism Spectrum Disorder (ASD) is a lifelong neurodevelopmental condition that manifests in a wide range of ways. This research work proposes a new semi-supervised training method for the recognition of discrete multi-modal autism spectrum disorder. At the coarse-grained level, we assume that the various modalities capture equivalent information about child autism.