Magnetic resonance imaging (MRI), ultrasound (US), and contrast-enhanced ultrasound (CEUS) provide different image data about the uterus and have been used in the preoperative assessment of endometrial cancer. In practice, not all patients have complete multi-modality medical images, due to high cost or long examination periods. Most existing methods must perform data cleansing or discard samples with missing modalities, which degrades model performance. In this work, we propose an incomplete multi-modality image data fusion method based on latent relation shared learning to overcome this limitation. The shared space contains the common latent feature representation and the modality-specific latent feature representations derived from both complete and incomplete multi-modality data, jointly exploiting the consistent and complementary information among multiple images. Experimental results show that our method outperforms current representative approaches in classification accuracy, sensitivity, specificity, and area under the curve (AUC). Furthermore, our method performs well under varying modality missing rates.
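To make the shared-space idea concrete, here is a minimal, hypothetical sketch of how a common latent representation and modality-specific representations could be combined under a missingness mask. The encoder shapes, masking scheme, and classifier head are illustrative assumptions, not the authors' architecture.

```python
# Minimal, hypothetical sketch of a shared latent space for incomplete
# multi-modality fusion (MRI / US / CEUS). Encoder shapes, the masking
# scheme, and the classifier head are illustrative assumptions, not the
# authors' exact design.
import torch
import torch.nn as nn

class SharedLatentFusion(nn.Module):
    def __init__(self, in_dims, common_dim=64, specific_dim=32, n_classes=2):
        super().__init__()
        # One encoder pair per modality: a "common" head projecting into the
        # shared space and a modality-specific head.
        self.common_enc = nn.ModuleList(
            nn.Sequential(nn.Linear(d, 128), nn.ReLU(), nn.Linear(128, common_dim))
            for d in in_dims)
        self.specific_enc = nn.ModuleList(
            nn.Sequential(nn.Linear(d, 128), nn.ReLU(), nn.Linear(128, specific_dim))
            for d in in_dims)
        self.classifier = nn.Linear(common_dim + specific_dim * len(in_dims), n_classes)

    def forward(self, xs, mask):
        # xs: list of per-modality feature tensors [B, d_m], zero-filled when
        # a scan is missing; mask: [B, M] with 1 where modality m is observed.
        commons, specifics = [], []
        for m, (x, ce, se) in enumerate(zip(xs, self.common_enc, self.specific_enc)):
            w = mask[:, m:m + 1]               # [B, 1], zeroes out missing scans
            commons.append(ce(x) * w)
            specifics.append(se(x) * w)
        # Common latent representation: average over observed modalities only.
        n_obs = mask.sum(dim=1, keepdim=True).clamp(min=1)
        common = torch.stack(commons).sum(dim=0) / n_obs
        z = torch.cat([common] + specifics, dim=1)
        return self.classifier(z)
```

At inference time the mask simply records which scans a given patient actually has, so a sample with any subset of MRI/US/CEUS can be classified without imputation or discarding.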


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11332793
DOI: http://dx.doi.org/10.1016/j.isci.2024.110509

Publication Analysis

Top Keywords

incomplete multi-modality (12); latent relation (8); relation shared (8); endometrial cancer (8); multi-modality medical (8); medical images (8); latent feature (8); feature representation (8); latent (4); shared learning (4)

Similar Publications

Domain-specific information preservation for Alzheimer's disease diagnosis with incomplete multi-modality neuroimages.

Med Image Anal

January 2025

School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China.

Article Synopsis
  • Multi-modality neuroimages are crucial for diagnosing Alzheimer's Disease (AD) but often face challenges due to missing data, which can hinder clinical practice.
  • Recent attempts to impute missing data may skip over important differences in imaging characteristics among modalities, which are essential for accurate diagnosis.
  • The proposed domain-specific information preservation (DSIP) framework includes a generative adversarial network for better imputation and a specialized network for improving diagnosis accuracy, outperforming existing methods (a generic sketch of the adversarial imputation step is shown below).
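The following is a minimal, hypothetical sketch of GAN-based modality imputation in the spirit of synthesizing a missing neuroimaging modality (e.g., PET features from MRI features). DSIP's domain-specific components are not reproduced here; the network sizes, feature dimensions, and losses are assumptions for illustration.

```python
# Generic adversarial imputation sketch: a generator maps observed-modality
# features to the missing modality, a discriminator scores real vs. imputed.
# All shapes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Sequential(nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, 256))  # MRI -> fake PET
D = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 1))    # real/fake logit

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

def train_step(mri_feats, pet_feats):
    # Train on paired samples where both modalities exist.
    real_lbl = torch.ones(pet_feats.size(0), 1)
    fake_lbl = torch.zeros(pet_feats.size(0), 1)
    # Discriminator: real PET features -> 1, imputed features -> 0.
    fake = G(mri_feats)
    d_loss = bce(D(pet_feats), real_lbl) + bce(D(fake.detach()), fake_lbl)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool the discriminator and stay close to the real modality.
    fake = G(mri_feats)
    g_loss = bce(D(fake), real_lbl) + F.l1_loss(fake, pet_feats)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

At test time, G fills in the missing modality's features for subjects who lack that scan, so the downstream diagnosis network always receives a complete input.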

Objective: To establish a deep learning model to test the feasibility of combining magnetic resonance imaging (MRI) deep learning features with clinical features for preoperative prediction of cytokeratin 19 (CK19) status in hepatocellular carcinoma (HCC).

Methods: A retrospective experiment was conducted based on the data of 116 HCC patients with confirmed CK19 status. A single-sequence multi-scale feature fusion deep learning model (MSFF-IResnet) and a multi-scale, multi-modality feature fusion model (MMFF-IResnet) were established based on the hepatobiliary phase (HBP) and diffusion-weighted imaging (DWI) sequences of enhanced MRI, together with the clinical features significantly correlated with CK19 status.
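To illustrate the multi-scale, multi-modality fusion idea, here is a minimal sketch that pools CNN feature maps from the two MRI sequences (HBP, DWI) at several spatial scales and concatenates them with clinical features. The backbone, layer sizes, and pooling grids are assumptions for illustration, not the published MMFF-IResnet.

```python
# Hypothetical multi-scale, multi-modality fusion sketch: per-sequence conv
# features pooled at three grids, then fused with clinical covariates.
import torch
import torch.nn as nn

class MultiScaleFusion(nn.Module):
    def __init__(self, n_clinical=10, n_classes=2):
        super().__init__()
        # One small conv backbone per MRI sequence (HBP, DWI); a real model
        # would use a pretrained IResNet-style encoder instead.
        self.backbones = nn.ModuleList(
            nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
            for _ in range(2))
        # Pool the same feature map at 1x1, 2x2, and 4x4 grids (three scales).
        self.pools = nn.ModuleList(nn.AdaptiveAvgPool2d(s) for s in (1, 2, 4))
        feat_dim = 2 * 32 * (1 + 4 + 16)       # sequences x channels x cells
        self.head = nn.Linear(feat_dim + n_clinical, n_classes)

    def forward(self, hbp, dwi, clinical):
        feats = []
        for x, bb in zip((hbp, dwi), self.backbones):
            fmap = bb(x)                        # [B, 32, H/2, W/2]
            feats += [p(fmap).flatten(1) for p in self.pools]
        # Fuse image features from both sequences with clinical features.
        return self.head(torch.cat(feats + [clinical], dim=1))
```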


Our work focuses on tackling the problem of fine-grained recognition with incomplete multi-modal data, which has been overlooked by previous work in the literature. It is desirable not only to capture fine-grained patterns of objects but also to alleviate the challenges of missing modalities in such a practical problem. In this paper, we propose to leverage a meta-learning strategy to learn model abilities of both fast modal adaptation and, more importantly, missing-modality completion across a variety of incomplete multi-modality learning tasks.
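To make the episodic idea concrete, here is a hypothetical first-order MAML-style scaffold in which each task corresponds to a different missing-modality pattern; the paper's actual completion module is not reproduced, and the task structure, shapes, and hyperparameters are all assumptions.

```python
# First-order MAML-style meta-update sketch: each "task" is a support/query
# split drawn under one missing-modality pattern, so the model learns to
# adapt quickly to any pattern. Purely illustrative scaffolding.
import copy
import torch

def meta_step(model, loss_fn, tasks, meta_opt, inner_lr=1e-2):
    meta_opt.zero_grad()
    for (x_s, y_s), (x_q, y_q) in tasks:
        fast = copy.deepcopy(model)            # task-specific fast weights
        # Inner loop: one adaptation step on the task's support set.
        inner_loss = loss_fn(fast(x_s), y_s)
        grads = torch.autograd.grad(inner_loss, list(fast.parameters()))
        with torch.no_grad():
            for p, g in zip(fast.parameters(), grads):
                p -= inner_lr * g
        # Outer loop: evaluate adapted weights on the query set and
        # accumulate their gradients into the original model (first-order).
        loss_fn(fast(x_q), y_q).backward()
        with torch.no_grad():
            for p, fp in zip(model.parameters(), fast.parameters()):
                p.grad = fp.grad.clone() if p.grad is None else p.grad + fp.grad
    meta_opt.step()
```

Sampling tasks over many missingness patterns is what distinguishes this setup from ordinary training: the meta-update rewards parameters that adapt well to whichever modalities happen to be absent.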


Effective fusion of histology slides and molecular profiles from genomic data has shown great potential in the diagnosis and prognosis of gliomas. However, it remains challenging to explicitly utilize the consistent and complementary information among different modalities and to create comprehensive representations of patients.

