Studies over the past decade have generated a wealth of molecular data that can be leveraged to better understand cancer risk, progression, and outcomes. However, estimating progression risk and differentiating long- from short-term survivors cannot be achieved by analyzing data from a single modality, owing to the heterogeneity of the disease. A deep-learning approach that aggregates information collected across multiple repositories and multiple modalities (e.g., mRNA, DNA methylation, miRNA) could therefore yield a more accurate and robust prediction of disease progression. Here, we propose an autoencoder-based multimodal data fusion system in which a fusion encoder flexibly integrates the collective information available through multiple studies with partially coupled data. Results on a fully controlled simulation study show that inferring missing data through the proposed fusion pipeline yields a predictor superior to baseline predictors trained with missing modalities. They further show that short- and long-term survivors of glioblastoma multiforme, acute myeloid leukemia, and pancreatic adenocarcinoma can be differentiated with AUCs of 0.94, 0.75, and 0.96, respectively.
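
The abstract describes a fusion encoder that combines per-modality autoencoders so that modalities missing from a given study can be inferred from a shared latent code. The paper's architectural details are not reproduced here, so the following PyTorch sketch is only illustrative: the module names, layer sizes, concatenation-based fusion, and zero-filling of absent modalities are assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class FusionAutoencoder(nn.Module):
    # Illustrative sketch only; sizes and fusion strategy are assumptions.
    def __init__(self, modality_dims, latent_dim=64):
        super().__init__()
        # One encoder per modality (e.g., mRNA, DNA methylation, miRNA).
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, 256), nn.ReLU(), nn.Linear(256, latent_dim))
            for d in modality_dims
        )
        # Fusion encoder: maps concatenated per-modality codes to a shared latent space.
        self.fusion = nn.Sequential(
            nn.Linear(latent_dim * len(modality_dims), latent_dim),
            nn.ReLU(),
        )
        # One decoder per modality, so any modality can be reconstructed
        # (and thus imputed) from the shared code.
        self.decoders = nn.ModuleList(
            nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, d))
            for d in modality_dims
        )

    def forward(self, inputs):
        # inputs: one tensor per modality; a missing modality can be passed
        # as a zero tensor (a deliberately naive imputation placeholder).
        codes = [enc(x) for enc, x in zip(self.encoders, inputs)]
        shared = self.fusion(torch.cat(codes, dim=-1))
        reconstructions = [dec(shared) for dec in self.decoders]
        return shared, reconstructions

In such a setup, the shared code would feed a downstream survival classifier, while reconstruction losses over the observed modalities would train the fusion encoder to infer the missing ones.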

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8945377
DOI: http://dx.doi.org/10.3390/biology11030360

Similar Publications

Background: Repeat neurological assessment is standard in cases of severe acute brain injury. However, conventional measures rely on overt behavior. Unfortunately, behavioral responses may be difficult or impossible for some patients.

This study used deep learning for bone mineral density (BMD) prediction and classification from biplanar X-ray radiography (BPX) images collected at the Huashan Hospital Medical Checkup Center. The models showed high accuracy and strong correlation with quantitative computed tomography (QCT) measurements, and offer potential for screening patients at high risk of osteoporosis while reducing unnecessary radiation exposure and cost.

CXR-LLaVA: a multimodal large language model for interpreting chest X-ray images.

Eur Radiol

January 2025

Department of Radiology, Seoul National University College of Medicine, Seoul National University Hospital, Seoul, Republic of Korea.

Objective: This study aimed to develop an open-source multimodal large language model (CXR-LLaVA) for interpreting chest X-ray images (CXRs), leveraging recent advances in large language models (LLMs) to potentially replicate the image interpretation skills of human radiologists.

Materials And Methods: For training, we collected 592,580 publicly available CXRs, of which 374,881 had labels for certain radiographic abnormalities (Dataset 1) and 217,699 provided free-text radiology reports (Dataset 2). After pre-training a vision transformer with Dataset 1, we integrated it with an LLM influenced by the LLaVA network.
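
The LLaVA-style coupling mentioned above is typically implemented as a learned projection that maps vision-transformer patch features into the language model's token-embedding space, so the image is presented to the LLM as a sequence of visual tokens. The sketch below illustrates only that general idea; the dimensions and module names are assumptions and do not come from the CXR-LLaVA paper.

import torch.nn as nn

class VisionToLLMProjector(nn.Module):
    # Hypothetical LLaVA-style bridge; sizes are placeholders, not CXR-LLaVA's.
    def __init__(self, vit_dim=768, llm_dim=4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vit_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, patch_features):
        # patch_features: (batch, num_patches, vit_dim) from the pre-trained ViT.
        # The output is prepended to the text-token embeddings fed to the LLM.
        return self.proj(patch_features)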

Future Directions for Quantitative Systems Pharmacology.

Handb Exp Pharmacol

January 2025

Genentech Inc, South San Francisco, CA, USA.

In this chapter, we envision the future of Quantitative Systems Pharmacology (QSP), which integrates closely with emerging data and technologies, including advanced analytics, novel experimental methods, and larger, more diverse datasets. Machine learning (ML) and artificial intelligence (AI) will increasingly help QSP modelers find, prepare, integrate, and exploit these datasets, as well as build, parameterize, and simulate models. We picture QSP models being applied during all stages of drug discovery and development: during the discovery stages, QSP models predict the early human experience of in silico compounds created by generative AI.

Background: Adherence in rehabilitation services includes attending appointments, regularly performing prescribed exercises, and executing those exercises correctly. The Exercise Adherence Rating Scale (EARS) has been adapted into several languages, but there is a lack of a standardized tool for various Indian languages and cultural contexts, particularly for use with cancer survivors. With the anticipated 57.
