While often represented as static entities, gene networks are highly context-dependent. Here, we developed a multi-task learning strategy to yield context-specific representations of gene network dynamics. We assembled a corpus comprising ~103 million human single-cell transcriptomes from a broad range of tissues and diseases and performed a two-stage pretraining, first with non-malignant cells to generate a foundational model and then with continual learning on cancer cells to tune the model to the cancer domain. We performed multi-task learning with the foundational model to learn context-specific representations of a broad range of cell types, tissues, developmental stages, and diseases. We then leveraged the cancer-tuned model to jointly learn cell states and predict tumor-restricting factors within the colorectal tumor microenvironment. Model quantization allowed resource-efficient fine-tuning and inference while preserving biological knowledge. Overall, multi-task learning enables context-specific disease modeling that can yield contextual predictions of candidate therapeutic targets for human disease.
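
As a rough, non-authoritative sketch of the multi-task setup described above (not the authors' implementation), the PyTorch snippet below attaches task-specific classification heads (cell type, tissue, disease) to a shared transformer encoder over gene tokens, trains them against a joint loss, and finishes with a generic dynamic-quantization call as a stand-in for the resource-efficient deployment step. All class counts, dimensions, and names are illustrative assumptions.

```python
# Hypothetical multi-task fine-tuning sketch: shared transformer encoder with
# per-task classification heads and a joint loss. Names, sizes, and the
# quantization step are assumptions for illustration only.
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, vocab_size=25000, d_model=256, n_layers=4, task_classes=None):
        super().__init__()
        task_classes = task_classes or {"cell_type": 50, "tissue": 30, "disease": 10}
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # One linear head per task; all tasks share the same encoder representations.
        self.heads = nn.ModuleDict({t: nn.Linear(d_model, c) for t, c in task_classes.items()})

    def forward(self, gene_tokens):                      # (batch, genes)
        h = self.encoder(self.embed(gene_tokens))        # (batch, genes, d_model)
        pooled = h.mean(dim=1)                           # pool over the gene axis
        return {t: head(pooled) for t, head in self.heads.items()}

model = MultiTaskModel()
tokens = torch.randint(0, 25000, (8, 512))               # toy batch of tokenized transcriptomes
labels = {"cell_type": torch.randint(0, 50, (8,)),
          "tissue": torch.randint(0, 30, (8,)),
          "disease": torch.randint(0, 10, (8,))}
logits = model(tokens)
# Joint objective: sum of per-task cross-entropy losses (uniform weighting here).
loss = sum(nn.functional.cross_entropy(logits[t], labels[t]) for t in logits)
loss.backward()

# One simple, generic option for lighter-weight CPU inference; the paper's own
# quantization scheme may differ.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
```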

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11370383
DOI: http://dx.doi.org/10.1101/2024.08.16.608180

Publication Analysis

Top Keywords

multi-task learning: 16
context-specific representations: 12
representations gene: 8
gene network: 8
network dynamics: 8
broad range: 8
foundational model: 8
learning: 5
model: 5
quantized multi-task: 4

Similar Publications

Neointimal coverage and stent apposition, as assessed from intravascular optical coherence tomography (IVOCT) images, are crucial for optimizing percutaneous coronary intervention (PCI). Existing state-of-the-art computer algorithms designed to automate this analysis often treat lumen and stent segmentation as separate targets, apply only to a single stent type, and overlook automated preselection of which pullback segments need segmentation, which limits their practicality. This study aimed to develop an algorithm capable of intelligently handling the entire IVOCT pullback across different phases of PCI and clinical scenarios, including the presence and coexistence of metal and bioresorbable vascular scaffold (BVS) stent types.
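
A minimal sketch of how such a multi-task design could look, assuming a shared CNN encoder feeding a frame-level preselection head plus a pixel-level head that jointly segments lumen, metal struts, and BVS struts; the architecture and class counts are illustrative guesses, not the study's actual algorithm.

```python
# Hypothetical multi-task IVOCT model: one head preselects frames that contain
# stents, another segments lumen / metal struts / BVS struts per pixel.
import torch
import torch.nn as nn

class IVOCTMultiTask(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.frame_head = nn.Sequential(               # frame preselection (stent present?)
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1)
        )
        self.seg_head = nn.Conv2d(32, 4, 1)            # background, lumen, metal strut, BVS strut

    def forward(self, frames):                         # frames: (batch, 1, H, W)
        feats = self.backbone(frames)
        return self.frame_head(feats), self.seg_head(feats)

select_logit, seg_logits = IVOCTMultiTask()(torch.randn(2, 1, 128, 128))
```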


Optical Coherence Tomography (OCT) offers high-resolution images of the eye's fundus, enabling thorough analysis of retinal health by doctors and providing a solid basis for diagnosis and treatment. With recent advances, deep learning-based methods have become increasingly popular for fundus OCT image segmentation.


Deep learning analysis of electrocardiography (ECG) may predict cardiovascular outcomes. We present a novel multi-task deep learning model, ECG-MACE, which predicts one-year first-ever major adverse cardiovascular events (MACE) using 2,821,889 standard 12-lead ECGs, comprising training (n = 984,895), validation (n = 422,061), and test (n = 1,414,933) sets, from the Chang Gung Memorial Hospital database in Taiwan. Data from another independent medical center (n = 113,224) were retrieved for external validation.
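
As a deliberately simplified sketch of a multi-task risk model over 12-lead ECG waveforms, loosely in the spirit of the description above: a shared 1D-CNN feature extractor with one binary-risk head per outcome. The outcome names, signal length, and architecture are assumptions, not the published ECG-MACE model.

```python
# Illustrative 1D-CNN multi-task risk sketch over 12-lead ECG signals.
import torch
import torch.nn as nn

class ECGRiskNet(nn.Module):
    def __init__(self, n_leads=12, outcomes=("mace", "mi", "stroke", "cv_death")):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_leads, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        # One risk head per outcome; "mace" stands in for the composite endpoint.
        self.heads = nn.ModuleDict({o: nn.Linear(64, 1) for o in outcomes})

    def forward(self, ecg):                            # ecg: (batch, 12, samples)
        z = self.features(ecg)
        return {o: torch.sigmoid(h(z)).squeeze(-1) for o, h in self.heads.items()}

risks = ECGRiskNet()(torch.randn(4, 12, 5000))         # e.g. 10-s ECG at 500 Hz (assumed)
```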


Development and external validation of a multi-task feature fusion network for CTV segmentation in cervical cancer radiotherapy.

Radiother Oncol

December 2024

Department of Digital Medicine, School of Biomedical Engineering and Medical Imaging, Army Medical University, Chongqing 400038, China.

Background and Purpose: Accurate segmentation of the clinical target volume (CTV) is essential to deliver an effective radiation dose to tumor tissues in cervical cancer radiotherapy. Although automated CTV segmentation can reduce oncologists' workload, challenges persist due to the microscopic spread of tumor cells undetectable in CT imaging, low-intensity contrast between organs, and inter-observer variability. This study aims to develop and validate a multi-task feature fusion network (MTF-Net) that uses distance-based information to enhance CTV segmentation accuracy.
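
One way to picture coupling segmentation with distance-based information, as a hypothetical sketch rather than the published MTF-Net: a shared encoder feeds a signed-distance regression head whose output is fused back into the CTV segmentation head. Layer sizes and the fusion scheme are assumed for illustration.

```python
# Sketch of multi-task segmentation + distance-map regression with feature fusion.
import torch
import torch.nn as nn

class DistanceAwareSegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.dist_head = nn.Conv2d(32, 1, 1)           # regress distance to the CTV boundary
        # Segmentation head sees both features and the predicted distance map (fusion).
        self.seg_head = nn.Conv2d(32 + 1, 2, 1)        # background vs. CTV

    def forward(self, ct_slice):                       # (batch, 1, H, W)
        feats = self.encoder(ct_slice)
        dist = self.dist_head(feats)
        seg = self.seg_head(torch.cat([feats, dist], dim=1))
        return seg, dist

seg_logits, dist_map = DistanceAwareSegNet()(torch.randn(2, 1, 256, 256))
```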


Unified Knowledge-Guided Molecular Graph Encoder with multimodal fusion and multi-task learning.

Neural Netw

December 2024

School of Computer Science, Wuhan University, Luojiashan Road, Wuchang District, Wuhan, 430072, Hubei Province, China; Hubei Key Laboratory of Digital Finance Innovation, Hubei University of Economics, No. 8, Yangqiaohu Avenue, Zanglong Island Development Zone, Jiangxia District, Wuhan, 2007, Hubei Province, China.

The remarkable success of Graph Neural Networks underscores their formidable capacity to assimilate multimodal inputs, markedly enhancing performance across a broad spectrum of domains. In the context of molecular modeling, considerable efforts have been made to enrich molecular representations by integrating data from diverse aspects. Nevertheless, current methodologies frequently compartmentalize geometric and semantic components, resulting in a fragmented approach that impairs the holistic integration of molecular attributes.
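
A toy sketch of fusing geometric (3D coordinates) and semantic (atom-type) channels inside a single message-passing update, rather than treating them separately; this is a generic illustration under assumed shapes and is not the paper's Unified Knowledge-Guided encoder.

```python
# Toy fused molecular encoder: semantic atom embeddings combined with
# distance-derived geometric messages in one update step.
import torch
import torch.nn as nn

class FusedMolEncoder(nn.Module):
    def __init__(self, n_atom_types=20, d=64):
        super().__init__()
        self.atom_embed = nn.Embedding(n_atom_types, d)    # semantic channel
        self.geom_proj = nn.Linear(1, d)                    # geometric channel (pairwise distances)
        self.update = nn.Linear(2 * d, d)

    def forward(self, atom_types, coords, adj):
        # atom_types: (n,), coords: (n, 3), adj: (n, n) 0/1 adjacency
        h = self.atom_embed(atom_types)                                  # (n, d)
        dist = torch.cdist(coords, coords).unsqueeze(-1)                 # (n, n, 1)
        edge = self.geom_proj(dist) * adj.unsqueeze(-1)                  # mask non-edges
        msg = edge.sum(dim=1)                                            # aggregate geometric messages
        h = torch.relu(self.update(torch.cat([h, msg], dim=-1)))         # fuse semantic + geometric
        return h.mean(dim=0)                                             # molecule-level embedding

n = 5
emb = FusedMolEncoder()(torch.randint(0, 20, (n,)), torch.randn(n, 3),
                        (torch.rand(n, n) > 0.5).float())
```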

