Artificial intelligence techniques play a pivotal role in the accurate identification of drug-drug interaction (DDI) events, thereby informing clinical decisions and treatment regimens. While existing DDI prediction models have made significant progress by leveraging sequence features such as chemical substructures, targets, and enzymes, they often face limitations in integrating and effectively utilizing multi-modal drug representations. To address these limitations, this study proposes a novel multi-modal feature fusion model for DDI event prediction: MMDDI-SSE. Our approach integrates the drug sequence modality with DDI graph representations through a novel architecture that employs static subgraph generation to capture structural properties. The model uses a graph autoencoder to learn both local and global topological features from these subgraphs, while simultaneously processing diverse sequence-based characteristics, including semantically enhanced pharmacodynamic features, chemical substructures, target proteins, and enzyme information. Through comprehensive evaluation on two distinct datasets, MMDDI-SSE demonstrates superior predictive performance compared to state-of-the-art baselines. Ablation studies further validate the contribution of each architectural component to DDI prediction accuracy. The implementation code and datasets are available at https://github.com/Tomchen1231/MMDDI-SSE.
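
To make the data flow concrete, the following PyTorch sketch mirrors the two ingredients named above: a small graph autoencoder that embeds drugs from a (sub)graph adjacency matrix, and a classifier that fuses those embeddings with sequence-derived drug descriptors to score a drug pair over DDI event types. All module names, dimensions, and the concatenation-based fusion are illustrative assumptions, not the authors' MMDDI-SSE implementation; the actual code is in the linked repository.

# Illustrative sketch only -- not the authors' MMDDI-SSE code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAutoencoder(nn.Module):
    # Two GCN-style propagation steps capture local (1-hop) and wider (2-hop)
    # topology; an inner-product decoder reconstructs the subgraph adjacency.
    def __init__(self, in_dim, hid_dim, emb_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, emb_dim)

    def forward(self, x, adj_norm):
        h = F.relu(adj_norm @ self.w1(x))
        z = adj_norm @ self.w2(h)
        return z, torch.sigmoid(z @ z.t())

class DDIEventClassifier(nn.Module):
    # Fuses graph embeddings with sequence-derived drug descriptors
    # (e.g. substructure / target / enzyme profiles) and scores a drug pair.
    def __init__(self, graph_dim, seq_dim, num_events):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * (graph_dim + seq_dim), 256), nn.ReLU(),
            nn.Dropout(0.3), nn.Linear(256, num_events))

    def forward(self, z, seq_feats, pairs):
        fused = torch.cat([z, seq_feats], dim=1)             # per-drug fusion
        pair = torch.cat([fused[pairs[:, 0]], fused[pairs[:, 1]]], dim=1)
        return self.mlp(pair)                                # event logits

# Toy usage with random data (shapes only).
n_drugs, num_events = 50, 65
adj = ((torch.rand(n_drugs, n_drugs) > 0.9).float() + torch.eye(n_drugs)).clamp(max=1)
adj = ((adj + adj.t()) > 0).float()
d = adj.sum(1).clamp(min=1).pow(-0.5)
adj_norm = d[:, None] * adj * d[None, :]                     # D^-1/2 A D^-1/2

x = torch.randn(n_drugs, 64)            # node features on the DDI subgraph
seq_feats = torch.randn(n_drugs, 128)   # sequence-based drug descriptors
pairs = torch.randint(0, n_drugs, (32, 2))

gae = GraphAutoencoder(64, 64, 32)
clf = DDIEventClassifier(32, 128, num_events)
z, adj_rec = gae(x, adj_norm)
logits = clf(z, seq_feats, pairs)                            # (32, 65)
recon_loss = F.binary_cross_entropy(adj_rec, adj)
print(logits.shape, recon_loss.item())

In the actual model the static subgraphs, feature encoders, and fusion strategy are more elaborate; the sketch only fixes plausible shapes and the overall data flow.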

Source
http://dx.doi.org/10.1109/JBHI.2025.3550019

Publication Analysis

Top Keywords

novel multi-modal (8), multi-modal feature (8), feature fusion (8), fusion model (8), static subgraph (8), drug-drug interaction (8), event prediction (8), ddi prediction (8), features chemical (8), chemical substructures (8)

Similar Publications

Over the past decade, the graph neural network (GNN) has emerged as a powerful neural architecture for graph-structured data modelling and task-driven representation learning. Recent studies have highlighted the remarkable capabilities of GNNs in handling complex graph representation learning tasks, achieving state-of-the-art results in node/graph classification, regression, and generation. However, most traditional GNN-based architectures, such as GCN and GraphSAGE, still face challenges in preserving multi-scale topological structures.

Supervised Cross-Modal Retrieval (SCMR) achieves strong performance thanks to the supervision provided by substantial label annotations of multi-modal data. However, the requirement for large annotated multi-modal datasets restricts the use of supervised cross-modal retrieval in many practical scenarios. Active Learning (AL) has been proposed to reduce labeling costs while improving performance in various label-dependent tasks, in which the most informative unlabeled samples are selected for labeling and training.
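
For intuition, the brief PyTorch sketch below implements one common version of this "most informative unlabeled samples" step: rank the pool by predictive entropy and query the top-k for annotation. The entropy criterion, model, and batch size are assumptions for illustration, not necessarily the acquisition function used in the cited work.

# Illustrative sketch of uncertainty-based active learning selection.
import torch
import torch.nn.functional as F

def entropy_query(model, unlabeled_x, k):
    # Score each unlabeled sample by the entropy of the model's predictive
    # distribution and return the indices of the k most uncertain ones.
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_x), dim=1)
        ent = -(probs * probs.clamp(min=1e-12).log()).sum(dim=1)
    return ent.topk(k).indices

# Toy usage: a linear classifier over 10 classes, 1000 unlabeled samples.
model = torch.nn.Linear(32, 10)
query_ids = entropy_query(model, torch.randn(1000, 32), k=16)
print(query_ids)  # indices to send to annotators in the next AL round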

Purpose: Multi-modality imaging systems provide fused images for safe and precise interventions in modern clinical practice, such as computed tomography-ultrasound (CT-US) guidance for needle insertion. However, the limited dexterity and mobility of current imaging devices hinder their integration into standardized workflows and the advancement toward fully autonomous intervention systems. In this paper, we present a novel clinical setup in which robotic cone-beam computed tomography (CBCT) and robotic US are pre-calibrated and dynamically co-registered, enabling new clinical applications.

Currently, static fluorescent anti-counterfeiting technology struggles to cope with increasingly sophisticated counterfeiting techniques, making dynamic multimode regulation schemes an urgent necessity. Herein, Tb3+/Sm3+ mono-/co-doped LiTaO3 (LTO) phosphors are prepared by a high-temperature solid-state method. Under 254 nm excitation, the emission chromaticity of LTO:Tb3+,Sm3+ is modulated from green to yellow with increasing Sm3+ content, owing to Tb3+ → Sm3+ energy transfer.
