Decoupled graph knowledge distillation: A general logits-based method for learning MLPs on graphs.

Neural Netw

School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, 100049, China; Research Center on Fictitious Economy and Data Science, Chinese Academy of Sciences, Beijing, 100190, China; Key Laboratory of Big Data Mining and Knowledge Management, University of Chinese Academy of Sciences, Beijing, 100190, China.

Published: November 2024

While Graph Neural Networks (GNNs) have demonstrated their effectiveness in processing non-Euclidean structured data, the neighborhood fetching of GNNs is time-consuming and computationally intensive, making them difficult to deploy in low-latency industrial applications. A feasible solution to this issue is graph knowledge distillation (KD), which learns high-performance student Multi-layer Perceptrons (MLPs) to replace GNNs by mimicking the superior outputs of teacher GNNs. However, state-of-the-art graph knowledge distillation methods mainly distill deep features from intermediate hidden layers, so the significance of logit-layer distillation has been largely overlooked. To provide a new viewpoint for studying logits-based KD methods, we introduce the idea of decoupling into graph knowledge distillation. Specifically, we first reformulate the classical graph knowledge distillation loss into two parts, i.e., the target-class graph distillation (TCGD) loss and the non-target-class graph distillation (NCGD) loss. We then decouple the negative correlation between the GNN's prediction confidence and the NCGD loss, and eliminate the fixed weight between TCGD and NCGD. We name this logits-based method Decoupled Graph Knowledge Distillation (DGKD). It flexibly adjusts the weights of TCGD and NCGD for different data samples, thereby improving the prediction accuracy of the student MLP. Extensive experiments on public benchmark datasets demonstrate the effectiveness of our method. Moreover, DGKD can be incorporated into any existing graph knowledge distillation framework as a plug-and-play loss function, further improving distillation performance. The code is available at https://github.com/xsk160/DGKD.
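To make the decomposition concrete, the sketch below splits a classical logit-distillation loss into a target-class term and a non-target-class term and weights them with independent coefficients, in the spirit of the decoupling described above. It is a minimal illustration, not the authors' released implementation (see the linked repository for that): the function name decoupled_graph_kd_loss, the default values of alpha, beta, and temperature, and the tensor layout are assumptions made for the example.

```python
import torch
import torch.nn.functional as F


def decoupled_graph_kd_loss(student_logits, teacher_logits, labels,
                            alpha=1.0, beta=8.0, temperature=4.0):
    """Illustrative sketch of a decoupled logit-distillation loss.

    The classical KD loss is split into a target-class term (TCGD) and a
    non-target-class term (NCGD); weighting them with independent alpha and
    beta removes the fixed coupling in which the teacher's confidence on the
    target class implicitly suppresses the non-target term.
    """
    T = temperature
    num_classes = student_logits.size(1)
    gt_mask = F.one_hot(labels, num_classes).float()   # 1 at the target class
    other_mask = 1.0 - gt_mask                         # 1 at non-target classes

    p_s = F.softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)

    # TCGD: KL divergence between binary (target vs. rest) distributions.
    b_s = torch.stack([(p_s * gt_mask).sum(1), (p_s * other_mask).sum(1)], dim=1)
    b_t = torch.stack([(p_t * gt_mask).sum(1), (p_t * other_mask).sum(1)], dim=1)
    tcgd = F.kl_div(b_s.clamp_min(1e-8).log(), b_t, reduction="batchmean")

    # NCGD: KL divergence over the non-target classes only, obtained by
    # pushing the target logit far down before the softmax.
    log_p_s_nt = F.log_softmax(student_logits / T - 1000.0 * gt_mask, dim=1)
    p_t_nt = F.softmax(teacher_logits / T - 1000.0 * gt_mask, dim=1)
    ncgd = F.kl_div(log_p_s_nt, p_t_nt, reduction="batchmean")

    # Independent weights decouple the two terms.
    return (alpha * tcgd + beta * ncgd) * (T ** 2)
```

In a typical GNN-to-MLP setup, such a distillation term would be added to the cross-entropy loss on labeled nodes when training the student MLP; the per-sample weight adjustment described in the abstract could be realized by making the two coefficients depend on the teacher's confidence rather than keeping them constant.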

DOI: http://dx.doi.org/10.1016/j.neunet.2024.106567

Similar Publications

Adolescence is a period in which peer problems and emotional symptoms markedly increase in prevalence. However, the causal mechanisms by which peer problems give rise to emotional symptoms at the behavioral level, and vice versa, remain unknown. To address this gap, the present study investigated the longitudinal network of peer problems and emotional symptoms among Australian adolescents aged 12-14 years.

Diversity of complementary diet and early food allergy risk.

Pediatr Allergy Immunol

January 2025

Department of Clinical Sciences, Pediatrics, Umeå University, Umeå, Sweden.

Introduction: Diet diversity (DD) in infancy may be protective against early food allergy (FA), but little is known about how DD measures that incorporate consumption frequency influence FA risk.

Methods: Three measures of DD were investigated in 2060 infants at 6 and/or 9 months of age within the NorthPop Birth Cohort Study: a weighted DD score based on intake frequency, the number of introduced foods, and the number of introduced allergenic foods. In multivariable logistic regression models informed by directed acyclic graphs, associations with parentally reported, physician-diagnosed FA at ages 9 and 18 months were estimated, including sensitivity and stratified analyses.

Predicting phage-host interaction via hyperbolic Poincaré graph embedding and large-scale protein language technique.

iScience

January 2025

Key Laboratory of Resources Biology and Biotechnology in Western China, Ministry of Education, Provincial Key Laboratory of Biotechnology of Shaanxi Province, the College of Life Sciences, Northwest University, Xi'an 710069, China.

Bacteriophages (phages) are increasingly viewed as a promising alternative for the treatment of antibiotic-resistant bacterial infections. However, the diversity of host ranges complicates the identification of target phages. Existing computational tools often fail to accurately identify phages across different bacterial species.

Background: Drug-drug interactions (DDIs), especially antagonistic ones, present significant risks to patient safety, underscoring the urgent need for reliable prediction methods. Recently, substructure-based DDI prediction has garnered much attention owing to the dominant influence of functional groups and substructures on drug properties. However, existing approaches face challenges regarding the insufficient interpretability of identified substructures and the isolation of chemical substructures.

ARCH: Large-scale knowledge graph via aggregated narrative codified health records analysis.

J Biomed Inform

January 2025

Harvard T.H. Chan School of Public Health, 677 Huntington Ave, Boston, 02115, MA, USA; VA Boston Healthcare System, 150 S Huntington Ave, Boston, 02130, MA, USA.

Objective: Electronic health record (EHR) systems contain a wealth of clinical data stored as both codified data and free-text narrative notes (NLP). The complexity of EHR presents challenges in feature representation, information extraction, and uncertainty quantification. To address these challenges, we proposed an efficient Aggregated naRrative Codified Health (ARCH) records analysis to generate a large-scale knowledge graph (KG) for a comprehensive set of EHR codified and narrative features.
