Graph contrastive learning with implicit augmentations.

Neural Networks

Discipline of Business Analytics, The University of Sydney Business School, The University of Sydney, Australia.

Published: June 2023

Existing graph contrastive learning methods rely on augmentation techniques based on random perturbations (e.g., randomly adding or dropping edges and nodes). However, perturbing certain edges or nodes can unexpectedly change the graph's characteristics, and choosing the optimal perturbation ratio for each dataset requires onerous manual tuning. In this paper, we introduce Implicit Graph Contrastive Learning (iGCL), which performs augmentations in a latent space learned by a Variational Graph Auto-Encoder (VGAE) that reconstructs the graph's topological structure. Importantly, instead of explicitly sampling augmentations from the latent distributions, we propose an upper bound on the expected contrastive loss, which improves the efficiency of the learning algorithm. Graph semantics are thus preserved within the augmentations without arbitrary manual design or prior human knowledge. Experimental results on both graph-level and node-level tasks show that the proposed method achieves state-of-the-art accuracy on downstream classification tasks compared with other graph contrastive baselines. Ablation studies further demonstrate the effectiveness of each module in iGCL.
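In a standard Variational Graph Auto-Encoder (Kipf & Welling, 2016), the encoder maps node features and the adjacency matrix to per-node Gaussian parameters, and the decoder reconstructs edges from inner products of the latent vectors; iGCL's implicit augmentations live in such a latent space. The abstract does not give the VGAE architecture or the closed form of the upper bound on the expected contrastive loss, so the sketch below is only a rough, hypothetical illustration of the sampling-free idea: the `ToyVGAEEncoder`, the `implicit_contrastive_loss` surrogate (latent mean as the positive view plus a variance penalty), and the dense-matrix GCN simplification are all assumptions for illustration, not the authors' method.

```python
# Minimal, hypothetical sketch (PyTorch). The encoder and loss are illustrative
# stand-ins; the paper's exact VGAE variant and its upper bound on the expected
# contrastive loss are not given in the abstract.
import torch
import torch.nn.functional as F


class ToyVGAEEncoder(torch.nn.Module):
    """GCN-style encoder emitting per-node Gaussian parameters (mu, log-variance)."""

    def __init__(self, in_dim, hid_dim, lat_dim):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hid_dim)
        self.lin_mu = torch.nn.Linear(hid_dim, lat_dim)
        self.lin_logvar = torch.nn.Linear(hid_dim, lat_dim)

    def forward(self, x, adj_norm):
        # adj_norm: symmetrically normalized adjacency (dense, for brevity)
        h = F.relu(adj_norm @ self.lin1(x))
        h = adj_norm @ h
        return self.lin_mu(h), self.lin_logvar(h)


def implicit_contrastive_loss(anchor, mu, logvar, tau=0.5):
    """Hypothetical sampling-free surrogate: treat the latent mean as the
    positive view and penalize latent variance, instead of averaging an
    InfoNCE loss over explicitly sampled augmentations."""
    z_a = F.normalize(anchor, dim=-1)
    z_p = F.normalize(mu, dim=-1)
    logits = (z_a @ z_p.t()) / tau            # pairwise cosine similarities
    labels = torch.arange(z_a.size(0))        # positive pairs on the diagonal
    nce = F.cross_entropy(logits, labels)     # InfoNCE over the batch
    return nce + logvar.exp().mean()          # crude stand-in for the bound's variance term


if __name__ == "__main__":
    n, d = 8, 16
    x = torch.randn(n, d)                     # random node features
    adj = (torch.rand(n, n) < 0.3).float()    # random graph
    adj = ((adj + adj.t()) > 0).float()       # symmetrize
    adj.fill_diagonal_(1.0)                   # add self-loops
    deg_inv_sqrt = adj.sum(1).pow(-0.5)
    adj_norm = deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]

    encoder = ToyVGAEEncoder(d, 32, 8)
    mu, logvar = encoder(x, adj_norm)
    anchor = mu.detach() + 0.01 * torch.randn_like(mu)   # stand-in anchor view
    print(implicit_contrastive_loss(anchor, mu, logvar).item())
```

Avoiding explicit sampling means the loss requires only one encoder pass per graph rather than a Monte Carlo average over sampled augmentations, which is presumably the source of the efficiency gain the abstract describes.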

DOI: http://dx.doi.org/10.1016/j.neunet.2023.04.001

