Compact representation of graph data is a fundamental problem in pattern recognition and machine learning. Recently, graph neural networks (GNNs) have been widely studied for graph-structured data representation and learning tasks, such as graph semi-supervised learning, clustering, and low-dimensional embedding. In this article, we present graph propagation-embedding networks (GPENs), a new model for graph-structured data representation and learning problems. GPENs are mainly motivated by 1) a revisiting of traditional graph propagation techniques for context-aware feature representation of graph nodes and 2) recent studies on deep graph embedding and neural network architectures. GPENs integrate feature propagation on the graph and low-dimensional embedding simultaneously into a unified network via a novel propagation-embedding architecture. GPENs have three main advantages. First, GPENs are well motivated and can be explained from the perspectives of feature propagation and deep learning architectures. Second, the equilibrium representation of the propagation-embedding operation in GPENs has both an exact and an approximate formulation, each with a simple closed-form solution, which guarantees the compactness and efficiency of GPENs. Third, GPENs extend naturally to multiple GPENs (M-GPENs) to handle data with multiple graph structures. Experiments on various semi-supervised learning tasks on several benchmark datasets demonstrate the effectiveness and benefits of the proposed GPENs and M-GPENs.
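The abstract does not reproduce the propagation-embedding equations, but a standard example of graph feature propagation that admits both an iterative update and a closed-form equilibrium is the personalized-PageRank-style diffusion Z* = (1 - α)(I - αS)^{-1} X, where S is the symmetrically normalized adjacency matrix. The sketch below illustrates that exact/approximate duality only; the function names, the teleport weight α, and the normalization are our assumptions for illustration, not the GPEN formulation itself (the learned embedding layers are omitted).

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetric normalization S = D^{-1/2} A D^{-1/2} of an adjacency matrix."""
    deg = np.asarray(A.sum(axis=1), dtype=float)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    return d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def propagate_exact(X, A, alpha=0.9):
    """Exact equilibrium of Z <- alpha * S @ Z + (1 - alpha) * X,
    i.e. the closed form Z* = (1 - alpha) * (I - alpha * S)^{-1} @ X."""
    S = normalized_adjacency(A)
    n = A.shape[0]
    return (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * S, X)

def propagate_approx(X, A, alpha=0.9, num_steps=10):
    """Truncated fixed-point iteration approximating the same equilibrium."""
    S = normalized_adjacency(A)
    Z = X.astype(float)
    for _ in range(num_steps):
        Z = alpha * (S @ Z) + (1 - alpha) * X
    return Z

# Toy usage: the truncated iteration converges to the closed-form equilibrium.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path graph
X = np.random.default_rng(0).normal(size=(3, 4))              # node features
print(np.max(np.abs(propagate_exact(X, A) - propagate_approx(X, A, num_steps=200))))
```

Because the spectral radius of αS is below 1 for α < 1, the iteration contracts geometrically, which is why both the exact solve and a short truncated iteration are cheap in practice.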

Source: http://dx.doi.org/10.1109/TNNLS.2021.3120100
