Methods for ranking the importance of nodes in a network have a rich history in machine learning and across domains that analyze structured data. Recent work has evaluated these methods through the "seed set expansion problem": given a subset S of nodes from a community of interest in an underlying graph, can we reliably identify the rest of the community? We start from the observation that the most widely used techniques for this problem, personalized PageRank and heat kernel methods, operate in the space of "landing probabilities" of a random walk rooted at the seed set, ranking nodes according to weighted sums of landing probabilities of walks of different lengths. Both schemes, however, lack an a priori relationship to the seed set objective. In this work, we develop a principled framework for evaluating ranking methods by studying seed set expansion applied to the stochastic block model. We derive the optimal gradient for separating the landing probabilities of two classes in a stochastic block model and find, surprisingly, that under reasonable assumptions the gradient is asymptotically equivalent to personalized PageRank for a specific choice of the PageRank parameter α that depends on the block model parameters. This connection provides a formal motivation for the success of personalized PageRank in seed set expansion and node ranking generally. We use this connection to propose more advanced techniques incorporating higher moments of landing probabilities; our advanced methods exhibit greatly improved performance, despite being simple linear classification rules, and are even competitive with belief propagation.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5224398 | PMC |
| http://dx.doi.org/10.1073/pnas.1611275114 | DOI Listing |
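As a rough illustration of the landing-probability view described in the abstract, the Python sketch below generates a small two-block stochastic block model, computes the landing probabilities of random walks rooted at a seed set, and ranks nodes by a geometrically weighted sum of those probabilities, i.e. a truncated personalized-PageRank score. The block sizes, edge probabilities, seed set, walk length, and the value of alpha are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-block stochastic block model; sizes and edge probabilities
# are assumptions for this sketch, not values from the paper.
n_per_block, p_in, p_out = 100, 0.10, 0.02
n = 2 * n_per_block
labels = np.repeat([0, 1], n_per_block)
edge_prob = np.where(labels[:, None] == labels[None, :], p_in, p_out)
A = (rng.random((n, n)) < edge_prob).astype(float)
A = np.triu(A, 1)
A = A + A.T                                    # undirected, no self-loops

deg = A.sum(axis=1)
P = A / np.maximum(deg, 1.0)[:, None]          # row-stochastic random-walk matrix

# Landing probabilities p_k of a walk rooted at a small seed set inside block 0.
seeds = np.arange(5)
p = np.zeros(n)
p[seeds] = 1.0 / len(seeds)
walk_len = 15
landing = [p.copy()]
for _ in range(walk_len):
    p = p @ P
    landing.append(p.copy())
landing = np.array(landing)                    # shape (walk_len + 1, n)

# Personalized-PageRank-style score: geometrically weighted sum of landing
# probabilities, truncated after walk_len steps (alpha is an assumed choice).
alpha = 0.85
weights = (1 - alpha) * alpha ** np.arange(walk_len + 1)
scores = weights @ landing

# Rank non-seed nodes and check how many top-ranked nodes share the seeds' block.
order = [v for v in np.argsort(-scores) if v not in set(seeds)]
top = np.array(order[: n_per_block - len(seeds)])
print("fraction of top-ranked nodes in the seed block:", np.mean(labels[top] == 0))
```

Swapping the geometric weights for a different profile over walk lengths (for example, Poisson-style weights) would give a heat-kernel-style score instead; the abstract's point is that both methods are weighted sums over the same landing probabilities.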
Proc Int World Wide Web Conf, May 2024. Emory University, Atlanta, GA, USA.
Graph Neural Networks (GNNs) have achieved great success in learning with graph-structured data. Privacy concerns have also been raised, as trained models can expose sensitive information about the graph, including both node features and structural information. In this paper, we aim to achieve node-level differential privacy (DP) for training GNNs, so that a node and its edges are protected.
PeerJ Comput Sci, October 2024. School of Cyberspace Security, Zhengzhou University, Zhengzhou, Henan, China.
Inductive link prediction (ILP) in knowledge graphs (KGs) aims to predict missing links between entities that were not seen during the training phase. Some recent subgraph-based methods have shown advancements, but they all overlook the relational semantics between entities during subgraph extraction. To overcome this limitation, we introduce a novel inductive link prediction model named SASILP (Structure and Semantic Inductive Link Prediction), which comprehensively incorporates relational semantics in both the subgraph extraction and node initialization processes.
Neural Netw, January 2025. School of Economics and Management, University of Chinese Academy of Sciences, Beijing, 100190, China.
Epilepsia, November 2024. Department of Neurology, School of Medicine, Emory University, Atlanta, Georgia, USA.
Chaos, August 2024. Departamento de Matemática Aplicada, Ciencia e Ingeniería de los Materiales y Tecnología Electrónica, Universidad Rey Juan Carlos, 28933 Móstoles (Madrid), Spain.
In this paper, we explore the PageRank of temporal networks (networks that evolve with time) with time-dependent personalization vectors. We consider both continuous and discrete time intervals and show that the PageRank of a continuous-time temporal network can be closely approximated by the PageRanks of the discrete-time networks obtained by sampling. Additionally, precise bounds are given for the influence of the personalization vector on the ranking of a particular node.
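To make the personalization-vector mechanism concrete, here is a minimal numpy sketch of personalized PageRank on a single static snapshot, computed by power iteration. The example adjacency matrix, alpha, and tolerance are assumptions for illustration, and the temporal sampling analyzed in the article above is not reproduced here.

```python
import numpy as np

def personalized_pagerank(A, v, alpha=0.85, tol=1e-10, max_iter=1000):
    """Power iteration for personalized PageRank on one static snapshot.

    A : adjacency matrix of the snapshot; v : personalization vector (sums to 1).
    """
    deg = A.sum(axis=1)
    # Row-stochastic transition matrix; rows of zero-degree nodes stay all-zero.
    P = np.divide(A, deg[:, None],
                  out=np.zeros_like(A, dtype=float), where=deg[:, None] > 0)
    dangling = (deg == 0)                       # dangling mass restarts according to v
    pr = v.copy()
    for _ in range(max_iter):
        new = alpha * (pr @ P + pr[dangling].sum() * v) + (1 - alpha) * v
        if np.abs(new - pr).sum() < tol:
            return new
        pr = new
    return pr

# Illustrative snapshot: a 4-node directed cycle (assumed data, not from the article).
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
v = np.array([1.0, 0.0, 0.0, 0.0])              # personalization concentrated on node 0
print(personalized_pagerank(A, v))
```

In a temporal setting, one would recompute such a score on each sampled snapshot with the personalization vector appropriate to that time.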