The training and inference of Graph Neural Networks (GNNs) are costly when scaling up to large-scale graphs. The Graph Lottery Ticket (GLT) presented the first attempt to accelerate GNN inference on large-scale graphs by jointly pruning the graph structure and the model weights. Though promising, GLT encounters robustness and generalization issues when deployed in real-world scenarios, which are also long-standing and critical problems in deep learning. In real-world scenarios, the distribution of unseen test data is typically diverse. We attribute the failures on out-of-distribution (OOD) data to the inability to discern causal patterns, which remain stable amid distribution shifts. In traditional sparse graph learning, model performance deteriorates dramatically once the graph/network sparsity exceeds a certain high level. Worse still, pruned GNNs struggle to generalize to unseen graph data because of the limited training set at hand. To tackle these issues, we propose the Resilient Graph Lottery Ticket (RGLT) to find more robust and generalizable lottery tickets in GNNs. Concretely, we reactivate a fraction of weights/edges using instantaneous gradient information at each pruning point. After sufficient pruning, we conduct environmental interventions to extrapolate the potential test distribution. Finally, we perform several final rounds of model averaging to further improve generalization. We provide multiple examples and theoretical analyses that underpin the universality and reliability of our proposal. RGLT has also been experimentally verified across various independent and identically distributed (IID) and out-of-distribution (OOD) graph benchmarks.
DOI: http://dx.doi.org/10.1109/TPAMI.2023.3342184
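For illustration, here is a minimal sketch of the reactivation step described in the abstract above: magnitude pruning followed by gradient-based regrowth of a small fraction of pruned weights. It assumes a PyTorch weight tensor with a binary mask; the function and argument names are hypothetical, and this is a sketch of the general drop-and-regrow idea, not the authors' released implementation.

```python
import torch

def prune_and_reactivate(weight, grad, mask, prune_frac=0.2, regrow_frac=0.05):
    """Drop the weakest surviving weights by magnitude, then reactivate
    pruned positions with the largest instantaneous gradient magnitude."""
    flat_w, flat_g, flat_m = weight.data.view(-1), grad.view(-1), mask.view(-1)
    live = torch.nonzero(flat_m, as_tuple=False).flatten()
    n_drop = int(prune_frac * live.numel())
    if n_drop > 0:
        # indices of the smallest-magnitude live weights
        weakest = live[torch.topk(flat_w[live].abs(), n_drop, largest=False).indices]
        flat_m[weakest] = 0.0
    dead = torch.nonzero(flat_m == 0, as_tuple=False).flatten()
    n_regrow = min(int(regrow_frac * live.numel()), dead.numel())
    if n_regrow > 0:
        # reactivate pruned positions whose current gradients are largest
        revived = dead[torch.topk(flat_g[dead].abs(), n_regrow).indices]
        flat_m[revived] = 1.0
        flat_w[revived] = 0.0  # revived weights restart from zero
    return mask
```

The same drop-and-regrow logic would apply to an edge mask over the adjacency structure, which is how the abstract's "weights/edges" reactivation can be read.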
Neural Netw
December 2024
Department of Electrical Engineering, Imperial College London, London SW7 2BX, UK.
Graph neural networks (GNNs) have become a popular approach for semi-supervised graph representation learning. GNN research has generally focused on improving methodological details, whereas less attention has been paid to the importance of data labeling. However, for semi-supervised learning, the quality of the training data is vital.
Neural Comput
October 2023
Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, U.S.A.
Recurrent neural networks (RNNs) are often used to model circuits in the brain and can solve a variety of difficult computational problems requiring memory, error correction, or selection (Hopfield, 1982; Maass et al., 2002; Maass, 2011). However, fully connected RNNs contrast structurally with their biological counterparts, which are extremely sparse (about 0.…
IEEE Trans Neural Netw Learn Syst
October 2024
Graph neural networks (GNNs) tend to suffer from high computation costs due to the exponentially increasing scale of graph data and the large number of model parameters, which restricts their utility in practical applications. To this end, some recent works focus on sparsifying GNNs (including graph structures and model parameters) with the lottery ticket hypothesis (LTH) to reduce inference costs while maintaining performance. However, LTH-based methods suffer from two major drawbacks: 1) they require exhaustive, iterative training of dense models, incurring an extremely large training cost, and 2) they only trim graph structures and model parameters while ignoring the node-feature dimension, where vast redundancy exists.
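As a rough illustration of joint sparsification over all three axes this abstract points at, the sketch below attaches learnable masks to the edges, the weights, and the node-feature dimensions of a single GCN-style layer. The layer design and mask granularity are assumptions for exposition, not a specific published method.

```python
import torch
import torch.nn as nn

class JointlySparseGCNLayer(nn.Module):
    """GCN-style layer with learnable masks on edges, weights, and features."""
    def __init__(self, in_dim, out_dim, num_edges):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)
        self.edge_mask = nn.Parameter(torch.ones(num_edges))          # graph structure
        self.weight_mask = nn.Parameter(torch.ones(out_dim, in_dim))  # model parameters
        self.feat_mask = nn.Parameter(torch.ones(in_dim))             # feature dimensions

    def forward(self, x, edge_index, edge_weight):
        # x: (N, in_dim); edge_index: (2, E) source/target ids; edge_weight: (E,)
        x = x * torch.sigmoid(self.feat_mask)                  # soft-mask feature dims
        w = self.lin.weight * torch.sigmoid(self.weight_mask)  # soft-mask parameters
        h = x @ w.t()
        src, dst = edge_index
        ew = edge_weight * torch.sigmoid(self.edge_mask)       # soft-mask edges
        out = torch.zeros_like(h)
        out.index_add_(0, dst, h[src] * ew.unsqueeze(-1))      # masked aggregation
        return out
```

Training would add sparsity penalties on the three sigmoided masks and threshold the lowest-scoring entries to hard zeros, so the graph, the weights, and the feature dimensions are pruned in one pass rather than through iterative dense retraining.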
Med Decis Making
May 2015
EMGO Institute for Health and Care Research, VU University Medical Center, Amsterdam, The Netherlands (JPO, DRMT).
Background: Quantitative risk information plays an important role in decision making about health. This study focuses on commonly used numerical and graphical formats and examines their effect on perception of different likelihoods and choice preferences.
Methods: An experimental study was conducted with 192 participants, who evaluated 2 sets of 4 lotteries.