Graph neural networks (GNNs) have shown great promise in modeling graph-structured data, but the over-smoothing problem restricts their effectiveness in deep layers. Existing research on deep GNN models has two key weaknesses: (1) it ignores the beneficial aspects of intra-class smoothing while focusing solely on reducing inter-class smoothing, and (2) it computes residual weights inefficiently, neglecting the influence of neighboring nodes' distributions. To address these weaknesses, we propose a novel Smoothing Deceleration (SD) strategy that slows the rate at which node representations are smoothed as information propagates between layers, thereby mitigating over-smoothing. First, we analyze the smoothing rate of node representations between layers using differential operations. Based on this analysis, we then introduce two modules: a Class-Related Smoothing Deceleration (CR-SD) loss and a Smooth Deceleration Residual (NAR). The CR-SD loss takes into account the duality of smoothing, reducing inter-class smoothing while preserving the benefits of intra-class smoothing, and thus mitigates over-smoothing while maintaining model performance. NAR is a novel method for computing residual weights that is designed specifically for graph-structured data and integrates the distribution of neighboring nodes. Finally, comparative experiments demonstrate that our SD strategy extends existing shallow GNNs to deeper architectures and delivers superior performance compared with both the vanilla models and existing deep GNNs. In addition, a series of analytical experiments shows that the proposed SD strategy effectively mitigates over-smoothing in deep GNNs. The source code for this work is available at https://github.com/cheng-qi/sd.
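To make the notion of a layer-wise smoothing rate concrete, the sketch below illustrates one way such a quantity could be measured; it is not the authors' exact formulation, and the smoothness metric, function names, and the PyTorch-Geometric-style edge_index layout are assumptions. It scores how close connected nodes' representations are at each layer and takes finite differences across layers to estimate how quickly smoothing progresses.

```python
import torch

def smoothness(h, edge_index):
    # h: [num_nodes, dim] node representations at one layer
    # edge_index: [2, num_edges] source/target node indices
    # Mean distance between representations of connected nodes;
    # smaller values indicate a more heavily smoothed layer.
    src, dst = edge_index
    return (h[src] - h[dst]).norm(dim=1).mean()

def smoothing_speed(layer_outputs, edge_index):
    # Finite-difference rate of change of smoothness across
    # consecutive layers: s_l - s_{l-1} for l = 1..L.
    s = torch.stack([smoothness(h, edge_index) for h in layer_outputs])
    return s[1:] - s[:-1]

# Example: collect hidden states from each GNN layer and inspect
# how quickly connected nodes' representations collapse together.
if __name__ == "__main__":
    edge_index = torch.tensor([[0, 1, 2], [1, 2, 0]])  # toy 3-node cycle
    layer_outputs = [torch.randn(3, 8) * (0.5 ** l) for l in range(4)]
    print(smoothing_speed(layer_outputs, edge_index))
```

Under this reading, a strongly negative entry means smoothing accelerates between those two layers, which is the behavior a smoothing-deceleration strategy would aim to damp.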
DOI: http://dx.doi.org/10.1016/j.neunet.2025.107132