A Dual Robust Graph Neural Network Against Graph Adversarial Attacks.

Neural Netw

Department of Computer Science, Old Dominion University, Norfolk, VA, 23529, USA.

Published: July 2024

AI Article Synopsis

  • Graph Neural Networks (GNNs) are popular but are vulnerable to adversarial attacks that can manipulate their graph structures, posing security risks.
  • Existing methods for improving GNN robustness struggle to preserve node similarity while learning the representations needed for edge reweighting, and they lack supervision about adversarial perturbations.
  • The proposed Dual Robust Graph Neural Network (DualRGNN) enhances GNN resilience by refining graphs to maintain node similarities and using adversarial-supervised methods to better identify harmful edges, showing strong performance in tests on multiple datasets.

Article Abstract

Graph Neural Networks (GNNs) have gained widespread usage and achieved remarkable success in various real-world applications. Nevertheless, recent studies reveal the vulnerability of GNNs to graph adversarial attacks that fool them by modifying the graph structure. This vulnerability undermines the robustness of GNNs and poses significant security and privacy risks across various applications. Hence, it is crucial to develop robust GNN models that can effectively defend against such attacks. One simple approach is to remodel the graph. However, most existing methods cannot fully preserve the similarity relationships among the original nodes while learning the node representations required for reweighting the edges. Furthermore, they lack supervision information regarding adversarial perturbations, hampering their ability to recognize adversarial edges. To address these limitations, we propose a novel Dual Robust Graph Neural Network (DualRGNN) against graph adversarial attacks. DualRGNN first incorporates a node-similarity-preserving graph refining (SPGR) module to prune and refine the graph based on the learned node representations, which contain the original nodes' similarity relationships, weakening the poisoning effect of graph adversarial attacks on the graph data. DualRGNN then employs an adversarial-supervised graph attention (ASGAT) network to enhance the model's capability to identify adversarial edges by treating these edges as supervised signals. Through extensive experiments conducted on four benchmark datasets, DualRGNN has demonstrated remarkable robustness against various graph adversarial attacks.
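
The abstract describes two mechanisms: a similarity-preserving refining step (SPGR) that prunes edges whose endpoints have dissimilar learned representations, and an adversarial-supervised attention network (ASGAT) that treats known adversarial edges as supervision. The sketch below is not the paper's implementation; it is a minimal PyTorch illustration of those two ideas, and the cosine-similarity pruning rule, the MLP edge scorer, the threshold value, and all names (refine_edges, EdgeScorer, keep_threshold) are assumptions made here for exposition.

# Minimal, illustrative PyTorch sketch of the two ideas named in the abstract.
# Not the authors' code: the similarity rule, scorer architecture, and all names
# (refine_edges, EdgeScorer, keep_threshold) are assumptions for exposition.
import torch
import torch.nn as nn
import torch.nn.functional as F


def refine_edges(node_repr: torch.Tensor,
                 edge_index: torch.Tensor,
                 keep_threshold: float = 0.1) -> torch.Tensor:
    """SPGR-style step: weight each edge by endpoint similarity and prune weak edges.

    node_repr:  [num_nodes, dim] learned node representations
    edge_index: [2, num_edges] COO edge list
    Returns [num_edges] weights; low-similarity (suspicious) edges are zeroed.
    """
    src, dst = edge_index
    sim = F.cosine_similarity(node_repr[src], node_repr[dst], dim=-1)
    weights = (sim + 1.0) / 2.0                      # map [-1, 1] -> [0, 1]
    return torch.where(weights >= keep_threshold, weights, torch.zeros_like(weights))


class EdgeScorer(nn.Module):
    """ASGAT-style idea: score edges, supervised by labels that mark adversarial edges."""

    def __init__(self, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, node_repr: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        src, dst = edge_index
        pair = torch.cat([node_repr[src], node_repr[dst]], dim=-1)
        return self.mlp(pair).squeeze(-1)            # logit: high = likely adversarial


if __name__ == "__main__":
    reps = torch.tensor([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [-1.0, 0.0]])
    edges = torch.tensor([[0, 0, 2], [1, 3, 3]])     # edge (0, 3) joins dissimilar nodes
    print(refine_edges(reps, edges))                 # that edge gets weight 0 (pruned)

    scorer = EdgeScorer(dim=2)
    adv_labels = torch.tensor([0.0, 1.0, 0.0])       # 1 = injected/adversarial edge
    loss = F.binary_cross_entropy_with_logits(scorer(reps, edges), adv_labels)
    print(loss.item())

In the paper's framing, the refined edge weights and the adversarially supervised edge scores would feed back into the GNN's message passing to down-weight suspicious edges; the toy example in the __main__ block only demonstrates the shapes and intent of each step.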

Source
http://dx.doi.org/10.1016/j.neunet.2024.106276

Publication Analysis

Top Keywords

graph adversarial: 20
adversarial attacks: 20
graph: 14
graph neural: 12
dual robust: 8
robust graph: 8
neural network: 8
adversarial: 8
attacks graph: 8
adversarial edges: 8
