Unsupervised graph learning techniques have attracted increasing interest among researchers. These methods learn node- and graph-level representations by maximizing mutual information. We show that such methods are susceptible to backdoor attacks, in which an adversary poisons a small portion of the unlabeled graph data (e.
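For readers unfamiliar with the mutual-information objective mentioned above, a minimal sketch of one common instantiation, an InfoNCE-style contrastive loss between two augmented views of node embeddings, is shown below. This is an illustrative assumption about the general family of methods, not the specific objective or attack studied in the article; the function and variable names are hypothetical.

```python
# Minimal sketch: an InfoNCE-style loss that lower-bounds the mutual
# information between two views of the same nodes' embeddings, as used by
# many graph contrastive learners. Names are illustrative, not the paper's.
import torch
import torch.nn.functional as F

def infonce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: [num_nodes, dim] embeddings of the same nodes under two views."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # pairwise cosine similarities
    labels = torch.arange(z1.size(0))       # positives lie on the diagonal
    return F.cross_entropy(logits, labels)  # minimizing this maximizes an MI lower bound

# Toy usage: random tensors standing in for an encoder's two-view outputs.
z_view1, z_view2 = torch.randn(8, 16), torch.randn(8, 16)
print(infonce_loss(z_view1, z_view2))
```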