Recent advances in Graph Neural Networks (GNNs) show promise for applications such as social and financial networks. However, GNNs exhibit fairness issues, particularly in human-related decision contexts, risking unfair treatment of groups historically subject to discrimination. While several visual analytics studies have explored fairness in machine learning (ML), few have tackled the particular challenges posed by GNNs. We propose a visual analytics framework for GNN fairness analysis that offers insight into how attribute and structural biases may induce model bias. The framework is model-agnostic and tailored to real-world scenarios with multiple, multinary sensitive attributes, using an extended suite of fairness metrics. To operationalize the framework, we develop GNNFairViz, a visual analysis tool that integrates seamlessly into the GNN development workflow and offers interactive visualizations. The tool enables its target users, GNN model developers, to analyze model bias comprehensively, supporting node selection, fairness inspection, and diagnostics. We evaluate our approach through two usage scenarios and expert interviews, confirming its effectiveness and usability for GNN fairness analysis. Finally, we distill two general insights into GNN fairness from observed GNNFairViz usage: the prevalence of the "Overwhelming Effect" in highly unbalanced datasets and the importance of choosing a suitable GNN architecture for bias mitigation.
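The abstract does not specify the metric suite, but a representative group-fairness measure that extends naturally to multinary sensitive attributes is the maximum pairwise statistical parity gap over node predictions. A minimal sketch (the function name and toy data below are illustrative assumptions, not taken from the paper):

```python
from itertools import combinations
import numpy as np

def statistical_parity_gap(y_pred, sensitive):
    """Maximum pairwise difference in positive-prediction rate across the
    groups of one (possibly multinary) sensitive attribute.
    A perfectly group-fair predictor yields 0."""
    y_pred = np.asarray(y_pred, dtype=float)
    sensitive = np.asarray(sensitive)
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return max(abs(a - b) for a, b in combinations(rates, 2))

# Toy example: binary node predictions with a three-valued sensitive attribute.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "b", "b", "b", "c", "c", "c"]
gap = statistical_parity_gap(y_pred, groups)  # rates: a=0.50, b=0.67, c=0.33
```

With multiple sensitive attributes, the same computation could be repeated per attribute (or over their cross-product of groups) to obtain a suite of per-attribute gaps.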


Source: http://dx.doi.org/10.1109/TVCG.2025.3542419

Publication Analysis

Top Keywords: gnn fairness (12); gnnfairviz visual (8); visual analysis (8); graph neural (8); fairness (8); visual analytics (8); fairness analysis (8); model bias (8); gnn (6); analysis (4)

Similar Publications


Promoting fairness in link prediction with graph enhancement. Front Big Data, October 2024. Donald Bren School of Information and Computer Sciences, University of California, Irvine, Irvine, CA, United States.

Link prediction is a crucial task in network analysis, but it has been shown to be prone to biased predictions, particularly when links are unfairly predicted between nodes from different sensitive groups. In this paper, we study the fair link prediction problem, which aims to ensure that the predicted link probability is independent of the sensitive attributes of the connected nodes. Existing methods typically incorporate debiasing techniques within graph embeddings to mitigate this issue.
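The independence condition described above can be checked, in its simplest form, by comparing mean predicted scores on intra-group versus inter-group candidate edges. A minimal sketch (function name and toy data are hypothetical, not from this publication):

```python
import numpy as np

def link_parity_gap(probs, src_attr, dst_attr):
    """Difference in mean predicted link probability between intra-group
    edges (endpoints share the sensitive value) and inter-group edges.
    A value near 0 suggests predictions are independent of the
    endpoints' sensitive attributes."""
    probs = np.asarray(probs, dtype=float)
    same = np.asarray(src_attr) == np.asarray(dst_attr)
    return abs(probs[same].mean() - probs[~same].mean())

# Toy candidate edges: a predicted probability plus each endpoint's group.
probs = [0.9, 0.8, 0.2, 0.3]
src   = ["x", "x", "x", "y"]
dst   = ["x", "x", "y", "x"]
gap = link_parity_gap(probs, src, dst)  # intra mean 0.85 vs inter mean 0.25
```

A large gap like this one indicates that same-group links are scored systematically higher, the kind of bias the debiasing methods mentioned above aim to remove.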


By providing personalized suggestions to users, recommender systems have become essential to numerous online platforms. Collaborative filtering, particularly graph-based approaches using Graph Neural Networks (GNNs), has demonstrated strong results in terms of recommendation accuracy. However, accuracy may not always be the most important criterion for evaluating a recommender system's performance, since beyond-accuracy aspects such as recommendation diversity, serendipity, and fairness can strongly influence user engagement and satisfaction.


Node representation learning has attracted increasing attention due to its efficacy for various applications on graphs. However, fairness remains largely under-explored in this field, even though the use of graph structure in learning has been shown to amplify bias. To this end, this work theoretically explains the sources of bias in node representations obtained via graph neural networks (GNNs).
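A toy numpy illustration of the amplification effect mentioned above (my own example under a homophily assumption, not from this publication): on a graph where edges connect only same-group nodes, a single round of mean aggregation, the core GNN message-passing step, makes a node feature more correlated with group membership.

```python
import numpy as np

# Toy homophilous graph: two groups, edges (incl. self-loops) only within groups.
groups = np.array([0, 0, 1, 1])
adj = np.array([[1, 1, 0, 0],
                [1, 1, 0, 0],
                [0, 0, 1, 1],
                [0, 0, 1, 1]], dtype=float)
x = np.array([1.0, 3.0, 5.0, 7.0])  # feature mildly correlated with group

# One round of mean aggregation over neighbors.
x_agg = adj @ x / adj.sum(axis=1)

corr_before = np.corrcoef(x, groups)[0, 1]      # ~0.89
corr_after = np.corrcoef(x_agg, groups)[0, 1]   # 1.0: within-group variance collapses
```

Averaging over same-group neighbors shrinks within-group variance while preserving the between-group gap, so the feature becomes a stronger proxy for the sensitive attribute, which is one intuition for why graph structure can amplify bias.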

