On Explainability of Graph Neural Networks via Subgraph Explorations Academic Article

abstract

  • We consider the problem of explaining the predictions of graph neural networks (GNNs), which are otherwise treated as black boxes. Existing methods invariably focus on explaining the importance of individual graph nodes or edges but ignore the substructures of graphs, which are more intuitive and human-intelligible. In this work, we propose a novel method, known as SubgraphX, to explain GNNs by identifying important subgraphs. Given a trained GNN model and an input graph, our SubgraphX explains its predictions by efficiently exploring different subgraphs with Monte Carlo tree search. To make the tree search more effective, we propose to use Shapley values as a measure of subgraph importance, which can also capture the interactions among different subgraphs. To expedite computations, we propose efficient approximation schemes to compute Shapley values for graph data. Our work represents the first attempt to explain GNNs by explicitly and directly identifying subgraphs. Experimental results show that our SubgraphX achieves significantly improved explanations while keeping computation at a reasonable level.
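The paper's own approximation schemes for graph data are more elaborate, but the underlying idea of scoring a subgraph by its Shapley value can be illustrated with the generic Monte Carlo permutation estimator: average the subgraph's marginal contribution to a scoring function over random orderings of the remaining "players". The sketch below is an assumption-laden toy, not the SubgraphX implementation; `value_fn` stands in for the GNN's prediction score on a coalition of node groups, and all names are hypothetical.

```python
import random


def shapley_mc(players, target, value_fn, num_samples=200, seed=0):
    """Monte Carlo estimate of the Shapley value of `target`.

    Each sample draws a random ordering of the other players and
    measures how much adding `target` to its set of predecessors
    changes `value_fn` (a stand-in for a GNN prediction score).
    """
    rng = random.Random(seed)
    others = [p for p in players if p != target]
    total = 0.0
    for _ in range(num_samples):
        perm = others[:]
        rng.shuffle(perm)
        # Inserting `target` at a uniformly random position is
        # equivalent to sampling a permutation of all players.
        k = rng.randint(0, len(perm))
        coalition = frozenset(perm[:k])
        total += value_fn(coalition | {target}) - value_fn(coalition)
    return total / num_samples


# Toy "prediction score": additive over node groups, so the exact
# Shapley value of each group equals its own weight.
weights = {"A": 1.0, "B": 2.0, "C": 0.5}
score = lambda coalition: sum(weights[p] for p in coalition)

estimate = shapley_mc(["A", "B", "C"], "B", score)
```

For an additive score the estimator is exact (every marginal contribution of "B" is 2.0), which makes it a convenient sanity check; for a real GNN, `value_fn` would be nonlinear and more samples would be needed.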

altmetric score

  • 23.55

author list (cited authors)

  • Yuan, H., Yu, H., Wang, J., Li, K., & Ji, S.

citation count

  • 0

complete list of authors

  • Yuan, Hao||Yu, Haiyang||Wang, Jie||Li, Kang||Ji, Shuiwang

publication date

  • February 2021