Self-Supervised Learning of Graph Neural Networks: A Unified Review

abstract

  • Deep models trained in supervised mode have achieved remarkable success on a variety of tasks. When labeled samples are limited, self-supervised learning (SSL) is emerging as a new paradigm for making use of large amounts of unlabeled samples. SSL has achieved promising performance on natural language and image learning tasks. Recently, there has been a trend toward extending this success to graph data using graph neural networks (GNNs). In this survey, we provide a unified review of different ways of training GNNs using SSL. Specifically, we categorize SSL methods into contrastive and predictive models. Within each category, we provide a unified framework for the methods and describe how they differ in each component under the framework. Our unified treatment of SSL methods for GNNs sheds light on the similarities and differences of various methods, setting the stage for developing new methods and algorithms. We also summarize the different SSL settings and the corresponding datasets used in each setting. To facilitate methodological development and empirical comparison, we develop a standardized testbed for SSL in GNNs, including implementations of common baseline methods, datasets, and evaluation metrics.
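  • As a rough illustration of the two categories named in the abstract, the sketch below pairs an InfoNCE-style contrastive objective (positive pairs come from two augmented views of the same graph) with a simple predictive objective (reconstructing masked node features). All function names, tensor shapes, and loss choices here are illustrative assumptions, not code from the paper or its testbed.

        import torch
        import torch.nn.functional as F

        def contrastive_loss(z1, z2, temperature=0.5):
            # z1, z2: [N, d] node embeddings from two stochastic augmentations
            # of the same graph; row i of z1 and row i of z2 form the positive
            # pair, and all other rows serve as negatives (InfoNCE-style).
            z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
            logits = z1 @ z2.t() / temperature      # [N, N] pairwise similarities
            labels = torch.arange(z1.size(0), device=z1.device)
            return F.cross_entropy(logits, labels)  # positives lie on the diagonal

        def predictive_loss(x_rec, x, mask):
            # x_rec, x: [N, d] reconstructed vs. original node features;
            # mask: [N] boolean tensor marking nodes whose input features were
            # hidden from the encoder; the pretext task is to regress them back.
            return F.mse_loss(x_rec[mask], x[mask])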

author list (cited authors)

  • Xie, Y., Xu, Z., Zhang, J., Wang, Z., & Ji, S.

complete list of authors

  • Xie, Yaochen; Xu, Zhao; Zhang, Jingtun; Wang, Zhengyang; Ji, Shuiwang

publication date

  • February 2021