When Does Self-Supervision Help Graph Convolutional Networks?
PyTorch implementation of "When Does Self-Supervision Help Graph Convolutional Networks?" [appendix]
Yuning You<sup>*</sup>, Tianlong Chen<sup>*</sup>, Zhangyang Wang, Yang Shen
In ICML 2020.
Overview
Properly designed multi-task self-supervision helps GCNs gain greater generalizability and robustness. In this repository we verify this by performing experiments on several GCN architectures with three designed self-supervised tasks: node clustering, graph partitioning, and graph completion.
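Conceptually, the multi-task setup adds a weighted self-supervised loss to the standard node-classification loss. The snippet below is a minimal sketch of that joint objective; the names (`joint_loss`, `ssl_logits`, `alpha`) are illustrative placeholders, not the repository's actual API:

```python
import torch.nn.functional as F

# Illustrative joint objective for multi-task self-supervision:
# train the GCN on node classification plus a weighted self-supervised
# loss (e.g. predicting clustering / partition pseudo-labels).
def joint_loss(logits, labels, train_mask, ssl_logits, ssl_targets, alpha=0.5):
    # supervised node-classification loss on the labeled (training) nodes
    sup = F.cross_entropy(logits[train_mask], labels[train_mask])
    # self-supervised loss, e.g. on graph-partition pseudo-labels for all nodes
    ssl = F.cross_entropy(ssl_logits, ssl_targets)
    return sup + alpha * ssl
```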
Dependencies
Please set up the environment following Section 3 (Setup Python environment for GPU) in this instruction, then install the dependencies required for graph partitioning with the following commands:
sudo apt-get install libmetis-dev
pip install metis==0.2a4
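To sanity-check the METIS installation, you can partition a small toy graph with the `metis` Python bindings. This is only an illustrative check; the graph-partitioning self-supervised task in the repository builds its partitions from the actual dataset graphs:

```python
import metis

# Toy undirected graph as an adjacency list: entry i lists node i's neighbors.
adjlist = [[1, 2], [0, 2], [0, 1, 3], [2, 4], [3, 5], [4]]

# Partition the nodes into 2 groups; returns the edge-cut size and a
# partition label per node.
edgecuts, parts = metis.part_graph(adjlist, nparts=2)
print("edge cuts:", edgecuts)
print("partition labels:", parts)
```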
Experiments
- GCN, GAT and GIN with self-supervision
- GMNN and GraphMix with self-supervision
- GCN with self-supervision in adversarial defense
Citation
If you use this code for your research, please cite our paper.
@article{you2020does,
title={When Does Self-Supervision Help Graph Convolutional Networks?},
author={You, Yuning and Chen, Tianlong and Wang, Zhangyang and Shen, Yang},
journal={Proceedings of Machine Learning Research},
volume={119},
pages={10871--10880},
year={2020}
}