# BGRL_Pytorch

Implementation of Large-Scale Representation Learning on Graphs via Bootstrapping.

A PyTorch implementation of "<a href="https://arxiv.org/pdf/2102.06514.pdf">Large-Scale Representation Learning on Graphs via Bootstrapping</a>", accepted at the ICLR 2021 Workshop.
<img src="img/model.PNG" width="700px"/>
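
The method follows the BYOL-style bootstrapping scheme shown in the figure: an online encoder and predictor are trained to match the output of a slowly moving target encoder on two augmented views of the graph, with no negative samples. Below is a minimal sketch of that objective only; the module names (`online_encoder`, `target_encoder`, `predictor`) are illustrative and not this repository's actual API.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of the bootstrapping objective (illustrative names, not the repo's API).
# The online encoder and predictor receive gradients; the target encoder is an
# exponential moving average (EMA) of the online encoder and is never back-propagated through.

def bgrl_loss(online_encoder, target_encoder, predictor, view1, view2):
    # Predictions from the online branch for both augmented views.
    q1 = predictor(online_encoder(view1))
    q2 = predictor(online_encoder(view2))

    # Targets come from the EMA branch and are treated as constants.
    with torch.no_grad():
        y1 = target_encoder(view1)
        y2 = target_encoder(view2)

    # Symmetrized negative cosine similarity; no negative samples are needed.
    return -(F.cosine_similarity(q1, y2, dim=-1).mean()
             + F.cosine_similarity(q2, y1, dim=-1).mean())

@torch.no_grad()
def ema_update(online_encoder, target_encoder, tau=0.99):
    # target <- tau * target + (1 - tau) * online; tau is the EMA decay (value here is illustrative).
    for p_t, p_o in zip(target_encoder.parameters(), online_encoder.parameters()):
        p_t.data.mul_(tau).add_(p_o.data, alpha=1.0 - tau)
```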

## Hyperparameters for training BGRL

The following options can be passed to `train.py`:

`--layers` or `-l`: one or more integer values specifying the number of units for each GNN layer. Default: `512 256`.
Usage example: `--layers 512 256`

`--aug_params` or `-p`: four float values specifying the hyperparameters for graph augmentation (p_f1, p_f2, p_e1, p_e2). Default: `0.2 0.1 0.2 0.3`.
Usage example: `--aug_params 0.2 0.1 0.2 0.3` (see the augmentation sketch below).
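
For reference, the four probabilities follow the paper's scheme of node-feature masking and edge dropping for the two views: (p_f1, p_e1) produce the first view and (p_f2, p_e2) the second. Below is a minimal sketch under that assumption, using node features `x` of shape `[num_nodes, num_features]` and a PyG-style `edge_index` of shape `[2, num_edges]`; the function is illustrative, not the repository's actual code.

```python
import torch

def augment(x, edge_index, p_f, p_e):
    """Illustrative feature-masking / edge-dropping augmentation (not the repo's exact code)."""
    # Zero out whole feature dimensions with probability p_f.
    feat_mask = (torch.rand(x.size(1), device=x.device) >= p_f).float()
    x_aug = x * feat_mask

    # Keep each edge with probability 1 - p_e.
    edge_mask = torch.rand(edge_index.size(1), device=edge_index.device) >= p_e
    edge_index_aug = edge_index[:, edge_mask]
    return x_aug, edge_index_aug

# With the default --aug_params 0.2 0.1 0.2 0.3:
# view1 = augment(x, edge_index, p_f=0.2, p_e=0.2)   # uses p_f1, p_e1
# view2 = augment(x, edge_index, p_f=0.1, p_e=0.3)   # uses p_f2, p_e2
```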

| params | WikiCS | Am. Computers | Am. Photos | Co. CS | Co. Physics |
|------------------------|--------|---------------|------------|--------|-------------|
| p_f1                   | 0.2    | 0.2           | 0.1        | 0.3    | 0.1         |
| p_f2                   | 0.1    | 0.1           | 0.2        | 0.4    | 0.4         |
| p_e1                   | 0.2    | 0.5           | 0.4        | 0.3    | 0.4         |
| p_e2                   | 0.3    | 0.4           | 0.1        | 0.2    | 0.1         |
| embedding size         | 256    | 128           | 256        | 256    | 128         |
| encoder hidden size    | 512    | 256           | 512        | 512    | 256         |
| predictor hidden size  | 512    | 512           | 512        | 512    | 512         |
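
For example, assuming `--layers` is ordered as encoder hidden size followed by embedding size (which matches the default `512 256` and the WikiCS column above), the Am. Computers settings would be passed as `--layers 256 128 --aug_params 0.2 0.1 0.5 0.4`.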

## Experimental Results

| Metric       | WikiCS | Am. Computers | Am. Photos | Co. CS | Co. Physics |
|--------------|--------|---------------|------------|--------|-------------|
| Accuracy (%) | 79.50  | 88.21         | 92.76      | 92.49  | 94.89       |

## Code borrowed from

Parts of the code are borrowed from BYOL and SelfGNN:

| Name | Implementation | Paper |
|------|----------------|-------|
| Bootstrap Your Own Latent (BYOL) | <a href="https://github.com/lucidrains/byol-pytorch">Implementation</a> | <a href="https://arxiv.org/pdf/2006.07733.pdf">paper</a> |
| SelfGNN | <a href="https://github.com/zekarias-tilahun/SelfGNN">Implementation</a> | <a href="https://arxiv.org/pdf/2103.14958.pdf">paper</a> |