# Size-Invariant Graph Representations for Graph Classification Extrapolations
This repository contains the official code of the paper Size-Invariant Graph Representations for Graph Classification Extrapolations (ICML 2021 Long Talk).
<p align="center"> <img src=./camera-ready-fig-600x600.png> </p>

## Manual dependencies (CUDA)
- PyTorch 1.7.1
- torch-cluster 1.5.8
- torch-geometric 1.6.3
- torch-scatter 2.0.5
- torch-sparse 0.6.8
- torch-spline-conv 1.2.0
- ray[tune] 1.1.0
Install the additional dependencies as follows:
```
$ pip install -r requirements.txt
```
## Download Data
Please run the following commands to download and set up the data folder.
```
$ wget https://www.dropbox.com/s/38eg3twe4dd1hbt/data.zip
$ unzip data.zip
```
The commands above will place the already-sampled data in the `data/` folder. Please specify its absolute path in `base_config.yaml`.
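For example, if the archive was unpacked into your home directory, the entry might look like the sketch below. Note that `data_dir` is an illustrative key name, not necessarily the one used in `base_config.yaml`:

```yaml
# Illustrative sketch only: the actual key name in base_config.yaml may
# differ ("data_dir" and the path below are assumptions).
data_dir: /home/username/data   # absolute path to the unzipped data/ folder
```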
## Hypertune
The provided configurations allow you to run the hyperparameter tuning of $\Gamma_\text{GIN}$ on `NCI1`.
To tune for other datasets and/or models:
- In `hyper_config.yaml`, specify the hyperparameter values. For details on the hyperparameter ranges, refer to the Appendix.
- In `base_config.yaml`, set `dataset_name` to `NCI1`, `NCI109`, `PROTEINS`, or `brain-net` (i.e. schizophrenia).
- In `base_config.yaml`, set `model` to `KaryGNN` (i.e. $\Gamma_\text{GNN}$), `KaryRPGNN` (i.e. $\Gamma_\text{RPGNN}$), `GraphletCounting` (i.e. $\Gamma_\text{1-hot}$), `GNN`, or `RPGNN`. You can specify the GNN in `gnn_type` as `pna`, `gcn`, or `gin`, and the XU-READOUT in `graph_pooling` as `mean`, `max`, or `sum` (see the sketch after this list).
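As a concrete illustration, selecting $\Gamma_\text{GNN}$ with a GIN backbone on `NCI1` might look like the sketch below. The key names are those listed above; the surrounding layout of `base_config.yaml` is an assumption:

```yaml
# Sketch of the model/dataset selection in base_config.yaml.
# Key names come from the instructions above; the rest of the file is assumed.
dataset_name: NCI1      # NCI1 | NCI109 | PROTEINS | brain-net
model: KaryGNN          # KaryGNN | KaryRPGNN | GraphletCounting | GNN | RPGNN
gnn_type: gin           # pna | gcn | gin
graph_pooling: sum      # XU-READOUT: mean | max | sum
```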
Run
```
$ python hypertuning.py
```
## Run a single configuration
The provided configurations allow you to run $\Gamma_\text{GNN}$ on `NCI1` with the best hyperparameters.
To run for other datasets and/or models, specify the parameters in `base_config.yaml`.
Run
```
$ python lightning_modules.py
```
## Credits
If you use this code, please cite:

```bibtex
@inproceedings{bevilacqua2021size,
  title={Size-invariant graph representations for graph classification extrapolations},
  author={Bevilacqua, Beatrice and Zhou, Yangze and Ribeiro, Bruno},
  booktitle={International Conference on Machine Learning},
  pages={837--851},
  year={2021},
  organization={PMLR}
}
```