<h1 align="center"> Discovering Invariant Rationales for Graph Neural Networks 🔥 </h1>

Overview

DIR (ICLR 2022) aims to train intrinsically interpretable Graph Neural Networks that are robust and generalizable to out-of-distribution datasets. The core of this work lies in the construction of interventional distributions, from which causal features are identified.

Installation

Note that we require 1.7.0 <= torch_geometric <= 2.0.2. Simply run the commands below to install the Python environment (you may want to change cudatoolkit according to your CUDA version), or see requirements.txt for the full package list.

```bash
sh setup_env.sh
conda activate dir
```
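Because the torch_geometric range above is inclusive on both ends, a small helper can sanity-check an installed version string against it (a minimal sketch; the function name and the string-based comparison are my own, not part of the repo):

```python
def in_supported_range(version: str, low: str = "1.7.0", high: str = "2.0.2") -> bool:
    """Return True if a dotted version string lies in [low, high], inclusive."""
    parse = lambda v: tuple(int(p) for p in v.split(".")[:3])
    return parse(low) <= parse(version) <= parse(high)

# After installation, one would pass torch_geometric.__version__ here.
print(in_supported_range("2.0.2"))  # True: the upper bound is inclusive
print(in_supported_range("2.1.0"))  # False: newer than the tested range
```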

Data download

Run DIR

The hyper-parameters used to train the intrinsically interpretable models are set as defaults in the argparse.ArgumentParser in the training files. Feel free to change them if needed. We use a separate training file for each dataset.

Simply run `python -m train.{dataset}_dir` to reproduce the results in the paper.
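As a generic illustration of how argparse defaults like these can be overridden from the command line (the flag names below are hypothetical, not the repo's actual arguments):

```python
import argparse

# Hypothetical parser mirroring how a training file might expose hyper-parameters;
# the real flags are defined inside each train/{dataset}_dir.py file.
parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=1e-3, help="learning rate")
parser.add_argument("--epochs", type=int, default=100, help="number of training epochs")

# Defaults apply unless overridden, e.g. on a real CLI: --lr 0.01
args = parser.parse_args(["--lr", "0.01"])
print(args.lr, args.epochs)  # 0.01 100
```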

Common Questions:

How does the Rationale Generator update its parameters?: https://github.com/Wuyxin/DIR-GNN/issues/7

Reference

```bibtex
@inproceedings{wu2022dir,
    title={Discovering Invariant Rationales for Graph Neural Networks},
    author={Ying-Xin Wu and Xiang Wang and An Zhang and Xiangnan He and Tat-seng Chua},
    booktitle={ICLR},
    year={2022}
}
```