# GreatX: Graph Reliability Toolbox
<p align="center">
  <img width="600" height="150" src="./imgs/greatx.png" alt="banner"/>
  <br/>
</p>
<p align="center"><strong>GreatX is great!</strong></p>
<p align="center">
  <a href="https://greatx.readthedocs.io/en/latest/">[Documentation]</a> |
  <a href="https://github.com/EdisonLeeeee/GreatX/blob/master/examples">[Examples]</a>
</p>
<p align="center">
  <a href="https://www.python.org/downloads/release/python-370/"><img src="https://img.shields.io/badge/Python->=3.7-3776AB?logo=python" alt="Python"></a>
  <a href="https://github.com/pytorch/pytorch"><img src="https://img.shields.io/badge/PyTorch->=1.8-FF6F00?logo=pytorch" alt="pytorch"></a>
  <a href="https://pypi.org/project/greatx/"><img src="https://badge.fury.io/py/greatx.svg" alt="pypi"></a>
  <a href="https://github.com/EdisonLeeeee/GreatX/blob/master/LICENSE"><img src="https://img.shields.io/github/license/EdisonLeeeee/GreatX" alt="license"></a>
  <a href="https://github.com/EdisonLeeeee/GreatX/blob/master/CONTRIBUTING.md"><img src="https://img.shields.io/badge/Contributions-Welcome-278ea5" alt="Contrib"/></a>
  <a href="https://greatx.readthedocs.io/en/latest"><img src="https://readthedocs.org/projects/greatx/badge/?version=latest" alt="docs"></a>
</p>

## ❓ What is "Reliability" on Graphs?
"Reliability" on graphs refers to robustness against the following threats:
- Inherent noise
- Distribution Shift
- Adversarial Attacks
For more details, please refer to our paper *Recent Advances in Reliable Deep Graph Learning: Inherent Noise, Distribution Shift, and Adversarial Attack*.
## 💨 News
- November 2, 2022: We are planning to release GreatX 0.1.0 this month, stay tuned!
- June 30, 2022: GraphWar has been renamed to GreatX.
- June 9, 2022: GraphWar v0.1.0 has been released. We also provide the documentation along with numerous examples.
- May 27, 2022: GraphWar has been refactored with PyTorch Geometric (PyG); the old code based on DGL can be found here. We will soon release the first version of GreatX, stay tuned!
> **NOTE:** GreatX is still in the early stages and the API will likely continue to change. If you are interested in this project, don't hesitate to contact me or make a PR directly.
## 🚀 Installation
Please make sure you have installed PyTorch and PyTorch Geometric (PyG).
```bash
# Coming soon
pip install -U greatx
```
or
```bash
# Recommended
git clone https://github.com/EdisonLeeeee/GreatX.git && cd GreatX
pip install -e . --verbose
```
where `-e` means "editable" mode, so you don't have to reinstall every time you make changes.
## ⚡ Get Started
Assume that you have a `torch_geometric.data.Data` instance `data` that describes your graph.
How fast can we train and evaluate your own GNN? Take `GCN` as an example:
```python
from greatx.nn.models import GCN
from greatx.training import Trainer
from torch_geometric.datasets import Planetoid

# Any PyG dataset is available!
dataset = Planetoid(root='.', name='Cora')
data = dataset[0]

model = GCN(dataset.num_features, dataset.num_classes)
trainer = Trainer(model, device='cuda:0')  # or 'cpu'
trainer.fit(data, mask=data.train_mask)
trainer.evaluate(data, mask=data.test_mask)
```
A simple targeted manipulation attack:
```python
from greatx.attack.targeted import RandomAttack

attacker = RandomAttack(data)
attacker.attack(1, num_budgets=3)  # attacking target node `1` with `3` edges
attacked_data = attacker.data()
edge_flips = attacker.edge_flips()
```
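To gauge the attack's effect, one option is to re-evaluate the model trained above on the perturbed graph (a minimal sketch reusing the `Trainer` from Get Started):

```python
# Re-evaluate the previously trained model on the perturbed graph;
# a drop in accuracy relative to the clean graph indicates a successful attack.
trainer.evaluate(attacked_data, mask=data.test_mask)
```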
A simple untargeted (non-targeted) manipulation attack:
```python
from greatx.attack.untargeted import RandomAttack

attacker = RandomAttack(data)
attacker.attack(num_budgets=0.05)  # attacking the graph with 5% edge perturbations
attacked_data = attacker.data()
edge_flips = attacker.edge_flips()
```
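Untargeted attacks are typically evaluated in the poisoning setting: train a fresh model on the perturbed graph and compare against the clean baseline. A sketch continuing the snippets above:

```python
from greatx.nn.models import GCN
from greatx.training import Trainer

# Train a fresh GCN on the poisoned graph and compare its test accuracy
# against the clean-graph model from Get Started.
poisoned_model = GCN(dataset.num_features, dataset.num_classes)
poisoned_trainer = Trainer(poisoned_model, device='cuda:0')
poisoned_trainer.fit(attacked_data, mask=data.train_mask)
poisoned_trainer.evaluate(attacked_data, mask=data.test_mask)
```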
## 👀 Implementations
In detail, the following methods are currently implemented:
### ⚔ Adversarial Attack
#### Graph Manipulation Attack (GMA)
##### Targeted Attack
Methods | Descriptions | Examples |
---|---|---|
RandomAttack | A simple random method that chooses edges to flip randomly. | [Example] |
DICEAttack | Waniek et al. Hiding Individuals and Communities in a Social Network, Nature Human Behavior'16 | [Example] |
Nettack | Zügner et al. Adversarial Attacks on Neural Networks for Graph Data, KDD'18 | [Example] |
FGAttack | Chen et al. Fast Gradient Attack on Network Embedding, arXiv'18 | [Example] |
GFAttack | Chang et al. A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models, AAAI'20 | [Example] |
IGAttack | Wu et al. Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, IJCAI'19 | [Example] |
SGAttack | Li et al. Adversarial Attack on Large Scale Graph, TKDE'21 | [Example] |
PGDAttack | Xu et al. Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective, IJCAI'19 | [Example] |
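The attackers above are designed to be interchangeable. A hedged sketch swapping in `DICEAttack`, assuming it follows the same `attack`/`data` interface as `RandomAttack` in Get Started (gradient-based attackers may require extra setup, e.g. a surrogate model):

```python
from greatx.attack.targeted import DICEAttack

# Assumption: DICEAttack mirrors RandomAttack's interface shown above.
attacker = DICEAttack(data)
attacker.attack(1, num_budgets=3)  # attack target node `1` with `3` edge flips
attacked_data = attacker.data()
```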
##### Untargeted Attack
Methods | Descriptions | Examples |
---|---|---|
RandomAttack | A simple random method that chooses edges to flip randomly | [Example] |
DICEAttack | Waniek et al. Hiding Individuals and Communities in a Social Network, Nature Human Behavior'16 | [Example] |
FGAttack | Chen et al. Fast Gradient Attack on Network Embedding, arXiv'18 | [Example] |
Metattack | Zügner et al. Adversarial Attacks on Graph Neural Networks via Meta Learning, ICLR'19 | [Example] |
IGAttack | Wu et al. Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, IJCAI'19 | [Example] |
PGDAttack | Xu et al. Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective, IJCAI'19 | [Example] |
#### Graph Injection Attack (GIA)
Methods | Descriptions | Examples |
---|---|---|
RandomInjection | A simple random method that chooses nodes to inject randomly. | [Example] |
AdvInjection | The 2nd place solution of KDD Cup 2020, team: ADVERSARIES. | [Example] |
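A hedged sketch of running an injection attack, assuming these attackers live under a `greatx.attack.injection` module and follow the same `attack`/`data` pattern as the manipulation attackers above (both the module path and the budget semantics here are assumptions, not confirmed API):

```python
# Assumption: module path and interface mirror the manipulation attackers above.
from greatx.attack.injection import RandomInjection

attacker = RandomInjection(data)
attacker.attack(num_budgets=10)  # hypothetical: inject 10 fake nodes
attacked_data = attacker.data()
```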
#### Graph Universal Attack (GUA)
#### Graph Backdoor Attack (GBA)
Methods | Descriptions | Examples |
---|---|---|
LGCBackdoor | Chen et al. Neighboring Backdoor Attacks on Graph Convolutional Network, arXiv'22 | [Example] |
FGBackdoor | Chen et al. Neighboring Backdoor Attacks on Graph Convolutional Network, arXiv'22 | [Example] |
### Enhancing Techniques and Corresponding Defenses
#### Standard GNNs (without defense)
##### Supervised
Methods | Descriptions | Examples |
---|---|---|
GCN | Kipf et al. Semi-Supervised Classification with Graph Convolutional Networks, ICLR'17 | [Example] |
SGC | Wu et al. Simplifying Graph Convolutional Networks, ICLR'19 | [Example] |
GAT | Veličković et al. Graph Attention Networks, ICLR'18 | [Example] |
DAGNN | Liu et al. Towards Deeper Graph Neural Networks, KDD'20 | [Example] |
APPNP | Klicpera et al. Predict then Propagate: Graph Neural Networks meet Personalized PageRank, ICLR'19 | [Example] |
JKNet | Xu et al. Representation Learning on Graphs with Jumping Knowledge Networks, ICML'18 | [Example] |
TAGCN | Du et al. Topology Adaptive Graph Convolutional Networks, arXiv'17 | [Example] |
SSGC | Zhu et al. Simple Spectral Graph Convolution, ICLR'21 | [Example] |
DGC | Wang et al. Dissecting the Diffusion Process in Linear Graph Convolutional Networks, NeurIPS'21 | [Example] |
NLGCN, NLMLP, NLGAT | Liu et al. Non-Local Graph Neural Networks, TPAMI'22 | [Example] |
SpikingGCN | Zhu et al. Spiking Graph Convolutional Networks, IJCAI'22 | [Example] |
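All supervised models listed above plug into the same training pipeline from Get Started. A sketch swapping `GCN` for `SGC`, assuming the models share the `(num_features, num_classes)` constructor signature:

```python
from greatx.nn.models import SGC  # assumption: SGC shares GCN's constructor signature
from greatx.training import Trainer

# Same pipeline as Get Started, only the model class changes.
model = SGC(dataset.num_features, dataset.num_classes)
trainer = Trainer(model, device='cuda:0')
trainer.fit(data, mask=data.train_mask)
trainer.evaluate(data, mask=data.test_mask)
```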
##### Unsupervised/Self-supervised
Methods | Descriptions | Examples |
---|---|---|
DGI | Veličković et al. Deep Graph Infomax, ICLR'19 | [Example] |
GRACE | Zhu et al. Deep Graph Contrastive Representation Learning, ICML'20 | [Example] |
CCA-SSG | Zhang et al. From Canonical Correlation Analysis to Self-supervised Graph Neural Networks, NeurIPS'21 | [Example] |
GGD | Zheng et al. Rethinking and Scaling Up Graph Contrastive Learning: An Extremely Efficient Approach with Group Discrimination, NeurIPS'22 | [Example] |
#### Techniques Against Adversarial Attacks
More details of the literature and official code can be found at Awesome Graph Adversarial Learning.
#### Techniques Against Inherent Noise
Methods | Descriptions | Examples |
---|---|---|
DropEdge | Rong et al. DropEdge: Towards Deep Graph Convolutional Networks on Node Classification, ICLR'20 | [Example] |
DropNode | You et al. Graph Contrastive Learning with Augmentations, NeurIPS'20 | [Example] |
DropPath | Li et al. MaskGAE: Masked Graph Modeling Meets Graph Autoencoders, arXiv'22 | [Example] |
FeaturePropagation | Rossi et al. On the Unreasonable Effectiveness of Feature Propagation in Learning on Graphs with Missing Node Features, LoG'22 | [Example] |
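As an illustration of the idea behind DropEdge (using PyG's own `dropout_adj` utility rather than GreatX's API), a fraction of edges is randomly removed at each training step to regularize against noisy structure:

```python
from torch_geometric.utils import dropout_adj

# Randomly drop 20% of edges during training (DropEdge-style augmentation);
# at inference time, the full edge_index is used instead.
edge_index, _ = dropout_adj(data.edge_index, p=0.2, training=True)
```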
#### Miscellaneous
Methods | Descriptions | Examples |
---|---|---|
Centered Kernel Alignment (CKA) | Nguyen et al. Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth, ICLR'21 | [Example] |
## ❓ Known Issues
Untargeted attacks suffer from performance degradation (as also observed in DeepRobust) when a validation set is used during training with model picking. This phenomenon has also been discussed in *Black-box Gradient Attack on Graph Neural Networks: Deeper Insights in Graph-based Attack and Defense*.