GreatX: Graph Reliability Toolbox

<p align="center"> <img width = "600" height = "150" src="./imgs/greatx.png" alt="banner"/> <br/> </p> <p align="center"><strong>GreatX is great!</strong></p> <p align=center> <a href="https://greatx.readthedocs.io/en/latest/"> [Documentation] </a> | <a href="https://github.com/EdisonLeeeee/GreatX/blob/master/examples"> [Examples] </a> </p> <p align=center> <a href="https://www.python.org/downloads/release/python-370/"> <img src="https://img.shields.io/badge/Python->=3.7-3776AB?logo=python" alt="Python"> </a> <a href="https://github.com/pytorch/pytorch"> <img src="https://img.shields.io/badge/PyTorch->=1.8-FF6F00?logo=pytorch" alt="pytorch"> </a> <a href="https://pypi.org/project/greatx/"> <img src="https://badge.fury.io/py/greatx.svg" alt="pypi"> </a> <a href="https://github.com/EdisonLeeeee/GreatX/blob/master/LICENSE"> <img src="https://img.shields.io/github/license/EdisonLeeeee/GreatX" alt="license"> </a> <a href="https://github.com/EdisonLeeeee/GreatX/blob/master/CONTRIBUTING.md"> <img src="https://img.shields.io/badge/Contributions-Welcome-278ea5" alt="Contrib"/> </a> <a href="https://greatx.readthedocs.io/en/latest"> <img src="https://readthedocs.org/projects/greatx/badge/?version=latest" alt="docs"> </a> </p>

❓ What is "Reliability" on Graphs?


"Reliability" on graphs refers to robustness against the following threats:

For more details, please refer to our paper *Recent Advances in Reliable Deep Graph Learning: Inherent Noise, Distribution Shift, and Adversarial Attack*.

💨 News

NOTE: GreatX is still at an early stage of development, and the API is likely to keep changing. If you are interested in this project, don't hesitate to contact me or open a PR directly.

🚀 Installation

Please make sure you have installed PyTorch and PyTorch Geometric (PyG).

```bash
# Coming soon
pip install -U greatx
```

or

```bash
# Recommended
git clone https://github.com/EdisonLeeeee/GreatX.git && cd GreatX
pip install -e . --verbose
```

where `-e` means "editable" mode, so you don't have to reinstall the package every time you make changes.
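
To quickly verify the installation, a minimal check like the following should run without errors (it only assumes that PyTorch, PyG, and GreatX are importable):

```python
# Minimal sanity check for the installation.
import torch
import torch_geometric
import greatx  # noqa: F401  (import check only)

print("PyTorch:", torch.__version__)
print("PyG:", torch_geometric.__version__)
print("CUDA available:", torch.cuda.is_available())
```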

⚡ Get Started

Assume that you have a `torch_geometric.data.Data` instance `data` that describes your graph.
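
If you do not have one handy, a minimal toy `Data` object can be built by hand; the tensors below are illustrative placeholders, not part of the GreatX API:

```python
import torch
from torch_geometric.data import Data

# A tiny toy graph: 4 nodes, 16-dimensional features, 2 classes,
# with each undirected edge stored as two directed edges.
x = torch.randn(4, 16)                          # node features [num_nodes, num_features]
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]]) # COO edge indices [2, num_edges]
y = torch.tensor([0, 1, 0, 1])                  # node labels

data = Data(x=x, edge_index=edge_index, y=y)
data.train_mask = torch.tensor([True, True, False, False])
data.test_mask = torch.tensor([False, False, True, True])
```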

How quickly can you train and evaluate your own GNN?

Take GCN as an example:

```python
from greatx.nn.models import GCN
from greatx.training import Trainer
from torch_geometric.datasets import Planetoid

# Any PyG dataset is available!
dataset = Planetoid(root='.', name='Cora')
data = dataset[0]

model = GCN(dataset.num_features, dataset.num_classes)
trainer = Trainer(model, device='cuda:0')  # or 'cpu'
trainer.fit(data, mask=data.train_mask)
trainer.evaluate(data, mask=data.test_mask)
```
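
The Trainer is model-agnostic, so other architectures listed below (e.g., SGC or GAT) can be swapped in without changing the training loop. A minimal sketch, assuming SGC exposes the same `(in_features, out_features)` constructor as GCN:

```python
from greatx.nn.models import SGC  # assumed to follow the same constructor as GCN

model = SGC(dataset.num_features, dataset.num_classes)
trainer = Trainer(model, device='cuda:0')
trainer.fit(data, mask=data.train_mask)
trainer.evaluate(data, mask=data.test_mask)
```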

A simple targeted manipulation attack

```python
from greatx.attack.targeted import RandomAttack

attacker = RandomAttack(data)
attacker.attack(1, num_budgets=3)   # attack target node 1 with a budget of 3 edge flips
attacked_data = attacker.data()     # the perturbed graph
edge_flips = attacker.edge_flips()  # the edges that were flipped
```
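
To see the effect of the attack, the perturbed graph returned by `attacker.data()` can be re-evaluated with the Trainer from the example above. A hedged sketch; whether `Trainer.evaluate` accepts an arbitrary index tensor as `mask` is an assumption:

```python
import torch

target_mask = torch.tensor([1])  # assumption: an index tensor selecting the target node works as a mask
print(trainer.evaluate(data, mask=target_mask))           # clean graph
print(trainer.evaluate(attacked_data, mask=target_mask))  # attacked graph
```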

A simple untargeted (non-targeted) manipulation attack

```python
from greatx.attack.untargeted import RandomAttack

attacker = RandomAttack(data)
attacker.attack(num_budgets=0.05)   # perturb 5% of the edges in the graph
attacked_data = attacker.data()
edge_flips = attacker.edge_flips()
```
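
The same before/after comparison works at graph scale, reusing the Trainer and test mask from the Get Started example:

```python
# Compare test accuracy on the clean graph vs. the perturbed graph.
clean_metrics = trainer.evaluate(data, mask=data.test_mask)
attacked_metrics = trainer.evaluate(attacked_data, mask=data.test_mask)
print(clean_metrics, attacked_metrics)
```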

👀 Implementations

In detail, the following methods are currently implemented:

⚔ Adversarial Attack

Graph Manipulation Attack (GMA)

Targeted Attack

| Methods | Descriptions | Examples |
| ------- | ------------ | -------- |
| RandomAttack | A simple random method that chooses edges to flip randomly. | [Example] |
| DICEAttack | Waniek et al. Hiding Individuals and Communities in a Social Network, Nature Human Behavior'16 | [Example] |
| Nettack | Zügner et al. Adversarial Attacks on Neural Networks for Graph Data, KDD'18 | [Example] |
| FGAttack | Chen et al. Fast Gradient Attack on Network Embedding, arXiv'18 | [Example] |
| GFAttack | Chang et al. A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models, AAAI'20 | [Example] |
| IGAttack | Wu et al. Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, IJCAI'19 | [Example] |
| SGAttack | Li et al. Adversarial Attack on Large Scale Graph, TKDE'21 | [Example] |
| PGDAttack | Xu et al. Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective, IJCAI'19 | [Example] |

Untargeted Attack

| Methods | Descriptions | Examples |
| ------- | ------------ | -------- |
| RandomAttack | A simple random method that chooses edges to flip randomly. | [Example] |
| DICEAttack | Waniek et al. Hiding Individuals and Communities in a Social Network, Nature Human Behavior'16 | [Example] |
| FGAttack | Chen et al. Fast Gradient Attack on Network Embedding, arXiv'18 | [Example] |
| Metattack | Zügner et al. Adversarial Attacks on Graph Neural Networks via Meta Learning, ICLR'19 | [Example] |
| IGAttack | Wu et al. Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, IJCAI'19 | [Example] |
| PGDAttack | Xu et al. Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective, IJCAI'19 | [Example] |

Graph Injection Attack (GIA)

| Methods | Descriptions | Examples |
| ------- | ------------ | -------- |
| RandomInjection | A simple random method that chooses nodes to inject randomly. | [Example] |
| AdvInjection | The 2nd place solution of KDD Cup 2020, team: ADVERSARIES. | [Example] |

Graph Universal Attack (GUA)

Graph Backdoor Attack (GBA)

| Methods | Descriptions | Examples |
| ------- | ------------ | -------- |
| LGCBackdoor | Chen et al. Neighboring Backdoor Attacks on Graph Convolutional Network, arXiv'22 | [Example] |
| FGBackdoor | Chen et al. Neighboring Backdoor Attacks on Graph Convolutional Network, arXiv'22 | [Example] |

Enhancement Techniques and Corresponding Defenses

Standard GNNs (without defense)

Supervised

| Methods | Descriptions | Examples |
| ------- | ------------ | -------- |
| GCN | Kipf et al. Semi-Supervised Classification with Graph Convolutional Networks, ICLR'17 | [Example] |
| SGC | Wu et al. Simplifying Graph Convolutional Networks, ICML'19 | [Example] |
| GAT | Veličković et al. Graph Attention Networks, ICLR'18 | [Example] |
| DAGNN | Liu et al. Towards Deeper Graph Neural Networks, KDD'20 | [Example] |
| APPNP | Klicpera et al. Predict then Propagate: Graph Neural Networks meet Personalized PageRank, ICLR'19 | [Example] |
| JKNet | Xu et al. Representation Learning on Graphs with Jumping Knowledge Networks, ICML'18 | [Example] |
| TAGCN | Du et al. Topology Adaptive Graph Convolutional Networks, arXiv'17 | [Example] |
| SSGC | Zhu et al. Simple Spectral Graph Convolution, ICLR'21 | [Example] |
| DGC | Wang et al. Dissecting the Diffusion Process in Linear Graph Convolutional Networks, NeurIPS'21 | [Example] |
| NLGCN, NLMLP, NLGAT | Liu et al. Non-Local Graph Neural Networks, TPAMI'22 | [Example] |
| SpikingGCN | Zhu et al. Spiking Graph Convolutional Networks, IJCAI'22 | [Example] |

Unsupervised/Self-supervised

| Methods | Descriptions | Examples |
| ------- | ------------ | -------- |
| DGI | Veličković et al. Deep Graph Infomax, ICLR'19 | [Example] |
| GRACE | Zhu et al. Deep Graph Contrastive Representation Learning, ICML'20 | [Example] |
| CCA-SSG | Zhang et al. From Canonical Correlation Analysis to Self-supervised Graph Neural Networks, NeurIPS'21 | [Example] |
| GGD | Zheng et al. Rethinking and Scaling Up Graph Contrastive Learning: An Extremely Efficient Approach with Group Discrimination, NeurIPS'22 | [Example] |

Techniques Against Adversarial Attacks

| Methods | Descriptions | Examples |
| ------- | ------------ | -------- |
| MedianGCN | Chen et al. Understanding Structural Vulnerability in Graph Convolutional Networks, IJCAI'21 | [Example] |
| RobustGCN | Zhu et al. Robust Graph Convolutional Networks Against Adversarial Attacks, KDD'19 | [Example] |
| SoftMedianGCN | Geisler et al. Reliable Graph Neural Networks via Robust Aggregation, NeurIPS'20<br>Geisler et al. Robustness of Graph Neural Networks at Scale, NeurIPS'21 | [Example] |
| ElasticGNN | Liu et al. Elastic Graph Neural Networks, ICML'21 | [Example] |
| AirGNN | Liu et al. Graph Neural Networks with Adaptive Residual, NeurIPS'21 | [Example] |
| SimPGCN | Jin et al. Node Similarity Preserving Graph Convolutional Networks, WSDM'21 | [Example] |
| SAT | Li et al. Spectral Adversarial Training for Robust Graph Neural Network, arXiv'22 | [Example] |
| JaccardPurification | Wu et al. Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, IJCAI'19 | [Example] |
| SVDPurification | Entezari et al. All You Need Is Low (Rank): Defending Against Adversarial Attacks on Graphs, WSDM'20 | [Example] |
| GNNGUARD | Zhang et al. GNNGUARD: Defending Graph Neural Networks against Adversarial Attacks, NeurIPS'20 | [Example] |
| GUARD | Li et al. GUARD: Graph Universal Adversarial Defense, arXiv'22 | [Example] |
| RTGCN | Wu et al. Robust Tensor Graph Convolutional Networks via T-SVD based Graph Augmentation, KDD'22 | [Example] |
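
Most of these defenses are drop-in model replacements or graph purification steps, so in principle they plug into the same Trainer workflow shown in Get Started. A purely illustrative sketch; the module path and constructor signature of `RobustGCN` below are assumptions, not verified API:

```python
from greatx.nn.models import RobustGCN  # assumed location, mirroring GCN

# Train a (hopefully) more robust victim model on the attacked graph.
robust_model = RobustGCN(dataset.num_features, dataset.num_classes)
robust_trainer = Trainer(robust_model, device='cuda:0')
robust_trainer.fit(attacked_data, mask=data.train_mask)
robust_trainer.evaluate(attacked_data, mask=data.test_mask)
```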

More details on the literature and the official code can be found at Awesome Graph Adversarial Learning.

Techniques Against Inherent Noise

| Methods | Descriptions | Examples |
| ------- | ------------ | -------- |
| DropEdge | Rong et al. DropEdge: Towards Deep Graph Convolutional Networks on Node Classification, ICLR'20 | [Example] |
| DropNode | You et al. Graph Contrastive Learning with Augmentations, NeurIPS'20 | [Example] |
| DropPath | Li et al. MaskGAE: Masked Graph Modeling Meets Graph Autoencoders, arXiv'22 | [Example] |
| FeaturePropagation | Rossi et al. On the Unreasonable Effectiveness of Feature Propagation in Learning on Graphs with Missing Node Features, LoG'22 | [Example] |

Miscellaneous

| Methods | Descriptions | Examples |
| ------- | ------------ | -------- |
| Centered Kernel Alignment (CKA) | Nguyen et al. Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth, ICLR'21 | [Example] |

❓ Known Issues