⚔🛡 Awesome Graph Adversarial Learning
<img src="https://img.shields.io/badge/Contributions-Welcome-278ea5" alt="Contrib"/> <img src="https://img.shields.io/badge/Number%20of%20Papers-416-FF6F00" alt="PaperNum"/>
<a class="toc" id="table-of-contents"></a>
- ⚔🛡 Awesome Graph Adversarial Learning
- 👀Quick Look
- ⚔Attack
- 🛡Defense
- 🔐Certification
- ⚖Stability
- 🚀Others
- 📃Survey
- ⚙Toolbox
- 🔗Resource
This repository collects attack-related papers, defense-related papers, robustness certification papers, etc., ranging from 2017 to 2023. If you find this repo useful, please cite: A Survey of Adversarial Learning on Graph, arXiv'20, Link
@article{chen2020survey,
  title={A Survey of Adversarial Learning on Graph},
  author={Chen, Liang and Li, Jintang and Peng, Jiaying and Xie, Tao and Cao, Zengxu and Xu, Kun and He, Xiangnan and Zheng, Zibin and Wu, Bingzhe},
  journal={arXiv preprint arXiv:2003.05730},
  year={2020}
}
👀Quick Look
The papers in this repo can be browsed categorized or sorted in the following ways:
| By Alphabet | By Year | By Venue | Papers with Code |
If you want a quick look at the papers updated recently (within the last 30 days), you can refer to 📍this.
⚔Attack
2023
- Revisiting Graph Adversarial Attack and Defense From a Data Distribution Perspective, 📝ICLR, :octocat:Code
- Let Graph be the Go Board: Gradient-free Node Injection Attack for Graph Neural Networks via Reinforcement Learning, 📝AAAI, :octocat:Code
- GUAP: Graph Universal Attack Through Adversarial Patching, 📝arXiv, :octocat:Code
- Node Injection for Class-specific Network Poisoning, 📝arXiv, :octocat:Code
- Unnoticeable Backdoor Attacks on Graph Neural Networks, 📝WWW, :octocat:Code
- A semantic backdoor attack against Graph Convolutional Networks, 📝arXiv
2022
- Adversarial Attack on Graph Neural Networks as An Influence Maximization Problem, 📝WSDM, :octocat:Code
- Inference Attacks Against Graph Neural Networks, 📝USENIX Security, :octocat:Code
- Model Stealing Attacks Against Inductive Graph Neural Networks, 📝IEEE Symposium on Security and Privacy, :octocat:Code
- Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation, 📝WWW, :octocat:Code
- Neighboring Backdoor Attacks on Graph Convolutional Network, 📝arXiv, :octocat:Code
- Understanding and Improving Graph Injection Attack by Promoting Unnoticeability, 📝ICLR, :octocat:Code
- Blindfolded Attackers Still Threatening: Strict Black-Box Adversarial Attacks on Graphs, 📝AAAI, :octocat:Code
- More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks, 📝arXiv
- Black-box Node Injection Attack for Graph Neural Networks, 📝arXiv, :octocat:Code
- Interpretable and Effective Reinforcement Learning for Attacking against Graph-based Rumor Detection, 📝arXiv
- Projective Ranking-based GNN Evasion Attacks, 📝arXiv
- GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation, 📝arXiv
- Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization, 📝Asia CCS, :octocat:Code
- Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees, 📝CVPR, :octocat:Code
- Transferable Graph Backdoor Attack, 📝RAID, :octocat:Code
- Adversarial Robustness of Graph-based Anomaly Detection, 📝arXiv
- Label specificity attack: Change your label as I want, 📝IJIS
- AdverSparse: An Adversarial Attack Framework for Deep Spatial-Temporal Graph Neural Networks, 📝ICASSP
- Surrogate Representation Learning with Isometric Mapping for Gray-box Graph Adversarial Attacks, 📝WSDM
- Cluster Attack: Query-based Adversarial Attacks on Graphs with Graph-Dependent Priors, 📝IJCAI, :octocat:Code
- Label-Only Membership Inference Attack against Node-Level Graph Neural Networks, 📝arXiv
- Adversarial Camouflage for Node Injection Attack on Graphs, 📝arXiv
- Are Gradients on Graph Structure Reliable in Gray-box Attacks?, 📝CIKM, :octocat:Code
- Graph Structural Attack by Perturbing Spectral Distance, 📝KDD
- What Does the Gradient Tell When Attacking the Graph Structure, 📝arXiv
- BinarizedAttack: Structural Poisoning Attacks to Graph-based Anomaly Detection, 📝ICDM, :octocat:Code
- Model Inversion Attacks against Graph Neural Networks, 📝TKDE
- Sparse Vicious Attacks on Graph Neural Networks, 📝arXiv, :octocat:Code
- Poisoning GNN-based Recommender Systems with Generative Surrogate-based Attacks, 📝ACM TIS
- Dealing with the unevenness: deeper insights in graph-based attack and defense, 📝Machine Learning
- Membership Inference Attacks Against Robust Graph Neural Network, 📝CSS
- Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks, 📝ICDM, :octocat:Code
- Revisiting Item Promotion in GNN-based Collaborative Filtering: A Masked Targeted Topological Attack Perspective, 📝arXiv
- Link-Backdoor: Backdoor Attack on Link Prediction via Node Injection, 📝arXiv, :octocat:Code
- Private Graph Extraction via Feature Explanations, 📝arXiv
- Towards Secrecy-Aware Attacks Against Trust Prediction in Signed Graphs, 📝arXiv
- Camouflaged Poisoning Attack on Graph Neural Networks, 📝ICDM
- LOKI: A Practical Data Poisoning Attack Framework against Next Item Recommendations, 📝TKDE
- Adversarial for Social Privacy: A Poisoning Strategy to Degrade User Identity Linkage, 📝arXiv
- Exploratory Adversarial Attacks on Graph Neural Networks for Semi-Supervised Node Classification, 📝Pattern Recognition
- GANI: Global Attacks on Graph Neural Networks via Imperceptible Node Injections, 📝arXiv, :octocat:Code
- Motif-Backdoor: Rethinking the Backdoor Attack on Graph Neural Networks via Motifs, 📝arXiv
- Are Defenses for Graph Neural Networks Robust?, 📝NeurIPS, :octocat:Code
- Adversarial Label Poisoning Attack on Graph Neural Networks via Label Propagation, 📝ECCV
- Imperceptible Adversarial Attacks on Discrete-Time Dynamic Graph Models, 📝NeurIPS
- Towards Reasonable Budget Allocation in Untargeted Graph Structure Attacks via Gradient Debias, 📝NeurIPS, :octocat:Code
- Adversary for Social Good: Leveraging Attribute-Obfuscating Attack to Protect User Privacy on Social Networks, 📝SecureComm
2021
- Stealing Links from Graph Neural Networks, 📝USENIX Security
- PATHATTACK: Attacking Shortest Paths in Complex Networks, 📝arXiv
- Structack: Structure-based Adversarial Attacks on Graph Neural Networks, 📝ACM Hypertext, :octocat:Code
- Optimal Edge Weight Perturbations to Attack Shortest Paths, 📝arXiv
- GReady for Emerging Threats to Recommender Systems? A Graph Convolution-based Generative Shilling Attack, 📝Information Sciences
- Graph Adversarial Attack via Rewiring, 📝KDD, :octocat:Code
- Membership Inference Attack on Graph Neural Networks, 📝arXiv
- Graph Backdoor, 📝USENIX Security
- TDGIA: Effective Injection Attacks on Graph Neural Networks, 📝KDD, :octocat:Code
- Adversarial Attack Framework on Graph Embedding Models with Limited Knowledge, 📝arXiv
- Adversarial Attack on Large Scale Graph, 📝TKDE, :octocat:Code
- Black-box Gradient Attack on Graph Neural Networks: Deeper Insights in Graph-based Attack and Defense, 📝arXiv
- Joint Detection and Localization of Stealth False Data Injection Attacks in Smart Grids using Graph Neural Networks, 📝arXiv
- Universal Spectral Adversarial Attacks for Deformable Shapes, 📝CVPR
- SAGE: Intrusion Alert-driven Attack Graph Extractor, 📝KDD Workshop, :octocat:Code
- Adversarial Diffusion Attacks on Graph-based Traffic Prediction Models, 📝arXiv, :octocat:Code
- VIKING: Adversarial Attack on Network Embeddings via Supervised Network Poisoning, 📝PAKDD, :octocat:Code
- Explainability-based Backdoor Attacks Against Graph Neural Networks, 📝WiseML@WiSec
- GraphAttacker: A General Multi-Task GraphAttack Framework, 📝arXiv, :octocat:Code
- Attacking Graph Neural Networks at Scale, 📝AAAI workshop
- Node-Level Membership Inference Attacks Against Graph Neural Networks, 📝arXiv
- Reinforcement Learning For Data Poisoning on Graph Neural Networks, 📝arXiv
- DeHiB: Deep Hidden Backdoor Attack on Semi-Supervised Learning via Adversarial Perturbation, 📝AAAI
- Graphfool: Targeted Label Adversarial Attack on Graph Embedding, 📝arXiv
- Towards Revealing Parallel Adversarial Attack on Politician Socialnet of Graph Structure, 📝Security and Communication Networks
- Network Embedding Attack: An Euclidean Distance Based Method, 📝MDATA
- Preserve, Promote, or Attack? GNN Explanation via Topology Perturbation, 📝arXiv
- Jointly Attacking Graph Neural Network and its Explanations, 📝arXiv
- Graph Stochastic Neural Networks for Semi-supervised Learning, 📝arXiv, :octocat:Code
- Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings, 📝arXiv, :octocat:Code
- Single-Node Attack for Fooling Graph Neural Networks, 📝KDD Workshop, :octocat:Code
- The Robustness of Graph k-shell Structure under Adversarial Attacks, 📝arXiv
- Poisoning Knowledge Graph Embeddings via Relation Inference Patterns, 📝ACL, :octocat:Code
- A Hard Label Black-box Adversarial Attack Against Graph Neural Networks, 📝CCS
- GNNUnlock: Graph Neural Networks-based Oracle-less Unlocking Scheme for Provably Secure Logic Locking, 📝DATE Conference
- Single Node Injection Attack against Graph Neural Networks, 📝CIKM, :octocat:Code
- Spatially Focused Attack against Spatiotemporal Graph Neural Networks, 📝arXiv
- Derivative-free optimization adversarial attacks for graph convolutional networks, 📝PeerJ
- Projective Ranking: A Transferable Evasion Attack Method on Graph Neural Networks, 📝CIKM
- Time-aware Gradient Attack on Dynamic Network Link Prediction, 📝TKDE
- Graph-Fraudster: Adversarial Attacks on Graph Neural Network Based Vertical Federated Learning, 📝arXiv
- Adapting Membership Inference Attacks to GNN for Graph Classification: Approaches and Implications, 📝ICDM, :octocat:Code
- Watermarking Graph Neural Networks based on Backdoor Attacks, 📝arXiv
- Robustness of Graph Neural Networks at Scale, 📝NeurIPS, :octocat:Code
- Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness, 📝NeurIPS
- Graph Universal Adversarial Attacks: A Few Bad Actors Ruin Graph Learning Models, 📝IJCAI, :octocat:Code
- Adversarial Attacks on Graph Classification via Bayesian Optimisation, 📝NeurIPS, :octocat:Code
- Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods, 📝EMNLP, :octocat:Code
- COREATTACK: Breaking Up the Core Structure of Graphs, 📝arXiv
- UNTANGLE: Unlocking Routing and Logic Obfuscation Using Graph Neural Networks-based Link Prediction, 📝ICCAD, :octocat:Code
- GraphMI: Extracting Private Graph Data from Graph Neural Networks, 📝IJCAI, :octocat:Code
- Structural Attack against Graph Based Android Malware Detection, 📝CCS
- Adversarial Attack against Cross-lingual Knowledge Graph Alignment, 📝EMNLP
- FHA: Fast Heuristic Attack Against Graph Convolutional Networks, 📝ICDS
- Task and Model Agnostic Adversarial Attack on Graph Neural Networks, 📝arXiv
- How Members of Covert Networks Conceal the Identities of Their Leaders, 📝ACM TIST
- Revisiting Adversarial Attacks on Graph Neural Networks for Graph Classification, 📝arXiv
2020
- A Graph Matching Attack on Privacy-Preserving Record Linkage, 📝CIKM
- Semantic-preserving Reinforcement Learning Attack Against Graph Neural Networks for Malware Detection, 📝arXiv
- Adaptive Adversarial Attack on Graph Embedding via GAN, 📝SocialSec
- Scalable Adversarial Attack on Graph Neural Networks with Alternating Direction Method of Multipliers, 📝arXiv
- One Vertex Attack on Graph Neural Networks-based Spatiotemporal Forecasting, 📝ICLR OpenReview
- Near-Black-Box Adversarial Attacks on Graph Neural Networks as An Influence Maximization Problem, 📝ICLR OpenReview
- Adversarial Attacks on Deep Graph Matching, 📝NeurIPS
- Attacking Graph-Based Classification without Changing Existing Connections, 📝ACSAC
- Cross Entropy Attack on Deep Graph Infomax, 📝IEEE ISCAS
- Learning to Deceive Knowledge Graph Augmented Models via Targeted Perturbation, 📝ICLR, :octocat:Code
- Towards More Practical Adversarial Attacks on Graph Neural Networks, 📝NeurIPS, :octocat:Code
- Adversarial Label-Flipping Attack and Defense for Graph Neural Networks, 📝ICDM, :octocat:Code
- Exploratory Adversarial Attacks on Graph Neural Networks, 📝ICDM, :octocat:Code
- A Targeted Universal Attack on Graph Convolutional Network, 📝arXiv, :octocat:Code
- Query-free Black-box Adversarial Attacks on Graphs, 📝arXiv
- Reinforcement Learning-based Black-Box Evasion Attacks to Link Prediction in Dynamic Graphs, 📝arXiv
- Efficient Evasion Attacks to Graph Neural Networks via Influence Function, 📝arXiv
- Backdoor Attacks to Graph Neural Networks, 📝SACMAT, :octocat:Code
- Link Prediction Adversarial Attack Via Iterative Gradient Attack, 📝IEEE Trans
- Adversarial Attack on Hierarchical Graph Pooling Neural Networks, 📝arXiv
- Adversarial Attack on Community Detection by Hiding Individuals, 📝WWW, :octocat:Code
- Manipulating Node Similarity Measures in Networks, 📝AAMAS
- A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models, 📝AAAI, :octocat:Code
- Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks, 📝BigData
- Adversarial Attacks on Graph Neural Networks via Node Injections: A Hierarchical Reinforcement Learning Approach, 📝WWW
- An Efficient Adversarial Attack on Graph Structured Data, 📝IJCAI Workshop
- Practical Adversarial Attacks on Graph Neural Networks, 📝ICML Workshop
- Adversarial Attacks on Graph Neural Networks: Perturbations and their Patterns, 📝TKDD
- Adversarial Attacks on Link Prediction Algorithms Based on Graph Neural Networks, 📝Asia CCS
- Scalable Attack on Graph Data by Injecting Vicious Nodes, 📝ECML-PKDD, :octocat:Code
- Attackability Characterization of Adversarial Evasion Attack on Discrete Data, 📝KDD
- MGA: Momentum Gradient Attack on Network, 📝arXiv
- Adversarial Attacks to Scale-Free Networks: Testing the Robustness of Physical Criteria, 📝arXiv
- Adversarial Perturbations of Opinion Dynamics in Networks, 📝arXiv
- Network disruption: maximizing disagreement and polarization in social networks, 📝arXiv, :octocat:Code
- Adversarial attack on BC classification for scale-free networks, 📝AIP Chaos
2019
- Attacking Graph Convolutional Networks via Rewiring, 📝arXiv
- Unsupervised Euclidean Distance Attack on Network Embedding, 📝arXiv
- Structured Adversarial Attack Towards General Implementation and Better Interpretability, 📝ICLR, :octocat:Code
- Generalizable Adversarial Attacks with Latent Variable Perturbation Modelling, 📝arXiv
- Vertex Nomination, Consistent Estimation, and Adversarial Modification, 📝arXiv
- PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks, 📝ICLR, :octocat:Code
- Network Structural Vulnerability: A Multi-Objective Attacker Perspective, 📝IEEE Trans
- Multiscale Evolutionary Perturbation Attack on Community Detection, 📝arXiv
- αCyber: Enhancing Robustness of Android Malware Detection System against Adversarial Attacks on Heterogeneous Graph based Model, 📝CIKM
- Adversarial Attacks on Node Embeddings via Graph Poisoning, 📝ICML, :octocat:Code
- GA Based Q-Attack on Community Detection, 📝TCSS
- Data Poisoning Attack against Knowledge Graph Embedding, 📝IJCAI
- Adversarial Attacks on Graph Neural Networks via Meta Learning, 📝ICLR, :octocat:Code
- Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective, 📝IJCAI, :octocat:Code
- Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, 📝IJCAI, :octocat:Code
- A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning, 📝NeurIPS, :octocat:Code
- Attacking Graph-based Classification via Manipulating the Graph Structure, 📝CCS
2018
- Fake Node Attacks on Graph Convolutional Networks, 📝arXiv
- Data Poisoning Attack against Unsupervised Node Embedding Methods, 📝arXiv
- Fast Gradient Attack on Network Embedding, 📝arXiv
- Attack Tolerance of Link Prediction Algorithms: How to Hide Your Relations in a Social Network, 📝arXiv
- Adversarial Attacks on Neural Networks for Graph Data, 📝KDD, :octocat:Code
- Hiding Individuals and Communities in a Social Network, 📝Nature Human Behavior
- Attacking Similarity-Based Link Prediction in Social Networks, 📝AAMAS
- Adversarial Attack on Graph Structured Data, 📝ICML, :octocat:Code
2017
- Practical Attacks Against Graph-based Clustering, 📝CCS
- Adversarial Sets for Regularising Neural Link Predictors, 📝UAI, :octocat:Code
🛡Defense
2023
- Adversarial Training for Graph Neural Networks: Pitfalls, Solutions, and New Directions, 📝NeurIPS, :octocat:Code
- ASGNN: Graph Neural Networks with Adaptive Structure, 📝ICLR OpenReview
- Empowering Graph Representation Learning with Test-Time Graph Transformation, 📝ICLR, :octocat:Code
- Robust Training of Graph Neural Networks via Noise Governance, 📝WSDM, :octocat:Code
- Self-Supervised Graph Structure Refinement for Graph Neural Networks, 📝WSDM, :octocat:Code
- Revisiting Robustness in Graph Machine Learning, 📝ICLR, :octocat:Code
- Robust Mid-Pass Filtering Graph Convolutional Networks, 📝WWW
- Towards Robust Graph Neural Networks via Adversarial Contrastive Learning, 📝BigData
2022
- Unsupervised Adversarially-Robust Representation Learning on Graphs, 📝AAAI, :octocat:Code
- Towards Robust Graph Neural Networks for Noisy Graphs with Sparse Labels, 📝WSDM, :octocat:Code
- Mind Your Solver! On Adversarial Attack and Defense for Combinatorial Optimization, 📝arXiv, :octocat:Code
- Learning Robust Representation through Graph Adversarial Contrastive Learning, 📝arXiv
- Graph Neural Network for Local Corruption Recovery, 📝arXiv, :octocat:Code
- Robust Heterogeneous Graph Neural Networks against Adversarial Attacks, 📝AAAI
- How Does Bayesian Noisy Self-Supervision Defend Graph Convolutional Networks?, 📝Neural Processing Letters
- Defending Graph Convolutional Networks against Dynamic Graph Perturbations via Bayesian Self-supervision, 📝AAAI, :octocat:Code
- SimGRACE: A Simple Framework for Graph Contrastive Learning without Data Augmentation, 📝WWW, :octocat:Code
- Exploring High-Order Structure for Robust Graph Structure Learning, 📝arXiv
- GUARD: Graph Universal Adversarial Defense, 📝arXiv, :octocat:Code
- Detecting Topology Attacks against Graph Neural Networks, 📝arXiv
- LPGNet: Link Private Graph Networks for Node Classification, 📝arXiv
- Bayesian Robust Graph Contrastive Learning, 📝arXiv, :octocat:Code
- Reliable Representations Make A Stronger Defender: Unsupervised Structure Refinement for Robust GNN, 📝KDD, :octocat:Code
- Robust Graph Representation Learning for Local Corruption Recovery, 📝ICML workshop
- Appearance and Structure Aware Robust Deep Visual Graph Matching: Attack, Defense and Beyond, 📝CVPR, :octocat:Code
- Large-Scale Privacy-Preserving Network Embedding against Private Link Inference Attacks, 📝arXiv
- Robust Graph Neural Networks via Ensemble Learning, 📝Mathematics
- AN-GCN: An Anonymous Graph Convolutional Network Against Edge-Perturbing Attacks, 📝IEEE TNNLS
- How does Heterophily Impact Robustness of Graph Neural Networks? Theoretical Connections and Practical Implications, 📝KDD, :octocat:Code
- Robust Graph Neural Networks using Weighted Graph Laplacian, 📝SPCOM, :octocat:Code
- ARIEL: Adversarial Graph Contrastive Learning, 📝arXiv
- Robust Tensor Graph Convolutional Networks via T-SVD based Graph Augmentation, 📝KDD, :octocat:Code
- NOSMOG: Learning Noise-robust and Structure-aware MLPs on Graphs, 📝arXiv
- Robust Node Classification on Graphs: Jointly from Bayesian Label Transition and Topology-based Label Propagation, 📝CIKM, :octocat:Code
- On the Robustness of Graph Neural Diffusion to Topology Perturbations, 📝NeurIPS, :octocat:Code
- IoT-based Android Malware Detection Using Graph Neural Network With Adversarial Defense, 📝IEEE IOT
- Robust cross-network node classification via constrained graph mutual information, 📝KBS
- Defending Against Backdoor Attack on Graph Nerual Network by Explainability, 📝arXiv
- Towards an Optimal Asymmetric Graph Structure for Robust Semi-supervised Node Classification, 📝KDD
- FocusedCleaner: Sanitizing Poisoned Graphs for Robust GNN-based Node Classification, 📝arXiv
- EvenNet: Ignoring Odd-Hop Neighbors Improves Robustness of Graph Neural Networks, 📝NeurIPS, :octocat:Code
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation, 📝ECML-PKDD
- Spectral Adversarial Training for Robust Graph Neural Network, 📝TKDE, :octocat:Code
- On the Vulnerability of Graph Learning based Collaborative Filtering, 📝TIS
- GARNET: Reduced-Rank Topology Learning for Robust and Scalable Graph Neural Networks, 📝LoG, :octocat:Code
- You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets, 📝LoG, :octocat:Code
- Robust Graph Representation Learning via Predictive Coding, 📝arXiv
2021
- Learning to Drop: Robust Graph Neural Network via Topological Denoising, 📝WSDM, :octocat:Code
- How effective are Graph Neural Networks in Fraud Detection for Network Data?, 📝arXiv
- Graph Sanitation with Application to Node Classification, 📝arXiv
- Understanding Structural Vulnerability in Graph Convolutional Networks, 📝IJCAI, :octocat:Code
- A Robust and Generalized Framework for Adversarial Graph Embedding, 📝arXiv, :octocat:Code
- Integrated Defense for Resilient Graph Matching, 📝ICML
- Unveiling Anomalous Nodes Via Random Sampling and Consensus on Graphs, 📝ICASSP
- Robust Network Alignment via Attack Signal Scaling and Adversarial Perturbation Elimination, 📝WWW
- Information Obfuscation of Graph Neural Network, 📝ICML, :octocat:Code
- Improving Robustness of Graph Neural Networks with Heterophily-Inspired Designs, 📝arXiv
- On Generalization of Graph Autoencoders with Adversarial Training, 📝ECML
- DeepInsight: Interpretability Assisting Detection of Adversarial Samples on Graphs, 📝ECML
- Elastic Graph Neural Networks, 📝ICML, :octocat:Code
- Robust Counterfactual Explanations on Graph Neural Networks, 📝arXiv
- Node Similarity Preserving Graph Convolutional Networks, 📝WSDM, :octocat:Code
- Enhancing Robustness and Resilience of Multiplex Networks Against Node-Community Cascading Failures, 📝IEEE TSMC
- NetFense: Adversarial Defenses against Privacy Attacks on Neural Networks for Graph Data, 📝TKDE, :octocat:Code
- Robust Graph Learning Under Wasserstein Uncertainty, 📝arXiv
- Towards Robust Graph Contrastive Learning, 📝arXiv
- Expressive 1-Lipschitz Neural Networks for Robust Multiple Graph Learning against Adversarial Attacks, 📝ICML
- UAG: Uncertainty-Aware Attention Graph Neural Network for Defending Adversarial Attacks, 📝AAAI
- Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks, 📝AAAI
- Power up! Robust Graph Convolutional Network against Evasion Attacks based on Graph Powering, 📝AAAI, :octocat:Code
- Personalized privacy protection in social networks through adversarial modeling, 📝AAAI
- Interpretable Stability Bounds for Spectral Graph Filters, 📝arXiv
- Randomized Generation of Adversary-Aware Fake Knowledge Graphs to Combat Intellectual Property Theft, 📝AAAI
- Unified Robust Training for Graph Neural Networks against Label Noise, 📝arXiv
- An Introduction to Robust Graph Convolutional Networks, 📝arXiv
- E-GraphSAGE: A Graph Neural Network based Intrusion Detection System, 📝arXiv
- Spatio-Temporal Sparsification for General Robust Graph Convolution Networks, 📝arXiv
- Robust graph convolutional networks with directional graph adversarial training, 📝Applied Intelligence
- Detection and Defense of Topological Adversarial Attacks on Graphs, 📝AISTATS
- Unveiling the potential of Graph Neural Networks for robust Intrusion Detection, 📝arXiv, :octocat:Code
- Adversarial Robustness of Probabilistic Network Embedding for Link Prediction, 📝arXiv
- EGC2: Enhanced Graph Classification with Easy Graph Compression, 📝arXiv
- LinkTeller: Recovering Private Edges from Graph Neural Networks via Influence Analysis, 📝arXiv
- Structure-Aware Hierarchical Graph Pooling using Information Bottleneck, 📝IJCNN
- Mal2GCN: A Robust Malware Detection Approach Using Deep Graph Convolutional Networks With Non-Negative Weights, 📝arXiv
- CoG: a Two-View Co-training Framework for Defending Adversarial Attacks on Graph, 📝arXiv
- Releasing Graph Neural Networks with Differential Privacy Guarantees, 📝arXiv
- Speedup Robust Graph Structure Learning with Low-Rank Information, 📝CIKM
- A Lightweight Metric Defence Strategy for Graph Neural Networks Against Poisoning Attacks, 📝ICICS, :octocat:Code
- Node Feature Kernels Increase Graph Convolutional Network Robustness, 📝arXiv, :octocat:Code
- On the Relationship between Heterophily and Robustness of Graph Neural Networks, 📝arXiv
- Distributionally Robust Semi-Supervised Learning Over Graphs, 📝ICLR
- Robustness of Graph Neural Networks at Scale, 📝NeurIPS, :octocat:Code
- Graph Transplant: Node Saliency-Guided Graph Mixup with Local Structure Preservation, 📝arXiv
- Not All Low-Pass Filters are Robust in Graph Convolutional Networks, 📝NeurIPS, :octocat:Code
- Towards Robust Reasoning over Knowledge Graphs, 📝arXiv
- Robust Graph Neural Networks via Probabilistic Lipschitz Constraints, 📝arXiv
- Graph Neural Networks with Adaptive Residual, 📝NeurIPS, :octocat:Code
- Graph-based Adversarial Online Kernel Learning with Adaptive Embedding, 📝ICDM
- Graph Posterior Network: Bayesian Predictive Uncertainty for Node Classification, 📝NeurIPS, :octocat:Code
- Graph Neural Networks with Feature and Structure Aware Random Walk, 📝arXiv
- Topological Relational Learning on Graphs, 📝NeurIPS, :octocat:Code
2020
- Ricci-GNN: Defending Against Structural Attacks Through a Geometric Approach, 📝ICLR OpenReview
- Provable Overlapping Community Detection in Weighted Graphs, 📝NeurIPS
- Variational Inference for Graph Convolutional Networks in the Absence of Graph Data and Adversarial Settings, 📝NeurIPS, :octocat:Code
- Graph Random Neural Networks for Semi-Supervised Learning on Graphs, 📝NeurIPS, :octocat:Code
- Reliable Graph Neural Networks via Robust Aggregation, 📝NeurIPS, :octocat:Code
- Towards Robust Graph Neural Networks against Label Noise, 📝ICLR OpenReview
- Graph Adversarial Networks: Protecting Information against Adversarial Attacks, 📝ICLR OpenReview, :octocat:Code
- A Novel Defending Scheme for Graph-Based Classification Against Graph Structure Manipulating Attack, 📝SocialSec
- Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings, 📝NeurIPS, :octocat:Code
- Node Copying for Protection Against Graph Neural Network Topology Attacks, 📝arXiv
- Community detection in sparse time-evolving graphs with a dynamical Bethe-Hessian, 📝NeurIPS
- A Feature-Importance-Aware and Robust Aggregator for GCN, 📝CIKM, :octocat:Code
- Anti-perturbation of Online Social Networks by Graph Label Transition, 📝arXiv
- Graph Information Bottleneck, 📝NeurIPS, :octocat:Code
- Adversarial Detection on Graph Structured Data, 📝PPMLP
- Graph Contrastive Learning with Augmentations, 📝NeurIPS, :octocat:Code
- Learning Graph Embedding with Adversarial Training Methods, 📝IEEE Transactions on Cybernetics
- I-GCN: Robust Graph Convolutional Network via Influence Mechanism, 📝arXiv
- Adversary for Social Good: Protecting Familial Privacy through Joint Adversarial Attacks, 📝AAAI
- Smoothing Adversarial Training for GNN, 📝IEEE TCSS
- Graph Structure Reshaping Against Adversarial Attacks on Graph Neural Networks, 📝None, :octocat:Code
- RoGAT: a robust GNN combined revised GAT with adjusted graphs, 📝arXiv
- ResGCN: Attention-based Deep Residual Modeling for Anomaly Detection on Attributed Networks, 📝arXiv
- Adversarial Perturbations of Opinion Dynamics in Networks, 📝arXiv
- Adversarial Privacy Preserving Graph Embedding against Inference Attack, 📝arXiv, :octocat:Code
- Robust Graph Learning From Noisy Data, 📝IEEE Trans
- GNNGuard: Defending Graph Neural Networks against Adversarial Attacks, 📝NeurIPS, :octocat:Code
- Transferring Robustness for Graph Neural Network Against Poisoning Attacks, 📝WSDM, :octocat:Code
- All You Need Is Low (Rank): Defending Against Adversarial Attacks on Graphs, 📝WSDM, :octocat:Code
- How Robust Are Graph Neural Networks to Structural Noise?, 📝DLGMA
- Robust Detection of Adaptive Spammers by Nash Reinforcement Learning, 📝KDD, :octocat:Code
- Graph Structure Learning for Robust Graph Neural Networks, 📝KDD, :octocat:Code
- On The Stability of Polynomial Spectral Graph Filters, 📝ICASSP, :octocat:Code
- On the Robustness of Cascade Diffusion under Node Attacks, 📝WWW, :octocat:Code
- Friend or Faux: Graph-Based Early Detection of Fake Accounts on Social Networks, 📝WWW
- Towards an Efficient and General Framework of Robust Training for Graph Neural Networks, 📝ICASSP
- Robust Graph Representation Learning via Neural Sparsification, 📝ICML
- Robust Training of Graph Convolutional Networks via Latent Perturbation, 📝ECML-PKDD
- Robust Collective Classification against Structural Attacks, 📝Preprint
- Enhancing Graph Neural Network-based Fraud Detectors against Camouflaged Fraudsters, 📝CIKM, :octocat:Code
- Topological Effects on Attacks Against Vertex Classification, 📝arXiv
- Tensor Graph Convolutional Networks for Multi-relational and Robust Learning, 📝arXiv
- DefenseVGAE: Defending against Adversarial Attacks on Graph Data via a Variational Graph Autoencoder, 📝arXiv, :octocat:Code
- Dynamic Knowledge Graph-based Dialogue Generation with Improved Adversarial Meta-Learning, 📝arXiv
- AANE: Anomaly Aware Network Embedding For Anomalous Link Detection, 📝ICDM
- Provably Robust Node Classification via Low-Pass Message Passing, 📝ICDM
- Graph-Revised Convolutional Network, 📝ECML-PKDD, :octocat:Code
2019
- Graph Adversarial Training: Dynamically Regularizing Based on Graph Structure, 📝TKDE, :octocat:Code
- Bayesian graph convolutional neural networks for semi-supervised classification, 📝AAAI, :octocat:Code
- Target Defense Against Link-Prediction-Based Attacks via Evolutionary Perturbations, 📝arXiv
- Examining Adversarial Learning against Graph-based IoT Malware Detection Systems, 📝arXiv
- Adversarial Embedding: A robust and elusive Steganography and Watermarking technique, 📝arXiv
- Graph Interpolating Activation Improves Both Natural and Robust Accuracies in Data-Efficient Deep Learning, 📝arXiv, :octocat:Code
- Adversarial Defense Framework for Graph Neural Network, 📝arXiv
- GraphSAC: Detecting anomalies in large-scale graphs, 📝arXiv
- Edge Dithering for Robust Adaptive Graph Convolutional Networks, 📝arXiv
- Can Adversarial Network Attack be Defended?, 📝arXiv
- GraphDefense: Towards Robust Graph Convolutional Networks, 📝arXiv
- Adversarial Training Methods for Network Embedding, 📝WWW, :octocat:Code
- Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, 📝IJCAI, :octocat:Code
- Improving Robustness to Attacks Against Vertex Classification, 📝MLG@KDD
- Adversarial Robustness of Similarity-Based Link Prediction, 📝ICDM
- αCyber: Enhancing Robustness of Android Malware Detection System against Adversarial Attacks on Heterogeneous Graph based Model, 📝CIKM
- Batch Virtual Adversarial Training for Graph Convolutional Networks, 📝ICML, :octocat:Code
- Latent Adversarial Training of Graph Convolution Networks, 📝LRGSD@ICML, :octocat:Code
- Characterizing Malicious Edges targeting on Graph Neural Networks, 📝ICLR OpenReview, :octocat:Code
- Comparing and Detecting Adversarial Attacks for Graph Deep Learning, 📝RLGM@ICLR
- Virtual Adversarial Training on Graph Convolutional Networks in Node Classification, 📝PRCV
- Robust Graph Convolutional Networks Against Adversarial Attacks, 📝KDD, :octocat:Code
- Investigating Robustness and Interpretability of Link Prediction via Adversarial Modifications, 📝NAACL, :octocat:Code
- Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective, 📝IJCAI, :octocat:Code
- Robust Graph Data Learning via Latent Graph Convolutional Representation, 📝arXiv
2018
- Adversarial Personalized Ranking for Recommendation, 📝SIGIR, :octocat:Code
2017
- Adversarial Sets for Regularising Neural Link Predictors, 📝UAI, :octocat:Code
🔐Certification
- Hierarchical Randomized Smoothing, 📝NeurIPS'2023, :octocat:Code
- (Provable) Adversarial Robustness for Group Equivariant Tasks: Graphs, Point Clouds, Molecules, and More, 📝NeurIPS'2023, :octocat:Code
- Localized Randomized Smoothing for Collective Robustness Certification, 📝ICLR'2023
- Graph Adversarial Immunization for Certifiable Robustness, 📝arXiv'2023
- Randomized Message-Interception Smoothing: Gray-box Certificates for Graph Neural Networks, 📝NeurIPS'2022, :octocat:Code
- Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation, 📝KDD'2021, :octocat:Code
- Collective Robustness Certificates: Exploiting Interdependence in Graph Neural Networks, 📝ICLR'2021, :octocat:Code
- Adversarial Immunization for Improving Certifiable Robustness on Graphs, 📝WSDM'2021
- Certifying Robustness of Graph Laplacian Based Semi-Supervised Learning, 📝ICLR OpenReview'2021
- Robust Certification for Laplace Learning on Geometric Graphs, 📝MSML'2021
- Improving the Robustness of Wasserstein Embedding by Adversarial PAC-Bayesian Learning, 📝AAAI'2020
- Certified Robustness of Graph Convolution Networks for Graph Classification under Topological Attacks, 📝NeurIPS'2020, :octocat:Code
- Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing, 📝WWW'2020
- Efficient Robustness Certificates for Discrete Data: Sparsity-Aware Randomized Smoothing for Graphs, Images and More, 📝ICML'2020, :octocat:Code
- Abstract Interpretation based Robustness Certification for Graph Convolutional Networks, 📝ECAI'2020
- Certifiable Robustness of Graph Convolutional Networks under Structure Perturbation, 📝KDD'2020, :octocat:Code
- Certified Robustness of Graph Classification against Topology Attack with Randomized Smoothing, 📝GLOBECOM'2020
- Certifiable Robustness and Robust Training for Graph Convolutional Networks, 📝KDD'2019, :octocat:Code
- Certifiable Robustness to Graph Perturbations, 📝NeurIPS'2019, :octocat:Code
⚖Stability
- On the Prediction Instability of Graph Neural Networks, 📝arXiv'2022
- Stability and Generalization Capabilities of Message Passing Graph Neural Networks, 📝arXiv'2022
- Towards a Unified Framework for Fair and Stable Graph Representation Learning, 📝UAI'2021, :octocat:Code
- Training Stable Graph Neural Networks Through Constrained Learning, 📝arXiv'2021
- Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training data, 📝NeurIPS'2021, :octocat:Code
- Stability of Graph Convolutional Neural Networks to Stochastic Perturbations, 📝arXiv'2021
- Graph and Graphon Neural Network Stability, 📝arXiv'2020
- On the Stability of Graph Convolutional Neural Networks under Edge Rewiring, 📝arXiv'2020
- Stability of Graph Neural Networks to Relative Perturbations, 📝ICASSP'2020
- Graph Neural Networks: Architectures, Stability and Transferability, 📝arXiv'2020
- Should Graph Convolution Trust Neighbors? A Simple Causal Inference Method, 📝arXiv'2020
- When Do GNNs Work: Understanding and Improving Neighborhood Aggregation, 📝IJCAI Workshop'2019, :octocat:Code
- Stability Properties of Graph Neural Networks, 📝arXiv'2019
- Stability and Generalization of Graph Convolutional Neural Networks, 📝KDD'2019
🚀Others
- Evaluating Robustness and Uncertainty of Graph Models Under Structural Distributional Shifts, 📝arXiv'2023, :octocat:Code
- We Cannot Guarantee Safety: The Undecidability of Graph Neural Network Verification, 📝arXiv'2022
- A Systematic Evaluation of Node Embedding Robustness, 📝LoG'2022, :octocat:Code
- Generating Adversarial Examples with Graph Neural Networks, 📝UAI'2021
- SIGL: Securing Software Installations Through Deep Graph Learning, 📝USENIX'2021
- CAP: Co-Adversarial Perturbation on Weights and Features for Improving Generalization of Graph Neural Networks, 📝arXiv'2021
- FLAG: Adversarial Data Augmentation for Graph Neural Networks, 📝arXiv'2020, :octocat:Code
- Dynamic Knowledge Graph-based Dialogue Generation with Improved Adversarial Meta-Learning, 📝arXiv'2020
- Watermarking Graph Neural Networks by Random Graphs, 📝arXiv'2020
- Training Robust Graph Neural Network by Applying Lipschitz Constant Constraint, 📝CentraleSupélec'2020, :octocat:Code
- When Does Self-Supervision Help Graph Convolutional Networks?, 📝ICML'2020
- Perturbation Sensitivity of GNNs, 📝cs224w'2019
📃Survey
- Graph Vulnerability and Robustness: A Survey, 📝TKDE'2022
- A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability, 📝arXiv'2022
- Trustworthy Graph Neural Networks: Aspects, Methods and Trends, 📝arXiv'2022
- A Survey of Trustworthy Graph Learning: Reliability, Explainability, and Privacy Protection, 📝arXiv'2022
- A Comparative Study on Robust Graph Neural Networks to Structural Noises, 📝AAAI DLG'2022
- Recent Advances in Reliable Deep Graph Learning: Inherent Noise, Distribution Shift, and Adversarial Attack, 📝arXiv'2022
- Deep Graph Structure Learning for Robust Representations: A Survey, 📝arXiv'2021
- Robustness of deep learning models on graphs: A survey, 📝AI Open'2021
- Graph Neural Networks: Methods, Applications, and Opportunities, 📝arXiv'2021
- Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies, 📝SIGKDD Explorations'2021
- A Survey of Adversarial Learning on Graph, 📝arXiv'2020
- Graph Neural Networks: Taxonomy, Advances and Trends, 📝arXiv'2020
- Adversarial Attacks and Defenses in Images, Graphs and Text: A Review, 📝arXiv'2019
- Deep Learning on Graphs: A Survey, 📝arXiv'2018
- Adversarial Attack and Defense on Graph Data: A Survey, 📝arXiv'2018
⚙Toolbox
- DeepRobust: a Platform for Adversarial Attacks and Defenses, 📝AAAI'2021, :octocat:DeepRobust
- GreatX: A graph reliability toolbox based on PyTorch and PyTorch Geometric, 📝arXiv'2022, :octocat:GreatX
- Evaluating Graph Vulnerability and Robustness using TIGER, 📝arXiv'2021, :octocat:TIGER
- Graph Robustness Benchmark: Rethinking and Benchmarking Adversarial Robustness of Graph Neural Networks, 📝NeurIPS'2021, :octocat:Graph Robustness Benchmark (GRB)
🔗Resource
- Awesome Adversarial Learning on Recommender System :octocat:Link
- Awesome Graph Attack and Defense Papers :octocat:Link
- Graph Adversarial Learning Literature :octocat:Link
- A Complete List of All (arXiv) Adversarial Example Papers 🌐Link
- Adversarial Attacks and Defenses: Frontiers, Advances and Practice, KDD'20 tutorial, 🌐Link
- Trustworthy Graph Learning: Reliability, Explainability, and Privacy Protection, KDD'22 tutorial, 🌐Link
- Adversarial Robustness of Representation Learning for Knowledge Graphs, PhD Thesis at Trinity College Dublin, 📝Link