BackdoorBox: An Open-sourced Python Toolbox for Backdoor Attacks and Defenses

Python 3.8 | PyTorch 1.8.0 | torchvision 0.9.0 | CUDA 11.1 | License: GPL

Backdoor attacks are an emerging yet critical threat in the training process of deep neural networks (DNNs), where the adversary intends to embed a specific hidden backdoor into the model. The attacked DNNs behave normally on benign samples, whereas their predictions are maliciously changed whenever an adversary-specified trigger pattern appears. Many backdoor attacks and defenses have been proposed, and although most of them are open-sourced, there is still no toolbox that can easily and flexibly implement and compare them simultaneously.

BackdoorBox is an open-source Python toolbox that implements representative and advanced backdoor attacks and defenses under a unified framework that can be used in a flexible manner. We will keep updating this toolbox to track the latest backdoor attacks and defenses.

Currently, this toolbox is still under development (although the attack part is almost complete) and there is no user manual yet. However, users can easily run our provided methods by referring to the tests sub-folder, which contains example code for each implemented method. Please refer to our paper for more details! In particular, you are always welcome to contribute your backdoor attacks or defenses via pull requests!

Toolbox Characteristics

Backdoor Attacks

| Method | Source | Key Properties | Additional Notes |
|---|---|---|---|
| BadNets | BadNets: Evaluating Backdooring Attacks on Deep Neural Networks. IEEE Access, 2019. | poison-only | first backdoor attack |
| Blended | Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning. arXiv, 2017. | poison-only, invisible | first invisible attack |
| Refool (simplified version) | Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks. ECCV, 2020. | poison-only, sample-specific | first stealthy attack with visible yet natural trigger |
| LabelConsistent | Label-Consistent Backdoor Attacks. arXiv, 2019. | poison-only, invisible, clean-label | first clean-label backdoor attack |
| TUAP | Clean-Label Backdoor Attacks on Video Recognition Models. CVPR, 2020. | poison-only, invisible, clean-label | first clean-label backdoor attack with optimized trigger pattern |
| SleeperAgent | Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch. NeurIPS, 2022. | poison-only, invisible, clean-label | effective clean-label backdoor attack |
| ISSBA | Invisible Backdoor Attack with Sample-Specific Triggers. ICCV, 2021. | poison-only, sample-specific, physical | first poison-only sample-specific attack |
| WaNet | WaNet - Imperceptible Warping-based Backdoor Attack. ICLR, 2021. | poison-only, invisible, sample-specific | |
| Blind (blended-based) | Blind Backdoors in Deep Learning Models. USENIX Security, 2021. | training-controlled | first training-controlled attack targeting loss computation |
| IAD | Input-Aware Dynamic Backdoor Attack. NeurIPS, 2020. | training-controlled, optimized, sample-specific | first training-controlled sample-specific attack |
| PhysicalBA | Backdoor Attack in the Physical World. ICLR Workshop, 2021. | training-controlled, physical | first physical backdoor attack |
| LIRA | LIRA: Learnable, Imperceptible and Robust Backdoor Attacks. ICCV, 2021. | training-controlled, invisible, optimized, sample-specific | |
| BATT | BATT: Backdoor Attack with Transformation-based Triggers. ICASSP, 2023. | poison-only, invisible, physical | |

Note: For the convenience of users, all our implemented attacks support obtaining the poisoned dataset (via .get_poisoned_dataset()), obtaining the infected model (via .get_model()), and training with your own local samples (loaded via torchvision.datasets.DatasetFolder). Please refer to base.py and each attack's code for more details.
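The poison-only attack interface above can be illustrated with a minimal, library-free sketch. The class below is a hypothetical stand-in (not the BackdoorBox API): it wraps a dataset, stamps a BadNets-style trigger patch onto a fraction of samples, and relabels them to the attacker-chosen target class.

```python
import random

class PoisonedDataset:
    """Minimal poison-only wrapper (illustrative sketch, not the BackdoorBox API).

    Stamps a white trigger patch onto a random fraction of samples and
    relabels them to the target class, in the spirit of BadNets.
    """

    def __init__(self, samples, target_label, poison_rate=0.1, seed=0):
        # samples: list of (image, label); image is a 2-D list of pixel values.
        self.samples = samples
        self.target_label = target_label
        rng = random.Random(seed)
        n_poison = int(len(samples) * poison_rate)
        self.poison_idx = set(rng.sample(range(len(samples)), n_poison))

    def _stamp_trigger(self, image):
        # Copy the image and set a 2x2 bottom-right patch to 255 (the trigger).
        img = [row[:] for row in image]
        for r in (-2, -1):
            for c in (-2, -1):
                img[r][c] = 255
        return img

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, i):
        image, label = self.samples[i]
        if i in self.poison_idx:
            return self._stamp_trigger(image), self.target_label
        return image, label
```

A real attack in the toolbox would return a comparable object from .get_poisoned_dataset(), which can then be fed to a standard training loop.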

Backdoor Defenses

| Method | Source | Defense Type | Additional Notes |
|---|---|---|---|
| AutoEncoderDefense | Neural Trojans. ICCD, 2017. | Sample Pre-processing | first pre-processing-based defense |
| ShrinkPad | Backdoor Attack in the Physical World. ICLR Workshop, 2021. | Sample Pre-processing | efficient defense |
| FineTuning | Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks. RAID, 2018. | Model Repairing | first defense based on model repairing |
| Pruning | Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks. RAID, 2018. | Model Repairing | |
| MCR | Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness. ICLR, 2020. | Model Repairing | |
| NAD | Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks. ICLR, 2021. | Model Repairing | first distillation-based defense |
| ABL | Anti-Backdoor Learning: Training Clean Models on Poisoned Data. NeurIPS, 2021. | Poison Suppression | |
| SCALE-UP | SCALE-UP: An Efficient Black-box Input-level Backdoor Detection via Analyzing Scaled Prediction Consistency. ICLR, 2023. | Input-level Backdoor Detection | black-box online detection |
| IBD-PSC | IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency. ICML, 2024. | Input-level Backdoor Detection | simple yet effective, safeguarded by theoretical analysis |
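Among the pre-processing defenses above, ShrinkPad's core idea (shrink the input, then pad it back to its original size at a random offset, so a fixed-position trigger is displaced) can be sketched without the toolbox. The function below is an illustrative simplification, not the BackdoorBox implementation; its name and parameters are assumptions.

```python
import random

def shrink_pad(image, shrink_to, pad_value=0, rng=None):
    """Illustrative ShrinkPad-style pre-processing (not the BackdoorBox API).

    Shrinks a square image to shrink_to x shrink_to by nearest-neighbour
    subsampling, then pads it back to the original size at a random offset,
    displacing any fixed-position trigger pattern.
    """
    rng = rng or random.Random()
    h = len(image)
    w = len(image[0])
    # Nearest-neighbour shrink.
    small = [[image[r * h // shrink_to][c * w // shrink_to]
              for c in range(shrink_to)] for r in range(shrink_to)]
    # Random padding offsets back to the original size.
    top = rng.randint(0, h - shrink_to)
    left = rng.randint(0, w - shrink_to)
    out = [[pad_value] * w for _ in range(h)]
    for r in range(shrink_to):
        for c in range(shrink_to):
            out[top + r][left + c] = small[r][c]
    return out
```

Applying this transform to every test-time input leaves benign predictions largely intact while misaligning patch-style triggers, which is what makes it an efficient inference-time defense.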

Methods Under Development

Attack & Defense Benchmark

The benchmark is coming soon.

Contributors

| Organization | Contributors |
|---|---|
| Tsinghua University | Yiming Li, Mengxi Ya, Guanhao Gan, Kuofeng Gao, Xin Yan, Jia Xu, Tong Xu, Sheng Yang, Haoxiang Zhong, Linghui Zhu |
| Tencent Security Zhuque Lab | Yang Bai |
| ShanghaiTech University | Zhe Zhao |
| Harbin Institute of Technology, Shenzhen | Linshan Hou |

Citation

If our toolbox is useful for your research, please cite our paper(s) as follows:

@inproceedings{li2023backdoorbox,
  title={{BackdoorBox}: A Python Toolbox for Backdoor Learning},
  author={Li, Yiming and Ya, Mengxi and Bai, Yang and Jiang, Yong and Xia, Shu-Tao},
  booktitle={ICLR Workshop},
  year={2023}
}
@article{li2022backdoor,
  title={Backdoor learning: A survey},
  author={Li, Yiming and Jiang, Yong and Li, Zhifeng and Xia, Shu-Tao},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  year={2022}
}