
Dent: Dynamic Defenses against Adversarial Attacks

This is the official project repository for Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks by Dequan Wang, An Ju, Evan Shelhamer, David Wagner, and Trevor Darrell.

Abstract

Adversarial attacks optimize against models to defeat defenses. We argue that models should fight back, and optimize their defenses against attacks at test time. Existing defenses are static, staying the same once trained even while attacks change. We propose a dynamic defense, defensive entropy minimization (dent), to adapt the model and input during testing by gradient optimization. Our dynamic defense adapts fully at test time, without altering training, which makes it compatible with existing models and standard defenses. Dent improves robustness to attack by 20+ points (absolute) for state-of-the-art static defenses against AutoAttack on CIFAR-10 at epsilon = 8/255 (L_infinity).
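
In code terms, dent takes gradient steps on an unsupervised entropy objective at test time. Below is a minimal PyTorch sketch of that core idea; softmax_entropy and defend_batch are hypothetical names of ours, and the sketch is a simplification, since the actual dent implementation in this repository also adapts the input and adapts model parameters sample-wise.

```python
import torch

def softmax_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Shannon entropy of the softmax prediction, per sample."""
    return -(logits.softmax(dim=1) * logits.log_softmax(dim=1)).sum(dim=1)

def defend_batch(model, x, optimizer, steps=1):
    """Sketch: take a few gradient steps that minimize prediction entropy
    on the (possibly attacked) test batch, then predict."""
    for _ in range(steps):
        optimizer.zero_grad()
        softmax_entropy(model(x)).mean().backward()
        optimizer.step()
    with torch.no_grad():
        return model(x).argmax(dim=1)
```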

Example: Model Adaptation for Defense against AutoAttack on CIFAR-10

This example compares state-of-the-art adversarial training defenses, which are static, with and without our method, defensive entropy minimization (dent), which is dynamic. We evaluate against white-box and black-box attacks and report the worst-case accuracy across attack types.
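
For reference, here is a rough sketch of that evaluation protocol using the autoattack package; model, x_test, and y_test are assumed to be a loaded CIFAR-10 classifier (static, or wrapped by dent) and test tensors in [0, 1]. In this repository, the cifar10a.py script drives the full benchmark.

```python
import torch
from autoattack import AutoAttack  # https://github.com/fra31/auto-attack

# The 'standard' suite runs white-box (APGD-CE, APGD-T, FAB-T) and
# black-box (Square) attacks at epsilon = 8/255 under the L_infinity norm.
adversary = AutoAttack(model, norm='Linf', eps=8/255, version='standard')

# A sample counts as robust only if it withstands every attack, so the
# reported robust accuracy is the worst case across attack types.
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=128)
```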

Result:

Dent improves adversarial/robust accuracy (%) by more than 30 percent (relative) against AutoAttack on CIFAR-10 while preserving natural/clean accuracy. For Wu 2020, for example, adversarial accuracy rises from 60.04 to 80.33, a relative gain of 33.8 percent. Our dynamic defense brings adversarial accuracy within 90% of natural accuracy for the two most robust methods tested (Wu 2020 and Carmon 2019). The static defenses alter training, while dent alters testing; this separation of concerns makes dent compatible with many existing models and defenses.

| Model ID | Paper | Natural (static) | Natural (dent) | Adversarial (static) | Adversarial (dent) | Venue |
|---|---|---|---|---|---|---|
| Wu2020Adversarial_extra | Adversarial Weight Perturbation Helps Robust Generalization | 88.25 | 87.65 | 60.04 | 80.33 | NeurIPS 2020 |
| Carmon2019Unlabeled | Unlabeled Data Improves Adversarial Robustness | 89.69 | 89.32 | 59.53 | 82.28 | NeurIPS 2019 |
| Sehwag2020Hydra | HYDRA: Pruning Adversarially Robust Neural Networks | 88.98 | 88.60 | 57.14 | 78.09 | NeurIPS 2020 |
| Wang2020Improving | Improving Adversarial Robustness Requires Revisiting Misclassified Examples | 87.50 | 86.32 | 56.29 | 77.31 | ICLR 2020 |
| Hendrycks2019Using | Using Pre-Training Can Improve Model Robustness and Uncertainty | 87.11 | 87.04 | 54.92 | 79.62 | ICML 2019 |
| Wong2020Fast | Fast is better than free: Revisiting adversarial training | 83.34 | 82.34 | 43.21 | 71.82 | ICLR 2020 |
| Ding2020MMA | MMA Training: Direct Input Space Margin Maximization through Adversarial Training | 84.36 | 84.68 | 41.44 | 64.35 | ICLR 2020 |

Usage:

To reproduce these results, pass each defense's model ID (the Model ID column above) as MODEL.ARCH:

```bash
python cifar10a.py --cfg cfgs/dent.yaml MODEL.ARCH Wu2020Adversarial_extra
python cifar10a.py --cfg cfgs/dent.yaml MODEL.ARCH Carmon2019Unlabeled
python cifar10a.py --cfg cfgs/dent.yaml MODEL.ARCH Sehwag2020Hydra
python cifar10a.py --cfg cfgs/dent.yaml MODEL.ARCH Wang2020Improving
python cifar10a.py --cfg cfgs/dent.yaml MODEL.ARCH Hendrycks2019Using
python cifar10a.py --cfg cfgs/dent.yaml MODEL.ARCH Wong2020Fast
python cifar10a.py --cfg cfgs/dent.yaml MODEL.ARCH Ding2020MMA
```

Correspondence

Please contact Dequan Wang, An Ju, and Evan Shelhamer at dqwang AT eecs.berkeley.edu, an_ju AT berkeley.edu, and shelhamer AT deepmind.com.

Citation

If the dent method or the dynamic defense setting is helpful in your research, please consider citing our paper:

```bibtex
@article{wang2021fighting,
  title={Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks},
  author={Wang, Dequan and Ju, An and Shelhamer, Evan and Wagner, David and Darrell, Trevor},
  journal={arXiv preprint arXiv:2105.08714},
  year={2021}
}
```

Note: a workshop edition of this project was presented at the ICLR'21 Workshop on Security and Safety in Machine Learning Systems.