<h1 align="center">Randomized Adversarial Training</h1>
## Requirements
## How to use
Randomized adversarial training based on AWP-TRADES (CIFAR-10/100):
```
python AWP_first_second/train_first_second.py
```
Randomized adversarial training based on TRADES (CIFAR-10/100):
```
python TRADES_first_second/train_first_second.py
```
Note that, to further reduce complexity, we implement the first- and second-order Taylor terms with the approximation method described in Appendix D of the paper. When optimizing the second-order Taylor term in practice, we replace the Kronecker product with a Hadamard product.
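As a rough illustration of this idea (not the repo's exact code, which follows Appendix D), the sketch below computes first- and second-order Taylor terms of the loss around a random weight perturbation, replacing the Kronecker/outer product in the exact second-order term with an element-wise (Hadamard) product. The names `taylor_terms` and `delta_w`, and the gradient-product surrogate used in place of the true Hessian, are assumptions made for this example.

```python
# Illustrative sketch only; see Appendix D of the paper for the actual approximation.
import torch

def taylor_terms(loss, params, delta_w):
    """loss: scalar training loss; params: list of model parameters;
    delta_w: list of random weight perturbations, one tensor per parameter.
    (Both argument names are hypothetical for this sketch.)"""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    # First-order term: <grad, delta_w>
    first = sum((g * d).sum() for g, d in zip(grads, delta_w))
    # Second-order term with a Hadamard-product approximation: the exact term
    # would involve a Kronecker/outer product; here it is replaced by the
    # cheaper element-wise products g*g and d*d (an assumed surrogate).
    second = 0.5 * sum((g * g * d * d).sum() for g, d in zip(grads, delta_w))
    return first, second
```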
## Evaluation
PGD and CW evaluation with epsilon = 0.031:
```
python eval_attack.py
```
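For reference, a minimal PGD (L-infinity) attack loop with the same epsilon is sketched below. It is an illustration, not a copy of `eval_attack.py` (which also runs CW); it assumes `model` returns logits on inputs in [0, 1], and the step size `alpha` and step count are typical values rather than the repo's settings.

```python
# Minimal PGD-Linf sketch for illustration only.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.031, alpha=0.003, steps=20):
    """Assumes inputs x lie in [0, 1] and model(x) returns logits."""
    # Random start inside the epsilon ball
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Signed gradient ascent step, then project back into the epsilon ball
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```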
AutoAttack evaluation uses the standard version with epsilon = 8/255:
```
python eval_autoattack.py
```
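The snippet below shows typical usage of the AutoAttack package linked under Reference Code ([3]); `eval_autoattack.py` wraps a similar call with the repo's own model and data loading, so treat `model`, `x_test`, and `y_test` here as placeholders.

```python
# Typical AutoAttack 'standard' evaluation (illustrative; not the repo's exact script).
from autoattack import AutoAttack

# model, x_test, y_test are assumed to be a trained classifier and CIFAR test
# tensors already loaded elsewhere.
adversary = AutoAttack(model, norm='Linf', eps=8/255, version='standard')
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=128)
```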
We obtained the best performance between epochs 100 and 200.
## Reference Code
[1] AT: https://github.com/locuslab/robust_overfitting
[2] TRADES: https://github.com/yaodongyu/TRADES/
[3] AutoAttack: https://github.com/fra31/auto-attack
[4] MART: https://github.com/YisenWang/MART
[5] AWP: https://github.com/csdongxian/AWP
[6] AVMixup: https://github.com/hirokiadachi/Adversarial-vertex-mixup-pytorch
[7] S2O: https://github.com/Alexkael/S2O
## Citation
If you find our paper and repo useful, please cite our paper:
```bibtex
@inproceedings{jin2023randomized,
  title={Randomized Adversarial Training via Taylor Expansion},
  author={Jin, Gaojie and Yi, Xinping and Wu, Dengyu and Mu, Ronghui and Huang, Xiaowei},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2023}
}
```