
Learnable Boundary Guided Adversarial Training

This repository contains the implementation code for the ICCV 2021 paper:
Learnable Boundary Guided Adversarial Training (https://arxiv.org/pdf/2011.11164.pdf)

Updates: Training with Epsilon 8/255 on CIFAR-100

| # | Method | Model | Natural Acc | Robust Acc (AutoAttack) | Download |
|---|--------|-------|-------------|-------------------------|----------|
| 1 | AWP | WRN-34-10 | 60.38 | 28.86 | - |
| 2 | LBGAT-AWP | WRN-34-10 | 62.31 | 30.44 | - |
| 3 | LBGAT-AWP* | WRN-34-10 | 62.99 | 31.20 | model / log |

Overview

In this paper, we propose "Learnable Boundary Guided Adversarial Training" (LBGAT) to preserve high natural accuracy while maintaining strong robustness for deep models. An interesting phenomenon in our exploration is that the natural classifier boundary can benefit model robustness to some degree, in contrast to previous works where improved robustness comes at the cost of degraded performance on natural data. Our method achieves new state-of-the-art robustness on CIFAR-100 under the AutoAttack benchmark without using extra real or synthetic data.
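To make the boundary-guidance idea concrete, here is a minimal NumPy sketch of the loss structure described above: a guidance term that pulls the robust model's logits on adversarial inputs toward the natural model's logits on clean inputs, plus a cross-entropy term that keeps the natural model accurate. This is an illustrative sketch, not the repository's training code; the function names and the `alpha` weighting are assumptions, and the exact formulation is defined in the paper.

```python
import numpy as np

def cross_entropy(logits, labels):
    """Softmax cross-entropy, averaged over the batch."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def lbgat_style_loss(nat_logits_clean, rob_logits_adv, labels, alpha=0.0):
    """Sketch of a boundary-guided training loss.

    nat_logits_clean: natural model's logits on clean inputs
    rob_logits_adv:   robust model's logits on adversarial inputs
    alpha:            illustrative weight for an extra robust-branch CE term
    """
    # Guide the robust model's decision boundary toward the natural model's.
    guidance = np.mean((nat_logits_clean - rob_logits_adv) ** 2)
    # Keep the natural model accurate on clean data.
    natural_ce = cross_entropy(nat_logits_clean, labels)
    return guidance + natural_ce + alpha * cross_entropy(rob_logits_adv, labels)
```

In this sketch, a larger mismatch between the two models' logits directly increases the loss, so minimizing it aligns the robust model with the natural classifier's boundary.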


Results and Pretrained models

Models are evaluated under the strongest attack, AutoAttack (https://github.com/fra31/auto-attack), with epsilon = 0.031.

Our CIFAR-100 models (natural acc vs robust acc):
- CIFAR-100-LBGAT0-wideresnet-34-10: 70.25 vs 27.16
- CIFAR-100-LBGAT6-wideresnet-34-10: 60.64 vs 29.33
- CIFAR-100-LBGAT6-wideresnet-34-20: 62.55 vs 30.20

Our CIFAR-10 models (natural acc vs robust acc):
- CIFAR-10-LBGAT0-wideresnet-34-10: 88.22 vs 52.86
- CIFAR-10-LBGAT0-wideresnet-34-20: 88.70 vs 53.57
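AutoAttack itself is an external package (linked above). As a self-contained illustration of the L-inf threat model with epsilon = 0.031 used in these evaluations, here is a minimal PGD sketch on a toy linear classifier. Everything here (the toy model, step size, and iteration count) is illustrative, not the repository's evaluation code.

```python
import numpy as np

def pgd_linf(x, y, W, eps=0.031, step=0.007, iters=10):
    """Projected gradient ascent on cross-entropy under an L-inf ball.

    x: (n, d) inputs in [0, 1];  y: (n,) integer labels
    W: (d, c) weights of a toy linear classifier with logits = x @ W
    """
    x_adv = x.copy()
    n = len(x)
    for _ in range(iters):
        logits = x_adv @ W
        # Softmax probabilities.
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        p[np.arange(n), y] -= 1.0                 # d(CE)/d(logits)
        grad = p @ W.T                            # d(CE)/dx for a linear model
        x_adv = x_adv + step * np.sign(grad)      # signed ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the L-inf ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # stay in valid pixel range
    return x_adv
```

The two clipping steps are the key constraint: every adversarial example stays within epsilon of the original input in the L-inf norm and within the valid input range, which is exactly the perturbation budget the tables above evaluate against.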

CIFAR-100 L-inf

Note: this is a partial list of results for comparison with methods that do not use additional data, as of 2020/11/25. The full list can be found at https://github.com/fra31/auto-attack. TRADES (alpha=6) is trained with the official open-source code at https://github.com/yaodongyu/TRADES.

| # | Method | Model | Natural Acc | Robust Acc (AutoAttack) |
|---|--------|-------|-------------|-------------------------|
| 1 | LBGAT (Ours) | WRN-34-20 | 62.55 | 30.20 |
| 2 | (Gowal et al. 2020) | WRN-70-16 | 60.86 | 30.03 |
| 3 | LBGAT (Ours) | WRN-34-10 | 60.64 | 29.33 |
| 4 | (Wu et al. 2020) | WRN-34-10 | 60.38 | 28.86 |
| 5 | LBGAT (Ours) | WRN-34-10 | 70.25 | 27.16 |
| 6 | (Chen et al. 2020) | WRN-34-10 | 62.15 | 26.94 |
| 7 | (Zhang et al. 2019) TRADES (alpha=6) | WRN-34-10 | 56.50 | 26.87 |
| 8 | (Sitawarin et al. 2020) | WRN-34-10 | 62.82 | 24.57 |
| 9 | (Rice et al. 2020) | RN-18 | 53.83 | 18.95 |

CIFAR-10 L-inf

Note: this is a partial list of results for comparison with previously published methods that do not use additional data, as of 2020/11/25. The full list can be found at https://github.com/fra31/auto-attack. TRADES (alpha=6) is trained with the official open-source code at https://github.com/yaodongyu/TRADES. "*" denotes methods aiming to speed up adversarial training.

| # | Method | Model | Natural Acc | Robust Acc (AutoAttack) |
|----|--------|-------|-------------|-------------------------|
| 1 | LBGAT (Ours) | WRN-34-20 | 88.70 | 53.57 |
| 2 | (Zhang et al.) | WRN-34-10 | 84.52 | 53.51 |
| 3 | (Rice et al. 2020) | WRN-34-20 | 85.34 | 53.42 |
| 4 | LBGAT (Ours) | WRN-34-10 | 88.22 | 52.86 |
| 5 | (Qin et al., 2019) | WRN-40-8 | 86.28 | 52.84 |
| 6 | (Zhang et al. 2019) TRADES (alpha=6) | WRN-34-10 | 84.92 | 52.64 |
| 7 | (Chen et al., 2020b) | WRN-34-10 | 85.32 | 51.12 |
| 8 | (Sitawarin et al., 2020) | WRN-34-10 | 86.84 | 50.72 |
| 9 | (Engstrom et al., 2019) | RN-50 | 87.03 | 49.25 |
| 10 | (Kumari et al., 2019) | WRN-34-10 | 87.80 | 49.12 |
| 11 | (Mao et al., 2019) | WRN-34-10 | 86.21 | 47.41 |
| 12 | (Zhang et al., 2019a) | WRN-34-10 | 87.20 | 44.83 |
| 13 | (Madry et al., 2018) AT | WRN-34-10 | 87.14 | 44.04 |
| 14 | (Shafahi et al., 2019)* | WRN-34-10 | 86.11 | 41.47 |
| 15 | (Wang & Zhang, 2019)* | WRN-28-10 | 92.80 | 29.35 |

Get Started

Before training, please create the `Logs` directory via the command `mkdir Logs`.

Training

```bash
bash sh/train_lbgat0_cifar100.sh
```

Evaluation

Before running the evaluation, please download the pretrained models.

```bash
bash sh/eval_autoattack.sh
```

Acknowledgements

This code is partly based on TRADES (https://github.com/yaodongyu/TRADES) and AutoAttack (https://github.com/fra31/auto-attack).

Contact

If you have any questions, feel free to contact us via email (jiequancui@link.cuhk.edu.hk) or GitHub issues. Enjoy!

BibTeX

If you find this code or idea useful, please consider citing our work:

@inproceedings{cui2021learnable,
  title={Learnable boundary guided adversarial training},
  author={Cui, Jiequan and Liu, Shu and Wang, Liwei and Jia, Jiaya},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={15721--15730},
  year={2021}
}

@article{cui2023decoupled,
  title={Decoupled Kullback-Leibler Divergence Loss},
  author={Cui, Jiequan and Tian, Zhuotao and Zhong, Zhisheng and Qi, Xiaojuan and Yu, Bei and Zhang, Hanwang},
  journal={arXiv preprint arXiv:2305.13948},
  year={2023}
}