EasyRobust

What's New
- [Apr 2024] Revisiting and Exploring Efficient Fast Adversarial Training via LAW: Lipschitz Regularization and Auto Weight Averaging was accepted by T-IFS 2024! Code will be available at examples/imageclassification/cifar10/adversarial_training/fgsm_law.
- [Jul 2023] COCO-O: A Benchmark for Object Detectors under Natural Distribution Shifts was accepted by ICCV 2023! The dataset will be available at benchmarks/coco_o.
- [Jul 2023] Robust Automatic Speech Recognition via WavAugment Guided Phoneme Adversarial Training was accepted by INTERSPEECH 2023! Code will be available at examples/asr/WAPAT.
- [Feb 2023] ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing was accepted by CVPR 2023! Code will be available at benchmarks/imagenet-e.
- [Feb 2023] TransAudio: Towards the Transferable Adversarial Audio Attack via Learning Contextualized Perturbations was accepted by ICASSP 2023! Code will be available at examples/attacks/transaudio.
- [Jan 2023] Inequality phenomenon in $l_\infty$-adversarial training, and its unrealized threats was accepted by ICLR 2023 as notable-top-25%! Code will be available at examples/attacks/inequality.
- [Oct 2022] Towards Understanding and Boosting Adversarial Transferability from a Distribution Perspective was accepted by TIP 2022! Code will be available at examples/attacks/dra.
- [Sep 2022] Boosting Out-of-distribution Detection with Typical Features was accepted by NeurIPS 2022! Code available at examples/ood_detection/BATS.
- [Sep 2022] Enhance the Visual Representation via Discrete Adversarial Training was accepted by NeurIPS 2022! Code available at examples/imageclassification/imagenet/dat.
- [Sep 2022] Added 5 methods for analyzing your robust models under tools/.
- [Sep 2022] Added 13 reproducible examples of robust training methods under examples/imageclassification/imagenet.
- [Sep 2022] Released 16 adversarial training models, including a Swin-B that achieves SOTA adversarial robustness of 47.42% under AutoAttack!
- [Sep 2022] EasyRobust v0.2.0 released.
Our Research Project
- [T-IFS 2024] Revisiting and Exploring Efficient Fast Adversarial Training via LAW: Lipschitz Regularization and Auto Weight Averaging [Paper, Code]
- [ICCV 2023] COCO-O: A Benchmark for Object Detectors under Natural Distribution Shifts [Paper, COCO-O dataset]
- [INTERSPEECH 2023] Robust Automatic Speech Recognition via WavAugment Guided Phoneme Adversarial Training [Paper, Code]
- [CVPR 2023] ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing [Paper, Image editing toolkit, ImageNet-E dataset]
- [ICLR 2023] Inequality phenomenon in $l_\infty$-adversarial training, and its unrealized threats [Paper, Code]
- [ICASSP 2023] TransAudio: Towards the Transferable Adversarial Audio Attack via Learning Contextualized Perturbations [Paper, Code]
- [TIP 2022] Towards Understanding and Boosting Adversarial Transferability from a Distribution Perspective [Paper, Code]
- [NeurIPS 2022] Boosting Out-of-distribution Detection with Typical Features [Paper, Code]
- [NeurIPS 2022] Enhance the Visual Representation via Discrete Adversarial Training [Paper, Code]
- [CVPR 2022] Towards Robust Vision Transformer [Paper, Code]
Introduction
EasyRobust is an easy-to-use library for state-of-the-art robust computer vision research with PyTorch. EasyRobust aims to accelerate the research cycle in robust vision by collecting comprehensive robust training techniques and benchmarking them with various robustness metrics. Its key features include:
- Reproducible implementations of SOTA in robust image classification: most existing SOTA methods for robust image classification are implemented, including Adversarial Training, AdvProp, SIN, AugMix, DeepAugment, DrViT, RVT, FAN, APR, HAT, PRIME, DAT, and so on.
- Benchmark suite: a variety of benchmark tasks, including ImageNet-A, ImageNet-R, ImageNet-Sketch, ImageNet-C, ImageNetV2, Stylized-ImageNet, and ObjectNet.
- Scalability: EasyRobust supports single-GPU training, multi-GPU training on a single machine, and large-scale multi-node training (a minimal launch sketch follows this list).
- Model Zoo: more than 30 open-sourced pretrained adversarially or non-adversarially robust models.
- Analytical tools: analysis and visualization of a pretrained robust model, including attention visualization, decision boundary visualization, convolution kernel visualization, shape vs. texture bias analysis, etc. These tools help explain how robust training improves the interpretability of the model.
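For the scalability point above, the snippet below is a minimal, generic sketch of the standard PyTorch DistributedDataParallel setup that single-machine and multi-node training is built on. It is illustrative only and does not reproduce EasyRobust's actual training scripts, which handle this logic for you.

```python
# Minimal sketch of the standard PyTorch DDP setup (one GPU per process,
# launched via torchrun). Not EasyRobust-specific code.
import os

import torch
import torch.distributed as dist
import torchvision
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets LOCAL_RANK, RANK, WORLD_SIZE, MASTER_ADDR, MASTER_PORT.
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")

    model = torchvision.models.resnet50().cuda()
    model = DDP(model, device_ids=[local_rank])
    # ... build a DistributedSampler-based dataloader, optimizer and training loop here ...

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

On one machine such a script would be launched with `torchrun --nproc_per_node=8 train.py`; multi-node runs add the usual rendezvous arguments.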
Technical Articles
We have a series of technical articles (in Chinese) on the functionality of EasyRobust:
- [NeurIPS 2022] Alibaba and Zhejiang University propose using more typical features to boost out-of-distribution detection performance
- [TIP 2022] Alibaba proposes understanding and improving the transferability of adversarial examples from a distribution perspective
- [CVPR 2022] Robust to attacks and perturbations with stronger generalization: Alibaba Security builds a more robust ViT model
- [NeurIPS 2022] Alibaba proposes a new robust vision baseline based on discrete adversarial training
Installation
Install from Source:

```bash
$ git clone https://github.com/alibaba/easyrobust.git
$ cd easyrobust
$ pip install -e .
```

Install from PyPI:

```bash
$ pip install easyrobust
```
Download the ImageNet dataset and place it into /path/to/imagenet. Then specify the ImageNet path via the $ImageNetDataDir environment variable:

```bash
$ export ImageNetDataDir=/path/to/imagenet
```
[Optional]: If you use EasyRobust to evaluate model robustness, download the benchmark datasets by:

```bash
$ sh download_data.sh
```
[Optional]: If you use the analysis tools in tools/, install extra requirements by:

```bash
$ pip install -r requirements/optional.txt
```
Docker
We provide a runnable environment in docker/Dockerfile for users who do not want to install via pip. To use it, please confirm that docker and nvidia-docker have been installed, then run the following command:

```bash
docker build -t alibaba/easyrobust:v1 -f docker/Dockerfile .
```
Getting Started
EasyRobust focuses on two basic usages: (1) evaluating and benchmarking the robustness of a pretrained model, and (2) training your own robust models or reproducing the results of previous SOTA methods.
1. How to evaluate and benchmark the robustness of given models?
Evaluating the robustness of a model with EasyRobust requires only a few lines of code. We give a minimalist example in benchmarks/resnet50_example.py:
```python
import torch
import torchvision

# Evaluation helpers shipped with EasyRobust
# (import path assumed to follow the repository's example script).
from easyrobust.benchmarks import *

#############################################################
#         Define your model
#############################################################
model = torchvision.models.resnet50(pretrained=True)
model = model.eval()
if torch.cuda.is_available(): model = model.cuda()

#############################################################
#         Start Evaluation
#############################################################

# out-of-distribution robustness
evaluate_imagenet_val(model, 'benchmarks/data/imagenet-val')
evaluate_imagenet_a(model, 'benchmarks/data/imagenet-a')
evaluate_imagenet_r(model, 'benchmarks/data/imagenet-r')
evaluate_imagenet_sketch(model, 'benchmarks/data/imagenet-sketch')
evaluate_imagenet_v2(model, 'benchmarks/data/imagenetv2')
evaluate_stylized_imagenet(model, 'benchmarks/data/imagenet-style')
evaluate_imagenet_c(model, 'benchmarks/data/imagenet-c')
# ObjectNet is optional since it requires a lot of disk storage; we skip it here.
# evaluate_objectnet(model, 'benchmarks/data/ObjectNet/images')

# adversarial robustness
evaluate_imagenet_autoattack(model, 'benchmarks/data/imagenet-val')
```
You can run the evaluation with a single command: `python benchmarks/resnet50_example.py`. After the run completes, you will get the following output:
```
Top1 Accuracy on the ImageNet-Val: 76.1%
Top1 Accuracy on the ImageNet-A: 0.0%
Top1 Accuracy on the ImageNet-R: 36.2%
Top1 Accuracy on the ImageNet-Sketch: 24.1%
Top1 Accuracy on the ImageNet-V2: 63.2%
Top1 Accuracy on the Stylized-ImageNet: 7.4%
Top1 accuracy 39.2%, mCE: 76.7 on the ImageNet-C
Top1 Accuracy on the AutoAttack: 0.0%
```
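The same evaluation helpers are not tied to torchvision models; any PyTorch image classifier works. Below is a small sketch that benchmarks a timm backbone instead, assuming the evaluate_* helpers are imported as in the example above and the benchmark data has already been downloaded via download_data.sh.

```python
# Sketch: evaluate a timm model with the same helpers as above.
# Assumes download_data.sh has populated benchmarks/data/.
import timm
import torch
from easyrobust.benchmarks import *  # import path assumed as in the example above

model = timm.create_model('resnet50', pretrained=True).eval()
if torch.cuda.is_available():
    model = model.cuda()

evaluate_imagenet_val(model, 'benchmarks/data/imagenet-val')
evaluate_imagenet_r(model, 'benchmarks/data/imagenet-r')
evaluate_imagenet_c(model, 'benchmarks/data/imagenet-c')
```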
2. How to use EasyRobust to train my own robust models?
We implement most robust training methods in the folder examples/imageclassification/imagenet/. All of them are based on a basic training script: examples/imageclassification/imagenet/base_training_script.py. By comparing the differences, you can clearly see which parts and hyperparameters of the basic training are modified to create each robust training example (a generic sketch of this pattern follows the list below). Below we present tutorials for some classic methods:
- Adversarial Training on ImageNet using 8 GPUs
- AugMix Training on ImageNet with 180 Epochs
- AdvProp for Improving Non-adversarial Robustness and Accuracy
- Using Stylized ImageNet as Extended Data for Training
- Discrete Adversarial Training for ViTs
- Training Robust Vision Transformers (RVT) with 300 Epochs
- Robust Finetuning of CLIP Models
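To make the "diff against the base script" idea concrete, the sketch below shows the typical change an adversarial training example makes to a plain training loop: the clean batch is replaced by a PGD-perturbed batch before the usual forward/backward pass. This is a generic PyTorch illustration, not EasyRobust's exact implementation; eps, alpha, and steps are placeholder hyperparameters.

```python
# Illustrative PGD adversarial training step (not EasyRobust's exact code).
# eps, alpha and steps below are placeholder values.
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=4/255, alpha=1/255, steps=3):
    """Generate adversarial examples with a few PGD steps."""
    adv = images.clone().detach()
    adv = adv + torch.empty_like(adv).uniform_(-eps, eps)  # random start
    adv = torch.clamp(adv, 0, 1)
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        adv = torch.min(torch.max(adv, images - eps), images + eps)  # project to eps-ball
        adv = torch.clamp(adv, 0, 1)
    return adv.detach()

def train_step(model, optimizer, images, labels):
    # The only change versus standard training: perturb the batch first.
    adv_images = pgd_attack(model, images, labels)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```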
Analytical Tools
See tools/README.md.
Model Zoo and Baselines
Submit your models
We provide a tool, benchmarks/benchmark.py, to help users directly benchmark their models:
```
Usage:
    python benchmarks/benchmark.py [OPTIONS...]

OPTIONS:
    --model      [ARCH in timm]
    --data_dir   [PATH of the benchmark datasets]
    --ckpt_path  [URL or PATH of the model weights]
```
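For orientation, the sketch below shows the kind of logic such a benchmarking script typically needs: building the architecture by its timm name and loading weights from either a local path or a URL. It is a hedged illustration of the options above, not the actual contents of benchmarks/benchmark.py.

```python
# Hedged sketch of how --model and --ckpt_path could be consumed;
# not the actual benchmarks/benchmark.py.
import timm
import torch

def load_model(arch: str, ckpt_path: str):
    model = timm.create_model(arch, pretrained=False)
    if ckpt_path.startswith(('http://', 'https://')):
        state_dict = torch.hub.load_state_dict_from_url(ckpt_path, map_location='cpu')
    else:
        state_dict = torch.load(ckpt_path, map_location='cpu')
    # Some checkpoints wrap the weights, e.g. under a 'state_dict' key.
    if isinstance(state_dict, dict) and 'state_dict' in state_dict:
        state_dict = state_dict['state_dict']
    model.load_state_dict(state_dict)
    return model.eval()

# Example call with hypothetical arguments:
# model = load_model('swin_base_patch4_window7_224', '/path/to/checkpoint.pth')
```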
If you would like to submit your model to our benchmarks, prepare a Python script similar to benchmarks/benchmark.py and a weights file xxx.pth, and zip all the files. Then open an issue with the "Submit Model" template and provide a JSON storing the submission information. Below is a submission template for the adversarial robustness benchmark of image classification:
```
## Submit Json Information
{"date": "19/06/2017",
 "extra_data": "no",
 "model": "<b>Adversarial Training</b>",
 "institution": "MIT",
 "paper_link": "https://arxiv.org/abs/1706.06083",
 "code_link": "",
 "architecture": "swin-b",
 "training framework": "easyrobust (v1)",
 "ImageNet-val": 75.05,
 "autoattack": 47.42,
 "files": "<a href=http://alisec-competition.oss-cn-shanghai.aliyuncs.com/xiaofeng/imagenet_pretrained_models/advtrain_models/advtrain_swin_base_patch4_window7_224_ep4.pth >download</a>",
 "advrob_imgcls_leaderboard": true,
 "oodrob_imgcls_leaderboard": false,
 "advrob_objdet_leaderboard": false,
 "oodrob_objdet_leaderboard": false}
```
We will check the result and add it to the benchmark if there is no problem. For submission templates of other benchmarks, check submit-model.md.
Below are the model zoo and benchmarks of EasyRobust. All results are produced by benchmarks/adv_robust_bench.sh and benchmarks/non_adv_robust_bench.sh.
Adversarial Robustness Benchmark (sorted by AutoAttack)
Non-Adversarial Robustness Benchmark (sorted by ImageNet-C)
Training Framework | Method | Model | Files | ImageNet-Val | ImageNet-V2 | ImageNet-C (mCE↓) | ImageNet-R | ImageNet-A | ImageNet-Sketch | Stylized-ImageNet | ObjectNet |
---|---|---|---|---|---|---|---|---|---|---|---|
EasyRobust (Ours) | DAT | ViT-B/16 | ckpt/args/logs | 81.38% | 69.99% | 45.59 | 49.64% | 24.61% | 36.46% | 24.84% | 20.12% |
EasyRobust (Ours) | - | RVT-S* | ckpt/args/logs | 82.10% | 71.40% | 48.22 | 47.84% | 26.93% | 35.34% | 20.71% | 23.24% |
Official | - | RVT-S* | ckpt | 81.82% | 71.05% | 49.42 | 47.33% | 26.53% | 34.22% | 20.48% | 23.11% |
EasyRobust (Ours) | - | DrViT-S | ckpt/args/logs | 80.66% | 69.62% | 49.96 | 43.68% | 20.79% | 31.13% | 17.89% | 20.50% |
- | - | DrViT-S | - | 77.03% | 64.49% | 56.89 | 39.02% | 11.85% | 28.78% | 14.22% | 26.49% |
Official | PRIME | ResNet50 | ckpt | 76.91% | 65.42% | 57.49 | 42.20% | 2.21% | 29.82% | 13.94% | 16.59% |
EasyRobust (Ours) | PRIME | ResNet50 | ckpt/args/logs | 76.64% | 64.37% | 57.62 | 41.95% | 2.07% | 29.63% | 13.56% | 16.28% |
EasyRobust (Ours) | DeepAugment | ResNet50 | ckpt/args/logs | 76.58% | 64.77% | 60.27 | 42.80% | 3.62% | 29.65% | 14.88% | 16.88% |
Official | DeepAugment | ResNet50 | ckpt | 76.66% | 65.24% | 60.37 | 42.17% | 3.46% | 29.50% | 14.68% | 17.13% |
EasyRobust (Ours) | Augmix | ResNet50 | ckpt/args/logs | 77.81% | 65.60% | 64.14 | 43.34% | 4.04% | 29.81% | 12.33% | 17.21% |
EasyRobust (Ours) | APR | ResNet50 | ckpt/args/logs | 76.28% | 64.78% | 64.89 | 42.17% | 4.18% | 28.90% | 13.03% | 16.78% |
Official | Augmix | ResNet50 | ckpt | 77.54% | 65.42% | 65.27 | 41.04% | 3.78% | 28.48% | 11.24% | 17.54% |
Official | APR | ResNet50 | ckpt | 75.61% | 64.24% | 65.56 | 41.35% | 3.20% | 28.37% | 13.01% | 16.61% |
Official | S&T Debiased | ResNet50 | ckpt | 76.91% | 65.04% | 67.55 | 40.81% | 3.50% | 28.41% | 17.40% | 17.38% |
EasyRobust (Ours) | SIN+IN | ResNet50 | ckpt/args/logs | 75.46% | 63.50% | 67.73 | 42.34% | 2.47% | 31.39% | 59.37% | 16.17% |
Official | SIN+IN | ResNet50 | ckpt | 74.59% | 62.43% | 69.32 | 41.45% | 1.95% | 29.69% | 57.38% | 15.93% |
Non-Official | AdvProp | ResNet50 | ckpt | 77.04% | 65.27% | 70.81 | 40.13% | 3.45% | 25.95% | 10.01% | 18.23% |
EasyRobust (Ours) | S&T Debiased | ResNet50 | ckpt/args/logs | 77.21% | 65.10% | 70.98 | 38.59% | 3.28% | 26.09% | 14.59% | 16.99% |
EasyRobust (Ours) | AdvProp | ResNet50 | ckpt/args/logs | 76.64% | 64.35% | 77.64 | 37.43% | 2.83% | 24.71% | 7.33% | 16.82% |
Credits
EasyRobust builds on excellent prior work by many different authors. We'd like to thank, in particular, the following implementations, which have helped us in our development:
- timm @rwightman and the training script.
- robustness @MadryLab and autoattack @fra31 for attack implementation.
- modelvshuman @bethgelab for model analysis.
- AdaIN @naoto0804 for style transfer and VQGAN @CompVis for image discretization.
- All the authors and implementations of the robustness research we refer to in this library.
Citing EasyRobust
If EasyRobust is helpful for your research, please consider citing it with the following BibTeX entry:

```bibtex
@misc{mao2022easyrobust,
  author = {Xiaofeng Mao and Yuefeng Chen and Xiaodan Li and Gege Qi and Ranjie Duan and Rong Zhang and Hui Xue},
  title = {EasyRobust: A Comprehensive and Easy-to-use Toolkit for Robust Computer Vision},
  howpublished = {\url{https://github.com/alibaba/easyrobust}},
  year = {2022}
}
```