Adversarial Vertex Mixup

Code for the paper "Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization" (https://arxiv.org/abs/2003.02484), presented at CVPR 2020 (oral presentation).

This repository is forked from https://github.com/MadryLab/cifar10_challenge and adds AVmixup on top of PGD-based adversarial training (https://arxiv.org/abs/1706.06083) to improve robust generalization.

The following parts were modified for the AVmixup implementation:

  1. Some configurations for AVmixup have been added to config.json.
  2. The label encoding has been changed from integer to one-hot encoding (needed for label smoothing).
  3. The AVmixup function has been implemented in pgd_attack.py (see the sketch below).
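
For readers unfamiliar with AVmixup, here is a minimal NumPy sketch of the idea: scale the PGD perturbation by a factor gamma to form an "adversarial vertex", then train on random interpolations between the clean input and that vertex, with differently label-smoothed targets at the two endpoints. The function names and the default values of gamma, lambda1, and lambda2 below are illustrative assumptions; the repository's actual implementation is the AVmixup function in pgd_attack.py, with its settings taken from config.json.

```python
import numpy as np

def label_smoothing(y_one_hot, factor):
    # Keep `factor` probability mass on the true class, spread the rest uniformly.
    num_classes = y_one_hot.shape[-1]
    return y_one_hot * factor + (1.0 - y_one_hot) * (1.0 - factor) / (num_classes - 1)

def avmixup(x, delta, y_one_hot, gamma=2.0, lambda1=1.0, lambda2=0.1):
    # x: clean batch, delta: adversarial perturbation found by PGD,
    # y_one_hot: one-hot labels (hence the switch away from integer labels).
    adv_vertex = x + gamma * delta                  # scale the perturbation to the adversarial vertex
    alpha = np.random.uniform(size=(x.shape[0],) + (1,) * (x.ndim - 1))
    x_mix = alpha * x + (1.0 - alpha) * adv_vertex  # interpolate the input toward the vertex
    alpha_y = alpha.reshape(x.shape[0], 1)
    y_mix = (alpha_y * label_smoothing(y_one_hot, lambda1)
             + (1.0 - alpha_y) * label_smoothing(y_one_hot, lambda2))
    return x_mix, y_mix
```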

For reference, we leave the description of the original repository below.

Pre-trained AVmixup models

PGD

https://www.dropbox.com/s/uh0gdr44rluvpnz/AVmixup_model.tar.gz?dl=0

FeatureScatter

https://www.dropbox.com/s/blurxqpnkr89wtb/checkpoint-198?dl=0

CIFAR10 Adversarial Examples Challenge

Recently, there has been much progress on adversarial attacks against neural networks, such as the cleverhans library and the code by Carlini and Wagner. We now complement these advances by proposing an attack challenge for the CIFAR10 dataset which follows the format of our earlier MNIST challenge. We have trained a robust network, and the objective is to find a set of adversarial examples on which this network achieves only a low accuracy. To train an adversarially-robust network, we followed the approach from our recent paper:

Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
https://arxiv.org/abs/1706.06083

As part of the challenge, we release both the training code and the network architecture, but keep the network weights secret. We invite any researcher to submit attacks against our model (see the detailed instructions below). We will maintain a leaderboard of the best attacks for the next two months and then publish our secret network weights.

Analogously to our MNIST challenge, the goal of this challenge is to clarify the state-of-the-art for adversarial robustness on CIFAR10. Moreover, we hope that future work on defense mechanisms will adopt a similar challenge format in order to improve reproducibility and empirical comparisons.

Update 2017-12-10: We released our secret model. You can download it by running python fetch_model.py secret. As of Dec 10 we are no longer accepting black-box challenge submissions. We have set up a leaderboard for white-box attacks on the (now released) secret model. The submission format is the same as before. We plan to continue evaluating submissions and maintaining the leaderboard for the foreseeable future.

Black-Box Leaderboard (Original Challenge)

| Attack | Submitted by | Accuracy | Submission Date |
| --- | --- | --- | --- |
| PGD on the cross-entropy loss for the adversarially trained public network | (initial entry) | 63.39% | Jul 12, 2017 |
| PGD on the CW loss for the adversarially trained public network | (initial entry) | 64.38% | Jul 12, 2017 |
| FGSM on the CW loss for the adversarially trained public network | (initial entry) | 67.25% | Jul 12, 2017 |
| FGSM on the CW loss for the naturally trained public network | (initial entry) | 85.23% | Jul 12, 2017 |

White-Box Leaderboard

| Attack | Submitted by | Accuracy | Submission Date |
| --- | --- | --- | --- |
| PGD attack with Output Diversified Initialization | Yusuke Tashiro | 43.99% | Feb 15, 2020 |
| MultiTargeted | Sven Gowal | 44.03% | Aug 28, 2019 |
| FAB: Fast Adaptive Boundary Attack | Francesco Croce | 44.51% | Jun 7, 2019 |
| Distributionally Adversarial Attack | Tianhang Zheng | 44.71% | Aug 21, 2018 |
| 20-step PGD on the cross-entropy loss with 10 random restarts | Tianhang Zheng | 45.21% | Aug 24, 2018 |
| 20-step PGD on the cross-entropy loss | (initial entry) | 47.04% | Dec 10, 2017 |
| 20-step PGD on the CW loss | (initial entry) | 47.76% | Dec 10, 2017 |
| FGSM on the CW loss | (initial entry) | 54.92% | Dec 10, 2017 |
| FGSM on the cross-entropy loss | (initial entry) | 55.55% | Dec 10, 2017 |

Format and Rules

The objective of the challenge is to find black-box (transfer) attacks that are effective against our CIFAR10 model. Attacks are allowed to perturb each pixel of the input image by at most epsilon=8.0 on a 0-255 pixel scale. To ensure that the attacks are indeed black-box, we release our training code and model architecture, but keep the actual network weights secret.

We invite any interested researchers to submit attacks against our model. The most successful attacks will be listed in the leaderboard above. As a reference point, we have seeded the leaderboard with the results of some standard attacks.

The CIFAR10 Model

We used the code published in this repository to produce an adversarially robust model for CIFAR10 classification. The model is a residual convolutional neural network consisting of five residual units and a fully connected layer. This architecture is derived from the "w32-10 wide" variant of the TensorFlow model repository. The network was trained against an iterative adversary that is allowed to perturb each pixel by at most epsilon=8.0.

The random seed used for training and the trained network weights will be kept secret.

The sha256() digest of our model file is:

555be6e892372599380c9da5d5f9802f9cbd098be8a47d24d96937a002305fd4
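
To check a downloaded model archive against this digest, a standard-library snippet along the following lines works; the file name used below is a placeholder, not the actual name of the released archive.

```python
import hashlib

def sha256_digest(path, chunk_size=1 << 20):
    # Stream the file in 1 MiB chunks so large archives need not fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

EXPECTED = "555be6e892372599380c9da5d5f9802f9cbd098be8a47d24d96937a002305fd4"
print(sha256_digest("model.tar.gz") == EXPECTED)  # "model.tar.gz" is a placeholder path
```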

We will release the corresponding model file on September 15, 2017, which is roughly two months after the start of this competition. Edit: we are extending the deadline for submitting attacks to October 15 due to requests.

The Attack Model

We are interested in adversarial inputs that are derived from the CIFAR10 test set. Each pixel can be perturbed by at most epsilon=8.0 from its initial value on the 0-255 pixel scale. All pixels can be perturbed independently, so this is an l_infinity attack.
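
As a concrete illustration of this constraint (not part of the challenge code), a candidate adversarial batch can be projected back into the allowed set with two clip operations:

```python
import numpy as np

def project_linf(x_adv, x_clean, epsilon=8.0):
    # Keep every pixel within epsilon of the clean image (the l_infinity ball) ...
    x_adv = np.clip(x_adv, x_clean - epsilon, x_clean + epsilon)
    # ... and inside the valid 0-255 pixel range.
    return np.clip(x_adv, 0.0, 255.0)
```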

Submitting an Attack

Each attack should consist of a perturbed version of the CIFAR10 test set. Each perturbed image in this test set should follow the above attack model.

The adversarial test set should be formatted as a numpy array with one row per example, each row containing a 32x32x3 array of pixels. Hence the overall dimensions are 10,000x32x32x3. Each pixel must be in the [0, 255] range. See the script pgd_attack.py for an attack that generates an adversarial test set in this format.

In order to submit your attack, save the matrix containing your adversarial examples with numpy.save and email the resulting file to cifar10.challenge@gmail.com. We will then run the run_attack.py script on your file to verify that the attack is valid and to evaluate the accuracy of our secret model on your examples. After that, we will reply with the predictions of our model on each of your examples and the overall accuracy of our model on your evaluation set.
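
Before emailing a file, it can be worth checking locally that it satisfies the format and the attack model. A small sketch along these lines (the array names and the output file name are placeholders):

```python
import numpy as np

def save_submission(x_adv, x_test, epsilon=8.0, path="attack.npy"):
    # x_adv: candidate adversarial test set, x_test: original CIFAR10 test images,
    # both as float arrays on the 0-255 scale.
    assert x_adv.shape == (10000, 32, 32, 3), "one row per CIFAR10 test example"
    assert x_adv.min() >= 0 and x_adv.max() <= 255, "pixels must lie in [0, 255]"
    assert np.abs(x_adv - x_test).max() <= epsilon, "perturbation exceeds epsilon"
    np.save(path, x_adv)  # then email the resulting file as described above
```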

If the attack is valid and outperforms all current attacks in the leaderboard, it will appear at the top of the leaderboard. Novel types of attacks might be included in the leaderboard even if they do not perform best.

We strongly encourage you to disclose your attack method. We would be happy to add a link to your code in our leaderboard.

Overview of the Code

The code consists of seven Python scripts and the file config.json, which contains various parameter settings.

Running the code

Parameters in config.json

Model configuration:

Training configuration:

Evaluation configuration:

Adversarial examples configuration:

Example usage

After cloning the repository you can either train a new network or evaluate/attack one of our pre-trained networks.

Training a new network

python train.py
python eval.py

Download a pre-trained network

python fetch_model.py adv_trained

and use the config.json file to set "model_dir": "models/adv_trained".

python fetch_model.py natural

and use the config.json file to set "model_dir": "models/naturally_trained".
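
If you prefer not to edit config.json by hand, a small helper like the following (an illustrative convenience, not part of the repository) switches the model directory:

```python
import json

def set_model_dir(path, config_path="config.json"):
    # Load the existing configuration, change only "model_dir", and write it back.
    with open(config_path) as f:
        config = json.load(f)
    config["model_dir"] = path
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)

set_model_dir("models/adv_trained")         # adversarially trained model
# set_model_dir("models/naturally_trained") # naturally trained model
```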

Test the network

python pgd_attack.py
python run_attack.py