Non-Targeted-Adversarial-Attacks

Introduction

This repository contains the code for the 1st-place submission to the NIPS 2017: Non-targeted Adversarial Attacks Competition.

Method

We propose a momentum iterative method to generate more transferable adversarial examples. The algorithm is described in our paper Boosting Adversarial Attacks with Momentum (CVPR 2018, Spotlight).

The update rule of the momentum iterative method is:

g_{t+1} = \mu \cdot g_t + \frac{\nabla_x J(x_t^*, y)}{\lVert \nabla_x J(x_t^*, y) \rVert_1}

x_{t+1}^* = x_t^* + \alpha \cdot \mathrm{sign}(g_{t+1})
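For concreteness, here is a minimal NumPy sketch of the attack loop implied by this update rule. It is illustrative only: loss_grad is a hypothetical callable returning the gradient of the loss J with respect to the input, images are assumed to lie in [0, 1], and the perturbation is kept inside an L_inf ball of radius eps.

import numpy as np

def mi_fgsm(x, y, loss_grad, eps=16.0 / 255, num_iter=10, mu=1.0):
    """Sketch of the momentum iterative method (MI-FGSM) for a single image.

    x         : clean input image, float numpy array in [0, 1]
    y         : true label, passed through to loss_grad
    loss_grad : hypothetical callable returning d loss / d x at (x, y)
    eps       : L_inf perturbation budget
    mu        : momentum decay factor
    """
    alpha = eps / num_iter           # step size alpha
    g = np.zeros_like(x)             # accumulated gradient g_t
    x_adv = x.copy()                 # adversarial example x_t^*
    for _ in range(num_iter):
        grad = loss_grad(x_adv, y)
        # normalize the current gradient by its L1 norm before accumulating
        g = mu * g + grad / np.maximum(np.sum(np.abs(grad)), 1e-12)
        # take a sign step, then project back onto the eps-ball and valid range
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv

Setting mu = 0 recovers the plain iterative gradient sign method; the paper reports that a decay factor of 1.0 works well.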

Citation

If you use the momentum iterative method for attacks in your research, please consider citing:

@inproceedings{dong2018boosting,
  title={Boosting Adversarial Attacks with Momentum},
  author={Dong, Yinpeng and Liao, Fangzhou and Pang, Tianyu and Su, Hang and Zhu, Jun and Hu, Xiaolin and Li, Jianguo},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2018}
}

Implementation

Models

We use an ensemble of eight models in our submission, many of which are adversarially trained. The models can be downloaded here.

If you want to attack other models, you can replace the model-definition part with your own models (see the ensemble sketch below).
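As a rough sketch of how an ensemble can be plugged in, the paper's ensemble-in-logits strategy averages the logits of the individual models before computing the loss. The names below (model_fns, the uniform weights) are illustrative placeholders, not the exact code of this repository.

import numpy as np

def ensemble_logits(x, model_fns, weights=None):
    """Fuse an ensemble by taking a weighted average of the models' logits.

    model_fns : list of callables, each mapping an input batch to logits
    weights   : optional per-model weights (defaults to a uniform average)
    """
    if weights is None:
        weights = [1.0 / len(model_fns)] * len(model_fns)
    fused = [w * fn(x) for w, fn in zip(weights, model_fns)]
    return np.sum(fused, axis=0)

The attack loss (e.g., cross-entropy) and its gradient are then computed on the fused logits, so the momentum update above treats the whole ensemble as a single model.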

Dataset

We use a subset of the ImageNet validation set containing 1,000 images, most of which are correctly classified by those models. Our dataset can be downloaded at http://ml.cs.tsinghua.edu.cn/~yinpeng/adversarial/dataset.zip. Alternatively, you can use the official NIPS 2017 competition dataset.
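A minimal loading sketch, assuming the dataset is a directory of 299x299 PNG images and that the models expect Inception-style inputs scaled to [-1, 1] (both are assumptions about the packaging, not documented guarantees):

import os
import numpy as np
from PIL import Image

def load_images(input_dir, batch_shape=(16, 299, 299, 3)):
    """Yield (filenames, batch) pairs with pixels scaled to [-1, 1]."""
    images = np.zeros(batch_shape, dtype=np.float32)
    filenames = []
    idx = 0
    for fname in sorted(os.listdir(input_dir)):
        if not fname.lower().endswith('.png'):
            continue
        img = Image.open(os.path.join(input_dir, fname)).convert('RGB')
        img = img.resize(batch_shape[1:3])
        # map [0, 255] -> [-1, 1]
        images[idx] = np.asarray(img, dtype=np.float32) / 255.0 * 2.0 - 1.0
        filenames.append(fname)
        idx += 1
        if idx == batch_shape[0]:
            yield filenames, images
            filenames, idx = [], 0
            images = np.zeros(batch_shape, dtype=np.float32)
    if idx > 0:
        yield filenames, images[:idx]

Returning the filenames makes it easy to save each adversarial image under the same name, as the competition format requires.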

Cleverhans

We have also implemented this method in CleverHans; a usage sketch is shown below.
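A hedged usage sketch, assuming the CleverHans v2/v3-style API in which the attack is exposed as MomentumIterativeMethod; model (a cleverhans.model.Model wrapper around your network) and x (an input tensor in [0, 1]) are placeholders you would supply, and the parameter values are illustrative.

import tensorflow as tf
from cleverhans.attacks import MomentumIterativeMethod

# `model` is assumed to be a cleverhans.model.Model instance wrapping your
# network, and `x` a placeholder/tensor of input images in [0, 1].
with tf.Session() as sess:
    attack = MomentumIterativeMethod(model, sess=sess)
    adv_x = attack.generate(x,
                            eps=16.0 / 255,      # L_inf budget
                            eps_iter=1.6 / 255,  # per-step size alpha
                            nb_iter=10,          # number of iterations
                            decay_factor=1.0,    # momentum mu
                            clip_min=0.0,
                            clip_max=1.0)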

Targeted Attacks

Please find the targeted attacks at https://github.com/dongyp13/Targeted-Adversarial-Attack.