
Delving into Transferable Adversarial Examples and Black-box Attacks

This repo provides the code to replicate the experiments in the paper. It is still under development, so ensemble models and other experiments will be added later.

Yanpei Liu, Xinyun Chen, Chang Liu, Dawn Song, <cite> Delving into Transferable Adversarial Examples and Black-box Attacks </cite>, in Proceedings of 5th International Conference on Learning Representations (ICLR 2017)

Paper [arXiv] [OpenReview]

Datasets

ILSVRC12

You can get the dataset by running:

cd scripts
bash retrieve_data.sh

Alternatively, download the validation dataset from the official [ImageNet] website and place it in the data/test_data folder.

The image_label_target.csv file under the data folder lists the images and their target labels used in the paper.

Usage

Model architectures

The code currently only supports GoogleNet; more models will be added in later updates.

Run experiments

The following example commands illustrate the important arguments to the Python scripts:

You can run the FG/FGS (fast gradient / fast gradient sign) method with the following command:

python FG_and_FGS.py -i test -o output/GoogleNet --model GoogleNet --file_list test/test_file_list.txt
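The core of the FGS method is a single gradient step. The sketch below is only illustrative: it applies an FGS step to a toy logistic-regression model rather than to GoogleNet, and the weights, inputs, and epsilon are made-up assumptions, not values from this repo.

```python
import numpy as np

def fgs_perturb(x, grad, eps):
    """Fast gradient sign step: move each pixel by eps in the direction of
    the sign of the loss gradient, then clip back to the valid range [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def loss_grad(x, w, y):
    """Gradient of the logistic loss w.r.t. the input x, for a toy linear
    model p = sigmoid(w.x) and label y in {0, 1} (illustrative stand-in)."""
    p = 1.0 / (1.0 + np.exp(-w.dot(x)))
    return (p - y) * w

rng = np.random.default_rng(0)
w = rng.normal(size=8)
x = rng.uniform(size=8)

x_adv = fgs_perturb(x, loss_grad(x, w, y=1.0), eps=0.1)
# The perturbation is bounded by eps in the L-infinity norm.
assert np.max(np.abs(x_adv - x)) <= 0.1 + 1e-9
```

The FG variant differs only in using the (L2-normalized) gradient itself instead of its sign.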

You can also run the optimization-based method with the following command:

python Optimization.py -i test -o output/GoogleNet --model GoogleNet --file_list test/test_file_list.txt
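The optimization-based method iteratively minimizes a weighted sum of a distance penalty to the original image and the classification loss toward a target label. The sketch below uses a toy logistic-regression model in place of GoogleNet; the model, step size, penalty weight, and step count are all illustrative assumptions, not the repo's actual settings.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def optimize_attack(x, w, target, lam=0.1, lr=0.2, steps=200):
    """Gradient descent on  lam * ||x' - x||^2 + loss(x', target),
    keeping x' in the valid input range [0, 1] (toy sketch)."""
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(w.dot(x_adv))
        grad_loss = (p - target) * w        # logistic loss toward target
        grad_dist = 2.0 * lam * (x_adv - x)  # stay close to the original
        x_adv = np.clip(x_adv - lr * (grad_loss + grad_dist), 0.0, 1.0)
    return x_adv

w = np.array([2.0, -1.0, 1.0, 0.5])
x = np.array([0.8, 0.2, 0.9, 0.5])   # originally classified as 1
x_adv = optimize_attack(x, w, target=0.0)
```

Unlike the single-step FG/FGS method, this produces smaller perturbations at the cost of many gradient evaluations.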

Citation

If you use the code in this repo, please cite the following paper:

@inproceedings{liu2017delving,
  author    = {Yanpei Liu and
               Xinyun Chen and
               Chang Liu and
               Dawn Song},
  title     = {Delving into Transferable Adversarial Examples and Black-box Attacks},
  year      = {2017},
  booktitle = {Proceedings of the 5th International Conference on Learning Representations (ICLR)},
}