Variance Tuning

This repository contains code to reproduce results from the paper:

Enhancing the Transferability of Adversarial Attacks through Variance Tuning (CVPR 2021)

Xiaosen Wang, Kun He

A PyTorch version of the code is also included in the TransferAttack framework.

Requirements

Quick Start

Prepare the data and models

Download the data and the pretrained models, and place them in dev_data/ and models/, respectively.

Variance Tuning Attack

All the provided scripts generate adversarial examples against the inception_v3 model. To attack other models, replace the model in the graph and batch_grad functions and load those models in the main function.
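The core variance-tuning update from the paper (VMI-FGSM) can be sketched as follows. This is a minimal NumPy illustration, not this repository's TensorFlow implementation: grad is a hypothetical stand-in for the model's loss gradient, and the names mu, beta, and n_samples follow the paper's notation rather than this repo's flags.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(x):
    # Hypothetical stand-in for the loss gradient; the real code
    # backpropagates the network's cross-entropy loss instead.
    return 2.0 * x

def vmi_fgsm(x, eps=0.3, steps=10, mu=1.0, beta=1.5, n_samples=20):
    """Sketch of the variance-tuned MI-FGSM update loop."""
    alpha = eps / steps          # per-iteration step size
    g = np.zeros_like(x)         # accumulated momentum
    v = np.zeros_like(x)         # variance term from the last iteration
    x_adv = x.copy()
    for _ in range(steps):
        cur = grad(x_adv)
        # Momentum update: current gradient plus the variance tuned
        # in the previous iteration, normalized by its L1 mean.
        g = mu * g + (cur + v) / (np.abs(cur + v).mean() + 1e-12)
        # Variance tuning: average gradient over random points in a
        # beta*eps neighborhood of x_adv, minus the current gradient.
        neighbor = np.zeros_like(x)
        for _ in range(n_samples):
            r = rng.uniform(-beta * eps, beta * eps, size=x.shape)
            neighbor += grad(x_adv + r)
        v = neighbor / n_samples - cur
        # Sign ascent step, projected back into the eps-ball around x.
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

Averaging gradients sampled around the current point, rather than using the point gradient alone, is what stabilizes the update direction and improves transferability across models.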

Running the attack

Taking the vmi_di_ti_si_fgsm attack as an example, run it as follows (replace gpuid with the id of the GPU to use):

CUDA_VISIBLE_DEVICES=gpuid python vmi_di_ti_si_fgsm.py 

The generated adversarial examples are stored in the directory ./outputs. Then run simple_eval.py to evaluate the attack success rate on each model used in the paper:

CUDA_VISIBLE_DEVICES=gpuid python simple_eval.py
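The evaluation boils down to checking how often each target model misclassifies the generated adversarial images. A minimal sketch of that success-rate computation (the function and variable names here are illustrative, not taken from simple_eval.py):

```python
def attack_success_rate(adv_preds, true_labels):
    """Fraction of adversarial examples the model misclassifies.

    adv_preds:   predicted class ids on the adversarial images
    true_labels: ground-truth class ids for the clean images
    """
    assert len(adv_preds) == len(true_labels)
    wrong = sum(p != t for p, t in zip(adv_preds, true_labels))
    return wrong / len(true_labels)
```

For example, if a model gets one of four adversarial images wrong, the attack success rate on that model is 0.25.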

Evaluation settings for Table 4

See the third_party directory for more details.

Acknowledgments

The code is based on SI-NI-FGSM.

Contact

Questions and suggestions can be sent to xswanghuster@gmail.com.