# Cross-Domain Transferable Perturbations
<!--[Project Page](https://muzammal-naseer.github.io/Cross-domain-perturbations/)-->
PyTorch implementation of "Cross-Domain Transferability of Adversarial Perturbations" (NeurIPS 2019) (arXiv link).
## Table of Contents
- Highlights
- Usage
- Pretrained Generators
- Datasets
- Training / Evaluations
- Create Adversarial Dataset
- Citation
## Highlights
- The transferability of adversarial examples makes real-world attacks possible in black-box settings, where the attacker has no access to the model's internal parameters. We propose a framework for launching highly transferable attacks that craft adversarial patterns capable of misleading networks trained on entirely different domains. The core of the proposed attack is a generative network trained with a relativistic supervisory signal, which enables domain-invariant perturbations.
- We focus mainly on image classification, but you can use our pretrained adversarial generators to test the robustness of your model regardless of the task (image classification, segmentation, object detection, etc.).
- You don't need any task-specific setup (labels, etc.) to generate adversaries with our method. You can generate adversarial images of any size for any image dataset of your choice (see how to set up the data directory below).
## Usage
### Dependencies
- Install PyTorch.
- Install the required Python packages:

  ```bash
  pip install -r requirements.txt
  ```
- Clone the repository:

  ```bash
  git clone https://github.com/Muzammal-Naseer/Cross-domain-perturbations.git
  cd Cross-domain-perturbations
  ```
## Pretrained Generators
Download the pretrained adversarial generators from here and place them in the 'saved_models' folder.
The generators are trained against the following four models:
- ResNet152
- Inceptionv3
- VGG19
- VGG16
These models are trained on ImageNet and available in PyTorch.
## Datasets
- Training data: any image collection can be used (e.g. Paintings or ImageNet, passed via `--train_dir` in the commands below).
- Evaluation data:
  - ImageNet validation set (50k images).
  - Subset of the ImageNet validation set (5k images).
  - NeurIPS dataset (1k images).
  - You can also try your own dataset.
- The directory structure should look like this:

  ```
  Root
  ├── ClassA
  │   ├── img1
  │   ├── img2
  │   └── ...
  └── ClassB
      ├── img1
      ├── img2
      └── ...
  ```
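This is the standard class-per-folder layout. A minimal stdlib sketch (directory and file names are placeholders) for checking that a dataset follows it:

```python
import os

def scan_dataset(root):
    """Map each class folder under `root` to its sorted list of image
    files, mirroring the ClassA/ClassB layout shown above."""
    classes = {}
    for cls in sorted(os.listdir(root)):
        cls_dir = os.path.join(root, cls)
        if os.path.isdir(cls_dir):
            classes[cls] = sorted(os.listdir(cls_dir))
    return classes
```

The same layout is what torchvision's `ImageFolder` dataset expects, so a directory that passes this check can be loaded directly for training or evaluation.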
## Training
Run the following command:

```bash
python train.py --model_type res152 --train_dir paintings --eps 10
```

This will start training a generator on Paintings (`--train_dir`) against ResNet152 (`--model_type`) under perturbation budget 10 (`--eps`) with the relativistic supervisory signal.
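The `--eps` budget bounds the l-infinity norm of the perturbation in pixel space. A minimal NumPy sketch of that projection (illustrative only; the actual training loop lives in `train.py`):

```python
import numpy as np

def project_linf(x_adv, x_clean, eps):
    """Clip the adversarial image into the l-infinity ball of radius
    `eps` (pixel values in [0, 255]) around the clean image."""
    x_adv = np.clip(x_adv, x_clean - eps, x_clean + eps)
    return np.clip(x_adv, 0.0, 255.0)
```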
## Evaluations
Run the following command:

```bash
python eval.py --model_type res152 --train_dir imagenet --test_dir ../IN/val --epochs 0 --model_t vgg19 --eps 10 --measure_adv --rl
```

This will load a generator trained on ImageNet (`--train_dir`) against ResNet152 (`--model_type`) and evaluate the clean and adversarial accuracy of VGG19 (`--model_t`) under perturbation budget 10 (`--eps`).
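The clean/adversarial accuracy comparison the evaluation reports can be sketched as follows (array names are illustrative, not the script's actual variables):

```python
import numpy as np

def accuracy(preds, labels):
    """Fraction of predictions matching the ground-truth labels."""
    return float((preds == labels).mean())

def fooling_rate(clean_preds, adv_preds):
    """Fraction of inputs whose predicted label flips under attack."""
    return float((clean_preds != adv_preds).mean())
```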
## Create Adversarial Dataset
If you need to save adversaries for visualization or adversarial training, run the following command:

```bash
python generate_and_save_adv.py --model_type incv3 --train_dir paintings --test_dir 'your_data/' --eps 255
```

You should see beautiful images (unbounded adversaries) like this:
## Citation
If you find our work, this repository, or the pretrained adversarial generators useful, please consider giving a star :star: and citing our paper:
```bibtex
@article{naseer2019cross,
  title={Cross-domain transferability of adversarial perturbations},
  author={Naseer, Muhammad Muzammal and Khan, Salman H and Khan, Muhammad Haris and Shahbaz Khan, Fahad and Porikli, Fatih},
  journal={Advances in Neural Information Processing Systems},
  volume={32},
  pages={12905--12915},
  year={2019}
}
```
## Contact
Muzammal Naseer - muzammal.naseer@anu.edu.au <br/> Suggestions and questions are welcome!