Output Diversified Sampling (ODS)
This is the GitHub repository for the NeurIPS 2020 paper "Diversity can be Transferred: Output Diversification for White- and Black-box Attacks".
Requirements
Please install PyTorch and NumPy (pickle and argparse are part of the Python standard library).
Running experiments
ODS for score-based black-box attacks
The following experiments combine ODS with the Simple Black-Box Attack (SimBA).
Evaluation:
The evaluation is run on 5 sample images from ImageNet (the images are already resized and cropped).
# untargeted settings with ODS:
python blackbox_simbaODS.py --num_sample 5 --ODS
# targeted settings with ODS:
python blackbox_simbaODS.py --num_sample 5 --num_step 30000 --ODS --targeted
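For reference, the core of ODS in the score-based setting is to draw a random weight vector w over the logits of a surrogate model and to perturb the input along the gradient of w^T f(x); roughly speaking, SimBA-ODS samples such directions in place of SimBA's random basis directions. The following is a minimal sketch of that direction computation, assuming a pretrained PyTorch surrogate named surrogate_model that returns logits (the names are illustrative, not this repository's API):

import torch

# Minimal sketch of the ODS direction (illustrative, not the repository's code):
# sample random output-space weights w and follow the input gradient of w^T f(x),
# normalized to unit length, on a surrogate model.
def ods_direction(x, surrogate_model):
    x = x.clone().detach().requires_grad_(True)
    logits = surrogate_model(x)                       # shape: (batch, num_classes)
    w = torch.empty_like(logits).uniform_(-1.0, 1.0)  # random output-space weights
    (logits * w).sum().backward()
    grad = x.grad
    return grad / grad.norm()                         # normalized input-space direction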
ODS for decision-based black-box attacks
The following experiments combine ODS with the Boundary Attack.
Additional Requirements
Please install Foolbox and use Python >= 3.6.
Evaluation:
The evaluation is run on 5 sample images from ImageNet (the images are already resized and cropped).
# untargeted settings with ODS:
python blackbox_boundaryODS.py --num_sample 5 --ODS
# targeted settings with ODS:
python blackbox_boundaryODS.py --num_sample 5 --ODS --targeted
# untargeted settings with random sampling:
python blackbox_boundaryODS.py --num_sample 5
# targeted settings with random sampling:
python blackbox_boundaryODS.py --num_sample 5 --targeted
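In the decision-based setting, ODS replaces the Boundary Attack's Gaussian noise proposal: the candidate perturbation at the current adversarial point is a direction that diversifies a surrogate model's output. A hedged sketch under that assumption (surrogate_model and scale are hypothetical names, not the repository's interface):

import torch

# Rough sketch of an ODS-style proposal for the Boundary Attack (assumed names,
# not this repository's implementation): the Gaussian proposal is replaced by an
# output-diversifying direction computed on a surrogate model.
def ods_proposal(x_adv, surrogate_model, scale=0.01):
    x_adv = x_adv.clone().detach().requires_grad_(True)
    logits = surrogate_model(x_adv)
    w = torch.empty_like(logits).uniform_(-1.0, 1.0)        # random output-space weights
    grad = torch.autograd.grad((logits * w).sum(), x_adv)[0]
    return scale * grad / grad.norm()                        # candidate perturbation step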
Acknowledgement
Our code for the Boundary Attack is based on the Foolbox repository.
ODS for initialization of white-box attacks (ODI)
The following experiments combine ODI with the PGD attack.
Training the target model (adversarial training):
python whitebox_train_cifar10.py --model-dir [PATH_TO_SAVE_FOLDER] --data-dir [PATH_TO_DATA_FOLDER]
Evaluation of the PGD attack with ODI:
# Evaluate PGD attack with ODI:
python whitebox_pgd_attack_cifar10_ODI.py --ODI-num-steps 2 --model-path [PATH_TO_THE_MODEL] --data-dir [PATH_TO_DATA_FOLDER]
# Evaluate PGD attack with naive random initialization (sampled from a uniform distribution):
python whitebox_pgd_attack_cifar10_ODI.py --ODI-num-steps 0 --model-path [PATH_TO_THE_MODEL] --data-dir [PATH_TO_DATA_FOLDER]
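For context, ODI initializes PGD by taking a few sign-gradient steps that increase w^T f(x) for a randomly drawn, fixed w, so that different restarts start from diverse outputs before the usual PGD loss is optimized. Below is a minimal sketch under those assumptions (the function and parameter names are illustrative, and the ODI step size is an assumption rather than the script's default):

import torch

# Sketch of ODI initialization (illustrative, not the repository's exact code):
# take a few sign-gradient steps that increase w^T f(x) for a fixed random w,
# while staying inside the L_inf ball of radius eps around the clean input x.
def odi_init(model, x, eps, odi_steps=2, odi_step_size=None):
    if odi_step_size is None:
        odi_step_size = eps  # assumed choice; tune to match the paper and scripts
    w = torch.empty_like(model(x)).uniform_(-1.0, 1.0)  # fixed per restart
    x_adv = x.clone().detach()
    for _ in range(odi_steps):
        x_adv.requires_grad_(True)
        loss = (model(x_adv) * w).sum()
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + odi_step_size * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0).detach()
    return x_adv  # starting point handed to standard PGD

Passing --ODI-num-steps 0, as in the second command above, skips these steps and falls back to plain uniform random initialization.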
Acknowledgement
Our code for white-box attacks is based on the official TRADES repository.
Citation
If you use this code for your research, please cite our paper:
@inproceedings{tashiro2020ods,
  title={Diversity can be Transferred: Output Diversification for White- and Black-box Attacks},
  author={Tashiro, Yusuke and Song, Yang and Ermon, Stefano},
  booktitle={Advances in Neural Information Processing Systems},
  year={2020}
}