Sparsity Winning Twice: Better Robust Generalization from More Efficient Training

License: MIT

Code for the paper: [ICLR 2022] Sparsity Winning Twice: Better Robust Generalization from More Efficient Training

Tianlong Chen*, Zhenyu Zhang*, Pengjun Wang*, Santosh Balachandra*, Haoyu Ma*, Zehao Wang, Zhangyang Wang

Overview

In this paper, we investigate the problem of robust generalization from a new perspective, i.e., injecting appropriate forms of sparsity during adversarial training. Specifically, we propose two alternatives: (i) Robust Bird (RB), a static sparsity approach that identifies a sparse subnetwork early in training and then adversarially trains only that subnetwork; and (ii) Flying Bird (FB), a dynamic sparsity approach that periodically updates the sparse connectivity during adversarial training, together with its extension Flying Bird+ (FB+), which additionally adapts the sparsity level on the fly.

Extensive experiments validate our proposals with multiple network architectures on diverse datasets, including CIFAR-10/100 and Tiny-ImageNet.

Experimental Results

Performance of our proposed approaches across different datasets with ResNet-18; subnetworks at 80% sparsity are reported here.

Prerequisites
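
The code is implemented in PyTorch; we assume a recent installation of pytorch and torchvision suffices, as the original version list is not given here.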

Usage

Dense:

python -u main_adv.py \
	--data [data/cifar10] \
	--dataset cifar10 \
	--arch resnet18 \
	--save_dir [experiment/dense]
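
For orientation, below is a minimal sketch of the L-infinity PGD adversarial training step that a driver script such as main_adv.py performs. The function names are illustrative, and the hyperparameters (eps=8/255, alpha=2/255, 10 steps) are common CIFAR-10 defaults assumed here, not necessarily the repository's exact settings.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
	# Craft L-inf adversarial examples with a random start.
	x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
	for _ in range(steps):
		x_adv.requires_grad_(True)
		loss = F.cross_entropy(model(x_adv), y)
		grad = torch.autograd.grad(loss, x_adv)[0]
		x_adv = x_adv.detach() + alpha * grad.sign()           # gradient ascent step
		x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project into the eps-ball
		x_adv = x_adv.clamp(0, 1)                              # stay in valid pixel range
	return x_adv.detach()

def adv_train_step(model, optimizer, x, y):
	# Standard adversarial training: minimize the loss on PGD examples.
	x_adv = pgd_attack(model, x, y)
	optimizer.zero_grad()
	loss = F.cross_entropy(model(x_adv), y)
	loss.backward()
	optimizer.step()
	return loss.item()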

Robust Bird:

cd Robust-Bird
python train_eb.py \
	--stage1 sgd \
	--epochs1 200 \
	--stage2 pgd \
	--epochs2 200 \
	--arch resnet18 \
	--data data/cifar10 \
	--pruning unstructured \
	--density 0.2 \
	--save_dir [experiment/rb]
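
Conceptually, stage 1 trains cheaply with standard SGD while monitoring how the magnitude-pruning mask evolves; once the mask stabilizes, the robust ticket is drawn and stage 2 adversarially trains the resulting sparse subnetwork. The following is a minimal sketch of such a mask-stabilization check, with all names assumed for illustration rather than taken from train_eb.py:

import torch

def global_magnitude_mask(model, density=0.2):
	# Binary masks keeping the top-`density` fraction of weights by magnitude.
	weights = {n: p.detach().abs() for n, p in model.named_parameters() if 'weight' in n}
	scores = torch.cat([w.flatten() for w in weights.values()])
	k = max(1, int(density * scores.numel()))
	threshold = torch.topk(scores, k).values.min()  # k-th largest magnitude
	return {n: (w >= threshold).float() for n, w in weights.items()}

def mask_distance(m1, m2):
	# Normalized Hamming distance between two pruning masks.
	changed = sum((m1[n] != m2[n]).sum().item() for n in m1)
	total = sum(m.numel() for m in m1.values())
	return changed / total

# Inside the stage-1 loop: stop early once consecutive epochs' masks barely
# change (e.g., mask_distance below a small threshold), apply the mask, and
# switch to PGD adversarial training (stage 2).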

Flying Bird:

python -u main_adv.py \
	--data [data/cifar10] \
	--dataset cifar10 \
	--arch resnet18 \
	--save_dir [experiment/fb] \
	--density 0.2 \
	--dynamic_fre \
	--fb
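
Under the hood, dynamic sparse training of this kind periodically drops a fraction of the smallest-magnitude active weights and grows the same number of currently inactive connections, keeping the overall density fixed. Below is a sketch of one such prune-and-grow update; the growth-by-gradient criterion and all names are illustrative assumptions, not the repository's exact code.

import torch

@torch.no_grad()
def prune_and_grow(weight, grad, mask, update_frac=0.3):
	# One connectivity update: drop low-magnitude active weights and grow
	# inactive ones with large gradient magnitude; density stays constant.
	active = mask.bool()
	n_update = int(update_frac * active.sum().item())
	# Prune: among active weights, find the smallest magnitudes.
	mag = torch.where(active, weight.abs(), torch.full_like(weight, float('inf')))
	drop = torch.topk(mag.flatten(), n_update, largest=False).indices
	# Grow: among inactive weights, find the largest gradient magnitudes.
	g = torch.where(active, torch.zeros_like(grad), grad.abs())
	grow = torch.topk(g.flatten(), n_update, largest=True).indices
	mask.view(-1)[drop] = 0.0
	mask.view(-1)[grow] = 1.0
	weight.view(-1)[grow] = 0.0  # newly grown connections start from zero
	weight.mul_(mask)            # zero out the pruned weights
	return mask

In the full method, these connectivity updates are interleaved with the adversarial training steps sketched above.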

Flying Bird+:

python -u main_adv.py \
	--data [data/cifar10] \
	--dataset cifar10 \
	--arch resnet18 \
	--save_dir [experiment/fb+] \
	--density 0.2 \
	--dynamic_fre \
	--fbp
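
Compared with Flying Bird, Flying Bird+ additionally adapts the sparsity level itself during adversarial training rather than keeping it fixed.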

Citation

@inproceedings{chen2022sparsity,
	title={Sparsity Winning Twice: Better Robust Generalization from More Efficient Training},
	author={Tianlong Chen and Zhenyu Zhang and Pengjun Wang and Santosh Balachandra and Haoyu Ma and Zehao Wang and Zhangyang Wang},
	booktitle={International Conference on Learning Representations},
	year={2022},
	url={https://openreview.net/forum?id=SYuJXrXq8tw}
}