VaB
This is a PyTorch implementation of our paper "The Victim and The Beneficiary: Exploiting a Poisoned Model to Train a Clean Model on Poisoned Data" (ICCV 2023, Oral, Best Paper Candidate).
Setup
Environments
Please install the required packages listed in requirement.txt.
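A minimal setup sketch, assuming a pip-based workflow; the virtual-environment name below is a placeholder, and the actual dependencies are pinned in requirement.txt:

```bash
# Create and activate an isolated environment (optional), then install the
# dependencies from requirement.txt.
python -m venv vab-env          # "vab-env" is a placeholder name
source vab-env/bin/activate
pip install -r requirement.txt
```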
Datasets
Download the corresponding datasets and extract them into the 'dataset' directory (see the sketch after this list).
- Original CIFAR-10 will be downloaded automatically during training. You can download the modified data from Google Drive to implement the "CL" and "Dynamic" attacks.
- The benign and poisoned ImageNet subset can be downloaded from Google Drive.
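A sketch of preparing the 'dataset' directory, assuming the Google Drive downloads are zip archives; the archive names below are placeholders for whatever files you actually download:

```bash
# Extract the downloaded archives into the 'dataset' directory expected by
# the training scripts.
mkdir -p dataset
unzip modified_cifar10_data.zip -d dataset/   # placeholder name: CL / Dynamic data
unzip imagenet_subset.zip -d dataset/         # placeholder name: ImageNet subset
```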
Usage
Run the following command to train a clean model on the poisoned CIFAR-10 data.
python Train_cifar10.py --trigger_type badnet --trigger_label 0 --trigger_path ./trigger/cifar10/cifar_1.png --posioned_portion 0.1 --model_name ResNet18
For the ImageNet subset, you can either use the downloaded poisoned dataset directly or generate it by running prepare_data.py.
python Train_Imagenet.py --trigger_type badnet --trigger_label 0 --trigger_path ./trigger/ImageNet/ImageNet_badnet.npy --posioned_portion 0.1 --model_name ResNet18 --BD_data_path (directory containing poisoned datasets)
Please adjust the attack settings (trigger type, target label, poisoning ratio, model) as needed for your experiments.
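For example, a different target label and a lower poisoning ratio can be set through the same flags shown above (the values below are illustrative only):

```bash
# Same command as above, with an illustrative target label and poisoning ratio.
python Train_cifar10.py --trigger_type badnet --trigger_label 3 \
    --trigger_path ./trigger/cifar10/cifar_1.png --posioned_portion 0.05 \
    --model_name ResNet18
```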