Learning the Unlearnable (UEraser)

This repository contains the code for our paper Learning the Unlearnable: Adversarial Augmentations Suppress Unlearnable Example Attacks.

<img src="img/overview.png" width="800px">

General Usage of UEraser

import torch
import torchvision as tv
from ueraser import adversarial_augmentation_loss
# device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# model
model = ...  # a PyTorch model
model = model.to(device)
# optimizer
optimizer = torch.optim.SGD(
    model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
dataset = ...  # an unlearnable dataset
dataloader = torch.utils.data.DataLoader(
    dataset, batch_size=128, shuffle=True, num_workers=2)
max_epochs = 200  # the total number of training epochs
aa_epochs = 60  # the number of epochs with adversarial augmentations
repeat = 5  # the number of repeated augmentation samples per image
# UEraser training loop
for e in range(max_epochs):
    for images, labels in dataloader:
        optimizer.zero_grad()
        images, labels = images.to(device), labels.to(device)
        # use repeated adversarial augmentation sampling only in the first aa_epochs epochs
        r = repeat if e < aa_epochs else 1
        loss = adversarial_augmentation_loss(model, images, labels, r)
        loss.backward()
        optimizer.step()
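
For intuition, here is a minimal sketch of what an adversarial-augmentation loss of this form could look like. It is an illustration under stated assumptions, not the implementation shipped in ueraser: the helper name adversarial_augmentation_loss_sketch and the specific kornia augmentation policy are ours. The idea is error-maximizing augmentation: sample r augmented views of each image and back-propagate only the worst-case (highest) per-sample loss.

import torch
import torch.nn.functional as F
import kornia.augmentation as K

# illustrative augmentation policy for CIFAR-sized inputs
# (an assumption, not necessarily the policy used by UEraser)
augment = torch.nn.Sequential(
    K.RandomHorizontalFlip(p=0.5),
    K.ColorJitter(0.4, 0.4, 0.4, 0.1, p=0.8),
    K.RandomCrop((32, 32), padding=4),
)

def adversarial_augmentation_loss_sketch(model, images, labels, repeat):
    # per-sample cross-entropy over `repeat` sampled augmentations
    losses = []
    for _ in range(repeat):
        logits = model(augment(images))
        losses.append(F.cross_entropy(logits, labels, reduction="none"))
    # error-maximizing selection: keep the highest loss for each sample
    worst, _ = torch.stack(losses, dim=0).max(dim=0)
    return worst.mean()

With repeat == 1 this reduces to ordinary cross-entropy on a single augmented view, which matches the behaviour of the training loop above after the first aa_epochs epochs.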

Requirements:

Please first install Python >= 3.10, then install the following packages:

pip install torch numpy kornia scikit-learn einops

Quick start:

We provide examples of applying UEraser to CIFAR-10 poisons generated by EM and LSP.

EM:

The detailed instructions are available in EM/QuickStart.ipynb.

LSP:

Go to the LSP subfolder:

cd LSP/

Here are some example commands to test UEraser on LSP poisons.

Classification performance of UEraser when training ResNet18 on LSP-poisoned CIFAR-10:

CUDA_VISIBLE_DEVICES=0 python cifar_train.py --model <model> --dataset <dataset> --mode <mode> --type <type>

The parameter choices for the above command are as follows:

Cite our paper:

@article{qin2023learning,
  title={Learning the unlearnable: Adversarial augmentations suppress unlearnable example attacks},
  author={Qin, Tianrui and Gao, Xitong and Zhao, Juanjuan and Ye, Kejiang and Xu, Cheng-Zhong},
  journal={arXiv preprint arXiv:2303.15127},
  year={2023}
}

@inproceedings{qin2023iccvw,
  title={Learning the unlearnable: Adversarial augmentations suppress unlearnable example attacks},
  author={Qin, Tianrui and Gao, Xitong and Zhao, Juanjuan and Ye, Kejiang and Xu, Cheng-Zhong},
  booktitle={4th Workshop on Adversarial Robustness In the Real World (AROW), ICCV 2023},
  url={https://iccv23-arow.github.io/pdf/arow-0025.pdf},
  year={2023}
}

Acknowledgement:

The training code is adapted from the EM and LSP repositories: EM-repository and LSP-repository.