Class-Conditioned Transformation for Enhanced Robust Image Classification

Authors: Tsachi Blau, Roy Ganz, Chaim Baskin, Michael Elad and Alex M. Bronstein


Adversarial attacks pose a major challenge in computer vision: small, nearly imperceptible modifications to input images can mislead classifiers into making incorrect predictions. CODIP addresses this problem by enhancing the robustness of adversarially trained (AT) models against both known and novel adversarial threats. The approach introduces two primary mechanisms, detailed in the paper and illustrated in the method explanation below.

Detailed Method Explanation

Below are qualitative examples demonstrating CODIP's effectiveness.

Example Generation for Target Class
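
The exact transformation procedure is defined by the repository code and the paper; purely as a conceptual sketch, a class-conditioned transformation toward a target class can be written as a few gradient steps that push the input toward that class under the classifier. Everything below (function name, step count, step size) is an assumption for illustration, not the official CODIP implementation:

    import torch
    import torch.nn.functional as F

    def transform_toward_class(model, x, target_class, steps=20, step_size=0.05):
        """Illustrative only: nudge a single image x (shape 1 x C x H x W) toward
        target_class by descending the classifier's cross-entropy loss."""
        x_t = x.clone().detach().requires_grad_(True)
        target = torch.tensor([target_class])
        for _ in range(steps):
            loss = F.cross_entropy(model(x_t), target)
            loss.backward()
            with torch.no_grad():
                # Step in the direction that increases the target-class score
                x_t -= step_size * x_t.grad.sign()
                x_t.clamp_(0.0, 1.0)  # keep pixel values in a valid range
            x_t.grad.zero_()
        return x_t.detach()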

Installation

  1. Clone this repository:

    git clone https://github.com/yourusername/robust-image-classification
    cd robust-image-classification
    
  2. Install PyTorch:

    conda install pytorch==2.1.2 torchvision==0.16.2 pytorch-cuda=12.1 -c pytorch -c nvidia
    
  3. Install dependencies:

    pip install -r requirements.txt
    

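Optionally, run a quick sanity check to confirm that the installed PyTorch build can see the GPU (a minimal check, not specific to this repository):

    python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
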
Usage

This repository supports two main flows:

Evaluating Performance with CODIP

This flow applies CODIP to evaluate the robustness of adversarially trained (AT) models against adversarial attacks. The evaluation can be configured with different attack settings, including the threat model, the perturbation budget (epsilon), and the number of attack steps.

This evaluation flow can be run by setting --flow eval in the command line and specifying the attack parameters.

Example:

    python main.py --flow eval \
        --dataset cifar10 \
        --model_path path_to_model \
        --attack_threat_model L2 \
        --attack_epsilon 0.5 \
        --attack_num_steps 20

Creating Attacked Images using AutoAttack

This flow generates adversarial examples for a specified dataset using the AutoAttack framework and saves them for later evaluation. This is useful for testing robustness against pre-generated attacks, as the saved adversarial examples can be reloaded for consistent evaluation.

To run this flow, set --flow create_aa in the command line and specify the paths for saving the adversarial examples and labels.

Example:

    python main.py --flow create_aa \
        --dataset cifar10 \
        --model_path path_to_model \
        --aa_dataset_path path_to_aa_dataset \
        --aa_labels_path path_to_aa_dataset_labels \
        --attack_threat_model L2 \
        --attack_epsilon 0.5
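
The on-disk format of the saved examples is defined by the repository code; purely as a hedged illustration, assuming the images and labels are stored as PyTorch tensors, they could be reloaded for a standalone check like this (the paths and the stand-in classifier are assumptions, not part of the repository):

    import torch
    import torch.nn as nn

    # Hypothetical paths: reuse the values passed via --aa_dataset_path / --aa_labels_path
    adv_images = torch.load("path_to_aa_dataset")       # attacked images, shape (N, C, H, W)
    labels = torch.load("path_to_aa_dataset_labels")    # ground-truth labels, shape (N,)

    # Stand-in classifier for illustration; in practice, load the AT model from --model_path
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    model.eval()

    with torch.no_grad():
        preds = model(adv_images).argmax(dim=1)
        robust_accuracy = (preds == labels).float().mean().item()
    print(f"Robust accuracy on the saved adversarial examples: {robust_accuracy:.4f}")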

Parameters

General

Attack

CODIP

Main Results

Citation

If you find this work helpful, please consider citing our paper:

@article{blau2023codip,
  title={Class-Conditioned Transformation for Enhanced Robust Image Classification},
  author={Blau, Tsachi and Ganz, Roy and Baskin, Chaim and Elad, Michael and Bronstein, Alex M.},
  journal={arXiv preprint arXiv:2303.15409},
  year={2023}
}