License: MIT

ACE

This is the codebase for the CVPR 2023 paper Adversarial Counterfactual Visual Explanations.

Environment

Install our environment through Anaconda:

conda env create -f environment.yaml
conda activate ace

Downloading pre-trained models

To use ACE, you must first download the pretrained DDPM models and extract them to a folder of your choice, /path/to/models. We provide links and instructions for downloading all models.

Download Link:
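As a reference, here is a minimal sketch of the extraction step, assuming the checkpoints arrive as a single compressed archive (the archive name is a placeholder, not the real filename):

# create the target folder and unpack the downloaded checkpoints
mkdir -p /path/to/models
tar -xzf ace_models.tar.gz -C /path/to/models   # archive name is an assumption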

Generating Adversarial Counterfactual Explanations

To generate counterfactual explanations, use the main.py python script. Every flag is documented with a comment explaining what it does. Additionally, in the script folder, we provide several scripts that generate all counterfactual explanations using our proposed method, ACE.
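As an illustration, a minimal sketch of a direct invocation; the flag names here are assumptions, so check the commented flags in main.py for the real ones:

# hypothetical call to main.py; --output_path and --exp_name are assumed flag names
python main.py --output_path /path/to/outputs --exp_name ace_demo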

We follow the same folder structure as DiME; please see DiME's repository for all details. Similarly, we took advantage of their multi-chunk processing -- more info in DiME's repo.

To reduce the GPU memory burden, we implemented a gradient-checkpointing strategy that enables counterfactual generation on a reduced GPU setup. Passing --attack_joint_checkpoint True turns this mode on. Please check this repo for a nice explanation and visualization of checkpointing. The flag --attack_checkpoint_backward_steps n runs n DDPM iterations before computing the backward gradients. We strongly recommend a higher --attack_checkpoint_backward_steps value with a batch size of 1 over --attack_checkpoint_backward_steps 1 with a larger batch size!
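For example, a memory-friendly configuration following that recommendation; the two checkpointing flags are documented above, while the --batch_size flag name and the step count of 25 are assumptions for illustration:

# favor more backward steps with batch size 1 (--batch_size is an assumed flag name)
python main.py \
    --attack_joint_checkpoint True \
    --attack_checkpoint_backward_steps 25 \
    --batch_size 1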

When all counterfactual explanations have been processed, we store both the counterfactual and the pre-explanation. You can easily re-process the pre-explanations using the postprocessing.py python script.
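Since we do not list the script's flags here, a safe first step is to print them; this assumes postprocessing.py uses a standard argument parser:

# list the available options before re-processing the stored pre-explanations
python postprocessing.py --help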

Evaluating ACE

We provide a generic code base for evaluating counterfactual explanation methods. All evaluation script filenames begin with compute. Please look at the arguments of each individual script.
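For example, assuming one of the evaluation scripts is named compute_fid.py (the exact filename is an assumption; substitute any script whose name begins with compute), you can inspect its arguments with:

# print the arguments of an evaluation script (filename is an assumption)
python compute_fid.py --help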

Citation

If you found our code useful, please cite our work:

@inproceedings{Jeanneret_2023_CVPR,
    author    = {Jeanneret, Guillaume and Simon, Lo\"ic and Jurie, Fr\'ed\'eric},
    title     = {Adversarial Counterfactual Visual Explanations},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023}
}

Code Base

We based our repository on our previous work, Diffusion Models for Counterfactual Explanations (DiME).