Official PyTorch implementation of Optimizing Relevance Maps of Vision Transformers Improves Robustness [NeurIPS 2022]
This code allows finetuning the relevance (explainability) maps of Vision Transformers to enhance robustness.
A HuggingFace Space and a Colab notebook are available to run examples of the finetuned vs. the original models.
Updates:
06/05/2022 Added a HuggingFace Spaces demo:
<p align="center"> <img src="hf_spaces.png"> </p>Method overview:
The method applies loss functions directly to the relevance maps to ensure that the model focuses mostly on the foreground of the image; an illustrative sketch of such an objective follows.
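As a rough sketch of what such an objective can look like (the function and argument names are illustrative, not the repo's actual API; the default weights mirror the values listed under Notes below):

```python
import torch.nn.functional as F

def relevance_finetune_loss(relevance, fg_mask, logits, labels,
                            lambda_seg=0.8, lambda_acc=0.2,
                            lambda_background=2.0, lambda_foreground=0.3):
    # relevance: (B, H, W) relevance maps scaled to [0, 1]
    # fg_mask:   (B, H, W) binary foreground segmentation masks
    # Penalize relevance that leaks outside the foreground.
    l_background = (relevance * (1.0 - fg_mask)).pow(2).mean()
    # Encourage high relevance inside the foreground.
    l_foreground = ((1.0 - relevance) * fg_mask).pow(2).mean()
    l_relevance = lambda_background * l_background + lambda_foreground * l_foreground
    # A standard classification loss keeps accuracy from degrading.
    l_classification = F.cross_entropy(logits, labels)
    return lambda_seg * l_relevance + lambda_acc * l_classification
```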
<p align="center"> <img width="500" height="400" src="teaser.png"> </p> Using a short finetuning process with only 3 labeled examples from 500 classes, our method improves robustness of ViT models across different model sizes and training techniques, even when data augmentations/ regularization are applied.Model zoo
Below are links to download the finetuned base models of ViT AugReg (this is also the model that appears in timm), vanilla ViT, and DeiT. These are also the weights used in our Colab notebook.
Path | Description |
---|---|
AugReg-B | Finetuned ViT AugReg base model. |
ViT-B | Finetuned vanilla ViT base model. |
DeiT-B | Finetuned DeiT base model. |
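These checkpoints can be loaded into a timm ViT in the usual way. A minimal sketch, assuming the weights are stored as a plain (possibly nested) state dict; the file name is a placeholder:

```python
import timm
import torch

# Assumed checkpoint layout; adjust the key and file name to the
# checkpoint you downloaded.
model = timm.create_model('vit_base_patch16_224', pretrained=False)
checkpoint = torch.load('finetuned_vit_b.pth', map_location='cpu')
state_dict = checkpoint.get('state_dict', checkpoint)  # unwrap if nested
model.load_state_dict(state_dict)
model.eval()
```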
Requirements
pytorch==1.7.1
torchvision==0.8.2
timm==0.4.12
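For reference, the pinned versions can be installed in one step (note that the pip package for PyTorch is named torch):
pip install torch==1.7.1 torchvision==0.8.2 timm==0.4.12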
Producing Segmentation Data
Using ImageNet-S
To use the ImageNet-S labeled data, download the ImageNetS919 dataset.
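For reference, a sketch of turning an ImageNet-S annotation into a binary foreground mask. This assumes the encoding documented in the ImageNet-S repo (category id = R + G * 256, with 0 for background and 1000 for the "other" category); the file name is a placeholder:

```python
import numpy as np
from PIL import Image

# Assumed ImageNet-S encoding: category id = R + G * 256,
# 0 = background, 1000 = the "other" / ignore category.
mask = np.array(Image.open('sample_semantic.png'))
category = mask[..., 0].astype(np.int32) + mask[..., 1].astype(np.int32) * 256
foreground = (category > 0) & (category != 1000)
```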
Using TokenCut for unsupervised segmentation
- Clone the TokenCut project
git clone https://github.com/YangtaoWANG95/TokenCut.git
- Install the dependencies
The dependencies are Python 3.7, PyTorch 1.7.1, and CUDA 11.2; please refer to the official installation instructions. If CUDA 10.2 has been properly installed:
pip install torch==1.7.1 torchvision==0.8.2
Followed by:
pip install -r TokenCut/requirements.txt
- Use the following command to extract the segmentation maps:
python tokencut_generate_segmentation.py --img_path <PATH_TO_IMAGE> --out_dir <PATH_TO_OUTPUT_DIRECTORY>
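To process a whole folder of images, a simple driver loop can wrap the command above (directory names are placeholders):

```python
import subprocess
from pathlib import Path

img_dir = Path('imagenet_subset')   # placeholder input directory
out_dir = Path('tokencut_masks')    # placeholder output directory
for img_path in sorted(img_dir.glob('*.JPEG')):
    subprocess.run(
        ['python', 'tokencut_generate_segmentation.py',
         '--img_path', str(img_path), '--out_dir', str(out_dir)],
        check=True)
```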
Finetuning ViT models
To finetune a pretrained ViT model, use the imagenet_finetune.py script. Make sure to uncomment the import line containing the pretrained model you wish to finetune.
Usage example:
python imagenet_finetune.py --seg_data <PATH_TO_SEGMENTATION_DATA> --data <PATH_TO_IMAGENET> --gpu 0 --lr <LR> --lambda_seg <SEG> --lambda_acc <ACC> --lambda_background <BACK> --lambda_foreground <FORE>
Notes:
- For all models we use:
lambda_seg=0.8
lambda_acc=0.2
lambda_background=2
lambda_foreground=0.3
- For DeiT models, a temperature is required as follows:
temperature=0.65 for DeiT-B
temperature=0.55 for DeiT-S
- The learning rates per model are:
- ViT-B: 3e-6
- ViT-L: 9e-7
- AR-S: 2e-6
- AR-B: 6e-7
- AR-L: 9e-7
- DeiT-S: 1e-6
- DeiT-B: 8e-7
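Putting the notes above together, a ViT-B run would look like (paths are placeholders):
python imagenet_finetune.py --seg_data <PATH_TO_SEGMENTATION_DATA> --data <PATH_TO_IMAGENET> --gpu 0 --lr 3e-6 --lambda_seg 0.8 --lambda_acc 0.2 --lambda_background 2 --lambda_foreground 0.3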
Baseline methods
Make sure to uncomment the import line containing the pretrained model you wish to finetune in the code.
GradMask
Run the following command:
python imagenet_finetune_gradmask.py --seg_data <PATH_TO_SEGMENTATION_DATA> --data <PATH_TO_IMAGENET> --gpu 0 --lr <LR> --lambda_seg <SEG> --lambda_acc <ACC>
All hyperparameters for the different models can be found in section D of the supplementary material.
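For intuition, GradMask penalizes input-gradient saliency that falls outside the annotated region. The sketch below is illustrative (names mirror the script's flags but are assumptions) and does not reproduce the script's exact objective:

```python
import torch
import torch.nn.functional as F

def gradmask_loss(model, images, labels, fg_mask,
                  lambda_seg=1.0, lambda_acc=1.0):
    # fg_mask: (B, H, W) binary foreground segmentation masks
    images = images.requires_grad_(True)
    logits = model(images)
    l_cls = F.cross_entropy(logits, labels)
    # Saliency: gradient of the target-class scores w.r.t. the input.
    target_scores = logits.gather(1, labels.unsqueeze(1)).sum()
    grads, = torch.autograd.grad(target_scores, images, create_graph=True)
    saliency = grads.abs().sum(dim=1)  # (B, H, W)
    # Penalize saliency outside the foreground mask.
    l_mask = (saliency * (1.0 - fg_mask)).pow(2).mean()
    return lambda_acc * l_cls + lambda_seg * l_mask
```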
Right for the Right Reasons
Run the following command:
python imagenet_finetune_rrr.py --seg_data <PATH_TO_SEGMENTATION_DATA> --data <PATH_TO_IMAGENET> --gpu 0 --lr <LR> --lambda_seg <SEG> --lambda_acc <ACC>
All hyperparameters for the different models can be found in section D of the supplementary material.
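Right for the Right Reasons (Ross et al., 2017) constrains the input gradients of the log-probabilities in regions marked as irrelevant (here, the background). Again, an illustrative sketch rather than the script's exact loss:

```python
import torch
import torch.nn.functional as F

def rrr_loss(model, images, labels, fg_mask,
             lambda_seg=1.0, lambda_acc=1.0):
    # fg_mask: (B, H, W) binary foreground segmentation masks
    images = images.requires_grad_(True)
    log_probs = F.log_softmax(model(images), dim=1)
    l_cls = F.nll_loss(log_probs, labels)
    grads, = torch.autograd.grad(log_probs.sum(), images, create_graph=True)
    # Square the input gradients that fall outside the foreground.
    l_reason = ((1.0 - fg_mask).unsqueeze(1) * grads).pow(2).sum() / images.shape[0]
    return lambda_acc * l_cls + lambda_seg * l_reason
```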
Evaluation
Robustness Evaluation
- Download the evaluation datasets.
- Run the following script to evaluate:
Run the following script to evaluate:
python imagenet_eval_robustness.py --data <PATH_TO_ROBUSTNESS_DATASET> --batch-size <BATCH_SIZE> --evaluate --checkpoint <PATH_TO_FINETUNED_CHECKPOINT>
- Make sure to uncomment the import line containing the pretrained model you wish to evaluate in the code.
- To evaluate the original model, simply omit the checkpoint parameter.
- For the INet-v2 dataset, add --isV2.
- For the ObjectNet dataset, add --isObjectNet.
- For the SI datasets, add --isSI.
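For example, to evaluate a finetuned checkpoint on INet-v2 (the batch size here is arbitrary):
python imagenet_eval_robustness.py --data <PATH_TO_ROBUSTNESS_DATASET> --batch-size 64 --evaluate --checkpoint <PATH_TO_FINETUNED_CHECKPOINT> --isV2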
Segmentation Evaluation
Our segmentation tests are based on the protocol from the official implementation of Transformer Interpretability Beyond Attention Visualization.
- Download the ImageNet segmentation test set.
- Run the following script to evaluate:
PYTHONPATH=./:$PYTHONPATH python SegmentationTest/imagenet_seg_eval.py --imagenet-seg-path <PATH_TO_gtsegs_ijcv.mat>
- Make sure to uncomment the import line containing the pretrained model you wish to evaluate in the code.
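That protocol binarizes each relevance map by thresholding at its mean before comparing it to the ground truth; a sketch of the resulting per-image IoU (the exact details live in the linked implementation):

```python
import numpy as np

def relevance_iou(relevance, gt_mask):
    # Threshold the relevance map at its mean to get a binary
    # prediction, then compute intersection-over-union with the
    # ground-truth mask (both arrays are (H, W)).
    pred = relevance > relevance.mean()
    inter = np.logical_and(pred, gt_mask).sum()
    union = np.logical_or(pred, gt_mask).sum()
    return inter / max(int(union), 1)
```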
Credits
- The TokenCut code is built on top of LOST, DINO, SegSwap, and Bilateral_Solver.
- Our ViT code is based on the pytorch-image-models repository.
- Our ImageNet finetuning code is based on code from the official PyTorch repo.
- The code to convert ObjectNet classes to ImageNet classes was taken from the torchprune repo.
- The code to convert SI-Score classes to ImageNet classes was taken from the official implementation.
We would like to sincerely thank the authors for their great work.
Citing our paper
If you make use of our work, please cite our paper:
@inproceedings{
chefer2022optimizing,
title={Optimizing Relevance Maps of Vision Transformers Improves Robustness},
author={Hila Chefer and Idan Schwartz and Lior Wolf},
booktitle={Thirty-Sixth Conference on Neural Information Processing Systems},
year={2022},
url={https://openreview.net/forum?id=upuYKQiyxa_}
}