Cross-modal Adversarial Reprogramming

Code for our WACV 2022 paper, *Cross-modal Adversarial Reprogramming*.

Installation

- Sample Reprogrammer Checkpoint: https://drive.google.com/file/d/1xn3zm0DmNNVPEHb_fFLAWRoMjNpb-nIx/view?usp=sharing
- Classification Datasets (UCI): https://archive.ics.uci.edu/ml/datasets.php?format=&task=cla&att=&area=&numAtt=&numIns=&type=seq&sort=attTypeUp&view=table

Running the Experiments

The text/sequence dataset configurations are defined in `data_utils.py`. You can either use text-classification datasets available on the Hugging Face Hub or use custom datasets (defined as JSON files) through the same API. To reprogram an image model for a text classification task, run:

```shell
CUDA_VISIBLE_DEVICES=0 python reprogramming.py \
  --text_dataset TEXTDATASET \
  --logdir <PATH WHERE CKPTS/TB LOGS WILL BE SAVED> \
  --cache_dir <PATH WHERE HF CACHE WILL BE CREATED> \
  --reg_alpha 1e-4 \
  --pretrained_vm 1 \
  --resume_training 1 \
  --use_char_tokenizer 0 \
  --img_patch_size 16 \
  --vision_model tf_efficientnet_b4
```
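Custom datasets are supplied as JSON files, as noted above. The exact schema is defined in `data_utils.py`; the field names in the sketch below (`text`, `label`, `train`, `test`) are assumptions for illustration only:

```python
# Hypothetical custom text-classification dataset file.
# Field names here are assumptions -- check data_utils.py for the
# actual schema expected by the --text_dataset loader.
import json
import os
import tempfile

examples = [
    {"text": "the movie was wonderful", "label": 1},
    {"text": "a tedious, joyless film", "label": 0},
]

# Write the dataset to disk as a single JSON file.
path = os.path.join(tempfile.gettempdir(), "my_dataset.json")
with open(path, "w") as f:
    json.dump({"train": examples, "test": examples}, f)

# Read it back, as a loader might.
with open(path) as f:
    data = json.load(f)
print(len(data["train"]))  # 2
```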

Once the model is trained, you can use `InferenceNotebook.ipynb` to run inference and visualize the reprogrammed images. Accuracy and other metrics on the test set are logged to TensorBoard during training.
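Conceptually, a reprogrammed image encodes the input text by mapping each token to an image patch (cf. `--img_patch_size 16`) and tiling the patches into an input the vision model can classify. The NumPy sketch below is an illustration of this idea only, not the paper's implementation; the vocabulary size, image resolution, and random patch initialization are assumptions:

```python
# Conceptual sketch: map each token id to a 16x16 patch and tile the
# patches row-major into an image for a vision model. Illustrative
# only -- not the implementation in reprogramming.py.
import numpy as np

patch_size = 16                            # matches --img_patch_size 16
vocab_size = 100                           # assumed toy vocabulary
img_size = 224                             # assumed vision-model input size
patches_per_side = img_size // patch_size  # 14 -> up to 196 tokens fit

rng = np.random.default_rng(0)
# One patch per vocabulary entry (trainable in practice; random here).
patch_table = rng.normal(size=(vocab_size, patch_size, patch_size, 3))

def tokens_to_image(token_ids):
    """Tile the patches for token_ids into a (224, 224, 3) image."""
    img = np.zeros((img_size, img_size, 3))
    for i, tok in enumerate(token_ids[: patches_per_side ** 2]):
        r, c = divmod(i, patches_per_side)
        img[r * patch_size:(r + 1) * patch_size,
            c * patch_size:(c + 1) * patch_size] = patch_table[tok]
    return img

img = tokens_to_image([3, 17, 42, 99])
print(img.shape)  # (224, 224, 3)
```

In the actual method the patch embeddings are learned adversarially so that the vision model's label map aligns with the text labels; this sketch only shows the token-to-patch tiling.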

Citing our work

```bibtex
@inproceedings{neekhara2022crossmodal,
  title={Cross-modal Adversarial Reprogramming},
  author={Neekhara, Paarth and Hussain, Shehzeen and Du, Jinglong and Dubnov, Shlomo and Koushanfar, Farinaz and McAuley, Julian},
  booktitle={WACV},
  year={2022}
}
```