Event Trojan: Asynchronous Event-based
Backdoor Attacks

This repository is the official implementation of Event Trojan, described in Wang et al., ECCV 2024. The paper can be found here. Note that previewing the paper on arXiv can be quite slow due to its large file size.

If you use this code in an academic context, please cite the following work:

Ruofei Wang, Qing Guo, Haoliang Li, Renjie Wan, "Event Trojan: Asynchronous Event-based Backdoor Attacks", The European Conference on Computer Vision (ECCV), 2024.

Framework

@InProceedings{Wang_2024_ECCV,
  author = {Ruofei Wang and Qing Guo and Haoliang Li and Renjie Wan},
  title = {Event Trojan: Asynchronous Event-based Backdoor Attacks},
  booktitle = {European Conference on Computer Vision (ECCV)},
  month = {September},
  year = {2024}
}

Requirements

Dependencies

Create a conda environment with Python 3.6 and activate it:

conda create -n event_trojan python=3.6
conda activate event_trojan

Install all dependencies by calling:

pip install -r requirements.txt

Training

Before training, download the N-Caltech101 and N-Cars datasets and unzip them:

wget http://rpg.ifi.uzh.ch/datasets/gehrig_et_al_iccv19/N-Caltech101.zip 
unzip N-Caltech101.zip

# https://www.prophesee.ai/2018/03/13/dataset-n-cars  (N-Cars)

Then start training by calling:

python main_iet.py --training_dataset N-Caltech101/training/ --validation_dataset N-Caltech101/validation/ --log_dir log/iet --device cuda:0

Here, training_dataset and validation_dataset should point to the folders containing the training and validation sets, log_dir sets the logging directory, and device selects the device to train on. Checkpoints and the model with the lowest validation loss are saved in the root folder of log_dir.
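As a rough illustration of the checkpointing behavior described above (a minimal sketch; the actual logic and file names in main_iet.py may differ):

```python
import math

def run_training(val_losses, log_dir="log/iet"):
    """Sketch: record a checkpoint every epoch and keep the model with the
    lowest validation loss so far in log_dir (file names are hypothetical)."""
    saved = []
    best_loss = math.inf
    for epoch, loss in enumerate(val_losses):
        saved.append(f"{log_dir}/checkpoint_{epoch}.pth")  # regular checkpoint
        if loss < best_loss:  # new best model on the validation set
            best_loss = loss
            saved.append(f"{log_dir}/model_best.pth")
    return saved, best_loss

paths, best = run_training([0.9, 0.5, 0.7])
print(best)  # 0.5 (epoch 1 had the lowest validation loss)
```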

Additional parameters

Visualization

Training can be visualized by calling tensorboard:

tensorboard --logdir log/iet

Training and validation losses as well as classification accuracies are plotted.

Testing

Once trained, the models can be tested by calling the following script:

python testing_iet.py

This prints the test score after iterating through the whole dataset. The attack success rate (ASR) and clean data accuracy (CDA) can be evaluated by setting the poison ratio to 1.0 and 0.0, respectively.
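For reference, ASR and CDA are typically computed as below (an illustrative sketch, not code from this repo; function and variable names are hypothetical):

```python
def attack_success_rate(preds, target_label):
    """Fraction of triggered (poison ratio 1.0) inputs classified as the
    attacker's target label."""
    return sum(p == target_label for p in preds) / len(preds)

def clean_data_accuracy(preds, labels):
    """Standard accuracy on clean (poison ratio 0.0) inputs."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Poison ratio 1.0: every test sample carries the trigger.
poisoned_preds = [7, 7, 3, 7]  # model outputs on triggered inputs
print(attack_success_rate(poisoned_preds, target_label=7))  # 0.75

# Poison ratio 0.0: the untouched test set.
clean_preds, clean_labels = [1, 2, 2], [1, 2, 3]
print(clean_data_accuracy(clean_preds, clean_labels))  # 2 of 3 correct
```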

Details about the event representations used in our paper can be found at https://github.com/uzh-rpg/rpg_event_representation_learning and https://github.com/LarryDong/event_representation. Thanks to their authors.