<div align="center"> <img src="imgs/logo.png" width="200px" /> </div>

ODSCAN: Backdoor Scanning for Object Detection Models

Python 3.8 PyTorch 1.13.0 Torchvision 0.14.0 CUDA 11.7 License MIT

Table of Contents

* Overview
* Code Architecture
* Environments
* Requirements
* Train an Object Detection Model with a Backdoor
* Backdoor Scanning by ODSCAN
* Citation
* Acknowledgement

Overview

<img src="imgs/overview.png" width="900px"/>

Code Architecture

```
.
├── adaptived_nc_pixel         # Example baselines on the TrojAI dataset
├── ckpt                       # Model checkpoints
├── data                       # Utilized data
│   ├── backgrounds            # Background images
│   ├── forgrounds             # Foreground images
│   ├── test                   # Test set of the synthesized dataset
│   ├── train                  # Train set of the synthesized dataset
│   ├── triggers               # Trigger patterns
│   └── fg_class_translation.json  # Image-to-class translation
├── dataset.py                 # Dataset functions for training
├── poison_data.py             # Data-poisoning functions
├── scan_appearing.py          # Scanner against object appearing attacks
├── scan_misclassification.py  # Scanner against object misclassification attacks
├── train.py                   # Model training functions
└── utils.py                   # Utility functions
```

Environments

```bash
# Create python environment (optional)
conda env create -f environment.yml
source activate odscan
```
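
To quickly verify that the environment matches the pinned versions, a minimal check (not part of the repo) is:

```python
# Sanity-check the pinned versions: Python 3.8, PyTorch 1.13.0,
# Torchvision 0.14.0, CUDA 11.7.
import sys
import torch
import torchvision

print("Python:       ", sys.version.split()[0])     # expect 3.8.x
print("PyTorch:      ", torch.__version__)          # expect 1.13.0
print("Torchvision:  ", torchvision.__version__)    # expect 0.14.0
print("CUDA build:   ", torch.version.cuda)         # expect 11.7
print("GPU available:", torch.cuda.is_available())
```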

Requirements

The main dependencies are pinned in `environment.yml`: Python 3.8, PyTorch 1.13.0, Torchvision 0.14.0, and CUDA 11.7.

Train an Object Detection Model with a Backdoor

Data-poisoning

```bash
# Stamp the trigger on images and modify their annotations
CUDA_VISIBLE_DEVICES="0" python train.py --phase data_poison --data_folder data_poison --trigger_filepath data/triggers/0.png --victim_class 0 --target_class 3 --trig_effect misclassification --location foreground
```
| Argument | Default Value | Description |
| --- | --- | --- |
| `phase` | `"test"` | Specifies the mode of operation. |
| `seed` | `1024` | Random seed for reproducibility. |
| `data_folder` | `"data_poison"` | Directory for storing poisoned data. |
| `examples_dir` | `"data"` | Directory of clean data. |
| `trigger_filepath` | `"data/triggers/0.png"` | Path to the trigger pattern. |
| `victim_class` | `0` | Class of the victim object. |
| `target_class` | `0` | Class of the target object. |
| `trig_effect` | `"misclassification"` | Type of the backdoor attack. |
| `location` | `"foreground"` | Whether to stamp the trigger on the foreground or background. |
| `min_size` | `16` | Minimum size of the trigger. |
| `max_size` | `32` | Maximum size of the trigger. |
| `scale` | `0.25` | Scale of the trigger relative to the victim object. |
```bash
# Stamp the trigger for an object appearing attack instead
CUDA_VISIBLE_DEVICES="1" python train.py --phase data_poison --trig_effect appearing --location background
```

Training

```bash
# Train a poisoned model
CUDA_VISIBLE_DEVICES="1" python train.py --phase poison
```
| Additional Args | Default Value | Description |
| --- | --- | --- |
| `network` | `"ssd"` | Model architecture. |
| `num_classes` | `5` | Number of classes. |
| `epochs` | `10` | Total number of training epochs. |
| `batch_size` | `32` | Batch size. |
```bash
# Train a clean model
CUDA_VISIBLE_DEVICES="0" python train.py --phase train
```

Evaluation

```bash
# Evaluate the model
CUDA_VISIBLE_DEVICES="0" python train.py --phase test

# Visualize predictions
CUDA_VISIBLE_DEVICES="0" python train.py --phase visual
```

Backdoor Scanning by ODSCAN

```bash
# Detect object misclassification backdoor
CUDA_VISIBLE_DEVICES="0" python scan_misclassification.py --model_filepath ckpt/ssd_poison_misclassification_foreground_0_3.pt

# Detect object appearing backdoor
CUDA_VISIBLE_DEVICES="1" python scan_appearing.py --model_filepath ckpt/ssd_poison_appearing_background_0_3.pt
```
| Critical Args | Default Value | Description |
| --- | --- | --- |
| `n_samples` | `5` | Number of samples used for scanning. |
| `trig_len` | `32` | Length of the inverted trigger. |
| `save_folder` | `"invert_misclassification"` | Directory for saving inverted-trigger illustrations. |
| `iou_threshold` | `0.5` | IoU threshold for object localization. |
| `conf_threshold` | `0.05` | Confidence threshold to filter out low-confidence anchors. |
| `epochs` | `30` | Total number of steps for trigger inversion. |
| `topk` | `3` | Top-k malicious classes to consider after preprocessing. |
| `verbose` | `1` | Enable saving illustrations and logging details. |
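
At a high level, both scanners perform trigger inversion: a `trig_len` x `trig_len` patch is optimized for `epochs` steps so that stamping it on the `n_samples` clean images drives predictions toward a candidate target class, and an unusually effective inverted trigger flags the model as backdoored. The loop below is a conceptual sketch only; `stamp_fn` and `attack_loss_fn` are hypothetical hooks, not ODSCAN's implementation:

```python
import torch

# Conceptual trigger-inversion loop (not ODSCAN's actual code).
# `stamp_fn` applies the trigger to an image and `attack_loss_fn`
# measures how strongly predictions move toward the target class;
# both are hypothetical hooks.
def invert_trigger(model, images, stamp_fn, attack_loss_fn,
                   trig_len=32, epochs=30, lr=0.1):
    trigger = torch.rand(3, trig_len, trig_len, requires_grad=True)
    optimizer = torch.optim.Adam([trigger], lr=lr)
    for _ in range(epochs):
        stamped = [stamp_fn(img, trigger.clamp(0, 1)) for img in images]
        loss = attack_loss_fn(model, stamped)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # A small final loss suggests an effective trigger, i.e. a likely backdoor.
    return trigger.detach().clamp(0, 1), loss.item()
```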

Citation

Please cite our paper if you find it useful for your research. 😀

```bibtex
@inproceedings{cheng2024odscan,
    title={ODSCAN: Backdoor Scanning for Object Detection Models},
    author={Cheng, Siyuan and Shen, Guangyu and Tao, Guanhong and Zhang, Kaiyuan and Zhang, Zhuo and An, Shengwei and Xu, Xiangzhe and Liu, Yingqi and Ma, Shiqing and Zhang, Xiangyu},
    booktitle={2024 IEEE Symposium on Security and Privacy (SP)},
    pages={119--119},
    year={2024},
    organization={IEEE Computer Society}
}
```

Acknowledgement