Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting

Paper

Paper link: https://arxiv.org/abs/2104.10868

This work was done while I was a research assistant in the Big Data Security and Information Intelligence Lab, supervised by Prof. Pan Zhou, in cooperation with researchers at Duke University (Dr. Wang and Dr. Li). From my perspective, the most important problem in the robustness of regression models (not just crowd counting models) is how to define the robustness evaluation metrics. I have given some answers, but I think this remains an open problem (in practice, researchers still rely on task-specific metrics, such as mIoU for semantic segmentation). We therefore pick crowd counting models to explore possible answers, and the tight-MAE/MSE evaluation metric is relatively general across vision models (a more detailed discussion is given in our latest paper submitted to BMVC 2021).

Requirements

  1. PyTorch 1.4.0+
  2. Python 3.7+

Data Setup

Follow the steps in the MCNN or CSRNet repositories to build the dataset: MCNN, CSRNet.

Download the ShanghaiTech dataset from Dropbox or Baidu Disk (code: a2v8).
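For reference, the ground-truth preparation in both repos boils down to converting head annotations into density maps with Gaussian kernels. Below is a minimal sketch of that idea; the function name and the fixed-kernel choice are illustrative assumptions, not the exact preprocessing script of either repo:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def heads_to_density_map(points, img_h, img_w, sigma=4.0):
    """Convert head coordinates [(x, y), ...] into a density map.

    Each head contributes a unit-mass Gaussian, so the map sums to the
    ground-truth count. A fixed sigma is used here; MCNN-style pipelines
    may instead use a geometry-adaptive kernel.
    """
    density = np.zeros((img_h, img_w), dtype=np.float32)
    for x, y in points:
        x, y = int(round(x)), int(round(y))
        if 0 <= x < img_w and 0 <= y < img_h:
            density[y, x] += 1.0
    return gaussian_filter(density, sigma=sigma)
```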

Attacked Models

CSRNet: https://github.com/CommissarMa/CSRNet-pytorch

CAN: https://github.com/CommissarMa/Context-Aware_Crowd_Counting-pytorch

MCNN: https://github.com/svishwa/crowdcount-mcnn

CMTL: https://github.com/svishwa/crowdcount-cascaded-mtl

DA-Net: https://github.com/BigTeacher-777/DA-Net-Crowd-Counting

Thanks to these researchers for sharing their code!

How to Attack?

Run the attack script:

python3 patch_attack.py
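For readers who want a picture of what such a patch attack does before opening the script, here is a minimal sketch that optimizes an adversarial patch to maximize the counting error of a frozen crowd counting model. The model/data placeholders, patch size, pasting location, and loss are illustrative assumptions, not the exact patch_attack.py implementation:

```python
import torch

# Assumed placeholders: `model` is a pretrained crowd counting network
# (e.g. MCNN/CSRNet) mapping [1, 3, H, W] images to density maps, and
# `images` is an iterable of clean crowd image tensors.
def optimize_patch(model, images, patch_size=64, steps=200, lr=0.05, device="cuda"):
    model.eval().to(device)
    patch = torch.rand(1, 3, patch_size, patch_size, device=device, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        for img in images:
            img = img.to(device)
            adv = img.clone()
            # Paste the patch at the top-left corner (a random location
            # could be used instead).
            adv[:, :, :patch_size, :patch_size] = patch.clamp(0, 1)
            pred_count = model(adv).sum()
            clean_count = model(img).sum().detach()
            # Maximize the counting error caused by the patch.
            loss = -(pred_count - clean_count).abs()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return patch.detach().clamp(0, 1)
```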

How to Retrain the Crowd Counting Models?

python3 MCNN_adv_train.py (adversarial training with the generated patch, pristine version)

python3 MCNN_certify_train.py (certificate training of MCNN via randomized ablation)
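As a rough picture of the adversarial training step, the sketch below fine-tunes a counting model on images with the pre-generated patch pasted in. The dataset/model placeholders, optimizer settings, and pasting location are illustrative assumptions, not the exact MCNN_adv_train.py code:

```python
import torch
import torch.nn.functional as F

# Assumed placeholders: `model` (an MCNN-style counter), `train_loader`
# yielding (image, gt_density) pairs, and `patch` from the attack step.
def adversarial_finetune(model, train_loader, patch, epochs=10, lr=1e-5, device="cuda"):
    model.train().to(device)
    patch = patch.to(device)
    ph, pw = patch.shape[-2:]
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    for _ in range(epochs):
        for img, gt_density in train_loader:
            img, gt_density = img.to(device), gt_density.to(device)
            adv = img.clone()
            adv[:, :, :ph, :pw] = patch  # paste the fixed patch
            pred = model(adv)
            # Standard density-map regression loss on the patched input,
            # so the model learns to count correctly despite the patch.
            loss = F.mse_loss(pred, gt_density)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```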

Want the Certificate-Retrained Models, or More Discussion on the Robustness Problem in Regression Learning?

The latest supplementary materials:
dropbox: dropbox

The certificate-retrained crowd counting models:
dropbox: dropbox

Reference

If you find the paper useful, please cite:

@article{wu2021towards,
  title={Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting},
  author={Wu, Qiming and Zou, Zhikang and Zhou, Pan and Ye, Xiaoqing and Wang, Binghui and Li, Ang},
  journal={arXiv preprint arXiv:2104.10868},
  year={2021}
}

More Details

"I want to try the randomized ablation method on my model and dataset"

python3 get_ablated_img.py

(the larger k is, the longer this step takes ^_^)

python3 MCNN_certify_train.py
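If you just want the idea behind the ablation step, the sketch below keeps k randomly chosen pixels of each image and ablates the rest, which is randomized ablation in spirit. The function name and the zero-filling of ablated pixels are illustrative assumptions, not necessarily what get_ablated_img.py does:

```python
import torch

def randomly_ablate(img, k):
    """Keep k randomly chosen pixels of `img` (shape [C, H, W]) and zero
    out the rest. Training/evaluating on many such ablated copies is the
    basis of the randomized-ablation certificate."""
    c, h, w = img.shape
    keep = torch.zeros(h * w, dtype=torch.bool)
    keep[torch.randperm(h * w)[:k]] = True
    keep = keep.view(1, h, w)
    return img * keep.to(img.dtype)  # ablated pixels are zeroed in this sketch
```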
