Advbox Family
Advbox Family is a series of AI model security tools open-sourced by Baidu, covering adversarial example generation, detection, and defense, as well as attack and defense cases for different AI applications.
Advbox Family supports Python 3.*.
Our Work
- Tracking the Criminal of Fake News Based on a Unified Embedding. Blackhat Asia 2020
- Attacking and Defending Machine Learning Applications of Public Cloud. Blackhat Asia 2020
- COMMSEC: Tracking Fake News Based On Deep Learning. HITB GSEC 2019
- COMMSEC: Hacking Object Detectors Is Just Like Training Neural Networks. HITB GSEC 2019 | See code
- COMMSEC: How to Detect Fake Faces (Manipulated Images) Using CNNs. HITB GSEC 2019
AdvSDK
A lightweight SDK for generating adversarial examples with PaddlePaddle.
AdversarialBox
AdversarialBox is a toolbox for generating adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, and TensorFlow, and it can benchmark the robustness of machine learning models. AdvBox provides a command-line tool to generate adversarial examples with zero coding. It is inspired by and based on FoolBox v1.
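As an illustration of what such toolboxes automate, here is a minimal NumPy sketch of the Fast Gradient Sign Method (FGSM), one of the standard attacks these frameworks implement. This is plain NumPy for clarity, not the AdvBox API itself:

```python
import numpy as np

def fgsm(x, grad, eps=0.03):
    # FGSM: step each input dimension by eps in the sign of the loss
    # gradient, then clip back into the valid pixel range [0, 1].
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy example: `grad` stands in for d(loss)/dx computed by a framework.
x = np.array([0.2, 0.5, 0.9])
grad = np.array([1.0, -1.0, 1.0])
x_adv = fgsm(x, grad, eps=0.1)   # perturbed to [0.3, 0.4, 1.0] after clipping
```

In a real attack the gradient comes from backpropagating the model's loss to its input; the toolbox handles that per framework so the user only picks an attack and a budget `eps`.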
AdvDetect
AdvDetect is a toolbox to detect adversarial examples from massive data.
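One common family of detection heuristics is feature squeezing (Xu et al.): if a model's prediction changes sharply when the input's bit depth is reduced, the input is likely adversarial, since adversarial noise often lives in the low-order bits. The sketch below illustrates the idea in plain NumPy; it is not necessarily the method AdvDetect uses:

```python
import numpy as np

def squeeze(x, bits=3):
    # Reduce color bit depth of inputs in [0, 1] to 2**bits levels.
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def looks_adversarial(x, predict, threshold=0.5):
    # Flag inputs whose prediction moves more than `threshold` (L1 distance)
    # under squeezing; clean inputs are usually stable under this transform.
    diff = np.abs(predict(x) - predict(squeeze(x))).sum()
    return diff > threshold
```

In practice the threshold is calibrated on clean data, and several squeezers (bit-depth reduction, smoothing) are combined, taking the maximum disagreement as the detection score.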
AdvPoison
Data poisoning
AI applications
Face Recognition Attack
Homepage of Face Recognition Attack
Stealth T-shirt
At DEF CON, we demonstrated T-shirts that make the wearer disappear under smart cameras. In this sub-project, we open-source the programs and deployment methods of the smart cameras used for the demonstration.
Fake Face Detect
A RESTful API for detecting whether the face in a picture or video is a fake face.
Paper and PPT of Advbox Family
How to cite
If you use AdvBox in an academic publication, please cite as:
@misc{goodman2020advbox,
title={Advbox: a toolbox to generate adversarial examples that fool neural networks},
author={Dou Goodman and Hao Xin and Wang Yang and Wu Yuesheng and Xiong Junfeng and Zhang Huan},
year={2020},
eprint={2001.05574},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
Cloud-based Image Classification Service is Not Robust to Affine Transformation: A Forgotten Battlefield
@inproceedings{goodman2019cloud,
title={Cloud-based Image Classification Service is Not Robust to Affine Transformation: A Forgotten Battlefield},
author={Goodman, Dou and Hao, Xin and Wang, Yang and Tang, Jiawei and Jia, Yunhan and Wei, Tao and others},
booktitle={Proceedings of the 2019 ACM SIGSAC Conference on Cloud Computing Security Workshop},
pages={43--43},
year={2019},
organization={ACM}
}
Who uses/cites AdvBox
- Wu, Winston and Arendt, Dustin and Volkova, Svitlana; Evaluating Neural Model Robustness for Machine Comprehension; Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, 2021, pp. 2470-2481
- Pablo Navarrete Michelini, Hanwen Liu, Yunhua Lu, Xingqun Jiang; A Tour of Convolutional Networks Guided by Linear Interpreters; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 4753-4762
- Ling, Xiang and Ji, Shouling and Zou, Jiaxu and Wang, Jiannan and Wu, Chunming and Li, Bo and Wang, Ting; Deepsec: A uniform platform for security analysis of deep learning model; IEEE S&P, 2019
- Deng, Ting and Zeng, Zhigang; Generate adversarial examples by spatially perturbing on the meaningful area; Pattern Recognition Letters[J], 2019, pp. 632-638
Issues report
https://github.com/baidu/AdvBox/issues
License
AdvBox is released under the Apache License 2.0.