🔥Model Inversion Attack ToolBox v2.0🔥

Python 3.10 | PyTorch 2.0.1 | torchvision 0.15.2 | CUDA 11.8

Yixiang Qiu*, Hongyao Yu*, Hao Fang*, Wenbo Yu, Bin Chen#, Xuan Wang, Shu-Tao Xia

Welcome to MIA! This repository is a comprehensive open-source Python benchmark for model inversion attacks that is well organized and easy to get started with. It provides uniform implementations of advanced and representative model inversion methods, forming a unified and reliable framework for convenient and fair comparison between them. Our repository is continuously updated at https://github.com/ffhibnese/Model-Inversion-Attack-ToolBox.

If you have any concerns about our toolbox, feel free to contact us at qiuyixiang@stu.hit.edu.cn, yuhongyao@stu.hit.edu.cn, and fang-h23@mails.tsinghua.edu.cn.

Also, you are always welcome to contribute and make this repository better!

:rocket: Introduction

Model inversion is an emerging and powerful privacy attack in which a malicious attacker reconstructs data that follows the same distribution as the training dataset of the target model.
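To make the threat model concrete, the minimal sketch below shows the core optimization loop of a typical white-box, GAN-based inversion attack (in the spirit of GMI or PLG-MI): a latent code of a pre-trained generator is optimized so that the generated image is classified as the target identity. The names `generator`, `target_model`, and `target_class` are hypothetical placeholders for illustration only and are not part of this toolbox's API.

```python
import torch
import torch.nn.functional as F

def invert(generator, target_model, target_class, latent_dim=100,
           steps=1000, lr=0.02, device="cuda"):
    # Latent code of the GAN prior; this is the only variable being optimized.
    z = torch.randn(1, latent_dim, device=device, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    label = torch.tensor([target_class], device=device)
    for _ in range(steps):
        optimizer.zero_grad()
        x = generator(z)                        # candidate image from the GAN prior
        logits = target_model(x)                # white-box query of the target classifier
        loss = F.cross_entropy(logits, label)   # identity loss: push x towards the target class
        loss.backward()                         # gradients flow through the generator into z
        optimizer.step()
    return generator(z).detach()                # reconstructed sample for the target identity
```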

We developed this toolbox because the MI research line has long suffered from a lack of unified standards and reliable implementations of prior studies. We hope our work helps people in this area and promotes the progress of their valuable research.

:bulb: Features

:memo: Model Inversion Attacks

| Method | Paper | Publication | Scenario | Key Characteristics |
| --- | --- | --- | --- | --- |
| DeepInversion | Dreaming to Distill: Data-Free Knowledge Transfer via DeepInversion | CVPR'2020 | whitebox | student-teacher, data-free |
| GMI | The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks | CVPR'2020 | whitebox | the first GAN-based MIA, instance-level |
| KEDMI | Knowledge-Enriched Distributional Model Inversion Attacks | ICCV'2021 | whitebox | the first MIA that recovers data distributions, pseudo-labels |
| VMI | Variational Model Inversion Attacks | NeurIPS'2021 | whitebox | variational inference, special loss function |
| SecretGen | SecretGen: Privacy Recovery on Pre-trained Models via Distribution Discrimination | ECCV'2022 | whitebox, blackbox | instance-level, data augmentation |
| BREPMI | Label-Only Model Inversion Attacks via Boundary Repulsion | CVPR'2022 | blackbox | boundary repelling, label-only |
| Mirror | MIRROR: Model Inversion for Deep Learning Network with High Fidelity | NDSS'2022 | whitebox, blackbox | both gradient-free and gradient-based, genetic algorithm |
| PPA | Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks | ICML'2022 | whitebox | initial selection, pre-trained GANs, results selection |
| PLGMI | Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network | AAAI'2023 | whitebox | pseudo-labels, data augmentation, special loss function |
| C2FMI | C2FMI: Corse-to-Fine Black-box Model Inversion Attack | TDSC'2023 | whitebox, blackbox | gradient-free, two-stage |
| LOMMA | Re-Thinking Model Inversion Attacks Against Deep Neural Networks | CVPR'2023 | blackbox | special loss, model augmentation |
| RLBMI | Reinforcement Learning-Based Black-Box Model Inversion Attacks | CVPR'2023 | blackbox | reinforcement learning |
| LOKT | Label-Only Model Inversion Attacks via Knowledge Transfer | NeurIPS'2023 | blackbox | surrogate models, label-only |
| IF-GMI | A Closer Look at GAN Priors: Exploiting Intermediate Features for Enhanced Model Inversion Attacks | ECCV'2024 | whitebox | intermediate features |

:memo: Model Inversion Defenses

| Method | Paper | Publication | Key Characteristics |
| --- | --- | --- | --- |
| VIB / MID | Improving Robustness to Model Inversion Attacks via Mutual Information Regularization | AAAI'2021 | variational method, mutual information, special loss function |
| BiDO | Bilateral Dependency Optimization: Defending Against Model-inversion Attacks | KDD'2022 | special loss function |
| TL | Model Inversion Robustness: Can Transfer Learning Help? | CVPR'2024 | transfer learning |
| LS | Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks | ICLR'2024 | label smoothing |

:wrench: Environments

MIA can be set up with the following steps:

1. Clone this repository and create a virtual environment with Anaconda:

   ```sh
   git clone https://github.com/ffhibnese/Model_Inversion_Attack_ToolBox.git
   cd ./Model_Inversion_Attack_ToolBox
   conda create -n MIA python=3.10
   conda activate MIA
   ```

2. Install the required dependencies:

   ```sh
   pip install -r requirements.txt
   ```
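After installation, a quick sanity check with plain PyTorch calls (nothing here is specific to this toolbox) can confirm that the environment matches the versions listed at the top of this page:

```python
# Quick environment check; expected values follow the version badges above.
import torch
import torchvision

print(torch.__version__)          # expected: 2.0.1
print(torchvision.__version__)    # expected: 0.15.2
print(torch.version.cuda)         # expected: 11.8
print(torch.cuda.is_available())  # True if a compatible GPU and driver are present
```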

:page_facing_up: Preprocess Datasets and Pre-trained Models

See here for details on how to preprocess the datasets.

We have released pre-trained target models and evaluation models in the checkpoints_v2.0 folder on Google Drive.
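As a minimal sketch of how a downloaded checkpoint might be loaded with plain PyTorch: the file name `checkpoints_v2.0/target_model.pth` and the ResNet-152 architecture below are hypothetical placeholders, not the toolbox's actual file layout or API; consult the preprocessing docs and the Google Drive folder for the real structure.

```python
# Hedged example: path and architecture are placeholders for illustration only.
import torch
from torchvision.models import resnet152

model = resnet152(num_classes=1000)                                 # stand-in target architecture
state_dict = torch.load("checkpoints_v2.0/target_model.pth",
                        map_location="cpu")                         # assumes the file stores a plain state_dict
model.load_state_dict(state_dict)
model.eval()
```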


📔 Citation

If you find our work helpful for your research, please kindly cite our papers:

@article{qiu2024mibench,
  title={MIBench: A Comprehensive Benchmark for Model Inversion Attack and Defense},
  author={Qiu, Yixiang and Yu, Hongyao and Fang, Hao and Yu, Wenbo and Chen, Bin and Wang, Xuan and Xia, Shu-Tao and Xu, Ke},
  journal={arXiv preprint arXiv:2410.05159},
  year={2024}
}

@article{fang2024privacy,
  title={Privacy leakage on dnns: A survey of model inversion attacks and defenses},
  author={Fang, Hao and Qiu, Yixiang and Yu, Hongyao and Yu, Wenbo and Kong, Jiawei and Chong, Baoli and Chen, Bin and Wang, Xuan and Xia, Shu-Tao},
  journal={arXiv preprint arXiv:2402.04013},
  year={2024}
}

@article{qiu2024closer,
  title={A Closer Look at GAN Priors: Exploiting Intermediate Features for Enhanced Model Inversion Attacks},
  author={Qiu, Yixiang and Fang, Hao and Yu, Hongyao and Chen, Bin and Qiu, MeiKang and Xia, Shu-Tao},
  journal={arXiv preprint arXiv:2407.13863},
  year={2024}
}

:sparkles: Acknowledgement

We express our great gratitude to all the researchers for their contributions to the model inversion community.

In particular, we thank the authors of PLGMI for their high-quality code for datasets, metrics, and three attack methods. Their great devotion has helped us make MIA better!