
Are Diffusion Models Vulnerable to Membership Inference Attacks? [ICML 2023]

This is the official implementation of the paper "Are Diffusion Models Vulnerable to Membership Inference Attacks?". The proposed Step-wise Error Comparing Membership Inference (SecMI) is implemented in this codebase.

[3/4/2024] We have released SecMI-LDM: SecMI on Latent Diffusion Models. Since SecMI requires intermediate diffusion results, we modified the original diffusers library and put all the data/checkpoints in a separate repo.

Model Training

This codebase is built on top of pytorch-ddpm. Please follow its instructions for model training, or run the following commands (see also train.sh):

python main.py --train --logdir ./experiments/CIFAR10 \
--dataset CIFAR10 --img_size 32 --batch_size 128 --fid_cache ./stats/cifar10.train.npz --total_steps 800001

By default, it loads the data splits stored in mia_evals/member_splits and trains the DDPM on the member half of the training set. You can specify --dataset and --total_steps as needed.
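The member/held-out partition described above can be sketched as follows. This is a minimal illustration assuming a 50/50 random split of the training indices; the actual split files shipped in mia_evals/member_splits may have been generated differently:

```python
import numpy as np

def make_member_split(n_train, seed=0):
    """Randomly partition dataset indices into a member half (used to
    train the DDPM) and a held-out half (non-members for MIA evaluation)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_train)
    member_idx = np.sort(idx[: n_train // 2])
    nonmember_idx = np.sort(idx[n_train // 2 :])
    return member_idx, nonmember_idx

# Example: CIFAR-10 has 50,000 training images
member, nonmember = make_member_split(50000)
```

Only the member half is seen during DDPM training, so the held-out half serves as the non-member set when evaluating the attack.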

Pre-trained Models

Some pre-trained models can be downloaded from here.

Run SecMI

To run SecMI on a pretrained DDPM, execute the following command:

python secmia.py --model_dir /path/to/model_dir --dataset_root /path/to/dataset --dataset cifar10 --t_sec 100 --k 10

Parameters (as used in the command above):

- --model_dir: path to the pretrained DDPM checkpoint directory
- --dataset_root: path to the dataset
- --dataset: dataset name (e.g., cifar10)
- --t_sec: the diffusion timestep at which the step-wise errors are compared
- --k: the interval of the deterministic sampling steps
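At a high level, SecMI scores each sample by its step-wise approximation error at timestep t_sec and thresholds that score: samples the model was trained on tend to have lower error. Below is a minimal sketch of the final thresholding stage only, using hypothetical per-sample error arrays (the actual script computes these errors from the diffusion model's deterministic forward/reverse steps):

```python
import numpy as np

def best_threshold_attack(member_errors, nonmember_errors):
    """Given per-sample t-errors, find the threshold tau that maximizes
    membership-inference accuracy (error <= tau -> predicted member)."""
    errors = np.concatenate([member_errors, nonmember_errors])
    labels = np.concatenate([np.ones_like(member_errors),
                             np.zeros_like(nonmember_errors)])
    best_acc, best_tau = 0.0, None
    for tau in np.unique(errors):
        pred = (errors <= tau).astype(float)  # low error -> member
        acc = (pred == labels).mean()
        if acc > best_acc:
            best_acc, best_tau = acc, tau
    return best_tau, best_acc
```

In practice the paper also reports a neural-network variant of the attack; the simple threshold above corresponds to the statistical version of SecMI.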

Please cite our paper if you find this codebase helpful:

@InProceedings{duan2023are,
  title = {Are Diffusion Models Vulnerable to Membership Inference Attacks?},
  author = {Duan, Jinhao and Kong, Fei and Wang, Shiqi and Shi, Xiaoshuang and Xu, Kaidi},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages = {8717--8730},
  year = {2023}
}