Important Notice

This public repository is read-only and no longer maintained. For the latest sample code repositories, visit the SAP Samples organization.

Membership Inference Attacks against Variational Autoencoders

Description

This repository contains framework functions for creating differentially private VAE models for image and location data. Furthermore, it features logic for membership inference attacks against Variational Autoencoders [1,2].

Requirements (lowest tested version numbers)

Download and Installation

Explanation of the repository structure

Root Directory

The root folder contains most of the scripts. Everything here can be executed or opened directly and makes use of the underlying folder structure, e.g., by importing code and configurations or by writing files to the folders described below.

Core

The core folder contains most of the Python logic. Everything related to model creation, training, and loading is part of model.py, while the logic for the attacks is contained in attack.py. Additionally, dataset.py holds the code for loading and preprocessing the different data sets.
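For orientation, a loader in the spirit of dataset.py could look roughly like the sketch below. This is a hedged illustration only: the function name load_mnist and its exact behavior are assumptions for this example, not the repository's actual API.

# Illustrative sketch only -- the actual loaders in core/dataset.py may differ.
from tensorflow.keras.datasets import mnist

def load_mnist(normalize=True):
    """Load MNIST and scale pixel values to [0, 1] for VAE training."""
    (x_train, _), (x_test, _) = mnist.load_data()
    x_train = x_train.astype("float32")
    x_test = x_test.astype("float32")
    if normalize:
        # VAEs with sigmoid/Bernoulli outputs expect inputs in [0, 1].
        x_train /= 255.0
        x_test /= 255.0
    # Flatten the 28x28 images into vectors; adjust if the model expects image tensors.
    return x_train.reshape(-1, 784), x_test.reshape(-1, 784)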

Util

All additional code can be found in the util folder. Basic utility functionality used by the core logic is contained in utilities.py. Some functions for metric calculations are defined in metrics.py and used by the Jupyter notebooks for the evaluation, although most metric calculations are done directly in the notebooks.
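As an illustration of the kind of helper metrics.py might offer, the sketch below computes two standard membership inference metrics, attack accuracy and AUC. The function name and signature are assumptions for this example, not the repository's actual API.

# Hedged sketch of a membership-inference metric helper; names are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score

def attack_metrics(scores, is_member):
    """Compute attack accuracy and AUC from per-record attack scores.

    scores    -- attack scores, higher means "more likely a training member"
    is_member -- ground-truth membership labels (1 = member, 0 = non-member)
    """
    scores = np.asarray(scores, dtype=float)
    is_member = np.asarray(is_member, dtype=int)
    # Accuracy at the median threshold, suitable for a balanced member/non-member split.
    predictions = (scores > np.median(scores)).astype(int)
    accuracy = float(np.mean(predictions == is_member))
    # AUC is threshold-independent and measures how well the scores rank members first.
    auc = roc_auc_score(is_member, scores)
    return accuracy, auc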

Configs

This folder contains the configurations, both for the generative models and for the attacks against them. The templates for these configurations can also be found here.
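Purely as a hypothetical illustration of what such a configuration might hold (the actual schema is defined by the templates in this folder, and all field names below are assumptions):

# Hypothetical model config sketch -- consult VAE_config_template for the real schema.
model_config = {
    "dataset": "mnist",          # which data set to load (image or location data)
    "latent_dim": 32,            # dimensionality of the VAE latent space
    "epochs": 100,
    "batch_size": 128,
    "differential_privacy": {    # omit or disable for a non-private baseline model
        "noise_multiplier": 1.1,
        "l2_norm_clip": 1.0,
    },
}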

Experiments

Log files from experiments are automatically saved here.

Logs

Logs written during attacks (logging is activated by default) are stored in this directory.

Models

The specifications of the created models are saved in separate folders within this directory. Each folder is named after the corresponding model configuration and contains the weight files, a hash that describes the training data composition, and the training history.
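As a hedged sketch of what such a composition hash could look like (the repository's actual scheme is not documented here, and the function below is hypothetical):

# Hypothetical sketch: hash the sorted indices of the training records so that
# two models trained on the same data subset produce the same hash.
import hashlib
import numpy as np

def training_data_hash(record_indices):
    data = np.asarray(sorted(record_indices), dtype=np.int64).tobytes()
    return hashlib.sha256(data).hexdigest()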

How to create and attack models

1. Creating target models

To create a baseline VAE model, first create a config file in the configs directory. Depending on the model type you choose, the config should comply with the corresponding config template (VAE_config_template). Next, either replace the config name in the model_script with your new config name and execute the script, or use the following code snippet in your own script or notebook:

from core.model import ModelContainer

# Build, train, and persist a model from the named config in the configs directory.
mc = ModelContainer.create('your_model_config')
mc.load_data()       # load and preprocess the data set named in the config
mc.create_model()    # build the VAE according to the config
mc.train_model()
mc.save_model()      # write the artifacts to the models directory
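After save_model() completes, the artifacts listed under Models above (weight files, training data hash, and training history) appear in a folder named after your model config.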

2. Creating attack models

To perform an attack on a created and trained model, an attack config has to be written; use Attacker_config_template as a basis. As when creating a model, you can either use the predefined attack_script (filling in the names of your attack and model configs) or embed the attack in your own code, which at minimum has to contain the following snippet:

from core.attack import Attacker

# Run a membership inference attack against the previously trained target model;
# attack logs are written to the logs directory by default.
attacker = Attacker('your_attack_config', 'your_model_config')
attacker.prepare_attack()
attacker.perform_attack()
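The attack results can then be evaluated, e.g., with the metric functions from util/metrics.py and the Jupyter notebooks mentioned above.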

Authors / Contributors

Known Issues

There are no known issues.

How to obtain support

This project is provided "as-is", and bug reports are not guaranteed to be fixed.

References

[1] Benjamin Hilprecht, Martin Härterich, and Daniel Bernau: Monte Carlo and Reconstruction Membership Inference Attacks against Generative Models. Proceedings on Privacy Enhancing Technologies, 2019(4):232–249. https://petsymposium.org/2019/files/papers/issue4/popets-2019-0067.pdf

[2] Benjamin Hilprecht, Martin Härterich, and Daniel Bernau: Code repository. https://github.com/SAP-samples/security-research-membership-inference-against-generative-networks

License

Copyright (c) 2022 SAP SE or an SAP affiliate company. All rights reserved. This project is licensed under the Apache Software License, version 2.0 except as noted otherwise in the LICENSE file.