Official implementation of "Inference Attacks Against Graph Neural Networks" (USENIX Security 2022)

Overview of the Code

The entry point is main.py, which invokes the experimental classes in the exp/ folder to conduct the different experiments. For example, you can run the code from the command line with

python main.py --attack 'graph_reconstruction'

Note that the arguments are optional; you can set their default values in main.py.
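A minimal sketch of how such an entry point can dispatch to an experiment class via an optional --attack argument. The ATTACKS mapping and the GraphReconstructionExp class here are illustrative assumptions, not the repository's actual code:

```python
import argparse

# Hypothetical stand-in for an experiment class from the exp/ folder.
class GraphReconstructionExp:
    def run(self):
        return "running graph_reconstruction"

# Hypothetical name-to-class mapping; the real main.py resolves
# experiments from exp/ in its own way.
ATTACKS = {
    "graph_reconstruction": GraphReconstructionExp,
}

def main(argv=None):
    parser = argparse.ArgumentParser()
    # Optional argument with a default, so plain `python main.py` works too.
    parser.add_argument("--attack", default="graph_reconstruction",
                        choices=sorted(ATTACKS))
    args = parser.parse_args(argv)
    return ATTACKS[args.attack]().run()

if __name__ == "__main__":
    print(main())
```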

Code Structure

Directory Structure under temp_data

.
├── attack_data
│	├── graph_reconstruct
│	├── property_infer
│	├── property_infer_basic
│	└── subgraph_infer
├── defense_data
│	├── property_infer
│	└── subgraph_infer
├── gae_model
│	├── AIDS_diff_pool
│	├── AIDS_mean_pool
│	├── AIDS_mincut_pool
│	└── fine_tune
├── model
│	├── model_AIDS
│	└── para_AIDS
├── original_dataset
│	└── AIDS
├── split
│	└── 20
└── target_model
    └── 20
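The layout above can be prepared with a short helper; whether temp_data sits at the repo root and whether the leaf entries under model/ are files or directories are assumptions here, so the sketch only creates the folder skeleton:

```python
import os

# Subdirectories mirroring the temp_data tree shown above.
TEMP_DATA_DIRS = [
    "attack_data/graph_reconstruct",
    "attack_data/property_infer",
    "attack_data/property_infer_basic",
    "attack_data/subgraph_infer",
    "defense_data/property_infer",
    "defense_data/subgraph_infer",
    "gae_model/AIDS_diff_pool",
    "gae_model/AIDS_mean_pool",
    "gae_model/AIDS_mincut_pool",
    "gae_model/fine_tune",
    "model",
    "original_dataset/AIDS",
    "split/20",
    "target_model/20",
]

def make_temp_data(root="temp_data"):
    """Create the temp_data skeleton; existing folders are left untouched."""
    for sub in TEMP_DATA_DIRS:
        os.makedirs(os.path.join(root, sub), exist_ok=True)

if __name__ == "__main__":
    make_temp_data()
```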

Citation

@inproceedings{zhang22usenix,
author = {Zhikun Zhang and Min Chen and Michael Backes and Yun Shen and Yang Zhang},
title = {{Inference Attacks Against Graph Neural Networks}},
booktitle = {USENIX Security Symposium (USENIX Security)},
year = {2022}
}