Model Extraction Attacks against Graph Neural Networks

The source code for the AsiaCCS 2022 paper "Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization". The paper is available at https://arxiv.org/abs/2010.12751

If you make use of this code in your work, please cite the following paper:

<pre>
@inproceedings{wypy2022meagnn,
  title     = {Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization},
  author    = {Bang, Wu and Xiangwen, Yang and Shirui, Pan and Xingliang, Yuan},
  booktitle = {{ASIA} {CCS} '22: {ACM} Asia Conference on Computer and Communications Security, Nagasaki, May 30 - June 3, 2022},
  year      = {2022},
  publisher = {{ACM}}
}
</pre>

Environment Requirements

Usage

Parameters

Please specify the attack you intend to run. The attacks are listed in the following table:

| Attack Type | Node Attribute  | Graph Structure | Shadow Dataset |
|-------------|-----------------|-----------------|----------------|
| Attack-0    | Partially Known | Partially Known | Unknown        |
| Attack-1    | Partially Known | Unknown         | Unknown        |
| Attack-2    | Unknown         | Known           | Unknown        |
| Attack-3    | Unknown         | Unknown         | Known          |
| Attack-4    | Partially Known | Partially Known | Known          |
| Attack-5    | Partially Known | Unknown         | Known          |
| Attack-6    | Unknown         | Known           | Known          |

Please specify the dataset among Cora, Citeseer, Pubmed for your target model training.

Please specify the proportion of the nodes obtained by the adversary.

For attacks that assume knowledge of a shadow dataset, please specify the size of the shadow dataset as a ratio of the target training dataset. For example, for a shadow dataset half the size of the target dataset, use 0.5.
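The parameters above could be wired up with `argparse` roughly as follows. This is only a sketch: the repo's example shows `--attack_type`, `--dataset`, and `--attack_node`, while the `--shadow_size` flag name for the shadow-dataset ratio is an assumption and may differ in `main.py`.

```python
import argparse

def build_parser():
    # Sketch of the CLI described above; --shadow_size is an assumed flag name.
    parser = argparse.ArgumentParser(
        description="Model extraction attacks against GNNs")
    parser.add_argument("--attack_type", type=int, default=0,
                        choices=range(7),
                        help="attack index 0-6 from the taxonomy table")
    parser.add_argument("--dataset", type=str, default="cora",
                        choices=["cora", "citeseer", "pubmed"],
                        help="target dataset")
    parser.add_argument("--attack_node", type=float, default=0.25,
                        help="proportion of nodes obtained by the adversary")
    parser.add_argument("--shadow_size", type=float, default=0.5,
                        help="shadow dataset size as a ratio of the target "
                             "training set (attacks 3-6 only; assumed flag)")
    return parser

# Parse the example invocation from this README.
args = build_parser().parse_args(
    ["--attack_type", "0", "--dataset", "cora", "--attack_node", "0.25"])
```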

Example

To run attack-0 on Cora with 25% of the nodes obtained by the adversary, run the command:

python main.py --attack_type 0 --dataset cora --attack_node 0.25
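To sweep all seven attack types on one dataset, a simple shell loop works. This sketch only prints the commands (drop the leading `echo` to execute them); attacks 3-6 would additionally need the shadow-dataset size flag, whose exact name depends on `main.py`'s interface.

```shell
# Print the command for each of the seven attack types on Cora.
# Remove "echo" to actually execute; attacks 3-6 also require a
# shadow-dataset ratio flag (name depends on main.py).
for atk in 0 1 2 3 4 5 6; do
  echo python main.py --attack_type "$atk" --dataset cora --attack_node 0.25
done
```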

If you have any questions, please send an email to us.