# Adversarial Attacks on Node Embeddings via Graph Poisoning
<p align="center">
  <img src="https://www.in.tum.de/fileadmin/w00bws/daml/node_attack/node_attack.png">
</p>

Preliminary reference implementation of the attack proposed in the paper:
"Adversarial Attacks on Node Embeddings via Graph Poisoning",
Aleksandar Bojchevski and Stephan Günnemann, ICML 2019.
## Requirements
- gensim
- tensorflow
- sklearn (only for evaluation)
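Exact versions are not pinned here; something like `pip install gensim tensorflow scikit-learn` should pull in everything needed (sklearn is distributed as `scikit-learn`).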
## Example
The notebook `example.ipynb` demonstrates our general attack and compares it against the baselines.
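For orientation, below is a minimal sketch of the poisoning workflow the notebook walks through. The helper names (`load_dataset`, `generate_candidates`, `perturbation_top_flips`, `learn_embedding`) and their signatures are assumptions made for illustration only, not this repository's actual API; `example.ipynb` is the authoritative reference.

```python
# Minimal sketch of the poisoning workflow, built on hypothetical helpers.
# None of the names imported below are guaranteed to match this codebase;
# see example.ipynb for the actual API.

# --- assumed helpers (placeholders, not the repository's real functions) ---
# load_dataset(name)                          -> (sparse adjacency matrix, node labels)
# generate_candidates(adj)                    -> candidate edge flips as (i, j) pairs
# perturbation_top_flips(adj, cand, n, d, w)  -> the n highest-impact flips
# learn_embedding(adj, d, w)                  -> node embedding matrix
from attack_helpers import (load_dataset, generate_candidates,  # hypothetical module
                            perturbation_top_flips, learn_embedding)

adj_matrix, labels = load_dataset('cora')   # clean graph
n_flips, dim, window_size = 1000, 32, 5     # attack budget and embedding hyperparameters

# 1. Score candidate edge flips and keep the n_flips most damaging ones
#    (the general attack described in the paper).
candidates = generate_candidates(adj_matrix)
top_flips = perturbation_top_flips(adj_matrix, candidates, n_flips, dim, window_size)

# 2. Poison the graph by toggling the selected entries of the adjacency matrix.
adj_poisoned = adj_matrix.tolil(copy=True)
for i, j in top_flips:
    adj_poisoned[i, j] = adj_poisoned[j, i] = 1 - adj_poisoned[i, j]
adj_poisoned = adj_poisoned.tocsr()

# 3. Re-train embeddings on the clean and the poisoned graph; downstream tasks
#    (e.g. node classification) on the poisoned embeddings should degrade.
emb_clean = learn_embedding(adj_matrix, dim, window_size)
emb_poisoned = learn_embedding(adj_poisoned, dim, window_size)
```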
## Cite
Please cite our paper if you use this code in your own work:
```bibtex
@inproceedings{bojchevski2019adversarial,
  title     = {Adversarial Attacks on Node Embeddings via Graph Poisoning},
  author    = {Aleksandar Bojchevski and Stephan G{\"{u}}nnemann},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning, {ICML}},
  series    = {Proceedings of Machine Learning Research},
  publisher = {PMLR},
  year      = {2019},
}
```