# Revisiting Adversarially Learned Injection Attacks Against Recommender Systems
A PyTorch implementation of the paper:

*Revisiting Adversarially Learned Injection Attacks Against Recommender Systems*, Jiaxi Tang, Hongyi Wen, and Ke Wang, RecSys '20
## Requirements
- Python 3.5+
- Please check `requirements.txt` for the required Python packages.
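
These can be installed with pip; a typical invocation (assuming a standard pip-based setup) is:

```bash
pip install -r requirements.txt
```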
## Running experiments
### Synthetic dataset
`synthetic_exp.ipynb` includes a self-contained implementation for:
- Generating synthetic data
- Reproducing the experiments in the paper
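
The notebook can be opened with Jupyter (assuming it is installed in your environment):

```bash
jupyter notebook synthetic_exp.ipynb
```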
### Real-world dataset
Please follow the steps below to run experiments on a real-world dataset (i.e., Gowalla):
- Install the required packages.
- Create a folder for experiment outputs (e.g., logs, model checkpoints, etc.):

  ```bash
  cd revisit_adv_rec
  mkdir outputs
  ```
- To generate fake data for attacking, change the configs in `generate_attack_args.py` (or leave them as is), then run:

  ```bash
  python generate_attack.py
  ```

  You will find the fake data stored in the `outputs/` folder, such as `outputs/Sur-ItemAE_fake_data_best.npz`; a sketch for inspecting this file follows the list below.
- To inject the fake data and evaluate the recommender's performance after the attack, modify the configs in `evaluate_attack_args.py` (you need to point `fake_data_path` to your own file; see the config sketch after this list), then run:

  ```bash
  python evaluate_attack.py
  ```
- To evaluate each victim model's performance without fake data (i.e., *Without attack* in Figure 5(a)), set `fake_data_path = None` in `evaluate_attack_args.py`, then run:

  ```bash
  python evaluate_attack.py
  ```
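
The generated fake data is stored as a NumPy `.npz` archive. The array names inside are not documented here, so the minimal sketch below simply loads the file and lists whatever arrays it contains (the path is the example output from the generation step above):

```python
import numpy as np

# Load the generated fake-data archive produced by generate_attack.py.
archive = np.load("outputs/Sur-ItemAE_fake_data_best.npz")

# Print each stored array's name and shape to see what the file contains.
for key in archive.files:
    print(key, archive[key].shape)
```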
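
For reference, here is a hypothetical sketch of how the `fake_data_path` setting in `evaluate_attack_args.py` might look for the two cases above; this assumes the config is a plain module-level variable, which may not match the file's actual layout:

```python
# In evaluate_attack_args.py (assumed structure, for illustration only).

# Case 1: evaluate the attack by pointing to your generated fake data.
fake_data_path = "outputs/Sur-ItemAE_fake_data_best.npz"

# Case 2: evaluate victim models without fake data
# ("Without attack" in Figure 5(a)).
# fake_data_path = None
```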
Below are the logs obtained from using the WRMF+SGD method for the attack:
## Citation
If you use this code in your work, please cite the paper:
```bibtex
@inproceedings{tang2020revisit,
  title={Revisiting Adversarially Learned Injection Attacks Against Recommender Systems},
  author={Tang, Jiaxi and Wen, Hongyi and Wang, Ke},
  booktitle={ACM Conference on Recommender Systems},
  year={2020}
}
```