
AgrEvader: Poisoning Membership Inference against Byzantine-robust Federated Learning

This repository contains the code for the paper "AgrEvader: Poisoning Membership Inference against Byzantine-robust Federated Learning".

Code structure

To run the experiments, run setting_optimized.py, replacing setting with the AgrEvader knowledge setting to experiment with. We provide two optimized AgrEvader attacks, Gray-box and Black-box. Runnable script files:

blackbox_optimized.py: Black-box AgrEvader attack
greybox_optimized.py: Gray-box AgrEvader attack

Other files in the repository include constants.py, which defines the experiment parameters and the output directory.

Instructions for running the experiments

1. Set the experiment parameters

The experiment parameters are defined in constants.py. To reproduce the results in the paper, set the parameters to match the experiment settings in the experiment settings directory.
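For illustration, constants.py might contain parameters along the following lines; all names and values below are hypothetical placeholders, not the actual settings used in the paper:

# Hypothetical sketch of constants.py; names and values are illustrative only
DEVICE = "cuda"              # device used for training, "cuda" or "cpu"
DATASET = "CIFAR10"          # dataset used for federated training
NUM_CLIENTS = 50             # total number of federated participants
NUM_ATTACKERS = 5            # AgrEvader attackers among the participants
AGGREGATOR = "trimmed_mean"  # Byzantine-robust aggregation rule under attack
GLOBAL_ROUNDS = 300          # number of FL communication rounds
OUTPUT_DIR = "./output/"     # directory where attack, model, and FL logs are written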

2. Run the experiment

To run an experiment, run setting_optimized.py, replacing setting with the AgrEvader knowledge setting to experiment with. The experiments can be started from the command line. For example, in a Linux environment, to execute the Black-box AgrEvader experiment, run the following command from the source code directory:

python blackbox_optimized.py

To execute the Gray-box AgrEvader experiment, run the following command from the source code directory:

python greybox_optimized.py

3. Save the experiment results

After an experiment finishes, its results are saved automatically in the output directory defined in constants.py.

Understanding the output

The results of the experiment are saved in the directory defined in constants.py (the default output directory) and include the AgrEvader attack log, the global and local model logs, and the FL training log.
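As a rough illustration, if the saved global model is a PyTorch state dict (an assumption; the actual file names and formats are determined by the code and constants.py), it can be inspected like this, using a hypothetical file name:

import torch

# Hypothetical path; the real file name depends on the output settings in constants.py
state = torch.load("output/global_model.pt", map_location="cpu")
for name, tensor in state.items():
    print(name, tuple(tensor.shape))  # layer names and parameter shapes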

Requirements

We recommend running the code in a conda virtual environment.
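For example, an environment could be set up as follows; the environment name, Python version, and the presence of a requirements.txt are assumptions, so adjust them to match the repository:

conda create -n agrevader python=3.8
conda activate agrevader
pip install -r requirements.txt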
