LoDen

Cite LoDen

If you use LoDen for research, please cite the following paper:

@inproceedings{Ma2023LoDen,
  title={LoDen: Making Every Client in Federated Learning a Defender Against the Poisoning Membership Inference Attacks},
  author={Ma, Mengyao and Zhang, Yanjun and Arachchige, Pathum Chamikara Mahawaga and Zhang, Leo Yu and Baruwal Chhetri, Mohan and Bai, Guangdong},
  booktitle={18th ACM ASIA Conference on Computer and Communications Security (ASIACCS 2023)},
  year={2023},
  publisher={ACM}
}

Code structure

To run the experiments, run loden_<setting>_defence.py, replacing <setting> with the LoDen knowledge setting to experiment with. There are three key runnable files in the repository: loden_blackbox_defence.py (the black-box LoDen experiment), loden_whitebox_defence.py (the white-box LoDen experiment), and loden_MIA.py (the MIA baseline experiment).

Other files in the repository include constants.py, which defines the experiment parameters and the directory where the results are saved.

Instructions for running the experiments

1. Set the experiment parameters

The experiment parameters are defined in constants.py. Set the desired parameters in this file before running an experiment.
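For illustration, a minimal sketch of such a configuration is shown below; all parameter names here (NUM_CLIENTS, NUM_MALICIOUS, GLOBAL_ROUNDS, OUTPUT_DIR, and so on) are hypothetical and may differ from the actual names used in constants.py.

# Hypothetical sketch of constants.py -- the actual parameter names in the
# repository may differ from the ones shown here.
NUM_CLIENTS = 20          # total number of federated learning clients
NUM_MALICIOUS = 4         # number of attacker-controlled clients
GLOBAL_ROUNDS = 100       # number of FL training rounds
BATCH_SIZE = 64           # local training batch size
LEARNING_RATE = 0.01      # local optimiser learning rate
OUTPUT_DIR = "./results"  # directory where the experiment logs are written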

2. Run the experiment

To run an experiment, run loden_<setting>_defence.py, replacing <setting> with the LoDen knowledge setting to experiment with. The experiments can be launched from the command line; for example, in a Linux environment, to execute the black-box LoDen experiment, run the following command from the source code path (see the sketch after these commands for running all three in sequence):

python loden_blackbox_defence.py

To execute the white-box LoDen experiment, run the following command from the source code path:

python loden_whitebox_defence.py

To execute the MIA baseline experiment, run the following command from the source code path:

python loden_MIA.py
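To run all three experiments back to back, a small driver script such as the following sketch can be used; it only assumes that the three scripts named above are in the current working directory.

import subprocess
import sys

# The three experiment scripts listed above; adjust if the file names differ.
EXPERIMENTS = [
    "loden_blackbox_defence.py",
    "loden_whitebox_defence.py",
    "loden_MIA.py",
]

for script in EXPERIMENTS:
    print(f"Running {script} ...")
    # Launch each experiment with the current Python interpreter; stop on failure.
    subprocess.run([sys.executable, script], check=True)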

3. Save the experiment results

After the experiment finishes, the experiment results are saved in the directory defined in constants.py. The results include the local nodes log, the attacker log, the LoDen defence log, and the FL training log.

Understanding the output

The experiment results are saved in the directory defined in constants.py (the default output directory) and include the local nodes log, the attacker log, the LoDen defence log, and the FL training log.
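As a quick sanity check that the logs were written, a short sketch like the one below lists the files in the output directory; the OUTPUT_DIR value is an assumption and should be replaced with the directory actually configured in constants.py.

import os

# Assumed output directory; replace with the directory configured in constants.py.
OUTPUT_DIR = "./results"

# Print every file written by the experiment together with its size in bytes.
for name in sorted(os.listdir(OUTPUT_DIR)):
    path = os.path.join(OUTPUT_DIR, name)
    if os.path.isfile(path):
        print(f"{name}: {os.path.getsize(path)} bytes")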

Requirements

It is recommended to run the code in a conda virtual environment.
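A typical environment setup could look like the following; the environment name, the Python version, and the presence of a requirements.txt file are assumptions and should be adjusted to match the repository:

conda create -n loden python=3.8
conda activate loden
pip install -r requirements.txt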
