
tRFA: Robust Aggregation for Federated Learning in PyTorch

Please see here for a TensorFlow (1.x) implementation of RFA.

Please see here for a self-contained implementation of the geometric median.

This code provides a PyTorch implementation of robust aggregation algorithms for federated learning, together with scripts to reproduce the experimental results of the paper Robust Aggregation for Federated Learning.

If you use this code, please cite the paper using the BibTeX reference below:

@article{pillutla2022robust,
  author={Pillutla, Krishna and Kakade, Sham M. and Harchaoui, Zaid},
  journal={IEEE Transactions on Signal Processing}, 
  title={{Robust Aggregation for Federated Learning}}, 
  year={2022},
  volume={70},
  number={},
  pages={1142-1154},
  doi={10.1109/TSP.2022.3153135}
}

This code is based on this repository, which in turn was based on a fork of the Leaf benchmark suite. Please consider citing the papers for these repositories if you find this codebase useful. A more modern and scalable PyTorch-based federated learning simulation can also be found here.

Introduction

Federated Learning is a paradigm for training centralized machine learning models on data distributed over a large number of devices such as mobile phones. A typical federated learning algorithm consists of local computation on some of the devices followed by secure aggregation of individual device updates to update the central model.
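
For concreteness, the aggregation step of FedAvg is a weighted average of the device updates, with each sampled device weighted by its local data size. The sketch below is illustrative only; the function name and tensor shapes are not the repository's API:

import torch

def fedavg_aggregate(updates, num_samples):
    """Weighted average of device updates, as in FedAvg (illustrative sketch)."""
    # updates: list of flattened model-update tensors, one per sampled device
    # num_samples: local dataset sizes, used as aggregation weights
    weights = torch.tensor(num_samples, dtype=torch.float32)
    weights = weights / weights.sum()
    stacked = torch.stack(updates)  # shape: (num_devices, model_dim)
    return torch.sum(weights[:, None] * stacked, dim=0)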

The accompanying paper describes a robust aggregation approach that makes federated learning robust in settings where a fraction of the devices may send corrupted updates to the server.

This code compares the RobustFedAgg algorithm proposed in the accompanying paper to the FedAvg algorithm (McMahan et al., 2017). The code has been developed from a fork of Leaf, commit 51ab702af932090b3bd122af1a812ea4da6d8740.
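
RFA replaces the weighted mean in the aggregation step with an approximate geometric median, computed by a smoothed Weiszfeld iteration. The following is a minimal sketch of that iteration, not the repository's implementation; the function name and default parameters are illustrative:

import torch

def geometric_median(points, weights, nu=1e-6, n_iters=10):
    """Smoothed Weiszfeld iteration for the weighted geometric median (sketch)."""
    # points: tensor of shape (num_devices, model_dim), one flattened update per device
    # weights: tensor of shape (num_devices,), e.g. proportional to local data sizes
    # nu: smoothing constant that guards against division by zero
    z = torch.sum(weights[:, None] * points, dim=0) / weights.sum()  # start at the mean
    for _ in range(n_iters):
        dist = torch.norm(points - z, dim=1).clamp(min=nu)
        beta = weights / dist
        z = torch.sum(beta[:, None] * points, dim=0) / beta.sum()
    return z

Since each Weiszfeld step is itself a weighted average, this aggregation remains compatible with secure aggregation protocols, as discussed in the paper.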

Installation

This code is written in Python 3.8 and has been tested with PyTorch 1.4+. A conda environment file, rfa.yml, is provided with all dependencies except PyTorch. The environment can be created with conda as follows:

conda env create -f rfa.yml 

Installing PyTorch: instructions to install a version of PyTorch compatible with the CUDA version on your GPUs (or a CPU-only version) can be found here.
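
After installation, a quick sanity check confirms that the version is recent enough and that PyTorch can see your GPUs (if any):

import torch

print(torch.__version__)          # should be 1.4 or later
print(torch.cuda.is_available())  # True if the install matches your CUDA setup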

Data Setup

  1. Sent140
time ./preprocess.sh -s niid --sf 1.0 -k 100 -t sample -tf 0.8

  2. EMNIST (called FEMNIST in the codebase)
time ./preprocess.sh -s niid --sf 1.0 -k 100 -t sample

NOTE: The EMNIST experiments in the paper were produced using the TensorFlow implementation of RFA, so please use that repository to exactly reproduce the results.

  3. Shakespeare
time ./preprocess.sh -s niid --sf 1.0 -k 100 -t sample -tf 0.8

NOTE: The Shakespeare experiments in the paper were produced using the TensorFlow implementation of RFA, so please use that repository to exactly reproduce the results.
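
To sanity-check the preprocessing output, something like the following can be used. It assumes Leaf's usual output layout (JSON files under data/<dataset>/data/train/ with a top-level "users" key); the paths below are illustrative and may differ in your checkout:

import glob
import json

# Hypothetical paths, assuming Leaf's standard output layout.
for path in glob.glob("data/sent140/data/train/*.json"):
    with open(path) as f:
        blob = json.load(f)
    print(path, "->", len(blob["users"]), "users")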

Reproducing Experiments in the Paper

Once the data has been set up, the scripts provided in the folder models/scripts/ can be used to reproduce the experiments in the paper.

Change directory to models/ and run the scripts as follows:

./scripts/sent140/gm.sh  # run geometric median experiments