Federated Learning
This is a partial reproduction of the paper Communication-Efficient Learning of Deep Networks from Decentralized Data.
So far, only experiments on MNIST and CIFAR-10 (both IID and non-IID) are included.
Note: the scripts will be slow because parallel computing is not implemented.
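For orientation, here is a minimal sketch of the server-side weight-averaging step at the heart of FedAvg, the algorithm from the paper. It assumes each client returns a PyTorch state_dict and weights all clients equally; the function name fed_avg is illustrative, not the repo's exact code.

```python
import copy
import torch

def fed_avg(client_weights):
    """Average a list of PyTorch state_dicts with equal client weighting."""
    avg = copy.deepcopy(client_weights[0])
    for key in avg.keys():
        for w in client_weights[1:]:
            avg[key] += w[key]
        avg[key] = torch.div(avg[key], len(client_weights))
    return avg

# Illustrative usage: global_model.load_state_dict(fed_avg(local_updates))
```

The paper itself weights each client's update by its local dataset size; with splits of equal size per client, the two weightings coincide.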
Requirements
python>=3.6
pytorch>=0.4
Run
The baseline (non-federated) MLP and CNN models are trained with:
python main_nn.py
Federated learning with the MLP and CNN models is run with:
python main_fed.py
See options.py for the full list of arguments; a sketch follows the example below.
For example:
python main_fed.py --dataset mnist --iid --num_channels 1 --model cnn --epochs 50 --gpu 0
Use --all_clients to average over all client models.
NB: for CIFAR-10, num_channels must be 3.
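For reference, a sketch of what the argument parsing in options.py might look like, restricted to the flags used above. The defaults shown are illustrative assumptions; check options.py for the authoritative list.

```python
import argparse

def args_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument('--dataset', type=str, default='mnist', help='mnist or cifar')
    parser.add_argument('--iid', action='store_true', help='use an IID data split')
    parser.add_argument('--num_channels', type=int, default=1, help='1 for MNIST, 3 for CIFAR-10')
    parser.add_argument('--model', type=str, default='mlp', help='mlp or cnn')
    parser.add_argument('--epochs', type=int, default=10, help='number of training rounds')
    parser.add_argument('--gpu', type=int, default=0, help='GPU id (-1 for CPU)')
    parser.add_argument('--all_clients', action='store_true', help='average over all client models')
    return parser.parse_args()
```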
Results
MNIST
Results are shown in Table 1 and Table 2, with the parameters C=0.1 (fraction of clients sampled per round), B=10 (local batch size), and E=5 (local epochs). A sketch of the non-IID partition appears after Table 2.
Table 1. Results of 10 epochs of training with a learning rate of 0.01
| Model      | Acc. of IID | Acc. of Non-IID |
|------------|-------------|-----------------|
| FedAVG-MLP | 94.57%      | 70.44%          |
| FedAVG-CNN | 96.59%      | 77.72%          |
Table 2. Results of 50 epochs of training with a learning rate of 0.01
| Model      | Acc. of IID | Acc. of Non-IID |
|------------|-------------|-----------------|
| FedAVG-MLP | 97.21%      | 93.03%          |
| FedAVG-CNN | 98.60%      | 93.81%          |
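The non-IID columns above reflect a pathological split in the style of the paper: sort the training images by label, cut them into shards, and hand each client a few shards so it sees only a couple of digit classes. Below is a minimal sketch for MNIST, assuming the paper's setup of 100 clients and 200 shards of 300 images each; the helper name and details are illustrative.

```python
import numpy as np

def mnist_noniid(labels, num_users=100, num_shards=200, num_imgs=300):
    """Sort-by-label sharding: each client receives 2 shards, so ~2 digit classes."""
    idxs = np.argsort(labels)                # image indices, sorted by digit label
    shard_ids = np.arange(num_shards)
    np.random.shuffle(shard_ids)
    dict_users = {}
    for i in range(num_users):
        own = shard_ids[2 * i: 2 * (i + 1)]  # two random shards per client
        dict_users[i] = np.concatenate(
            [idxs[s * num_imgs:(s + 1) * num_imgs] for s in own])
    return dict_users
```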
Acknowledgements
Thanks to youkaichao.
References
McMahan, Brendan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Artificial Intelligence and Statistics (AISTATS), 2017.
Cite As
Shaoxiong Ji. (2018, March 30). A PyTorch Implementation of Federated Learning. Zenodo. http://doi.org/10.5281/zenodo.4321561