Introduction

This is the implementation of our paper Eliminating Domain Bias for Federated Learning in Representation Space (accepted by NeurIPS 2023). DBE improves bi-directional knowledge transfer between the server and clients. We provide the code for the representative FedAvg+DBE (FedDBE).

Takeaway: By eliminating domain bias in the feature extractor, we address catastrophic forgetting during local training and enhance its generalization ability. Consequently, the global module can swiftly adapt to a new client.
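
At a high level, the paper pairs a per-client trainable representation bias (kept locally, never uploaded) with a mean-regularization term that pulls local representations toward a server-side consensus mean. The following PyTorch-style sketch only illustrates that idea; the module and variable names (DBEClientModel, prbm, global_mean, kappa) are hypothetical and do not correspond to the code in ./system.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DBEClientModel(nn.Module):
    """Illustrative client model: shared backbone + head, plus a local trainable representation bias."""
    def __init__(self, feature_extractor: nn.Module, head: nn.Module, feat_dim: int):
        super().__init__()
        self.feature_extractor = feature_extractor  # shared part, aggregated by the server
        self.head = head                            # classifier on top of representations
        # Per-client trainable bias that absorbs the local domain bias (kept on the client).
        self.prbm = nn.Parameter(torch.zeros(feat_dim))

    def forward(self, x):
        z = self.feature_extractor(x)               # local representation
        logits = self.head(z + self.prbm)           # head sees the bias-adjusted representation
        return logits, z

def dbe_local_loss(model, x, y, global_mean, kappa=1.0):
    """Cross-entropy plus a mean-regularization term (hypothetical helper)."""
    logits, z = model(x)
    ce = F.cross_entropy(logits, y)
    mr = F.mse_loss(z.mean(dim=0), global_mean)     # pull the batch mean toward the consensus mean
    return ce + kappa * mr

Keeping the bias vector local is what lets the aggregated feature extractor stay free of client-specific domain bias.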

Citation

@article{zhang2023eliminating,
  title={Eliminating domain bias for federated learning in representation space},
  author={Zhang, Jianqing and Hua, Yang and Cao, Jian and Wang, Hao and Song, Tao and Xue, Zhengui and Ma, Ruhui and Guan, Haibing},
  journal={Advances in Neural Information Processing Systems},
  volume={36},
  year={2023}
}

Datasets and Environments

Due to file size limits, we only upload the fmnist dataset with the default practical setting ($\beta=0.1$). Please refer to our project PFLlib for other datasets and environment settings.
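
In the practical setting, class labels are spread across clients with a Dirichlet distribution whose concentration $\beta$ controls heterogeneity (smaller $\beta$ means more skewed clients). The snippet below is only a generic sketch of such a label-Dirichlet split; to reproduce our exact data, please use the generation scripts in PFLlib.

import numpy as np

def dirichlet_partition(labels, num_clients, beta=0.1, seed=0):
    """Split sample indices across clients with a per-class Dirichlet(beta) draw (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx_c = np.flatnonzero(labels == c)
        rng.shuffle(idx_c)
        proportions = rng.dirichlet(np.full(num_clients, beta))  # share of class c for each client
        cut_points = (np.cumsum(proportions)[:-1] * len(idx_c)).astype(int)
        for client_id, part in enumerate(np.split(idx_c, cut_points)):
            client_indices[client_id].extend(part.tolist())
    return client_indices

# Example: 10,000 dummy labels from 10 classes partitioned across 20 clients.
labels = np.random.randint(0, 10, size=10000)
print([len(p) for p in dirichlet_partition(labels, num_clients=20, beta=0.1)])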

System

Training and Evaluation

All code for FedDBE is stored in ./system. To train and evaluate, just run the following commands.

cd ./system
sh run_me.sh
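
Since FedDBE builds on FedAvg, the server step is essentially a sample-weighted average of the uploaded client models, while the local bias memory stays on each client. Below is a generic sketch of such an aggregation, not the repo's implementation; state_dicts and num_samples are hypothetical inputs.

import torch

def fedavg_aggregate(state_dicts, num_samples):
    """Weighted-average a list of client state_dicts by local sample counts (illustrative sketch)."""
    total = float(sum(num_samples))
    weights = [n / total for n in num_samples]
    averaged = {}
    for key in state_dicts[0]:
        averaged[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return averaged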