FedALA: Adaptive Local Aggregation for Personalized Federated Learning

Introduction

This is the implementation of our paper FedALA: Adaptive Local Aggregation for Personalized Federated Learning (accepted by AAAI 2023). An extended version (derivation of Equation (6), hyperparameter settings, etc.) can be found at https://arxiv.org/pdf/2212.01197v4.pdf.

Citation

@inproceedings{zhang2023fedala,
  title={Fedala: Adaptive local aggregation for personalized federated learning},
  author={Zhang, Jianqing and Hua, Yang and Wang, Hao and Song, Tao and Xue, Zhengui and Ma, Ruhui and Guan, Haibing},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={37},
  number={9},
  pages={11237--11244},
  year={2023}
}

Dataset

Here we upload only the MNIST dataset under the default heterogeneous setting with Dir(0.1) as an example. You can generate other datasets following PFLlib.
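
For intuition, the heterogeneous setting is a label-distribution skew controlled by a Dirichlet concentration parameter (smaller values such as 0.1 give more heterogeneous clients). The snippet below is only a minimal sketch of how such a Dir(0.1) split can be drawn; it is not the PFLlib generator, and the function name and arguments are illustrative.

import numpy as np

def dirichlet_partition(labels, num_clients, alpha=0.1, seed=0):
    # Illustrative sketch only: split sample indices across clients with a
    # Dirichlet(alpha) label-distribution skew (smaller alpha = more skew).
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx_c = rng.permutation(np.flatnonzero(labels == c))
        # fraction of class c assigned to each client
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx_c)).astype(int)
        for client_id, part in enumerate(np.split(idx_c, cuts)):
            client_indices[client_id].extend(part.tolist())
    return client_indices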

System

Adaptive Local Aggregation (ALA) module

./system/utils/ALA.py implements the ALA module, which corresponds to lines 6-16 of the pseudocode in Algorithm 1 of our paper. You can easily apply the ALA module to other federated learning (FL) methods by importing it as a Python module.
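
For intuition, the core operation of ALA is an element-wise combination of the received global model and the old local model: the lower layers take the global parameters directly, while the top layer_idx layers are aggregated as theta_local + (theta_global - theta_local) * W, where the weights W are learned on local data. The sketch below only illustrates this combination rule for already-given weights; the weight learning, random data sampling, and convergence check live in ./system/utils/ALA.py.

import torch

@torch.no_grad()
def elementwise_aggregate(local_model, global_model, ala_weights, layer_idx):
    # Illustrative sketch of the ALA combination rule with given weights
    # (ala_weights: one tensor per parameter of the top layer_idx layers,
    # same shapes as those parameters; assumes layer_idx >= 1).
    local_params = list(local_model.parameters())
    global_params = list(global_model.parameters())
    # lower layers: overwrite with the received global parameters
    for lp, gp in zip(local_params[:-layer_idx], global_params[:-layer_idx]):
        lp.data.copy_(gp.data)
    # higher layers: element-wise adaptive aggregation
    for lp, gp, w in zip(local_params[-layer_idx:], global_params[-layer_idx:], ala_weights):
        lp.data.copy_(lp.data + (gp.data - lp.data) * w)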

How to use

from ALA import ALA  # ALA class in ./system/utils/ALA.py; adjust the import path to your project layout

class Client(object):
    def __init__(self, ...):
        # other code
        self.ALA = ALA(self.id, self.loss, self.train_data, self.batch_size, 
                    self.rand_percent, self.layer_idx, self.eta, self.device)
        # other code

    def local_initialization(self, received_global_model, ...):
        # other code
        self.ALA.adaptive_local_aggregation(received_global_model, self.model)
        # other code
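
In a typical communication round, local_initialization is called with the freshly received global model before the client's usual local training, so ALA replaces the naive step of overwriting the local model with the global one. A minimal sketch of that call order (run_one_round and train are hypothetical names, not part of this repository):

def run_one_round(client, received_global_model):
    # ALA-based initialization instead of directly overwriting the local model
    client.local_initialization(received_global_model)
    # standard local training on the initialized personalized model
    client.train()
    return client.model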

Simulation

Environments

With conda installed, you can run this platform in a conda virtual environment called fl.

conda env create -f env_cuda_latest.yaml # for Linux
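
After the environment is created, activate it before running any experiments:

conda activate fl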

Training and Evaluation

All code corresponding to FedALA is stored in ./system. Run the following commands:

cd ./system
sh run_me.sh

Note: Because the floating-point precision of different GPUs differs, you may need to set a suitable threshold (we set it to 0.01 by default in our paper) for the ALA module to control how closely its weight learning converges in the start phase. Too small a threshold may cause your system to get stuck in the first iteration.
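
For intuition only, the kind of check this note refers to can be sketched as stopping the start-phase weight training once the recent losses flatten out, e.g. when their standard deviation falls below the threshold. The exact criterion used by the repository is in ./system/utils/ALA.py; the window size below is an assumption.

import numpy as np

def weight_training_converged(loss_history, threshold=0.01, window=10):
    # Sketch: treat the ALA weight learning as converged once the last
    # `window` losses vary by less than `threshold` (their standard deviation).
    if len(loss_history) < window:
        return False
    return float(np.std(loss_history[-window:])) < threshold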