FairGrad

Official implementation of Fair Resource Allocation in Multi-Task Learning.

Toy Example

Supervised Learning

The performance is evaluated under three scenarios, covering the Cityscapes, NYU-v2, QM9, and CelebA experiments described below.

Setup Environment

Following Nash-MTL and FAMO, we implement our method with the MTL library.

First, create the virtual environment:

conda create -n mtl python=3.9.7
conda activate mtl
python -m pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
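
To verify that the CUDA 11.3 build was picked up (an optional sanity check, not part of the original instructions):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"

On a CUDA-capable machine this should print 1.12.1+cu113 and True.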

Then, install the repo:

git clone https://github.com/OptMN-Lab/fairgrad.git
cd fairgrad
python -m pip install -e .
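
If you want to call the weighting method from your own training loop, a quick import check (the methods.weight_methods module path is an assumption based on the Nash-MTL/FAMO code layout this repo builds on, not a verified API):

python -c "from methods.weight_methods import WeightMethods"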

Run Experiment

By default, the dataset should be placed under the experiments/EXP_NAME/dataset/ folder, where EXP_NAME is one of {celeba, cityscapes, nyuv2, quantum_chemistry}. To run an experiment:

cd experiments/EXP_NAME
sh run.sh
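
Each run.sh wraps the experiment's training script with our method selected; a hypothetical invocation it might reduce to (trainer.py and the flag names here are illustrative assumptions, not verified against the scripts):

python trainer.py --method=fairgrad --seed=0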

Cityscapes, NYU-v2, QM9. Please refer to Tables 2, 3, and 8 of our paper for more details.

CelebA. For detailed results of our method, please refer to issue1. For single-task results and other baselines (including FAMO, CAGrad, etc.), please refer to issue2.

Reinforcement Learning

The experiments are conducted on the Meta-World benchmark. To run the experiments on MT10 and MT50 (the instructions below are partly borrowed from CAGrad):

  1. Create a Python 3.6 virtual environment.
  2. Install the MTRL codebase.
  3. Install the Meta-World environment at commit id d9a75c451a15b0ba39d8b7a8b6d18d883b8655d8 (steps 1-3 are sketched after this list).
  4. Copy the mtrl_files folder to the mtrl folder of the installed mtrl repo, then:
cd PATH_TO_MTRL/mtrl_files/ && chmod +x mv.sh && ./mv.sh
  5. Follow run.sh to run the experiments.
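
A condensed sketch of steps 1-3 (the repository URLs, the conda-based environment, and the dev requirements file are assumptions based on the upstream MTRL and Meta-World projects; the commit id comes from step 3 above):

conda create -n mtrl python=3.6 -y
conda activate mtrl
git clone https://github.com/facebookresearch/mtrl.git
cd mtrl
python -m pip install -r requirements/dev.txt
python -m pip install "git+https://github.com/rlworkgroup/metaworld.git@d9a75c451a15b0ba39d8b7a8b6d18d883b8655d8"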