An Investigation of Critical Issues in Bias Mitigation Techniques

Our paper examines whether state-of-the-art bias mitigation methods perform well in more realistic settings: with multiple sources of bias, with hidden biases, and without access to the test distribution. This repository contains implementations/re-implementations of seven popular techniques.

Setup

Install Dependencies

conda create -n bias_mitigator python=3.7

source activate bias_mitigator

conda install pytorch==1.4.0 torchvision==0.5.0 cudatoolkit=10.1 -c pytorch

conda install tqdm opencv pandas
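
To verify the install (an optional sanity check, not part of the repo's scripts):

import torch
import torchvision

print(torch.__version__)          # expect 1.4.0
print(torchvision.__version__)    # expect 0.5.0
print(torch.cuda.is_available())  # True only if CUDA 10.1 and a GPU are visible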

Configure Path

The sample scripts reference a ROOT variable (passed to main.py as --root_dir); set it to the directory where the datasets are stored before running them.

Datasets

Biased MNIST v1/v2

Since publication, we have created BiasedMNISTv2, a more challenging version of the dataset. Version 2 features larger images, spuriously correlated digit scales, distracting letters in place of the simplistic geometric shapes, and updated background textures.

We encourage the community to use BiasedMNISTv2.

You can download BiasedMNISTv1 (WACV 2021) from here.

Both BiasedMNISTv1 and v2 are released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.
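
To make the bias structure concrete, here is a conceptual sketch of how a spurious factor can be correlated with the digit label. This is illustrative only, not the actual BiasedMNIST generation code; the function name and p_bias parameter are ours:

import random

def sample_bias_attribute(label, num_values=10, p_bias=0.9):
    # With probability p_bias, the spurious attribute (e.g., a background
    # texture index) agrees with the digit label; otherwise it is uniform.
    if random.random() < p_bias:
        return label
    return random.randrange(num_values)

# At p_bias=0.9, texture 3 co-occurs with digit 3 in ~90% of training
# images, creating a shortcut a model can exploit instead of digit shape.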

CelebA
GQA-OOD

Run the methods

We provide a separate bash script for running each method on each dataset in the scripts directory. Here is a sample script:

source activate bias_mitigator

TRAINER_NAME='BaseTrainer'
lr=1e-3
wd=0
python main.py \
--expt_type celebA_experiments \
--trainer_name ${TRAINER_NAME} \
--lr ${lr} \
--weight_decay ${wd} \
--expt_name ${TRAINER_NAME} \
--root_dir ${ROOT}
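
To run a different method, point TRAINER_NAME at the corresponding trainer; the per-method scripts in the scripts directory already set this up.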

Contribute!
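
Contributions of new methods and datasets are welcome. As a rough starting point, here is a minimal sketch of a new mitigation method written as a trainer; the class and method names below are illustrative only, not the repo's actual BaseTrainer interface (which is selected at runtime via --trainer_name):

import torch
import torch.nn.functional as F

class UpweightTrainer:
    # Sketch: upweight samples from rare (label, bias) groups by inverse
    # group frequency, normalized to mean 1. Hook names are hypothetical.
    def __init__(self, model, optimizer, group_counts):
        self.model = model
        self.optimizer = optimizer
        w = 1.0 / torch.as_tensor(group_counts, dtype=torch.float)
        self.group_weights = w * (len(w) / w.sum())

    def train_step(self, x, y, group_ids):
        logits = self.model(x)
        per_sample = F.cross_entropy(logits, y, reduction='none')
        weights = self.group_weights.to(per_sample.device)[group_ids]
        loss = (weights * per_sample).mean()
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()
        return loss.item()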

Highlights from the paper:

  1. Overall, methods fail when datasets contain multiple sources of bias, even if they excel in simpler settings with only one or two sources of bias (e.g., CelebA).

  2. Methods can exploit both implicit (hidden) and explicit biases.

  3. Methods cannot handle multiple sources of bias even when they are explicitly labeled.

  4. Most methods are highly sensitive to the tuning distribution, especially for minority groups.

Citation

@inproceedings{shrestha2021investigation,
  title={An investigation of critical issues in bias mitigation techniques},
  author={Shrestha, Robik and Kafle, Kushal and Kanan, Christopher},
  booktitle={Workshop on Applications of Computer Vision},
  year={2021}
}

This work was supported in part by the DARPA/SRI Lifelong Learning Machines program [HR0011-18-C-0051], AFOSR grant [FA9550-18-1-0121], and NSF award #1909696.