Fairness Measures - Code Repository

This code repository contains implementations of measures used to quantify discrimination.

For background information on the project, see http://fairness-measures.org/

For the measures that are implemented, see http://fairness-measures.org/Pages/Measures

Expected input and output

These programs take as input a dataset in which each row represents a person.

We assume one of the attributes in the input is the target or outcome, which we assume was produced by a predictive model (if we want to evaluate algorithmic fairness) or by a person (if we want to evaluate the fairness of human decisions). For instance, an outcome can be binary, such as whether or not a person got a scholarship, or numeric, such as the credit score associated with a person. Some outcomes are considered positive, such as when a benefit is received, while others are negative, such as when a benefit is denied.

We further assume the input contains protected attributes, such as gender, race, age, or disability, that should ideally not affect the outcome. Some values of a protected attribute are associated with potentially discriminated groups, such as disability=yes.

The output is a measure of fairness. Fairness can be measured in many ways; one of the simplest is statistical parity, i.e., equal proportions of the protected and non-protected groups among the elements that received the positive outcome. For the many other ways in which fairness can be measured, please refer to the code comments.
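
As a minimal sketch of the statistical parity idea, assuming a pandas DataFrame with a binary protected column and a binary outcome column (the function name, column names, and data here are illustrative, not this repository's actual API):

```python
# Minimal sketch of statistical parity; NOT this repository's actual API.
# Assumes a binary protected attribute (1 = protected group) and a binary
# outcome (1 = positive outcome, e.g. benefit received).
import pandas as pd

def statistical_parity_difference(df, protected_col, target_col):
    """Positive-outcome rate of the non-protected group minus that of the
    protected group; 0.0 indicates statistical parity."""
    rate_unprotected = df.loc[df[protected_col] == 0, target_col].mean()
    rate_protected = df.loc[df[protected_col] == 1, target_col].mean()
    return rate_unprotected - rate_protected

# Illustrative data, following the protected_/target_ naming convention
# described under "Data preparation" below.
df = pd.DataFrame({
    "protected_sex": [0, 0, 0, 1, 1, 1],
    "target_hired":  [1, 1, 0, 1, 0, 0],
})
print(statistical_parity_difference(df, "protected_sex", "target_hired"))
# Prints ~0.33: the positive-outcome rate is about 33 percentage points
# lower for the protected group than for the non-protected group.
```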

The rest of this page explains how to install and run the code.

Data preparation

  1. Each feature should be represented in a column with the first entry as the column name.
  2. Protected attributes require the prefix protected_. The outcome attribute requires the prefix target_. For example, to measure the fairness of rankings in a dataset with the columns sex and credit_score, rename those columns to protected_sex and target_credit_score.
  3. Protected candidates' feature values are encoded as indices ranging from 0 to the *highest protected group index*: with sex as a protected feature, for example, we use 1 for women if *female* is the protected group and 0 for men, provided they are the only unprotected group. In a different use case, where age is the protected attribute, we can use ascending indices, as sketched below.
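
One possible encoding for such an age example (the age brackets are illustrative assumptions, not part of the repository's specification):

```
protected_age = 0   # unprotected group, e.g. ages 25-45
protected_age = 1   # protected group,   e.g. ages 46-65
protected_age = 2   # protected group,   e.g. ages over 65
```

Putting rules 1-3 together, the first lines of a prepared CSV file could then look like this (values invented for illustration):

```
protected_sex,target_credit_score
1,640
0,720
0,587
```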

Please note that the available datasets are collected and/or provided as is and are not preprocessed with any protected or target columns.

Installation

Prerequisites
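
The run commands below invoke `python3`, so a working Python 3 interpreter is required.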

Installing

Running

Running the first example

```
python3 main.py -d
python3 main.py -f </PATH/TO/YOUR/CSV/FILE/datasetname.csv>
```
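
Here `-d` presumably runs the measures on the bundled datasets, while `-f` runs them on a CSV file you supply; replace the bracketed placeholder with the actual path to your prepared file. This reading of the two flags is an assumption, not documented behaviour.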

Running the unit tests
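
Assuming the tests follow Python's standard `unittest` layout (an assumption about this repository, not documented here), they can typically be discovered and run from the repository root with:

```
python3 -m unittest discover
```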

Contributing

Versioning


Authors

See also the list of contributors who participated in this project.

License/Credit

This project is licensed under the GPL License.

If you use this software or the prepared datasets in your work, we ask that you cite it:

References

The measures implemented here are described in the following paper(s):