KDD 2023 Tutorial - Addressing Bias and Fairness in Machine Learning: A Practical Guide and Hands-on Tutorial

Presenters

Earlier versions:

Why this tutorial?

Tackling issues of bias and fairness when building and deploying machine learning and data science systems has received increased attention from the research community in recent years, yet most of that research has focused on theoretical aspects, with a very limited set of application areas and data sets. Today, we still lack:

  1. Practical training materials
  2. Methodologies to follow when building ML/data science systems that are fair and equitable for the people affected by them
  3. Tools for researchers and developers working on real-world, ML-based decision-making systems to deal with issues of bias and fairness

Treating bias and fairness as primary metrics of interest, and building, selecting, and validating models using those metrics, is not yet standard practice for data scientists. This tutorial is a step towards changing that.

What will we cover?

In this hands-on tutorial we will bridge the gap between research and practice by exploring fairness at the systems and outcomes level, from metrics and definitions to practical case studies, including bias audits (using the Aequitas toolkit) and the impact of various bias reduction strategies. By the end of the tutorial, the audience will be familiar with bias audit and reduction frameworks and tools that will help them make informed design choices, guided by the contexts in which their systems will be deployed and used.
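
To make the audit portion concrete, below is a minimal sketch of what a bias audit with the Aequitas toolkit can look like, assuming the classic Group/Bias/Fairness API of the `aequitas` Python package; the data, protected attributes, reference groups, and selected output columns are illustrative placeholders rather than part of the tutorial materials, and exact column names may vary across Aequitas versions.

```python
import pandas as pd
from aequitas.group import Group
from aequitas.bias import Bias
from aequitas.fairness import Fairness

# Aequitas expects one row per scored entity: a binary `score`, the observed
# `label_value`, and one column per protected attribute (as strings).
df = pd.DataFrame({
    "score":       [1, 0, 1, 1, 0, 1, 0, 1],
    "label_value": [1, 0, 0, 1, 0, 1, 1, 0],
    "race":        ["A", "A", "B", "B", "B", "A", "B", "A"],  # illustrative groups
    "sex":         ["F", "M", "F", "M", "F", "M", "F", "M"],
})

# 1. Confusion-matrix metrics (FPR, FNR, precision, ...) computed per group.
xtab, _ = Group().get_crosstabs(df)

# 2. Disparities of each group's metrics relative to chosen reference groups.
bdf = Bias().get_disparity_predefined_groups(
    xtab,
    original_df=df,
    ref_groups_dict={"race": "A", "sex": "M"},  # illustrative reference groups
    alpha=0.05,
)

# 3. Parity determinations against the default disparity tolerance.
fdf = Fairness().get_group_value_fairness(bdf)
print(fdf[["attribute_name", "attribute_value", "fpr", "fpr_disparity"]])
```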

Prerequisites

Schedule and Structure

Google Slides

Interactive versions hosted on Colab

Static Jupyter notebooks

  1. Overall fairness and equity when building Data Science/ML systems

  2. From societal goals to fairness goals to ML fairness metrics

  3. Audit bias and fairness of an ML-based decision-making system

  4. Explore bias reduction strategies (a simple post-processing sketch follows this list)

  5. Wrap-Up

    • Things to remember
    • Additional tools and resources
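
As a flavor of the bias reduction step, the sketch below illustrates one simple post-modeling correction: choosing a separate score threshold per protected group so that every group is selected at roughly the same rate. This is a generic illustration in plain pandas/NumPy under assumed column names (`score`, `group`), not the specific method used in the tutorial notebooks.

```python
import numpy as np
import pandas as pd

def per_group_thresholds(scores: pd.Series, groups: pd.Series, target_rate: float) -> dict:
    """Pick a score threshold per group so roughly `target_rate` of that group is flagged."""
    return {
        g: float(np.quantile(s, 1.0 - target_rate))
        for g, s in scores.groupby(groups)
    }

def apply_thresholds(scores: pd.Series, groups: pd.Series, thresholds: dict) -> pd.Series:
    """Binarize scores using each entity's group-specific threshold."""
    return (scores >= groups.map(thresholds)).astype(int)

# Hypothetical example: flag roughly the top 30% of each group.
df = pd.DataFrame({
    "score": [0.9, 0.4, 0.7, 0.2, 0.8, 0.3, 0.6, 0.1],
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
})
thresholds = per_group_thresholds(df["score"], df["group"], target_rate=0.30)
df["decision"] = apply_thresholds(df["score"], df["group"], thresholds)
print(df.groupby("group")["decision"].mean())  # per-group selection rates are now comparable
```

Equalizing selection rates is only one possible target; the same pattern can instead be pointed at recall or false positive rates, with the usual trade-offs between those parities.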

Resources

References

Bias Reduction Papers

Post-Modeling Correction Papers

Case Studies

Acknowledgements