# Debiasing text-to-image models with Latent Directions
Welcome to the repository of the paper *Latent Directions: A Simple Pathway to Bias Mitigation in Generative AI*, accepted at CVPR 2024 in ReGenAI: First Workshop on Responsible Generative AI.
The research presents a novel method for bias mitigation in diffusion models which achieves diverse and inclusive images without any hard prompting or embedding alteration, relying solely on the initial noise provided to the model.
<p align="center">
  <img src="https://github.com/blclo/latent-debiasing-directions/blob/main/images/principal.png" alt="Summary of results" width="400">
</p>

## Code
You can find two main folders: `understanding` and `mitigation`.
- The `understanding` directory contains all the scripts related to the understanding pipeline.
- The `mitigation` directory contains all the scripts that help you find the latent directions and then apply them.
## Understanding biases
We provide a tool for developers that targets two key points of bias understanding:

- It helps you comprehend the connections between embeddings and generations, analysing the embedding relationship between attributes and concepts in text and vision encoders. This can reveal innate biases and make us conscious of the existing problems.
- It detects the social characteristics and objects present in the image. This helps us understand the impact of the biases by looking at the affected generations and their statistics.
These two points of reference can help us verify the theory: the higher the cosine similarity between a concept and an attribute, the more likely it is to see that attribute in the generated images of the concept. If we find a high cosine similarity between a specific concept and attribute, but this is not reflected in the detection statistics over the generated images, then we have a misalignment and perhaps something to investigate!
In our paper we show an example of how, despite using the prompt "A wealthy African man and his house", the highest embedding similarities belong to attributes such as *poverty-stricken* or *underprivileged*.
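To make the embedding analysis concrete, here is a minimal sketch assuming CLIP text embeddings via Hugging Face `transformers`; the checkpoint, prompt, and attribute list below are illustrative assumptions, not the tool's actual implementation:

```python
# Hedged sketch: concept-attribute cosine similarity with CLIP text embeddings.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

concept = "A wealthy African man and his house"
attributes = ["wealthy", "poverty-stricken", "underprivileged"]  # illustrative

inputs = processor(text=[concept] + attributes, return_tensors="pt", padding=True)
with torch.no_grad():
    emb = model.get_text_features(**inputs)
emb = emb / emb.norm(dim=-1, keepdim=True)  # unit-normalise for cosine similarity

# Cosine similarity between the concept and each attribute embedding
for attr, sim in zip(attributes, (emb[1:] @ emb[0]).tolist()):
    print(f"{attr}: {sim:.3f}")
```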
<p align="center">
  <img src="https://github.com/blclo/latent-debiasing-directions/blob/main/images/understandingTool.jpeg" alt="Understanding tool" width="400">
</p>

## Mitigating biases through latent directions
The mitigation strategy consists of two main parts: first, we have to obtain the latent direction; second, we need to apply it!
<p align="center">
  <img src="https://github.com/blclo/latent-debiasing-directions/blob/main/images/model_training.jpeg" alt="Summary of training and mitigation" width="800">
</p>

### Finding the latent direction 🕵️‍♀️
1. Generate two sample datasets and save their corresponding latents. The code in `generate_dataset_save_latents.py` helps you build each dataset and obtain a JSON dictionary with the latents files and class label. You need to run this script twice, once per class/dataset.
   - What sample datasets should I choose? The datasets should represent the transition you aim to achieve through your latent direction. For instance, if you aim to debias light-skin-colour images to generate more diversity with dark skin colours, you should generate one dataset containing light-skinned individuals and another containing dark-skinned ones. This way we can ensure the latents belong to the two different groups and we can train the latent direction to differentiate between them :)
2. Obtain the latent direction. The code in `obtain_latent_direction.py` iterates through the merged* JSON dictionary, separating the data from the two classes. An SVM classifier separates the classes linearly and returns 10 different latent directions per class. Each of these latent directions differs in the latent step used for its training, from step 0 (Gaussian noise) to step 45 (an almost completely denoised image). A hedged sketch of both steps is shown after this list.
\*merged: the two dictionaries generated by `generate_dataset_save_latents.py` should be merged manually before being used in `obtain_latent_direction.py`.
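Below is a minimal sketch of the two steps above, assuming image generation with `diffusers` and a linear SVM from scikit-learn; the model id, prompts, file layout, and helper names are illustrative assumptions, so refer to `generate_dataset_save_latents.py` and `obtain_latent_direction.py` for the actual pipeline:

```python
# Hedged sketch of steps 1-2; see the repo's scripts for the real implementation.
import glob
import os

import numpy as np
import torch
from diffusers import StableDiffusionPipeline
from sklearn.svm import LinearSVC

os.makedirs("latents", exist_ok=True)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def generate_class(prompt, label, n_images=50, steps=50):
    """Step 1: generate one class and save the latent of every denoising step."""
    for i in range(n_images):
        trace = []
        def grab(pipe, step, timestep, callback_kwargs):
            trace.append(callback_kwargs["latents"].detach().cpu())
            return callback_kwargs
        pipe(prompt, num_inference_steps=steps, callback_on_step_end=grab)
        torch.save(trace, f"latents/{label}_{i}.pt")

generate_class("a portrait photo of a light-skinned person", "light")
generate_class("a portrait photo of a dark-skinned person", "dark")

def latent_direction(step_idx):
    """Step 2: fit a linear SVM on the latents of one denoising step; its
    normal vector is the direction separating the two classes. The repo
    returns 10 directions across steps 0-45; this fits a single one."""
    X, y = [], []
    for label, cls in (("light", 0), ("dark", 1)):
        for path in glob.glob(f"latents/{label}_*.pt"):
            X.append(torch.load(path)[step_idx].flatten().float().numpy())
            y.append(cls)
    svm = LinearSVC().fit(np.stack(X), np.array(y))
    w = svm.coef_[0]
    return w / np.linalg.norm(w)  # unit-norm latent direction

np.save("direction_step10.npy", latent_direction(step_idx=10))
```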
### Apply the latent direction with a chosen weight 🚀

The paper shows that the impact of the weight selection is smaller than the impact of the latent step used in the training. As a result, we recommend choosing the latent direction obtained through step 10, which corresponds to an `idx_latent` of 2. Once we have its file path, we can use the code in `generate_from_latent_direction.py`, modify the weight and the number of images to generate, and obtain debiased results! A sketch of this step follows below.
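For illustration, here is a minimal sketch of this step, assuming the direction is simply added (scaled by the weight) to the initial Gaussian noise before denoising; the weight value, latent shape, and file names are assumptions, not the script's actual defaults:

```python
# Hedged sketch of applying a learned direction to the initial noise;
# see generate_from_latent_direction.py for the actual implementation.
import numpy as np
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

direction = torch.from_numpy(np.load("direction_step10.npy")).float()
shape = (1, pipe.unet.config.in_channels, 64, 64)  # latent shape for SD 1.5
weight = 2.0  # assumed shift strength along the direction

for i in range(4):
    # Shift the initial noise along the latent direction, then denoise as usual
    latents = torch.randn(shape) + weight * direction.reshape(shape)
    image = pipe(
        "A wealthy African man and his house",
        latents=latents.to("cuda", dtype=torch.float16),
    ).images[0]
    image.save(f"debiased_{i}.png")
```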
## Citation
If this work is insightful and relevant for your research, please cite it using:
```bibtex
@InProceedings{lopez2024latent,
    author    = {C. Lopez Olmos and A. Neophytou and S. Sengupta and D. P. Papadopoulos},
    title     = {Latent Directions: A Simple Pathway to Bias Mitigation in Generative AI},
    booktitle = {Proceedings of the CVPR Conference at ReGenAI: First Workshop on Responsible Generative AI},
    month     = {February},
    year      = {2024}
}
```
For any questions, please do not hesitate to reach out to Carolina Lopez at clopezolmos@microsoft.com.