Understanding and Evaluating Racial Biases in Image Captioning

Project Page | Paper | Annotations

This repo provides the code for our paper "Understanding and Evaluating Racial Biases in Image Captioning."

Citation

@inproceedings{zhao2021captionbias,
   author = {Dora Zhao and Angelina Wang and Olga Russakovsky},
   title = {Understanding and Evaluating Racial Biases in Image Captioning},
   booktitle = {International Conference on Computer Vision (ICCV)},
   year = {2021}
}

Requirements

Data annotations

To run the analyses, place the downloaded annotations, along with the caption annotations provided with the COCO 2014 dataset, in the annotations folder.
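Once the files are in place, a quick sanity check is to load the COCO caption annotations with pycocotools. The COCO file names below are from the standard 2014 release; the file name for the downloaded annotations is only a placeholder and should be replaced with the actual one.

import json
from pycocotools.coco import COCO

# Standard COCO 2014 caption annotation files (official release names).
coco_train = COCO('annotations/captions_train2014.json')
coco_val = COCO('annotations/captions_val2014.json')
print(len(coco_train.getImgIds()), 'training images,', len(coco_val.getImgIds()), 'validation images')

# Placeholder name for the downloaded annotations; substitute the real file name.
with open('annotations/downloaded_annotations.json') as f:
    downloaded = json.load(f)
print(len(downloaded), 'annotated entries')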

Models

We use six different image captioning models in our paper. The models are adapted from the following GitHub repositories and are trained using the protocols detailed in their respective papers. In our work, we train each model on the COCO 2014 training set and evaluate on the COCO 2014 validation set.

Place the model results in the results folder.
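A common convention, and the format pycocoevalcap's loadRes expects, is the standard COCO caption results layout: a JSON list of image-id/caption pairs. Below is a minimal sketch; the file name and captions are illustrative only.

import json

# Standard COCO caption results format: one {"image_id", "caption"} entry per image.
results = [
    {"image_id": 391895, "caption": "a man riding a motorcycle on a dirt road"},
    {"image_id": 522418, "caption": "a woman cutting a cake on a table"},
]

# Illustrative file name; each of the six models would write its own results file.
with open('results/example_model_captions_val2014.json', 'w') as f:
    json.dump(results, f)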

Analyses

All analysis files are found in the code folder.

To evaluate captions, you will need to follow the setup protocol for pycocoevalcap. Our evaluation code can be found in evaluate_captions.py.
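For reference, here is a minimal sketch of scoring a results file with pycocoevalcap against the COCO 2014 validation captions; the results file name is hypothetical, and evaluate_captions.py is the authoritative implementation.

from pycocotools.coco import COCO
from pycocoevalcap.eval import COCOEvalCap

# Ground-truth COCO 2014 validation captions and a hypothetical results file.
coco = COCO('annotations/captions_val2014.json')
coco_result = coco.loadRes('results/example_model_captions_val2014.json')

coco_eval = COCOEvalCap(coco, coco_result)
# Evaluate only the images that actually have generated captions.
coco_eval.params['image_id'] = coco_result.getImgIds()
coco_eval.evaluate()

# Prints BLEU, METEOR, ROUGE_L, CIDEr, and SPICE scores.
for metric, score in coco_eval.eval.items():
    print(f'{metric}: {score:.3f}')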