DebiasPL: Debiased Pseudo-Labeling


This repository contains the code (in PyTorch) for the model introduced in the following paper:

Debiased Learning from Naturally Imbalanced Pseudo-Labels<br> Xudong Wang, Zhirong Wu, Long Lian, and Stella X. Yu<br> UC Berkeley and Microsoft Research<br> CVPR 2022

Project Page | Paper | Preprint | Citation

<p align="center"> <img src="https://github.com/frank-xwang/debiased-pseudo-labeling/blob/main/DebiasPL.gif" width=70%> </p> <p align="center"> <img align="center" src="https://github.com/frank-xwang/debiased-pseudo-labeling/blob/main/result.png" width=57%> <img align="center" src="https://github.com/frank-xwang/debiased-pseudo-labeling/blob/main/ZSL-DomainShift.png" width=40%> </p>

Citation

If you find our work inspiring or use our codebase in your research, please consider giving a star ⭐ and a citation.

@inproceedings{wang2022debiased,
  title={Debiased Learning from Naturally Imbalanced Pseudo-Labels},
  author={Wang, Xudong and Wu, Zhirong and Lian, Long and Yu, Stella X},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={14647--14657},
  year={2022}
}

Updates

[06/2022] Support DebiasPL w/ CLIP for more label-efficient learning. DebiasPL (ResNet50) achieves 69.6% (71.3%) top-1 accuracy on ImageNet using only 0.2% (1%) of the labels!

[04/2022] Initial Commit. Support zero-shot learning and semi-supervised learning on ImageNet.

Requirements

Packages

Hardware requirements

8 GPUs with >= 11 GB of GPU RAM each, or 4 GPUs with >= 16 GB each, are recommended.

Dataset and Pre-trained Model Preparation

Please download the pre-trained MoCo-EMAN model, create a new folder called pretrained, and place the checkpoints under it. Download the ImageNet dataset from this link, then move and extract the training and validation images into labeled subfolders using the following shell script. The indexes for the semi-supervised learning experiments can be found here. The setting with 1% labeled data uses the same split as FixMatch; for the 0.2% setting, a new index list is built by randomly selecting 0.2% of the instances from each class (a sketch of such a split is shown after the directory layout below). Please put all CSV files in the locations shown below:

```
dataset
└── imagenet
    ├── indexes
    │   ├── train_1p_index.csv
    │   ├── train_99p_index.csv
    │   └── ...
    ├── train
    │   ├── n01440764
    │   │   └── *.jpeg
    │   └── ...
    └── val
        ├── n01440764
        │   └── *.jpeg
        └── ...
```
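
For reference, the 0.2% index list mentioned above is built by randomly sampling 0.2% of the instances in each class. Below is a minimal sketch of how such a split could be generated; the file names `train_0.2p_index.csv` / `train_99.8p_index.csv` and the single-column CSV format are assumptions, not necessarily the schema this codebase expects.

```python
import csv
import os
import random

# Hypothetical sketch: sample 0.2% of the images in each class of an
# ImageFolder-style ImageNet training set and write labeled/unlabeled
# index files. File names and CSV schema are assumptions.
train_dir = "dataset/imagenet/train"
index_dir = "dataset/imagenet/indexes"
fraction = 0.002  # 0.2% labeled data
random.seed(0)

labeled, unlabeled = [], []
for cls in sorted(os.listdir(train_dir)):
    files = sorted(os.listdir(os.path.join(train_dir, cls)))
    random.shuffle(files)
    k = max(1, round(fraction * len(files)))  # keep at least one labeled image per class
    labeled += [os.path.join(cls, f) for f in files[:k]]
    unlabeled += [os.path.join(cls, f) for f in files[k:]]

for name, rows in [("train_0.2p_index.csv", labeled),
                   ("train_99.8p_index.csv", unlabeled)]:
    with open(os.path.join(index_dir, name), "w", newline="") as f:
        csv.writer(f).writerows([[r] for r in rows])
```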

Training and Evaluation Instructions

Semi-supervised learning on ImageNet-1k

0.2% labeled data (50 epochs):

```bash
bash scripts/0.2perc-ssl/train_DebiasPL.sh
```

1% labeled data (50 epochs):

```bash
bash scripts/1perc-ssl/train_DebiasPL.sh
```

1% labeled data (DebiasPL w/ CLIP, 100 epochs):

```bash
bash scripts/1perc-ssl/train_DebiasPL_w_CLIP.sh
```

| Method | Backbone | epochs | 0.2% labels | 1% labels |
|---|---|---|---|---|
| FixMatch w/ EMAN | RN50 | 50 | 43.6% | 60.9% |
| DebiasPL (reported) | RN50 | 50 | 51.6% | 65.3% |
| DebiasPL (reproduced) | RN50 | 50 | 52.0% [ckpt \| log] | 65.6% [ckpt \| log] |
| DebiasPL w/ CLIP (reproduced) | RN50 | 50 | 69.6% [ckpt \| log] | - |
| DebiasPL w/ CLIP (reproduced) | RN50 | 100 | 70.4% [ckpt \| log] | 71.3% [ckpt \| log] |

The results reproduced with this codebase are often slightly higher than those reported in the paper (52.0% vs. 51.6%; 65.6% vs. 65.3%). We find it beneficial to apply the cross-level instance-group discrimination (CLD) loss to unlabeled instances to fully leverage their information.
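
For intuition, the central idea of debiased pseudo-labeling is to counteract the naturally imbalanced marginal distribution of the model's pseudo-labels before they are used as training targets, in the spirit of logit adjustment (LA). Below is a minimal FixMatch-style sketch, not the exact implementation in this repository; the EMA estimate `p_hat`, the debiasing strength `lam`, and the confidence threshold `tau` are illustrative values and names.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch of debiased pseudo-label generation (not the
# repository's exact implementation). `p_hat` is a running (EMA)
# estimate of the model's average predicted class distribution.
def debiased_pseudo_labels(logits_weak, p_hat, lam=0.5, tau=0.95, momentum=0.999):
    probs = F.softmax(logits_weak.detach(), dim=1)
    # Update the EMA of the marginal predicted distribution.
    p_hat = momentum * p_hat + (1.0 - momentum) * probs.mean(dim=0)
    # Logit-adjustment-style correction: down-weight classes the model
    # already over-predicts before committing to a pseudo-label.
    debiased = F.softmax(logits_weak.detach() - lam * torch.log(p_hat + 1e-12), dim=1)
    conf, targets = debiased.max(dim=1)
    mask = conf.ge(tau).float()  # FixMatch-style confidence threshold
    return targets, mask, p_hat

# Hypothetical usage inside a training step:
# targets_u, mask_u, p_hat = debiased_pseudo_labels(model(weak_aug_u), p_hat)
# loss_u = (F.cross_entropy(model(strong_aug_u), targets_u, reduction="none") * mask_u).mean()
```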

Zero-shot learning

Please download the zero-shot predictions produced with a pre-trained CLIP model (backbone: RN50) and put them under imagenet/indexes/. Then run experiments on ImageNet-1k with:

```bash
bash scripts/zsl/train_DebiasPL.sh
```

| Method | Backbone | epochs | top-1 acc |
|---|---|---|---|
| CLIP | RN50 | - | 59.6% |
| CLIP | ViT-Base/32 | - | 63.2% |
| DebiasPL (reported) | RN50 | 100 | 68.3% |
| DebiasPL (reproduced) | RN50 | 50 | 68.7% [ckpt \| log] |
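
For reference, the downloaded predictions are the kind of output a pre-trained CLIP model produces when ImageNet class names are used as zero-shot text prompts. A minimal sketch with the openai `clip` package is shown below; the prompt template, example file path, and output format are illustrative and not necessarily what this codebase expects.

```python
import clip  # https://github.com/openai/CLIP
import torch
from PIL import Image

# Illustrative sketch of CLIP (RN50) zero-shot prediction; prompt
# template and file paths are placeholders.
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50", device=device)

class_names = ["tench", "goldfish", "great white shark"]  # ...all 1000 ImageNet class names
text = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)

image_path = "dataset/imagenet/val/n01440764/some_image.jpeg"  # placeholder path
image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    logits = 100.0 * image_features @ text_features.T
    pred = logits.argmax(dim=-1)  # zero-shot pseudo-label for this image
```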

How to get support from us?

If you have any general questions, feel free to email us at xdwang at eecs.berkeley.edu. For code- or implementation-related questions, please email us or open an issue in this repository (we recommend opening an issue, since your question may help others).

License

This project is licensed under the MIT License; see LICENSE for details. The components listed below retain their original licenses.

Acknowledgements

Part of the code is based on EMAN, FixMatch, CLIP, CLD, and LA.