Unbiased Teacher for Semi-Supervised Object Detection

<img src="teaser/pytorch-logo-dark.png" width="10%"> License: MIT

This is the PyTorch implementation of our paper: <br> Unbiased Teacher for Semi-Supervised Object Detection<br> Yen-Cheng Liu, Chih-Yao Ma, Zijian He, Chia-Wen Kuo, Kan Chen, Peizhao Zhang, Bichen Wu, Zsolt Kira, Peter Vajda<br> International Conference on Learning Representations (ICLR), 2021 <br>

[arXiv] [OpenReview] [Project]

<p align="center"> <img src="teaser/figure_teaser.gif" width="85%"> </p>

Installation

Prerequisites

Install PyTorch in Conda env

# create conda env
conda create -n detectron2 python=3.6
# activate the environment
conda activate detectron2
# install PyTorch >=1.5 with GPU
conda install pytorch torchvision -c pytorch
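
After installation, a quick sanity check (not part of the original instructions; a minimal sketch assuming a CUDA-capable machine) confirms that the installed PyTorch can see the GPU:

# check_torch.py -- hypothetical helper script, not part of the repo
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())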

Build Detectron2 from Source

Follow the INSTALL.md to install Detectron2.
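
Once Detectron2 is built, an import check (a sketch, not part of the official INSTALL.md steps) verifies that the build is usable:

# check_detectron2.py -- hypothetical helper script, not part of the repo
import detectron2
from detectron2.config import get_cfg

print("Detectron2 version:", detectron2.__version__)
cfg = get_cfg()  # loading the default config succeeds only if the build is healthy
print("Default meta-architecture:", cfg.MODEL.META_ARCHITECTURE)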

Dataset download

  1. Download the COCO dataset:
# download images
wget http://images.cocodataset.org/zips/train2017.zip
wget http://images.cocodataset.org/zips/val2017.zip

# download annotations
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
  2. Organize the dataset as follows:
unbiased_teacher/
└── datasets/
    └── coco/
        ├── train2017/
        ├── val2017/
        └── annotations/
            ├── instances_train2017.json
            └── instances_val2017.json
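
Before training, a short check (a sketch assuming the exact layout above) can confirm the annotation files are in place and readable:

# check_coco_layout.py -- hypothetical helper script, not part of the repo
import json
import os

ann_dir = "datasets/coco/annotations"
for name in ("instances_train2017.json", "instances_val2017.json"):
    path = os.path.join(ann_dir, name)
    with open(path) as f:
        coco = json.load(f)
    print(name, "-", len(coco["images"]), "images,", len(coco["annotations"]), "annotations")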

Training

# Train the Unbiased Teacher under 1% COCO supervision
python train_net.py \
      --num-gpus 8 \
      --config configs/coco_supervision/faster_rcnn_R_50_FPN_sup1_run1.yaml \
      SOLVER.IMG_PER_BATCH_LABEL 16 SOLVER.IMG_PER_BATCH_UNLABEL 16

# Train the Unbiased Teacher under 2% COCO supervision
python train_net.py \
      --num-gpus 8 \
      --config configs/coco_supervision/faster_rcnn_R_50_FPN_sup2_run1.yaml \
      SOLVER.IMG_PER_BATCH_LABEL 16 SOLVER.IMG_PER_BATCH_UNLABEL 16

# Train the Unbiased Teacher under 5% COCO supervision
python train_net.py \
      --num-gpus 8 \
      --config configs/coco_supervision/faster_rcnn_R_50_FPN_sup5_run1.yaml \
      SOLVER.IMG_PER_BATCH_LABEL 16 SOLVER.IMG_PER_BATCH_UNLABEL 16

# Train the Unbiased Teacher under 10% COCO supervision
python train_net.py \
      --num-gpus 8 \
      --config configs/coco_supervision/faster_rcnn_R_50_FPN_sup10_run1.yaml \
      SOLVER.IMG_PER_BATCH_LABEL 16 SOLVER.IMG_PER_BATCH_UNLABEL 16

# Train the Unbiased Teacher on VOC07 (labeled) + VOC12 (unlabeled)
python train_net.py \
      --num-gpus 8 \
      --config configs/voc/voc07_voc12.yaml \
      SOLVER.IMG_PER_BATCH_LABEL 8 SOLVER.IMG_PER_BATCH_UNLABEL 8

# Train the Unbiased Teacher on VOC07 (labeled) + VOC12 and COCO20cls (unlabeled)
python train_net.py \
      --num-gpus 8 \
      --config configs/voc/voc07_voc12coco20.yaml \
      SOLVER.IMG_PER_BATCH_LABEL 8 SOLVER.IMG_PER_BATCH_UNLABEL 8
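
The trailing KEY VALUE pairs are Detectron2-style config overrides merged into the config at launch. A minimal sketch of the mechanism, using a key from Detectron2's default config (the SOLVER.IMG_PER_BATCH_LABEL / SOLVER.IMG_PER_BATCH_UNLABEL keys above come from this repo's own config extension, assumed here rather than shown):

# config_override_sketch.py -- illustrative only, not part of the repo
from detectron2.config import get_cfg

# Command-line overrides arrive as a flat [KEY, VALUE, KEY, VALUE, ...] list
# and are merged into the default config like this.
cfg = get_cfg()
cfg.merge_from_list(["SOLVER.IMS_PER_BATCH", 16])
print(cfg.SOLVER.IMS_PER_BATCH)  # 16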

Resume the training

python train_net.py \
      --resume \
      --num-gpus 8 \
      --config configs/coco_supervision/faster_rcnn_R_50_FPN_sup10_run1.yaml \
       SOLVER.IMG_PER_BATCH_LABEL 16 SOLVER.IMG_PER_BATCH_UNLABEL 16 MODEL.WEIGHTS <your weight>.pth

Evaluation

python train_net.py \
      --eval-only \
      --num-gpus 8 \
      --config configs/coco_supervision/faster_rcnn_R_50_FPN_sup10_run1.yaml \
       SOLVER.IMG_PER_BATCH_LABEL 16 SOLVER.IMG_PER_BATCH_UNLABEL 16 MODEL.WEIGHTS <your weight>.pth
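
If you want to see what a downloaded checkpoint contains before passing it to MODEL.WEIGHTS, a generic inspection (a sketch; the filename is a placeholder) is enough:

# inspect_checkpoint.py -- hypothetical helper script, not part of the repo
import torch

ckpt = torch.load("downloaded_weights.pth", map_location="cpu")  # placeholder filename
if isinstance(ckpt, dict):
    print("Top-level keys:", list(ckpt.keys()))
else:
    print("Checkpoint object type:", type(ckpt))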

Model Weights

For the following COCO-supervision results, we use 16 labeled images + 16 unlabeled images on 8 GPUs (single node).

Faster-RCNN:

| Model | Supervision | Batch size | AP | Model Weights |
| :---: | :---: | :---: | :---: | :---: |
| R50-FPN | 1% | 16 labeled + 16 unlabeled | 20.16 | link |
| R50-FPN | 2% | 16 labeled + 16 unlabeled | 24.16 | link |
| R50-FPN | 5% | 16 labeled + 16 unlabeled | 27.84 | link |
| R50-FPN | 10% | 16 labeled + 16 unlabeled | 31.39 | link |

For the following VOC results, we use 8 labeled images + 8 unlabeled images on 4 GPUs (single node).

VOC:

| Model | Labeled set | Unlabeled set | Batch size | AP50 | AP | Model Weights |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| R50-FPN | VOC07 | VOC12 | 8 labeled + 8 unlabeled | 80.51 | 54.48 | link |
| R50-FPN | VOC07 | VOC12+COCO20cls | 8 labeled + 8 unlabeled | 81.71 | 55.79 | link |

FAQ

  1. Q: Why can't I reproduce the results presented in the paper when using a smaller batch size and fewer GPUs?

     A: The reported results use a total batch size of 32 labeled + 32 unlabeled images (8 GPUs per node across 4 nodes). Shrinking the total batch size lowers the final AP, as the comparison below shows; the single-node setting corresponds to the 1% COCO-supervision entry in the Model Weights table above.

     | Experiment | Batch size per node | Total batch size | AP |
     | :---: | :---: | :---: | :---: |
     | 8 GPUs/node; 4 nodes | 8 labeled + 8 unlabeled | 32 labeled + 32 unlabeled | 20.75 |
     | 8 GPUs/node; 1 node | 16 labeled + 16 unlabeled | 16 labeled + 16 unlabeled | 20.16 |
  2. Q: How do I use a customized dataset other than COCO and VOC? (See the registration sketch after this list.)
  3. Q: What is COCO_supervision.txt? Can I remove it if I use my own dataset?
  4. Q: Why do the VOC results in this GitHub repo look better than the VOC results presented in the paper?
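
For the customized-dataset question above: Detectron2 can register any COCO-format dataset by name, which configs can then reference. Below is a minimal registration sketch using Detectron2's standard API (the dataset name and paths are hypothetical, and this repo may require further changes, e.g. to its config and dataloader code, beyond plain registration):

# register_custom_dataset.py -- hypothetical example, not part of the repo
from detectron2.data.datasets import register_coco_instances

# Point these at your own COCO-format annotation files and image folders.
register_coco_instances(
    "my_dataset_train", {},
    "datasets/my_dataset/annotations/train.json", "datasets/my_dataset/images/train",
)
register_coco_instances(
    "my_dataset_val", {},
    "datasets/my_dataset/annotations/val.json", "datasets/my_dataset/images/val",
)
# The registered names can then be referenced from a config, e.g. via DATASETS.TRAIN / DATASETS.TEST.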

Citing Unbiased Teacher

If you use Unbiased Teacher in your research or wish to refer to the results published in the paper, please use the following BibTeX entry.

@inproceedings{liu2021unbiased,
    title={Unbiased Teacher for Semi-Supervised Object Detection},
    author={Liu, Yen-Cheng and Ma, Chih-Yao and He, Zijian and Kuo, Chia-Wen and Chen, Kan and Zhang, Peizhao and Wu, Bichen and Kira, Zsolt and Vajda, Peter},
    booktitle={Proceedings of the International Conference on Learning Representations (ICLR)},
    year={2021},
}

Also, if you use Detectron2 in your research, please use the following BibTeX entry.

@misc{wu2019detectron2,
  author =       {Yuxin Wu and Alexander Kirillov and Francisco Massa and
                  Wan-Yen Lo and Ross Girshick},
  title =        {Detectron2},
  howpublished = {\url{https://github.com/facebookresearch/detectron2}},
  year =         {2019}
}

License

This project is licensed under the MIT License, as found in the LICENSE file.