# Cross-Domain Adaptive Teacher for Object Detection

<img src="pytorch-logo-dark.png" width="10%">

License: CC BY-NC 4.0

This is the PyTorch implementation of our paper: <br> Cross-Domain Adaptive Teacher for Object Detection<br> Yu-Jhe Li, Xiaoliang Dai, Chih-Yao Ma, Yen-Cheng Liu, Kan Chen, Bichen Wu, Zijian He, Kris Kitani, Peter Vajda<br> IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022 <br>

[Paper] [Project]

<p align="center"> <img src="model.png" width="85%"> </p>

## Installation

### Prerequisites

### Our tested environment

### Install Python environment

To install the required dependencies in a Python 3 virtual environment (e.g., one created with venv), run the following commands at the root of this repository:

```shell
$ python3 -m venv /path/to/new/virtual/environment
$ source /path/to/new/virtual/environment/bin/activate
```

For example:

```shell
$ mkdir python_env
$ python3 -m venv python_env/
$ source python_env/bin/activate
```

### Build Detectron2 from Source

Follow INSTALL.md to build and install Detectron2 from source.
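
After the build completes, it can be worth sanity-checking the install. The following is a minimal sketch (not part of this repo) that prints Detectron2's environment report, so you can confirm the detected PyTorch, CUDA, and Detectron2 versions match:

```python
# Minimal sanity check for the Detectron2 build (a sketch, not part of this repo).
# collect_env_info() reports the detected PyTorch, CUDA, and Detectron2 versions.
import detectron2
from detectron2.utils.collect_env import collect_env_info

print("Detectron2 version:", detectron2.__version__)
print(collect_env_info())
```

Detectron2 also exposes the same report as a module: `python -m detectron2.utils.collect_env`.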

## Dataset download

1. Download the datasets

2. Organize the datasets following the Cityscapes and PASCAL VOC formats, as in the tree below (a quick layout check follows it):

```
adaptive_teacher/
└── datasets/
    ├── cityscapes/
    │   ├── gtFine/
    │   │   ├── train/
    │   │   ├── test/
    │   │   └── val/
    │   └── leftImg8bit/
    │       ├── train/
    │       ├── test/
    │       └── val/
    ├── cityscapes_foggy/
    │   ├── gtFine/
    │   │   ├── train/
    │   │   ├── test/
    │   │   └── val/
    │   └── leftImg8bit/
    │       ├── train/
    │       ├── test/
    │       └── val/
    ├── VOC2012/
    │   ├── Annotations/
    │   ├── ImageSets/
    │   └── JPEGImages/
    ├── clipart/
    │   ├── Annotations/
    │   ├── ImageSets/
    │   └── JPEGImages/
    └── watercolor/
        ├── Annotations/
        ├── ImageSets/
        └── JPEGImages/
```
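Before launching training, you may want to verify this layout on disk. Below is a small hypothetical helper (not part of the repo; the `datasets/` root and directory names are taken from the tree above) that reports any missing directories:

```python
# Hypothetical convenience script (not part of this repo): checks that the
# dataset layout shown above exists under datasets/ before training.
from pathlib import Path

EXPECTED_DIRS = [
    "cityscapes/gtFine/train", "cityscapes/leftImg8bit/train",
    "cityscapes_foggy/gtFine/train", "cityscapes_foggy/leftImg8bit/train",
    "VOC2012/Annotations", "VOC2012/ImageSets", "VOC2012/JPEGImages",
    "clipart/Annotations", "clipart/ImageSets", "clipart/JPEGImages",
    "watercolor/Annotations", "watercolor/ImageSets", "watercolor/JPEGImages",
]

root = Path("datasets")
missing = [root / d for d in EXPECTED_DIRS if not (root / d).is_dir()]
if missing:
    print("Missing dataset directories:")
    for path in missing:
        print("  ", path)
else:
    print("Dataset layout looks complete.")
```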

## Training

To train on the real-to-artistic benchmark (VOC12 → Clipart1k):

```shell
python train_net.py \
      --num-gpus 8 \
      --config configs/faster_rcnn_R101_cross_clipart.yaml \
      OUTPUT_DIR output/exp_clipart
```

To train on the weather-adaptation benchmark (Cityscapes → Foggy Cityscapes):

```shell
python train_net.py \
      --num-gpus 8 \
      --config configs/faster_rcnn_VGG_cross_city.yaml \
      OUTPUT_DIR output/exp_city
```

### Resume the training

```shell
python train_net.py \
      --resume \
      --num-gpus 8 \
      --config configs/faster_rcnn_R101_cross_clipart.yaml \
      MODEL.WEIGHTS <your weight>.pth
```

## Evaluation

```shell
python train_net.py \
      --eval-only \
      --num-gpus 8 \
      --config configs/faster_rcnn_R101_cross_clipart.yaml \
      MODEL.WEIGHTS <your weight>.pth
```

## Results and Model Weights

If you urgently need the pre-trained weights, you can download our internal prod_weights at the Link. Please note that the key names in those checkpoints differ slightly from this implementation, so you will need to align them manually; a hedged sketch follows below. Otherwise, please wait: we will try to release the locally trained weights in the future.
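
As one example of such alignment, the sketch below loads a checkpoint and renames its keys with a user-supplied rule. The `module.` prefix used here is purely an assumption; print the checkpoint's keys first and adapt the rule to the actual mismatch:

```python
# Hypothetical key-alignment sketch. The "module." prefix below is an
# assumption; inspect state.keys() to see how your checkpoint really differs.
import torch

ckpt = torch.load("prod_weights.pth", map_location="cpu")
state = ckpt.get("model", ckpt)  # some checkpoints nest weights under "model"

aligned = {
    (key[len("module."):] if key.startswith("module.") else key): value
    for key, value in state.items()
}
torch.save({"model": aligned}, "aligned_weights.pth")
```

The aligned file can then be passed to train_net.py via `MODEL.WEIGHTS aligned_weights.pth`.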

### Real to Artistic Adaptation

| Backbone | Source set (labeled) | Target set (unlabeled) | Batch size | AP@.5 | Model Weights | Comment |
|----------|----------------------|------------------------|------------|-------|---------------|---------|
| R101 | VOC12 | Clipart1k | 16 labeled + 16 unlabeled | 40.1 | link | Ours w/o discriminator (dis=0) |
| R101 | VOC12 | Clipart1k | 4 labeled + 4 unlabeled | 47.2 | link | lr=0.01, dis_w=0.1, default |
| R101 | VOC12 | Clipart1k | 16 labeled + 16 unlabeled | 49.6 | link | Ours in the paper, unsup_w=0.5 |
| R101+FPN | VOC12 | Clipart1k | 16 labeled + 16 unlabeled | 51.2 | link (coming soon) | For future work |

### Weather Adaptation

| Backbone | Source set (labeled) | Target set (unlabeled) | Batch size | AP@.5 | Model Weights | Comment |
|----------|----------------------|------------------------|------------|-------|---------------|---------|
| VGG16 | Cityscapes | Foggy Cityscapes (ALL) | 16 labeled + 16 unlabeled | 48.7 | link (coming soon) | Ours w/o discriminator |
| VGG16 | Cityscapes | Foggy Cityscapes (ALL) | 16 labeled + 16 unlabeled | 50.9 | link (coming soon) | Ours in the paper |
| VGG16 | Cityscapes | Foggy Cityscapes (0.02) | 16 labeled + 16 unlabeled | in progress | link (coming soon) | Ours in the paper |
| VGG16+FPN | Cityscapes | Foggy Cityscapes (ALL) | 16 labeled + 16 unlabeled | 57.4 | link (coming soon) | For future work |

## Citation

If you use Adaptive Teacher in your research or wish to refer to the results published in the paper, please use the following BibTeX entry.

```bibtex
@inproceedings{li2022cross,
    title={Cross-Domain Adaptive Teacher for Object Detection},
    author={Li, Yu-Jhe and Dai, Xiaoliang and Ma, Chih-Yao and Liu, Yen-Cheng and Chen, Kan and Wu, Bichen and He, Zijian and Kitani, Kris and Vajda, Peter},
    booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2022}
}
```

Also, if you use Detectron2 in your research, please use the following BibTeX entry.

```bibtex
@misc{wu2019detectron2,
  author =       {Yuxin Wu and Alexander Kirillov and Francisco Massa and
                  Wan-Yen Lo and Ross Girshick},
  title =        {Detectron2},
  howpublished = {\url{https://github.com/facebookresearch/detectron2}},
  year =         {2019}
}
```

## License

This project is licensed under the CC BY-NC 4.0 License, as found in the LICENSE file.