AQT: Adversarial Query Transformers for Domain Adaptive Object Detection
By Wei-Jie Huang, Yu-Lin Lu, Shih-Yao Lin, Yusheng Xie, and Yen-Yu Lin.
This repository contains the implementation accompanying our paper AQT: Adversarial Query Transformers for Domain Adaptive Object Detection. This work was accepted to IJCAI-ECAI 2022.
If you find it helpful for your research, please consider citing:
@inproceedings{huang2022aqt,
title = {AQT: Adversarial Query Transformers for Domain Adaptive Object Detection},
author = {Huang, Wei-Jie and Lu, Yu-Lin and Lin, Shih-Yao and Xie, Yusheng and Lin, Yen-Yu},
booktitle = {International Joint Conference on Artificial Intelligence (IJCAI)},
year = {2022},
}
Acknowledgment
This implementation is built upon Deformable DETR.
Installation
Please refer to the instructions here. Our system information is listed below for reference.
- OS: Ubuntu 16.04
- Python: 3.6
- CUDA: 9.2
- cuDNN: 7
- PyTorch: 1.5.1
- torchvision: 0.6.1
Dataset Preparation
Please construct the datasets following these steps:
- Download the datasets from their sources:
  - Cityscapes / Foggy Cityscapes: download `gtFine_trainvaltest.zip` (labels), `leftImg8bit_trainvaltest.zip` (Cityscapes images), and `leftImg8bit_trainvaltest_foggy.zip` (Foggy Cityscapes images) from the official website.
  - Sim10k: download `images` and `annotations` of the 10k subset from the official website.
  - BDD100k: download `100k Images` and `Labels` from the official website.
- Convert the annotation files into COCO-format annotations (you can build them by following the annotation conversion script).
- Prepare a directory `datasets` as follows (or modify the settings here):
datasets/
├─ bdd_daytime/
│ ├─ annotations/
│ ├─ train/
│ ├─ val/
├─ cityscapes/
│ ├─ annotations/
│ ├─ leftImg8bit/
│ │ ├─ train/
│ │ ├─ val/
│ ├─ leftImg8bit_foggy/
│ │ ├─ train/
│ │ ├─ val/
├─ sim10k/
│ ├─ annotations/
│ ├─ VOC2012/
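The conversion step above produces COCO-format JSON files under each `annotations/` directory. The exact layout is defined by the linked conversion script; as a rough illustration only, a COCO-style annotation file has the following shape (the file name, IDs, and category list below are hypothetical placeholders, not this repository's actual data):

```python
import json

# Minimal sketch of a COCO-format annotation file, assuming the standard
# "images" / "annotations" / "categories" layout. All values here are
# hypothetical placeholders for illustration.
coco = {
    "images": [
        {"id": 1, "file_name": "frankfurt_000000_000294_leftImg8bit.png",
         "width": 2048, "height": 1024},
    ],
    "annotations": [
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [100.0, 200.0, 50.0, 80.0],  # [x, y, width, height]
         "area": 50.0 * 80.0, "iscrowd": 0},
    ],
    "categories": [
        {"id": 1, "name": "person"},
        {"id": 2, "name": "car"},
    ],
}

# Write the annotation file, e.g. into datasets/cityscapes/annotations/.
with open("instances_train.json", "w") as f:
    json.dump(coco, f)
```

Note that COCO boxes are `[x, y, width, height]` in absolute pixels, not `[x1, y1, x2, y2]`; mixing up the two conventions is a common source of silently wrong evaluation numbers.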
Training / Evaluation
We provide a training script for single-node training as follows; please refer to the instructions here for other settings.
GPUS_PER_NODE={NUM_GPUS} ./tools/run_dist_launch.sh {NUM_GPUS} python main.py --config_file {CONFIG_FILE}
We use yacs for configuration. The configuration files are in `./configs`. To override configuration options from the command line, consider the following script, which performs evaluation on a pre-trained model:
GPUS_PER_NODE={NUM_GPUS} ./tools/run_dist_launch.sh {NUM_GPUS} python main.py --config_file {CONFIG_FILE} --opts EVAL True RESUME {CHECKPOINT_FILE}
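The list after `--opts` is read as alternating `KEY VALUE` pairs, in the style of yacs' `merge_from_list` (`EVAL` and `RESUME` come from the command above). A minimal sketch of this override logic, assuming a flat config for simplicity (the `merge_opts` helper is illustrative, not the repository's code):

```python
from ast import literal_eval

def merge_opts(cfg: dict, opts: list) -> dict:
    """Merge alternating KEY VALUE pairs into a config dict,
    mimicking yacs' merge_from_list. Illustrative sketch only."""
    assert len(opts) % 2 == 0, "--opts must be KEY VALUE pairs"
    merged = dict(cfg)
    for key, raw in zip(opts[::2], opts[1::2]):
        try:
            # Interpret Python literals: "True" -> True, "0.1" -> 0.1
            value = literal_eval(raw)
        except (ValueError, SyntaxError):
            value = raw  # keep plain strings, e.g. checkpoint paths
        merged[key] = value
    return merged

# Equivalent of: --opts EVAL True RESUME checkpoint.pth
cfg = merge_opts({"EVAL": False, "RESUME": ""},
                 ["EVAL", "True", "RESUME", "checkpoint.pth"])
```

Defaults live in the YAML config file; the pairs after `--opts` take precedence, so the same config can be reused for training and evaluation by flipping `EVAL`.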