
DA-Detect: Domain Adaptive Object Detection for Autonomous Driving under Foggy Weather (WACV 2023)

Paper | License: MIT


This is a PyTorch implementation of 'Domain Adaptive Object Detection for Autonomous Driving under Foggy Weather' by Jinlong Li.

teaser

Installation

Please follow the instructions in maskrcnn-benchmark to install it; the details are as follows:


# Replace the deprecated AT_CHECK macro so the CUDA extensions build with newer PyTorch
cuda_dir="maskrcnn_benchmark/csrc/cuda"
perl -i -pe 's/AT_CHECK/TORCH_CHECK/' $cuda_dir/deform_pool_cuda.cu $cuda_dir/deform_conv_cuda.cu
# You can then run the regular setup command
python3 setup.py build develop

If loading the pretrained (pickled) weights fails under Python 3, update the corresponding pickle call in the maskrcnn-benchmark weight-loading code as follows:

if torch._six.PY37:
    data = pickle.load(f, encoding="latin1")
else:
    # data = pickle.load(f)
    data = pickle.load(f, encoding="latin1")
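After building, a quick way to confirm that the C++/CUDA extensions compiled correctly is to import the compiled module. This small check is not part of the original instructions, just a convenience:

# optional_sanity_check.py -- a minimal, optional check (not part of the original instructions)
# If `python3 setup.py build develop` succeeded, importing the compiled extension should not fail.
from maskrcnn_benchmark import _C  # noqa: F401
print("maskrcnn-benchmark C++/CUDA extensions imported successfully")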

DATA

  1. Download the datasets:
    • Source domain: leftImg8bit_trainvaltest in the Cityscapes Dataset
    • Target domain: leftImg8bit_trainvaltest_foggy in the Foggy Cityscapes Dataset
    • Auxiliary domain: used for the domain-level metric regularization (triplet loss):
      • Download the rain masks (rainmix/Streaks_Garg06.zip);
      • Set your paths for the rain masks and the Cityscapes dataset in efficentderain-master/generate_rainy_cityscape.py, then run it to generate the Rainy Cityscapes dataset (a rough sketch of the rain-compositing idea is shown after this list).
  2. Follow the example in Detectron-DA-Faster-RCNN to generate COCO-style annotation files (Cityscapes Dataset and Foggy Cityscapes Dataset).
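For reference, the sketch below illustrates the general idea of compositing a rain-streak mask onto a clear Cityscapes image. It is only an illustration under assumed file paths; the actual procedure lives in efficentderain-master/generate_rainy_cityscape.py and may differ.

# Illustrative rain-streak compositing (not the repository's actual script; paths are placeholders).
import cv2
import numpy as np

def add_rain(image_path, rain_mask_path, alpha=0.8):
    """Additively blend a rain-streak mask (e.g., from Streaks_Garg06) onto a clear image."""
    img = cv2.imread(image_path).astype(np.float32) / 255.0
    rain = cv2.imread(rain_mask_path).astype(np.float32) / 255.0
    rain = cv2.resize(rain, (img.shape[1], img.shape[0]))
    # Bright rain streaks are added on top of the scene and clipped to the valid range.
    rainy = np.clip(img + alpha * rain, 0.0, 1.0)
    return (rainy * 255).astype(np.uint8)

# Hypothetical usage with placeholder file names:
rainy_img = add_rain("leftImg8bit/train/aachen/aachen_000000_000019_leftImg8bit.png",
                     "Streaks_Garg06/rain_streak_00.png")
cv2.imwrite("rainy_aachen_000000_000019_leftImg8bit.png", rainy_img)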

Getting Started

An example of Domain Adaptive Faster R-CNN with the triplet loss and a ResNet backbone, adapting from the Cityscapes dataset to the Foggy Cityscapes dataset, is provided below:

  1. Follow the example in Detectron-DA-Faster-RCNN to generate COCO-style annotation files.
  2. Modify the dataset paths for the Cityscapes, Foggy Cityscapes, and Rainy Cityscapes datasets in paths_catalog.py (see also the fuller sketch after this list), for example:
    "foggy_cityscapes_fine_instanceonly_seg_train_cocostyle":{
        "img_dir": "your data path",
        "ann_file": "your data path" 
    }
    "foggy_cityscapes_fine_instanceonly_seg_val_cocostyle":{
        "img_dir": "your data path",
        "ann_file": "your data  path" 
    }
    
  3. Modify the YAML file in configs/da_faster_rcnn, using e2e_triplet_da_faster_rcnn_R_50_C4_cityscapes_to_foggy_cityscapes.yaml as an example:
     
     MODEL:
         OUTPUT_DIR: # path where the trained model is saved
         OUTPUT_SAVE_NAME: # folder name for saving the trained model
     DA_HEADS:
         DA_ADV_GRL: # True or False; whether to use AdvGRL instead of the standard GRL
         ALIGNMENT: # True or False; True for aligned synthetic-dataset training (e.g., Cityscapes, Foggy Cityscapes, and Rainy Cityscapes datasets), False for cross-camera training
         TRIPLET_MARGIN: # the margin of the triplet loss
    
  4. Train the Domain Adaptive Faster R-CNN:
    python3 tools/train_net_triplet.py --config-file "configs/da_faster_rcnn/e2e_triplet_da_faster_rcnn_R_50_C4_cityscapes_to_foggy_cityscapes.yaml"
    
  5. Test the trained model:
    python3 tools/test_net.py --config-file "configs/da_faster_rcnn/e2e_triplet_da_faster_rcnn_R_50_C4_cityscapes_to_foggy_cityscapes.yaml" MODEL.WEIGHT <path_to_store_weight>/model_final.pth
    
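As a fuller reference for step 2, these dataset entries live in the DATASETS dictionary of the DatasetCatalog class in paths_catalog.py. The sketch below shows one plausible layout with placeholder paths; the exact key name used for the Rainy Cityscapes split in this repository is an assumption.

# Sketch of the relevant portion of paths_catalog.py (placeholder paths; adapt to your setup).
class DatasetCatalog(object):
    DATA_DIR = "datasets"
    DATASETS = {
        "cityscapes_fine_instanceonly_seg_train_cocostyle": {
            "img_dir": "cityscapes/leftImg8bit/train",
            "ann_file": "cityscapes/annotations/instancesonly_filtered_gtFine_train.json",
        },
        "foggy_cityscapes_fine_instanceonly_seg_train_cocostyle": {
            "img_dir": "foggy_cityscapes/leftImg8bit_foggy/train",
            "ann_file": "foggy_cityscapes/annotations/instancesonly_filtered_gtFine_train.json",
        },
        "foggy_cityscapes_fine_instanceonly_seg_val_cocostyle": {
            "img_dir": "foggy_cityscapes/leftImg8bit_foggy/val",
            "ann_file": "foggy_cityscapes/annotations/instancesonly_filtered_gtFine_val.json",
        },
        # The key name for the rainy (auxiliary) split below is hypothetical.
        "rainy_cityscapes_fine_instanceonly_seg_train_cocostyle": {
            "img_dir": "rainy_cityscapes/leftImg8bit_rain/train",
            "ann_file": "rainy_cityscapes/annotations/instancesonly_filtered_gtFine_train.json",
        },
    }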

Proposed Component

Adversarial Gradient Reversal Layer (AdvGRL)

Illustration of the adversarial mining of hard training examples by the proposed AdvGRL. In this example, we set $\lambda_0 = 1$ and $\beta = 30$. Harder training examples, which produce a lower domain classifier loss $L_c$, receive a larger response. The function Adv_GRL() can be found in modeling/da_heads.py.

teaser
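The exact weighting rule is defined by Adv_GRL() in modeling/da_heads.py. As a rough sketch of the behavior described above, the snippet below assumes the adversarial weight is computed as lambda_adv = min(lambda_0 / L_c, beta), so easier-to-classify samples (lower $L_c$) receive stronger gradient reversal; treat the formula and the helper names as illustrative assumptions, not the repository's exact implementation.

# Sketch of an adversarial gradient reversal layer (AdvGRL); illustrative only.
# Assumption: lambda_adv = min(lambda_0 / L_c, beta), clipped by beta so it stays bounded.
import torch

class _GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse and scale the gradient flowing back to the feature extractor.
        return grad_output.neg() * ctx.lambd, None

def adv_grl(features, domain_cls_loss, lambda_0=1.0, beta=30.0):
    """Scale gradient reversal by how easily the domain classifier separates the sample.

    domain_cls_loss is expected to be a scalar tensor (the current domain classifier loss).
    """
    lambda_adv = min(lambda_0 / max(domain_cls_loss.item(), 1e-6), beta)
    return _GradReverse.apply(features, lambda_adv)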

Domain-level Metric Regularization (Based on Triplet Loss)

Previous domain adaptation methods mainly focus on transfer learning from the source domain $S$ to the target domain $T$, neglecting the potential benefits that a third related domain can bring. To additionally impose a feature metric constraint between different domains, we propose using an auxiliary domain for a domain-level metric regularization during transfer learning. The functions Domainlevel_Img_component() and Domainlevel_Ins_component() can be found in modeling/da_heads.py.
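The actual regularization is implemented in Domainlevel_Img_component() and Domainlevel_Ins_component(). The sketch below only illustrates the underlying idea using PyTorch's built-in triplet loss on pooled domain-level embeddings; the choice of which domain acts as anchor, positive, and negative here is an assumption, not necessarily the repository's pairing.

# Sketch of a domain-level metric regularization via triplet loss (illustrative only).
# Assumption: pooled source features act as the anchor, target features as the positive,
# and auxiliary (rainy) features as the negative; inputs are NCHW feature maps.
import torch
import torch.nn.functional as F

def domain_triplet_reg(src_feat, tgt_feat, aux_feat, margin=1.0):
    # Average over batch and spatial dimensions to get one embedding per domain.
    anchor = src_feat.mean(dim=(0, 2, 3)).unsqueeze(0)
    positive = tgt_feat.mean(dim=(0, 2, 3)).unsqueeze(0)
    negative = aux_feat.mean(dim=(0, 2, 3)).unsqueeze(0)
    return F.triplet_margin_loss(anchor, positive, negative, margin=margin)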

Ablation Study Results

The following results are obtained with the same ResNet-50 backbone on the Cityscapes → Foggy Cityscapes experiment.

| Method | Image-level | Object-level | AdvGRL | Regularization | AP@50 | Download |
|---|---|---|---|---|---|---|
| Faster R-CNN (source only) |  |  |  |  | 23.41 |  |
| DA Faster (Img+GRL) | ✓ |  |  |  | 38.10 |  |
| DA Faster (Obj+GRL) |  | ✓ |  |  | 38.02 |  |
| DA Faster (Img+Obj+GRL) | ✓ | ✓ |  |  | 38.43 | link |
| DA Faster (Img+Obj+AdvGRL) | ✓ | ✓ | ✓ |  | 40.23 | link |
| DA Faster (Img+Obj+GRL+Reg) | ✓ | ✓ |  | ✓ | 41.97 | link |
| DA Faster (Img+Obj+AdvGRL+Reg) | ✓ | ✓ | ✓ | ✓ | 42.34 | link |

Citation

If you use our proposed method in your research, please cite the following paper:

@inproceedings{li2023domain,
 title={Domain Adaptive Object Detection for Autonomous Driving under Foggy Weather},
 author={Li, Jinlong and Xu, Runsheng and Ma, Jin and Zou, Qin and Ma, Jiaqi and Yu, Hongkai},
 booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
 pages={612--622},
 year={2023}
}

Acknowledgment

This code is built upon Domain-Adaptive-Faster-RCNN-PyTorch and maskrcnn-benchmark. Thanks for their excellent work.