# MGCDA: Map-Guided Curriculum Domain Adaptation
Created by Christos Sakaridis at Computer Vision Lab, ETH Zurich.
## Overview
This is the source code for the MGCDA method for semantic segmentation at nighttime.
MGCDA: Paper | Dark Zurich Dataset | Challenge | Project | Conference Paper
MGCDA is presented in our IEEE TPAMI 2020 paper Map-Guided Curriculum Domain Adaptation and Uncertainty-Aware Evaluation for Semantic Nighttime Image Segmentation, and its original version, GCMA, was introduced in our ICCV 2019 paper.
For the source code for the uncertainty-aware semantic segmentation evaluation with the UIoU metric, you can consult the UIoU Dark Zurich Challenge page.
## License
This software is made available for non-commercial use under a Creative Commons license. You can find a summary of the license here.
## Contents
- Requirements
- Demo
- Testing MGCDA
- Training MGCDA
- Acknowledgments
- Citation
- Contact
## Requirements
For running the demo, you only need MATLAB 2016b or later.
For testing, you need:
- Linux
- NVIDIA GPU with CUDA & CuDNN
- MATLAB: version 2016b
For training, you need:
- Linux
- NVIDIA GPU with CUDA & CuDNN
- MATLAB: version 2016b
- Python 3
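
If you are unsure whether your machine meets these requirements, a quick check from the shell might look like the following. This is a minimal sketch, not part of the repository:

```sh
# Quick prerequisite check (sketch): GPU driver, CUDA toolkit, MATLAB, Python 3.
command -v nvidia-smi >/dev/null && nvidia-smi --query-gpu=name --format=csv,noheader || echo "NVIDIA driver not found"
command -v nvcc >/dev/null && nvcc --version | tail -n 1 || echo "CUDA toolkit not on PATH"
command -v matlab >/dev/null && echo "MATLAB found" || echo "MATLAB not on PATH"
python3 --version
```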
## Demo
Run the demo MATLAB script.
This applies the geometrically guided segmentation refinement involved in MGCDA to a pair of corresponding images, i.e. a dark image and a daytime image that depict the same scene from different viewpoints.
The results of the guided refinement, i.e. the refined segmentation of the dark image and the daytime segmentation aligned to the viewpoint of the dark image, are written to the directory `output/demo/`.
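
To run the demo without the MATLAB desktop, an invocation along these lines should work; `Demo` is a hypothetical name here, so substitute the actual demo script shipped in this repository:

```sh
# From the repository root; "Demo" is a placeholder for the real demo script name.
matlab -nodisplay -nosplash -r "Demo; exit"
```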
## Testing MGCDA
- Download the pre-trained MGCDA model and put it in the directory `output/RefineNet/Union_Cityscapes_Dark_Zurich/`.
- Download the ResNet-101 backbone model and put it in the directory `source/Semantic_segmentation/refinenet/model_trained/`.
- Compile the MatConvNet provided in the directory `source/Semantic_segmentation/refinenet/libs/matconvnet/` so that it points to your CUDA and CuDNN installation. Detailed instructions for this step can be found here.
- Customize the file `source/Semantic_segmentation/refinenet/main/setpath.sh` so that the environment variables `PATH` and `LD_LIBRARY_PATH` point to your installation directories for CUDA and CuDNN (see the sketch after this list).
- Download the Dark Zurich dataset (test set - anonymized version) and unzip it in the directory `data/Dark_Zurich/`. Testing is performed on this set.
- The shell script that tests the pre-trained MGCDA model is `source/scripts/MGCDA_test.sh`. You first need to make this script executable. In the command line, navigate to the directory that contains this repository and run:
  ```sh
  find . -type f -name '*.sh' -exec chmod u+x {} \;
  ```
- Test MGCDA on Dark Zurich-test:
  ```sh
  cd source/scripts
  ./MGCDA_test.sh
  ```
  The generated prediction files are written under the directory `output/RefineNet/Dark_Zurich_test_anon/` and include four different prediction formats (Ids, trainIds, color, and raw soft predictions) to facilitate further usage. Note: the mean IoU of the pre-trained MGCDA model on this anonymized version is 42.6%, slightly higher than the 42.5% reported in the paper, because the latter figure corresponds to the original, non-anonymized version of the test set.
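
For the `setpath.sh` step above, the customization typically amounts to two exports like the following; the paths shown are placeholders for illustration, not the repository's defaults:

```sh
# Example setpath.sh contents; replace /usr/local/cuda and /usr/local/cudnn
# with your actual CUDA and CuDNN installation directories.
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cudnn/lib64:$LD_LIBRARY_PATH
```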
You can also test MGCDA on other sets, such as Nighttime Driving, BDD100K-night (a selected nighttime subset of the segmentation set of BDD100K), and the validation set of Dark Zurich, simply by:
- downloading the respective set, similarly to above
- changing line 9 of the inner test script `source/Semantic_segmentation/Experiments/Union_Cityscapes_Dark_Zurich/scripts/DarkCityscapes_DarkZurichNight_CycleGANfc-DarkZurich_twilight_labels_refinenet_init_geoRefDynDay-w_1-test_DarkZurich_testAnon.sh` to the name of the respective set, e.g. to `Nighttime_Driving` (see the sketch after this list). Consult the MATLAB testing function for a list of supported test sets.
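
One way to make this edit from the shell is with `sed`; the substitution below assumes line 9 contains the string `Dark_Zurich_test_anon`, so inspect the line first and adapt the old value to whatever it actually shows:

```sh
# Inspect line 9 of the inner test script, then substitute the set name in place.
SCRIPT=source/Semantic_segmentation/Experiments/Union_Cityscapes_Dark_Zurich/scripts/DarkCityscapes_DarkZurichNight_CycleGANfc-DarkZurich_twilight_labels_refinenet_init_geoRefDynDay-w_1-test_DarkZurich_testAnon.sh
sed -n '9p' "$SCRIPT"
# Hypothetical substitution: the old value here is an assumption.
sed -i '9s/Dark_Zurich_test_anon/Nighttime_Driving/' "$SCRIPT"
```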
To test MGCDA on your own custom set, you need to:
- implement a MATLAB function for your set, similar to the function `source/Semantic_segmentation/refinenet/main/my_gen_ds_info_Dark_Zurich_test_anon.m` that corresponds to Dark Zurich-test
- augment the MATLAB testing function with a handle to the above function.
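
A convenient way to start is to copy the Dark Zurich-test function and adapt it; `My_Night_Set` below is a placeholder name for your set:

```sh
# Bootstrap the dataset-info function for a custom set from the existing template.
cd source/Semantic_segmentation/refinenet/main
cp my_gen_ds_info_Dark_Zurich_test_anon.m my_gen_ds_info_My_Night_Set.m
# Edit the copy so it lists your images, then register a handle to it
# in the MATLAB testing function.
```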
## Training MGCDA
- Download the pre-trained RefineNet-res101-Cityscapes model and put it in the directory `output/RefineNet/Cityscapes/`.
- Download the ResNet-101 backbone model and put it in the directory `source/Semantic_segmentation/refinenet/model_trained/`.
- Compile the MatConvNet provided in the directory `source/Semantic_segmentation/refinenet/libs/matconvnet/` so that it points to your CUDA and CuDNN installation. Detailed instructions for this step can be found here.
- Customize the file `source/Semantic_segmentation/refinenet/main/setpath.sh` so that the environment variables `PATH` and `LD_LIBRARY_PATH` point to your installation directories for CUDA and CuDNN.
- Configure the CycleGAN Python implementation. The recommended way is via conda. Install a new conda environment using the provided YAML file:
  ```sh
  cd source/Style_transfer/pytorch-CycleGAN-and-pix2pix
  conda env create -f environment.yml
  ```
- Download the Dark Zurich dataset (training set - anonymized version) and unzip it in the directory `data/Dark_Zurich/`.
- Download the Cityscapes dataset and unzip it in the directory `data/Cityscapes/`. You need the packages `leftImg8bit_trainvaltest.zip` and `gtFine_trainvaltest.zip`.
- Download the precomputed depth map predictions of Monodepth2 on Dark Zurich-day and unzip them in the directory `output/Depth_estimation/`.
- Download the precomputed SURF features for Dark Zurich and unzip them in the directory `output/Feature_extraction_and_matching/`.
- The shell script that runs the full training pipeline of MGCDA is `source/scripts/MGCDA_train.sh`. Make this script executable (a sanity check for the inputs above is sketched after this list).
- Train MGCDA on Cityscapes and Dark Zurich:
  ```sh
  cd source/scripts
  ./MGCDA_train.sh
  ```
  The trained MGCDA model is stored in the directory `output/RefineNet/Union_Cityscapes_Dark_Zurich/` with the name `refinenet_res101_cityscapes_DarkCityscapes_DarkZurichTwilight_CycleGANfc_DarkZurichNight_CycleGANfc_DarkZurich_day_labels_original_w_1_twilight_labels_adaptedPrevGeoRefDyn_w_1_epoch_10.mat`.
- You can use our pre-trained CycleGAN models for translation to twilight and nighttime if you want to avoid training CycleGAN.
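
As a final check before (re)running the pipeline, you can verify that the inputs from the steps above are in place; this is a sketch, not part of the pipeline:

```sh
# Run from the repository root: report any missing input directory.
for d in data/Dark_Zurich data/Cityscapes \
         output/RefineNet/Cityscapes \
         source/Semantic_segmentation/refinenet/model_trained \
         output/Depth_estimation \
         output/Feature_extraction_and_matching; do
  [ -d "$d" ] || echo "Missing: $d"
done
```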
## Acknowledgments
Our implementation includes adapted versions of two external repositories:
- RefineNet: https://github.com/guosheng/refinenet. The associated license is here.
- CycleGAN: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix. The associated license is here.
## Citation
If you use our code in your work, please cite our publications as follows:
```bibtex
@article{SDV20,
  author  = {Sakaridis, Christos and Dai, Dengxin and Van Gool, Luc},
  title   = {Map-Guided Curriculum Domain Adaptation and Uncertainty-Aware Evaluation for Semantic Nighttime Image Segmentation},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year    = {2020},
  doi     = {10.1109/TPAMI.2020.3045882}
}
```
and
```bibtex
@inproceedings{SDV19,
  author    = {Sakaridis, Christos and Dai, Dengxin and Van Gool, Luc},
  title     = {Guided Curriculum Model Adaptation and Uncertainty-Aware Evaluation for Semantic Nighttime Image Segmentation},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  year      = {2019}
}
```
## Contact
Christos Sakaridis
csakarid[at]vision.ee.ethz.ch
https://www.trace.ethz.ch/publications/2019/GCMA_UIoU