<div align="center">
Refign: Align and Refine for Adaptation of Semantic Segmentation to Adverse Conditions
</div>

This repository provides the official code for the WACV 2023 paper Refign: Align and Refine for Adaptation of Semantic Segmentation to Adverse Conditions. The code is organized using PyTorch Lightning.
🔥 [September 2, 2022] Applied on top of HRDA, Refign ranks #1 on both the ACDC leaderboard (72.05 mIoU) and the Dark Zurich leaderboard (63.91 mIoU). See below for training configurations.
<img src="./docs/method.png" width="900"/>

Abstract
Due to the scarcity of dense pixel-level semantic annotations for images recorded in adverse visual conditions, there has been a keen interest in unsupervised domain adaptation (UDA) for the semantic segmentation of such images. UDA adapts models trained on normal conditions to the target adverse-condition domains. Meanwhile, multiple datasets with driving scenes provide corresponding images of the same scenes across multiple conditions, which can serve as a form of weak supervision for domain adaptation. We propose Refign, a generic extension to self-training-based UDA methods which leverages these cross-domain correspondences. Refign consists of two steps: (1) aligning the normal-condition image to the corresponding adverse-condition image using an uncertainty-aware dense matching network, and (2) refining the adverse prediction with the normal prediction using an adaptive label correction mechanism. We design custom modules to streamline both steps and set the new state of the art for domain-adaptive semantic segmentation on several adverse-condition benchmarks, including ACDC and Dark Zurich. The approach introduces no extra training parameters, minimal computational overhead (during training only), and can be used as a drop-in extension to improve any given self-training-based UDA method.
Usage
Requirements
The code is run with Python 3.8.13. To install the packages, use:
pip install -r requirements.txt
Set Data Directory
The following environment variable must be set:
export DATA_DIR=/path/to/data/dir
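For reference, here is a minimal sketch of how a script might resolve a dataset folder from this variable; the helper below is purely illustrative and not part of the repository:

```python
import os
from pathlib import Path

def dataset_root(name: str) -> Path:
    """Resolve a dataset directory under $DATA_DIR (illustrative helper, not repo code)."""
    data_dir = os.environ.get("DATA_DIR")
    if data_dir is None:
        raise RuntimeError("DATA_DIR is not set; run `export DATA_DIR=/path/to/data/dir` first.")
    root = Path(data_dir) / name
    if not root.is_dir():
        raise FileNotFoundError(f"Expected dataset directory at {root}")
    return root

# Example: dataset_root("ACDC") -> /path/to/data/dir/ACDC
```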
Download the Data
Before running the code, download and extract the corresponding datasets to the directory `$DATA_DIR`.
UDA
<details>
<summary>Cityscapes</summary>
Download leftImg8bit_trainvaltest.zip and gt_trainvaltest.zip from here and extract them to `$DATA_DIR/Cityscapes`.
```
$DATA_DIR
├── Cityscapes
│   ├── leftImg8bit
│   │   ├── train
│   │   ├── val
│   ├── gtFine
│   │   ├── train
│   │   ├── val
├── ...
```
Afterwards, run the preparation script:
python tools/convert_cityscapes.py $DATA_DIR/Cityscapes
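For context, such a preparation step typically converts the raw Cityscapes label IDs into the 19 train IDs used for training. The sketch below illustrates this conversion with `cityscapesscripts`; the output filename suffix and the directory traversal are assumptions, not necessarily what `tools/convert_cityscapes.py` does:

```python
from pathlib import Path

import numpy as np
from PIL import Image
from cityscapesscripts.helpers.labels import labels  # official Cityscapes label definitions

def labelids_to_trainids(label_path: Path) -> None:
    """Convert one *_labelIds.png annotation to 19-class train IDs (255 = ignore)."""
    # Lookup table mapping each raw label ID to its train ID (255 for ignored classes).
    lut = np.full(256, 255, dtype=np.uint8)
    for label in labels:
        if 0 <= label.id < 256 and 0 <= label.trainId < 19:
            lut[label.id] = label.trainId
    label_ids = np.array(Image.open(label_path), dtype=np.uint8)
    train_ids = lut[label_ids]
    out_path = label_path.with_name(label_path.name.replace("labelIds", "labelTrainIds"))
    Image.fromarray(train_ids).save(out_path)

# Example (hypothetical paths):
# for p in Path("/path/to/data/dir/Cityscapes/gtFine").rglob("*_labelIds.png"):
#     labelids_to_trainids(p)
```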
</details>
<details>
<summary>ACDC</summary>
Download rgb_anon_trainvaltest.zip and gt_trainval.zip from here and extract them to `$DATA_DIR/ACDC`.
```
$DATA_DIR
├── ACDC
│   ├── rgb_anon
│   │   ├── fog
│   │   ├── night
│   │   ├── rain
│   │   ├── snow
│   ├── gt
│   │   ├── fog
│   │   ├── night
│   │   ├── rain
│   │   ├── snow
├── ...
```
</details>
<details>
<summary>Dark Zurich</summary>
Download Dark_Zurich_train_anon.zip, Dark_Zurich_val_anon.zip, and Dark_Zurich_test_anon_withoutGt.zip from here and extract them to `$DATA_DIR/DarkZurich`.
```
$DATA_DIR
├── DarkZurich
│   ├── rgb_anon
│   │   ├── train
│   │   ├── val
│   │   ├── val_ref
│   │   ├── test
│   │   ├── test_ref
│   ├── gt
│   │   ├── val
├── ...
```
</details>
<details>
<summary>Nighttime Driving</summary>
Download NighttimeDrivingTest.zip from here and extract it to `$DATA_DIR/NighttimeDrivingTest`.
```
$DATA_DIR
├── NighttimeDrivingTest
│   ├── leftImg8bit
│   │   ├── test
│   ├── gtCoarse_daytime_trainvaltest
│   │   ├── test
├── ...
```
</details>
<details>
<summary>BDD100k-night</summary>
Download 10k Images and Segmentation from here and extract them to `$DATA_DIR/bdd100k`.
```
$DATA_DIR
├── bdd100k
│   ├── images
│   │   ├── 10k
│   ├── labels
│   │   ├── sem_seg
├── ...
```
</details>
<details>
<summary>RobotCar for Segmentation</summary>
Download all data from here and save them to `$DATA_DIR/RobotCar`. As mentioned in the corresponding README.txt, the images must be downloaded from this link.
```
$DATA_DIR
├── RobotCar
│   ├── images
│   │   ├── dawn
│   │   ├── dusk
│   │   ├── night
│   │   ├── night-rain
│   │   ├── ...
│   ├── correspondence_data
│   │   ├── ...
│   ├── segmented_images
│   │   ├── training
│   │   ├── validation
│   │   ├── testing
├── ...
```
</details>
Alignment
<details>
<summary>MegaDepth</summary>
For training, we use the version provided by the D2-Net repo. Follow their instructions for downloading and preprocessing the dataset.
For testing, we use the split provided by RANSAC-Flow here.
The directories `MegaDepth_Train`, `MegaDepth_Train_Org`, and `Val` can be removed.
All in all, the folder structure should look as follows:
```
$DATA_DIR
├── MegaDepth
│   ├── Undistorted_SfM
│   │   ├── ...
│   ├── scene_info
│   │   ├── ...
│   ├── Test
│   │   ├── test1600Pairs
│   │   │   ├── ...
│   │   ├── test1600Pairs.csv
├── ...
```
</details>
<details>
<summary>RobotCar for Matching</summary>
We use the correspondence file provided by RANSAC-Flow here. If not already downloaded for segmentation, download the images from here.
```
$DATA_DIR
├── RobotCar
│   ├── images
│   │   ├── dawn
│   │   ├── dusk
│   │   ├── night
│   │   ├── night-rain
│   │   ├── ...
│   ├── test6511.csv
├── ...
```
</details>
Download the Pretrained Weights
The following pretrained weights are required for Refign. Save them to `./pretrained_models/`.
- UAWarpC checkpoint, download it here.
- ImageNet-pretrained MiT weights (`mit_b5.pth`), download them from the SegFormer repository.
- Cityscapes-pretrained SegFormer weights (`segformer.b5.1024x1024.city.160k.pth`), download them from the SegFormer repository.
Trained Models and Results
We provide trained models of both the UDA and alignment networks. To facilitate qualitative segmentation comparisons, validation set predictions of Refign can be downloaded directly. Starred models use Cityscapes-pretrained weights in the backbone; the others use ImageNet-pretrained weights.
UDA
Model | Task | Test Set | Test Score | Config | Checkpoint | Predictions |
---|---|---|---|---|---|---|
Refign-DAFormer | CityscapesβACDC | ACDC test | 65.5 mIoU | config | model | ACDC val |
Refign-HRDA* | CityscapesβACDC | ACDC test | 72.1 mIoU | config | model | ACDC val |
Refign-DAFormer | CityscapesβDark Zurich | Dark Zurich test | 56.2 mIoU | config | model | Dark Zurich val |
Refign-HRDA* | CityscapesβDark Zurich | Dark Zurich test | 63.9 mIoU | config | model | Dark Zurich val |
Refign-DAFormer | CityscapesβRobotCar | RobotCar Seg. test | 60.5 mIoU | config | model | RobotCar val |
Alignment
Model | Task | Test Set | Score | Config | Checkpoint |
---|---|---|---|---|---|
UAWarpC | MegaDepth Dense Matching | RobotCar Matching test | 36.8 PCK-5 | stage1, stage2 | model |
Refign Training
Make sure to first download the necessary pretrained weights. To train Refign on ACDC (single GPU, with AMP), use the following command:
python tools/run.py fit --config configs/cityscapes_acdc/refign_hrda_star.yaml --trainer.gpus 1 --trainer.precision 16
Similar config files are available for Dark Zurich and RobotCar.
We also provide the config files for reproducing the ablation study in `configs/cityscapes_acdc/ablations/`.
Refign Testing
Make sure to first download the necessary pretrained weights. To evaluate Refign on, e.g., the ACDC validation set, use the following command:
python tools/run.py test --config configs/cityscapes_acdc/refign_hrda_star.yaml --ckpt_path /path/to/trained/model --trainer.gpus 1
We also provide trained models, which can be downloaded from the link above. To evaluate them, simply provide them as the argument `--ckpt_path`.
To get test set scores for ACDC and Dark Zurich, predictions are evaluated on the respective evaluation servers: ACDC and Dark Zurich. To create and save test predictions for, e.g., ACDC, use this command:
python tools/run.py predict --config configs/cityscapes_acdc/refign_hrda_star.yaml --ckpt_path /path/to/trained/model --trainer.gpus 1
UAWarpC Training
Alignment training consists of two stages. To train stage 1, use:
python tools/run.py fit --config configs/megadepth/uawarpc_stage1.yaml --trainer.gpus 1 --trainer.precision 16
Afterwards, launch stage 2, providing the path of the last checkpoint of stage 1 as an argument:
python tools/run.py fit --config configs/megadepth/uawarpc_stage2.yaml --model.init_args.pretrained /path/to/last/ckpt/of/stage1 --trainer.gpus 1 --trainer.precision 16
Training of the alignment network takes several days on a single GPU.
UAWarpC Testing
We use a separate config file for evaluation to avoid the lengthy sampling of MegaDepth training data at that stage. Of course, the config file used for training can be used as well.
python tools/run.py test --config configs/megadepth/uawarpc_evalonly.yaml --ckpt_path /path/to/last/ckpt/of/stage2 --trainer.gpus 1
We also provide a pretrained model, which can be downloaded from the link above. To test it, simply provide it as the argument `--ckpt_path`.
Local Correlation
Local correlation is implemented through this custom CUDA extension. By default, the extension is built just-in-time (JIT). In case of problems with this mechanism, the extension can alternatively be pre-installed in the environment (see also the README of the linked repo):
pip install spatial-correlation-sampler
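To sanity-check the installed extension, a minimal usage sketch follows; the feature sizes and sampler parameters are arbitrary and not the values used by the alignment network:

```python
import torch
from spatial_correlation_sampler import SpatialCorrelationSampler

# Correlate each position in feat1 with a 9x9 neighborhood in feat2.
sampler = SpatialCorrelationSampler(kernel_size=1, patch_size=9, stride=1, padding=0, dilation_patch=1)

feat1 = torch.randn(1, 64, 32, 32, device="cuda")
feat2 = torch.randn(1, 64, 32, 32, device="cuda")
corr = sampler(feat1, feat2)
print(corr.shape)  # (1, 9, 9, 32, 32): one correlation map per displacement
```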
How to Add Refign to your Self-Training UDA Code
Check the `training_step` method in `models/segmentation_model.py`. You will need to implement similar logic to what is executed when the `use_refign` attribute is `True`. In particular, you also need the methods `align` and `refine`, located in the same file (and the full alignment network). Of course, the dataloader must also return a reference image for Refign to work.
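As a rough illustration of that logic, the sketch below produces a refined pseudo-label from a target (adverse) image and its reference (normal) image. All names (`align_net`, `warp`, the threshold `tau`) are placeholders, and the confidence-weighted blend is a simplification of the paper's adaptive label correction, not the exact implementation in `models/segmentation_model.py`:

```python
import torch
import torch.nn.functional as F

def warp(x, flow):
    """Bilinearly warp x with a dense flow field.
    Assumes flow holds normalized offsets in grid_sample convention, shaped (B, 2, H, W)."""
    b, _, h, w = x.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=x.device),
        torch.linspace(-1, 1, w, device=x.device),
        indexing="ij",
    )
    base_grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    grid = base_grid + flow.permute(0, 2, 3, 1)
    return F.grid_sample(x, grid, mode="bilinear", align_corners=True)

@torch.no_grad()
def refign_pseudo_label(model, align_net, target_img, ref_img, tau=0.9):
    """Produce a refined pseudo-label for self-training (illustrative only)."""
    # Segment both the adverse-condition target image and its normal-condition reference.
    target_prob = model(target_img).softmax(dim=1)        # (B, C, H, W)
    ref_prob = model(ref_img).softmax(dim=1)              # (B, C, H, W)

    # Step 1 (align): estimate flow from reference to target plus a per-pixel
    # matching confidence, then warp the reference prediction onto the target.
    flow, confidence = align_net(ref_img, target_img)     # (B, 2, H, W), (B, 1, H, W)
    warped_ref_prob = warp(ref_prob, flow)

    # Step 2 (refine): trust the warped reference only where matching is confident.
    refined_prob = confidence * warped_ref_prob + (1.0 - confidence) * target_prob

    # Threshold into a pseudo-label; low-confidence pixels become ignore (255).
    max_prob, pseudo_label = refined_prob.max(dim=1)
    pseudo_label[max_prob < tau] = 255
    return pseudo_label
```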
Citation
If you find this code useful in your research, please consider citing the paper:
```
@inproceedings{bruggemann2022refign,
  title={Refign: Align and Refine for Adaptation of Semantic Segmentation to Adverse Conditions},
  author={Bruggemann, David and Sakaridis, Christos and Truong, Prune and Van Gool, Luc},
  booktitle={WACV},
  year={2023}
}
```
License
This repository is released under the MIT license. However, care should be taken to adopt appropriate licensing for third-party code in this repository. Third-party code is marked accordingly.