Efficient Active Domain Adaptation for Semantic Segmentation by Selecting Information-rich Superpixels (ECCV 2024 Oral)
This repository reproduces the main results of our proposed superpixel-level method for active domain adaptation in semantic segmentation (ADA_superpixel) on the VIPER to Cityscapes-Seq benchmark. Experiments on SYNTHIA-Seq to Cityscapes-Seq can be run by slightly modifying the dataset and settings. Note that we use DACS as the UDA method here for simplicity, while the DAFormer implementation uses the mmsegmentation framework.
Install & Requirements
The code has been tested with PyTorch 1.8.0 and Python 3.8. Please refer to requirements.txt for detailed information.
To install the Python packages:
pip install -r requirements.txt
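If you prefer an isolated environment, a minimal setup matching the tested versions might look like the following (the environment name `ada_superpixel` is our own choice, not prescribed by the repo; the CUDA version in the PyTorch index URL is an assumption and should match your driver):

```shell
# Create and activate a clean environment with the tested Python version
conda create -n ada_superpixel python=3.8 -y
conda activate ada_superpixel

# Install the tested PyTorch version (adjust the CUDA variant for your system)
pip install torch==1.8.0 torchvision==0.9.0

# Install the remaining dependencies pinned by the repository
pip install -r requirements.txt
```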
Download Pretrained Weights
For the segmentation model initialization, we start with a model pretrained on ImageNet: Download
Data preparation
You need to download the GTA5 and Cityscapes datasets.
Your directory tree should look like this:
./ADA_superpixel/data
├── cityscapes
│   ├── gtFine
│   │   ├── train
│   │   └── val
│   └── leftImg8bit
│       ├── train
│       └── val
└── GTA5
    ├── images
    └── labels
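To catch path mistakes before launching training, a small helper (our own, not part of the repo) can verify that the expected sub-directories exist under the data root shown above:

```python
from pathlib import Path

# Expected sub-directories, mirroring the tree shown above.
EXPECTED = [
    "cityscapes/gtFine/train",
    "cityscapes/gtFine/val",
    "cityscapes/leftImg8bit/train",
    "cityscapes/leftImg8bit/val",
    "GTA5/images",
    "GTA5/labels",
]

def missing_dirs(data_root):
    """Return the expected sub-directories that do not exist under data_root."""
    root = Path(data_root)
    return [p for p in EXPECTED if not (root / p).is_dir()]
```

Running `missing_dirs("./ADA_superpixel/data")` should return an empty list once the datasets are in place.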
Superpixel Generation
We use SSN to generate superpixels for the Cityscapes data, following the SSN training procedure to train SSN on the source domain (GTA5 or SYNTHIA). The superpixel results are then saved at /data/XXXX-1/ssn-pytorch-patch-1/SSN_city.
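The exact selection criterion lives in the labeling scripts below; as intuition for how superpixels and a per-pixel score combine, here is a minimal sketch (our own simplification, not the repo's method) that scores each superpixel by the mean per-pixel uncertainty inside it and picks the top-k for annotation:

```python
from collections import defaultdict

def select_superpixels(sp_map, uncertainty, k):
    """Pick the k superpixel ids with the highest mean per-pixel uncertainty.

    sp_map:      2D list of superpixel ids, one per pixel.
    uncertainty: 2D list of the same shape with a per-pixel score
                 (e.g. prediction entropy).
    """
    total = defaultdict(float)
    count = defaultdict(int)
    for sp_row, u_row in zip(sp_map, uncertainty):
        for sp_id, u in zip(sp_row, u_row):
            total[sp_id] += u
            count[sp_id] += 1
    # Rank superpixels by mean uncertainty, highest first.
    mean = {sp_id: total[sp_id] / count[sp_id] for sp_id in total}
    return sorted(mean, key=mean.get, reverse=True)[:k]
```

Labeling whole superpixels instead of individual pixels is what makes the annotation budget efficient: one click can label a coherent region.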
Labeling Phase 1
# get active label Y1
bash ADA_superpixel/exp/Active_label/Labeling_phase_1/script/train.sh
Training Target-base
# use active label Y1 to train Target-base
bash ADA_superpixel/exp/Training/Target_base/script/train.sh
Labeling Phase 2
# get active label Y2
bash ADA_superpixel/exp/Active_label/Labeling_phase_2/script/train.sh
Training Final model
# use active label Y2 to train Final model
bash ADA_superpixel/exp/Training/Target_base/script/train.sh