<div align="center">

SePiCo: Semantic-Guided Pixel Contrast for Domain Adaptive Semantic Segmentation (TPAMI 2023)

Binhui Xie, Shuang Li, Mingjia Li, Chi Harold Liu, Gao Huang, and Guoren Wang

Paper   Project  

</div>

Update on 2023/11: SePiCo is selected as an :trophy: <span style="color:red">ESI Highly Cited Paper</span>!

Update on 2023/02/15: Code release for Cityscapes → Dark Zurich.

Update on 2023/01/14: 🥳 We are happy to announce that SePiCo has been accepted for publication in an upcoming issue of TPAMI.

Update on 2022/09/24: All checkpoints are available.

Update on 2022/09/04: Code release.

Update on 2022/04/20: The arXiv version of SePiCo is available.


Overview

In this work, we propose Semantic-Guided Pixel Contrast (SePiCo), a novel one-stage adaptation framework that highlights the semantic concepts of individual pixels to promote the learning of a class-discriminative and class-balanced pixel embedding space across domains, ultimately boosting the performance of self-training methods.
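For intuition, below is a minimal, self-contained PyTorch sketch of the core idea; it is an illustrative toy, not the official SePiCo loss, and all names (pixel_prototype_contrast, tau, etc.) are ours. Each pixel embedding is compared against one centroid per class, pulled toward the centroid of its own class and pushed away from the rest:

import torch
import torch.nn.functional as F

def pixel_prototype_contrast(feats, labels, prototypes, tau=0.1, ignore_index=255):
    """Toy semantic-guided pixel contrast (illustrative only, not the official loss).

    feats:      (N, D) pixel embeddings
    labels:     (N,)   per-pixel class ids (ignore_index pixels are skipped)
    prototypes: (C, D) one centroid per class, e.g. running class means
    """
    valid = labels != ignore_index
    feats = F.normalize(feats[valid], dim=1)    # unit-length pixel embeddings
    protos = F.normalize(prototypes, dim=1)     # unit-length class centroids
    logits = feats @ protos.t() / tau           # (N_valid, C) scaled cosine similarities
    # cross-entropy against each pixel's own class pulls it toward its centroid
    # and pushes it away from the other C-1 centroids (InfoNCE with centroid keys)
    return F.cross_entropy(logits, labels[valid])

# toy usage: 6 pixels, 4-dim embeddings, 3 classes (one pixel is ignored)
loss = pixel_prototype_contrast(torch.randn(6, 4),
                                torch.tensor([0, 1, 2, 0, 255, 1]),
                                torch.randn(3, 4))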

<img src="resources/uda_results.png" width=50% height=50%> <div align="right"> <b><a href="#overview">↥</a></b> </div>

Installation

This code is implemented with Python 3.8.5 and PyTorch 1.7.1 on CUDA 11.0.

To try out this project, it is recommended to set up a virtual environment first:

# create and activate the environment
conda create --name sepico -y python=3.8.5
conda activate sepico

# install the right pip and dependencies for the fresh python
conda install -y ipython pip

Then, the dependencies can be installed by:

# install required packages
pip install -r requirements.txt

# install mmcv-full, this command compiles mmcv locally and may take some time
pip install mmcv-full==1.3.7  # requires other packages to be installed first

Alternatively, mmcv-full can be installed faster from the official pre-built packages, for instance:

# another way to install mmcv-full, faster
pip install mmcv-full==1.3.7 -f https://download.openmmlab.com/mmcv/dist/cu110/torch1.7.0/index.html

The environment is now fully prepared.
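Optionally, run a quick sanity check that the key packages import and report the pinned versions (the exact output will vary with your setup):

# optional sanity check for the installation
python -c "import torch; print(torch.__version__, torch.version.cuda)"
python -c "import mmcv; print(mmcv.__version__)"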

<div align="right"> <b><a href="#overview">↥</a></b> </div>

Dataset Preparation

Download Datasets

Setup Datasets

Symlink the required datasets:

ln -s /path/to/gta5/dataset data/gta
ln -s /path/to/cityscapes/dataset data/cityscapes
ln -s /path/to/dark_zurich/dataset data/dark_zurich
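Note: the symlinks above assume a data directory at the repository root (see the structure below); if it does not exist yet, create it first:

# create the data folder once, before adding the symlinks
mkdir -p data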

Perform preprocessing to convert the label IDs to train IDs and gather dataset statistics:

python tools/convert_datasets/gta.py data/gta --nproc 8
python tools/convert_datasets/cityscapes.py data/cityscapes --nproc 8

Ultimately, the data structure should look like this:

SePiCo
├── ...
├── data
│   ├── cityscapes
│   │   ├── gtFine
│   │   ├── leftImg8bit
│   ├── dark_zurich
│   │   ├── corresp
│   │   ├── gt
│   │   ├── rgb_anon
│   ├── gta
│   │   ├── images
│   │   ├── labels
├── ...
<div align="right"> <b><a href="#overview">↥</a></b> </div>

Model Zoo

We provide pretrained models for all domain adaptive semantic segmentation tasks below through Google Drive and Baidu Netdisk (access code: pico).

GTAV → Cityscapes (DeepLab-v2 based)

| variants | model name | mIoU | checkpoint download |
| :-: | :-: | :-: | :-: |
| DistCL | sepico_distcl_gta2city_dlv2.pth | 61.0 | Google / Baidu (acc: pico) |
| BankCL | sepico_bankcl_gta2city_dlv2.pth | 59.8 | Google / Baidu (acc: pico) |
| ProtoCL | sepico_protocl_gta2city_dlv2.pth | 58.8 | Google / Baidu (acc: pico) |

GTAV → Cityscapes (DAFormer based)

| variants | model name | mIoU | checkpoint download |
| :-: | :-: | :-: | :-: |
| DistCL | sepico_distcl_gta2city_daformer.pth | 70.3 | Google / Baidu (acc: pico) |
| BankCL | sepico_bankcl_gta2city_daformer.pth | 68.7 | Google / Baidu (acc: pico) |
| ProtoCL | sepico_protocl_gta2city_daformer.pth | 68.5 | Google / Baidu (acc: pico) |

SYNTHIA → Cityscapes (DeepLab-v2 based)

| variants | model name | mIoU | checkpoint download |
| :-: | :-: | :-: | :-: |
| DistCL | sepico_distcl_syn2city_dlv2.pth | 58.1 | Google / Baidu (acc: pico) |
| BankCL | sepico_bankcl_syn2city_dlv2.pth | 57.4 | Google / Baidu (acc: pico) |
| ProtoCL | sepico_protocl_syn2city_dlv2.pth | 56.8 | Google / Baidu (acc: pico) |

SYNTHIA → Cityscapes (DAFormer based)

| variants | model name | mIoU | checkpoint download |
| :-: | :-: | :-: | :-: |
| DistCL | sepico_distcl_syn2city_daformer.pth | 64.3 | Google / Baidu (acc: pico) |
| BankCL | sepico_bankcl_syn2city_daformer.pth | 63.3 | Google / Baidu (acc: pico) |
| ProtoCL | sepico_protocl_syn2city_daformer.pth | 62.9 | Google / Baidu (acc: pico) |

Cityscapes → Dark Zurich (DeepLab-v2 based)

| variants | model name | mIoU | checkpoint download |
| :-: | :-: | :-: | :-: |
| DistCL | sepico_distcl_city2dark_dlv2.pth | 45.4 | Google / Baidu (acc: pico) |
| BankCL | sepico_bankcl_city2dark_dlv2.pth | 44.1 | Google / Baidu (acc: pico) |
| ProtoCL | sepico_protocl_city2dark_dlv2.pth | 42.6 | Google / Baidu (acc: pico) |

Cityscapes → Dark Zurich (DAFormer based)

| variants | model name | mIoU | checkpoint download |
| :-: | :-: | :-: | :-: |
| DistCL | sepico_distcl_city2dark_daformer.pth | 54.2 | Google / Baidu (acc: pico) |
| BankCL | sepico_bankcl_city2dark_daformer.pth | 53.3 | Google / Baidu (acc: pico) |
| ProtoCL | sepico_protocl_city2dark_daformer.pth | 52.7 | Google / Baidu (acc: pico) |

Our trained model (sepico_distcl_city2dark_daformer.pth) is also tested for generalization on the Nighttime Driving and BDD100k-night test sets.

| Method | model name | Dark Zurich-test | Nighttime Driving | BDD100k-night | checkpoint download |
| :-: | :-: | :-: | :-: | :-: | :-: |
| SePiCo | sepico_distcl_city2dark_daformer.pth | 54.2 | 56.9 | 40.6 | Google / Baidu (acc: pico) |
<div align="right"> <b><a href="#overview">↥</a></b> </div>

SePiCo Evaluation

Evaluation on Cityscapes

To evaluate the pretrained models on Cityscapes, please run as follows:

python -m tools.test /path/to/config /path/to/checkpoint --eval mIoU
<details> <summary>Example</summary>

For example, if you download sepico_distcl_gta2city_dlv2.pth along with its config file sepico_distcl_gta2city_dlv2.json into the folder ./checkpoints/sepico_distcl_gta2city_dlv2/, the evaluation command is:

python -m tools.test ./checkpoints/sepico_distcl_gta2city_dlv2/sepico_distcl_gta2city_dlv2.json ./checkpoints/sepico_distcl_gta2city_dlv2/sepico_distcl_gta2city_dlv2.pth --eval mIoU
</details>

Evaluation on Dark Zurich

To evaluate on Dark Zurich, please get label predictions as follows and submit them to the official test server.

Get label predictions for the test set locally:

python -m tools.test /path/to/config /path/to/checkpoint --format-only --eval-options imgfile_prefix=/path/to/labelTrainIds
<details> <summary>Example</summary>

For example, if you download sepico_distcl_city2dark_daformer.pth along with its config file sepico_distcl_city2dark_daformer.json into the folder ./checkpoints/sepico_distcl_city2dark_daformer/, the command is:

python -m tools.test ./checkpoints/sepico_distcl_city2dark_daformer/sepico_distcl_city2dark_daformer.json ./checkpoints/sepico_distcl_city2dark_daformer/sepico_distcl_city2dark_daformer.pth  --format-only --eval-options imgfile_prefix=dark_test/distcl_daformer/labelTrainIds
</details>

Note that the test server only accepts submissions with the following directory structure:

submit.zip
├── confidence
├── labelTrainIds
├── labelTrainIds_invalid

So we need to construct the confidence and labelTrainIds_invalid directories by hand (they are not needed for SePiCo evaluation).

Our practice is listed below for reference (see the example above for the directory names):

cd dark_test/distcl_daformer
cp -r labelTrainIds labelTrainIds_invalid
cp -r labelTrainIds confidence
zip -q -r sepico_distcl_city2dark_daformer.zip labelTrainIds labelTrainIds_invalid confidence
# Now submit sepico_distcl_city2dark_daformer.zip to the test server for results.
<div align="right"> <b><a href="#overview">↥</a></b> </div>

SePiCo Training

To begin with, download SegFormer's official MiT-B5 weights (i.e., mit_b5.pth), pretrained on ImageNet-1k, from here and put the file into a new folder ./pretrained.
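For instance, assuming the repository root as the working directory:

# create the folder for pretrained backbone weights
mkdir -p pretrained
# after downloading, the file should sit at: pretrained/mit_b5.pth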

The training entry point is run_experiments.py. To examine the settings for a specific task, take a look at experiments.py for more details. Generally, training is launched as:

python run_experiments.py --exp <exp_id>

Tasks 1~6 are run on GTAV → Cityscapes, and the mapping between <exp_id> and tasks is:

| <exp_id> | variant | backbone | feature |
| :-: | :-: | :-: | :-: |
| 1 | DistCL | ResNet-101 | layer-4 |
| 2 | BankCL | ResNet-101 | layer-4 |
| 3 | ProtoCL | ResNet-101 | layer-4 |
| 4 | DistCL | MiT-B5 | all-fusion |
| 5 | BankCL | MiT-B5 | all-fusion |
| 6 | ProtoCL | MiT-B5 | all-fusion |
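For example, the following trains the DistCL variant with the MiT-B5 backbone (all-fusion feature) on GTAV → Cityscapes:

# GTAV → Cityscapes, DistCL + MiT-B5 (exp 4 in the table above)
python run_experiments.py --exp 4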

Tasks 7~8 are run on Cityscapes → Dark Zurich, and the mapping between <exp_id> and tasks is:

| <exp_id> | variant | backbone | feature |
| :-: | :-: | :-: | :-: |
| 7 | DistCL | ResNet-101 | layer-4 |
| 8 | DistCL | MiT-B5 | all-fusion |

After training, the models can be tested following SePiCo Evaluation. Note that the training results are located in ./work_dirs. The config filename should look like 220827_1906_dlv2_proj_r101v1c_sepico_DistCL-reg-w1.0-start-iter3000-tau100.0-l3-w1.0_rcs0.01_cpl_self_adamw_6e-05_pmT_poly10warm_1x2_40k_gta2cs_seed76_4cc9a.json, and the model file has the suffix .pth.
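Putting it together, a finished run can be evaluated straight from its work directory; the run and checkpoint names below are placeholders for the files actually produced under ./work_dirs:

# evaluate a trained model (replace <run_name> and <checkpoint> with the real filenames)
python -m tools.test ./work_dirs/<run_name>/<run_name>.json ./work_dirs/<run_name>/<checkpoint>.pth --eval mIoU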

<div align="right"> <b><a href="#overview">↥</a></b> </div>

Tips on Code Understanding

<div align="right"> <b><a href="#overview">↥</a></b> </div>

Acknowledgments

This project is based on the following open-source projects. We thank their authors for making the source code publicly available.

<div align="right"> <b><a href="#overview">↥</a></b> </div>

Citation

If you find our work helpful, please star🌟 this repo and cite📑 our paper. Thanks for your support!

@article{xie2023sepico,
  title={Sepico: Semantic-guided pixel contrast for domain adaptive semantic segmentation},
  author={Xie, Binhui and Li, Shuang and Li, Mingjia and Liu, Chi Harold and Huang, Gao and Wang, Guoren},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2023},
  publisher={IEEE}
}
<div align="right"> <b><a href="#overview">↥</a></b> </div>

Contact

For help and issues associated with SePiCo, or for reporting a bug, please open a GitHub issue, or feel free to contact binhuixie@bit.edu.cn.

Misc

↳ Stargazers, thank you for your support!

Stargazers repo roster for @BIT-DA/SePiCo

↳ Forkers, thank you for your support!

Forkers repo roster for @BIT-DA/SePiCo

<!-- ### &#8627; Star History <div align="center"> [![Star History Chart](https://api.star-history.com/svg?repos=BIT-DA/SePiCo&type=Date)](https://star-history.com/#BIT-DA/SePiCo&Date) </div> --> <div align="right"> <b><a href="#overview">↥</a></b> </div>