Towards Better Stability and Adaptability: Improve Online Self-Training for Model Adaptation in Semantic Segmentation (CVPR 2023)

This is a PyTorch implementation of DT-ST (CVPR 2023 highlight paper, top 2.5%).

Prerequisites

Step-by-step installation

conda create --name dtst -y python=3.6
conda activate dtst

# this installs the right pip and dependencies for the fresh python
conda install -y ipython pip

pip install ninja yacs cython matplotlib tqdm opencv-python imageio mmcv

# follow PyTorch installation in https://pytorch.org/get-started/locally/
# we give the instructions for CUDA 9.2
conda install pytorch==1.2.0 torchvision==0.4.0 cudatoolkit=9.2 -c pytorch
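
To confirm the environment matches the versions above, run a quick sanity check inside the dtst environment (a minimal sketch):

# env_check.py -- verify that the installed versions match the ones above
import torch
import torchvision

print("torch:", torch.__version__)              # expected: 1.2.0
print("torchvision:", torchvision.__version__)  # expected: 0.4.0
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))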

Getting started

Data:

The data folder should be structured as follows:

├── datasets/
│   ├── cityscapes/
│   │   ├── gtFine/
│   │   ├── leftImg8bit/
...
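
Cityscapes pairs every image in leftImg8bit with a label in gtFine through a shared file stem. A small sketch to verify the layout before training (the datasets/ root below is an assumption taken from the tree above):

# check_cityscapes.py -- verify every image has a matching gtFine label
# (a minimal sketch; the datasets/ root is an assumption from the tree above)
import glob
import os

root = "datasets/cityscapes"
images = glob.glob(os.path.join(root, "leftImg8bit", "*", "*", "*_leftImg8bit.png"))
missing = [img for img in images
           if not os.path.exists(
               img.replace("leftImg8bit", "gtFine", 1)
                  .replace("_leftImg8bit.png", "_gtFine_labelIds.png"))]
print(f"{len(images)} images found, {len(missing)} without labels")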

Pretrained models:

Download the pretrained source models and put the *.pth files into the pretrain folder.
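
Before launching training, you can peek inside a downloaded checkpoint to confirm it loads (a sketch; the key names stored inside the .pth files are assumptions and may differ):

# inspect_ckpt.py -- confirm a pretrained checkpoint loads on CPU
# (a sketch; the keys stored inside the .pth file are assumptions)
import torch

ckpt = torch.load("pretrain/G2C_model_iter020000.pth", map_location="cpu")
if isinstance(ckpt, dict):
    print("top-level keys:", list(ckpt.keys()))
    state = ckpt.get("model", ckpt.get("state_dict", ckpt))
    print("entries in the state dict:", len(state))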

Train

G2C (GTA5 → Cityscapes) model adaptation

python train_TCR_DTU.py -cfg configs/deeplabv2_r101_dtst.yaml OUTPUT_DIR results/dtst/ resume pretrain/G2C_model_iter020000.pth

S2C (SYNTHIA → Cityscapes) model adaptation

python train_TCR_DTU.py -cfg configs/deeplabv2_r101_dtst_synthia.yaml OUTPUT_DIR results/synthia_dtst/ resume pretrain/S2C_model_iter020000.pth
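
The trailing OUTPUT_DIR ... resume ... pairs are yacs-style key/value overrides applied on top of the YAML named by -cfg. A minimal sketch of that mechanism (the config nodes here are illustrative; the repo's real config defines many more defaults):

# yacs_override_sketch.py -- how trailing key/value pairs override a config
# (illustrative only; the two nodes below stand in for the repo's full config)
from yacs.config import CfgNode as CN

cfg = CN()
cfg.OUTPUT_DIR = "results/debug/"
cfg.resume = ""

# the training script presumably merges the YAML first (cfg.merge_from_file),
# then applies the trailing command-line pairs:
cfg.merge_from_list(["OUTPUT_DIR", "results/dtst/",
                     "resume", "pretrain/G2C_model_iter020000.pth"])
print(cfg.OUTPUT_DIR, cfg.resume)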

Evaluate

python test.py -cfg configs/deeplabv2_r101_dtst.yaml resume results/dtst_g2c/model_iter020000.pth

Our trained models and training logs are available via DTST-training-logs-and-weights.
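
test.py reports per-class IoU and the mean IoU (mIoU) over the 19 Cityscapes trainId classes. The standard confusion-matrix computation looks roughly like this (an illustrative sketch, not the repo's exact test.py code):

# miou_sketch.py -- standard confusion-matrix mIoU for Cityscapes (19 classes)
# (an illustrative sketch, not the repo's exact test.py implementation)
import numpy as np

NUM_CLASSES = 19   # Cityscapes trainIds
IGNORE = 255       # ignore label

def update_hist(hist, pred, gt):
    # pred and gt are flat integer arrays of trainIds
    mask = gt != IGNORE
    hist += np.bincount(NUM_CLASSES * gt[mask] + pred[mask],
                        minlength=NUM_CLASSES ** 2).reshape(NUM_CLASSES, NUM_CLASSES)
    return hist

hist = np.zeros((NUM_CLASSES, NUM_CLASSES), dtype=np.int64)
# accumulate over the validation set: hist = update_hist(hist, pred, gt)
iou = np.diag(hist) / np.maximum(hist.sum(0) + hist.sum(1) - np.diag(hist), 1)
print("per-class IoU:", iou, "mIoU:", iou.mean())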

Visualization results

To reproduce the visualization results, test the trained model with color jitter; the rendered results are available via Visualization.
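
A rough sketch of the two pieces involved: jittering the test input with torchvision's ColorJitter and colorizing the predicted trainId map with the Cityscapes palette (the jitter strengths and file names are assumptions; model inference is omitted):

# vis_sketch.py -- color-jitter an input and colorize a predicted label map
# (a sketch; jitter strengths and file names are assumptions, model omitted)
import numpy as np
from PIL import Image
from torchvision import transforms

jitter = transforms.ColorJitter(brightness=0.5, contrast=0.5, saturation=0.5, hue=0.1)
img = jitter(Image.open("input.png").convert("RGB"))  # jittered test input
img.save("input_jittered.png")

# Cityscapes trainId palette (road, sidewalk, building, ..., bicycle)
PALETTE = np.array([
    [128,  64, 128], [244,  35, 232], [ 70,  70,  70], [102, 102, 156],
    [190, 153, 153], [153, 153, 153], [250, 170,  30], [220, 220,   0],
    [107, 142,  35], [152, 251, 152], [ 70, 130, 180], [220,  20,  60],
    [255,   0,   0], [  0,   0, 142], [  0,   0,  70], [  0,  60, 100],
    [  0,  80, 100], [  0,   0, 230], [119,  11,  32]], dtype=np.uint8)

pred = np.zeros((512, 1024), dtype=np.int64)  # stand-in for the model's argmax output
Image.fromarray(PALETTE[pred]).save("pred_color.png")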

Acknowledgments

Some code is adapted from FADA, SAC, and DSU. We thank the authors for their excellent projects.

Citation

If you find this code useful, please consider citing:

@inproceedings{zhao2023towards,
  title={Towards Better Stability and Adaptability: Improve Online Self-Training for Model Adaptation in Semantic Segmentation},
  author={Zhao, Dong and Wang, Shuang and Zang, Qi and Quan, Dou and Ye, Xiutiao and Jiao, Licheng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={11733--11743},
  year={2023}
}