[ECCV 2022] Domain Adaptive Video Segmentation via Temporal Pseudo Supervision

[Paper] [Video Demo]

Highlights

<p align="center"> <img src="./demo.gif" width="800"> </p>

Abstract

Video semantic segmentation has achieved great progress under the supervision of large amounts of labelled training data. However, domain adaptive video segmentation, which can mitigate data labelling constraints by adapting from a labelled source domain toward an unlabelled target domain, is largely neglected. We design temporal pseudo supervision (TPS), a simple and effective method that explores the idea of consistency training for learning effective representations from unlabelled target videos. Unlike traditional consistency training that builds consistency in spatial space, we explore consistency training in spatiotemporal space by enforcing model consistency across augmented video frames, which helps learn from more diverse target data. Specifically, we design cross-frame pseudo labelling to provide pseudo supervision from previous video frames while learning from the augmented current video frames. The cross-frame pseudo labelling encourages the network to produce high-certainty predictions, which facilitates consistency training with cross-frame augmentation. Extensive experiments over multiple public datasets show that TPS is simpler to implement, much more stable to train, and achieves superior video segmentation accuracy as compared with the state-of-the-art.
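To make the idea concrete, below is a minimal PyTorch-style sketch of cross-frame pseudo labelling, written for illustration only (it is not the authors' implementation). It assumes a segmentation `model` that maps a frame to per-pixel logits, a precomputed backward optical `flow` from the current frame to the previous one, and a photometric `augment` transform; the `warp` helper is a hypothetical utility that resamples the previous-frame prediction into the current frame.

```python
# Illustrative sketch of cross-frame pseudo labelling (not the official TPS code).
import torch
import torch.nn.functional as F

def warp(x, flow):
    """Resample x (B, C, H, W) with a backward flow (B, 2, H, W) given in pixels."""
    _, _, h, w = x.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(x.device)   # (2, H, W), (x, y) coords
    grid = grid.unsqueeze(0) + flow                             # shift by the flow field
    grid[:, 0] = 2.0 * grid[:, 0] / (w - 1) - 1.0               # normalise to [-1, 1]
    grid[:, 1] = 2.0 * grid[:, 1] / (h - 1) - 1.0
    return F.grid_sample(x, grid.permute(0, 2, 3, 1), align_corners=True)

def tps_loss(model, frame_prev, frame_curr, flow, augment):
    """Pseudo-supervise the augmented current frame with the warped previous frame."""
    with torch.no_grad():
        logits_prev = model(frame_prev)          # predict on the previous frame
        logits_warp = warp(logits_prev, flow)    # propagate prediction to the current frame
        pseudo = logits_warp.argmax(dim=1)       # cross-frame pseudo labels (B, H, W)
    logits_aug = model(augment(frame_curr))      # predict on the augmented current frame
    return F.cross_entropy(logits_aug, pseudo)   # consistency via pseudo supervision
```

In words: the previous-frame prediction is warped to the current frame, converted to hard pseudo labels, and used to supervise the prediction on an augmented view of the current frame, which is the spatiotemporal consistency described in the abstract.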

Main Results

SYNTHIA-Seq => Cityscapes-Seq

| Methods  | road | side. | buil. | pole | light | sign | vege. | sky  | per. | rider | car  | mIoU |
|----------|------|-------|-------|------|-------|------|-------|------|------|-------|------|------|
| Source   | 56.3 | 26.6  | 75.6  | 25.5 | 5.7   | 15.6 | 71.0  | 58.5 | 41.7 | 17.1  | 27.9 | 38.3 |
| DA-VSN   | 89.4 | 31.0  | 77.4  | 26.1 | 9.1   | 20.4 | 75.4  | 74.6 | 42.9 | 16.1  | 82.4 | 49.5 |
| PixMatch | 90.2 | 49.9  | 75.1  | 23.1 | 17.4  | 34.2 | 67.1  | 49.9 | 55.8 | 14.0  | 84.3 | 51.0 |
| TPS      | 91.2 | 53.7  | 74.9  | 24.6 | 17.9  | 39.3 | 68.1  | 59.7 | 57.2 | 20.3  | 84.5 | 53.8 |

VIPER => Cityscapes-Seq

| Methods  | road | side. | buil. | fence | light | sign | vege. | terr. | sky  | per. | car  | truck | bus  | motor | bike | mIoU |
|----------|------|-------|-------|-------|-------|------|-------|-------|------|------|------|-------|------|-------|------|------|
| Source   | 56.7 | 18.7  | 78.7  | 6.0   | 22.0  | 15.6 | 81.6  | 18.3  | 80.4 | 59.9 | 66.3 | 4.5   | 16.8 | 20.4  | 10.3 | 37.1 |
| PixMatch | 79.4 | 26.1  | 84.6  | 16.6  | 28.7  | 23.0 | 85.0  | 30.1  | 83.7 | 58.6 | 75.8 | 34.2  | 45.7 | 16.6  | 12.4 | 46.7 |
| DA-VSN   | 86.8 | 36.7  | 83.5  | 22.9  | 30.2  | 27.7 | 83.6  | 26.7  | 80.3 | 60.0 | 79.1 | 20.3  | 47.2 | 21.2  | 11.4 | 47.8 |
| TPS      | 82.4 | 36.9  | 79.5  | 9.0   | 26.3  | 29.4 | 78.5  | 28.2  | 81.8 | 61.2 | 80.2 | 39.8  | 40.3 | 28.5  | 31.7 | 48.9 |

Note: PixMatch is reproduced by replacing its image segmentation backbone with a video segmentation one.

Installation

1. Create the conda environment:
```bash
conda create -n TPS python=3.6
conda activate TPS
conda install -c menpo opencv
pip install torch==1.2.0 torchvision==0.4.0
```
2. Clone the ADVENT repo:
```bash
git clone https://github.com/valeoai/ADVENT
pip install -e ./ADVENT
```
3. Clone the current repo:
```bash
git clone https://github.com/xing0047/TPS.git
pip install -r ./TPS/requirements.txt
```
4. Build the resample2d dependency:
```bash
python ./TPS/tps/utils/resample2d_package/setup.py build
python ./TPS/tps/utils/resample2d_package/setup.py install
```
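
As an optional sanity check (a suggestion, not part of the original instructions), you can confirm that the environment sees the intended PyTorch build and a GPU before moving on:
```bash
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```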

Data Preparation

1. Cityscapes-Seq
```
TPS/data/Cityscapes/
TPS/data/Cityscapes/leftImg8bit_sequence/
TPS/data/Cityscapes/gtFine/
```
2. VIPER
```
TPS/data/Viper/
TPS/data/Viper/train/img/
TPS/data/Viper/train/cls/
```
3. Synthia-Seq
```
TPS/data/SynthiaSeq/
TPS/data/SynthiaSeq/SEQS-04-DAWN/
```
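
One possible way to arrange already-downloaded copies of the datasets into the layout above is with symbolic links; the source paths below are placeholders:
```bash
# Hypothetical example: link existing dataset copies into the expected layout.
mkdir -p TPS/data
ln -s /path/to/Cityscapes  TPS/data/Cityscapes
ln -s /path/to/Viper       TPS/data/Viper
ln -s /path/to/SynthiaSeq  TPS/data/SynthiaSeq
```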

Pretrained Models

Download here and put them under pretrained_models.

Optical Flow Estimation

For quick preparation, please download the estimated optical flow of all datasets here.
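
If the downloaded flow files use the standard Middlebury `.flo` format (an assumption; check the files after downloading, since other formats such as `.npy` would need a different reader), a short helper like the sketch below can load one file into a per-pixel displacement array.

```python
# Hedged sketch: read one Middlebury-format .flo file into an (H, W, 2) array.
import numpy as np

def read_flo(path):
    with open(path, "rb") as f:
        magic = np.fromfile(f, np.float32, count=1)[0]
        if magic != 202021.25:                      # magic number of the .flo format
            raise ValueError(f"{path} is not a valid .flo file")
        w = int(np.fromfile(f, np.int32, count=1)[0])
        h = int(np.fromfile(f, np.int32, count=1)[0])
        data = np.fromfile(f, np.float32, count=2 * w * h)
    return data.reshape(h, w, 2)                    # per-pixel (dx, dy) displacement
```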

Train and Test

Train:
```bash
cd tps/scripts
CUDA_VISIBLE_DEVICES=0 python train.py --cfg configs/tps_syn2city.yml
CUDA_VISIBLE_DEVICES=0 python train.py --cfg configs/tps_viper2city.yml
```

Test:
```bash
cd tps/scripts
CUDA_VISIBLE_DEVICES=1 python test.py --cfg configs/tps_syn2city.yml
CUDA_VISIBLE_DEVICES=1 python test.py --cfg configs/tps_viper2city.yml
```

Acknowledgement

This codebase borrows heavily from DA-VSN.

Contact

If you have any questions, feel free to contact: xing0047@e.ntu.edu.sg or dayan.guan@outlook.com.