RCIL

[CVPR2022] Representation Compensation Networks for Continual Semantic Segmentation<br> Chang-Bin Zhang<sup>1</sup>, Jia-Wen Xiao<sup>1</sup>, Xialei Liu<sup>1</sup>, Ying-Cong Chen<sup>2</sup>, Ming-Ming Cheng<sup>1</sup><br> <sup>1</sup> <sub>College of Computer Science, Nankai University</sub><br /> <sup>2</sup> <sub>The Hong Kong University of Science and Technology</sub><br />

Conference Paper


News

Method

<img width="1230" alt="Method overview" src="https://user-images.githubusercontent.com/35215543/162488465-73c56e73-8d5b-4406-941f-85497673c419.png">
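The figure above illustrates the representation-compensation idea: parallel branches during training that can be merged into a single operator for inference. As a rough illustration (not the authors' actual module), the toy sketch below shows why two parallel linear branches whose outputs are averaged can be fused into one equivalent operator at inference time, using plain matrix multiplies in place of convolutions; all variable names here are hypothetical:

```python
import numpy as np

# Toy stand-in for 1x1 convolutions: matrix multiplies on channel features.
rng = np.random.default_rng(0)
C = 4
x = rng.standard_normal((C, 10))       # features: C channels x 10 positions
W_old = rng.standard_normal((C, C))    # frozen branch (old knowledge)
W_new = rng.standard_normal((C, C))    # trainable branch (new classes)

# Training time: two parallel branches, outputs averaged.
y_train = 0.5 * (W_old @ x) + 0.5 * (W_new @ x)

# Inference time: by linearity, the two branches collapse into a single
# operator, so the fused model pays no extra inference cost.
W_fused = 0.5 * (W_old + W_new)
y_infer = W_fused @ x

assert np.allclose(y_train, y_infer)
```

The fusion step relies only on the branches being linear in the input, which is the same property that structural re-parameterization methods exploit.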

Update

Benchmark and Setting

There are two commonly used settings: disjoint and overlapped. The disjoint setting assumes that all future classes are known in advance, so images in the current training step contain no classes from future steps. The overlapped setting allows classes from future steps to appear in the current training images. We call each round of training on a newly added dataset a step. Formally, X-Y denotes the continual setting in our experiments, where X is the number of classes trained in the first step and each subsequent step adds a dataset containing Y new classes.

Several settings are reported in our paper. You can also try other custom settings.
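The X-Y notation above can be made concrete with a short sketch that computes the class-id groups for each step. The function name and the convention of starting foreground class ids at 1 (reserving 0 for background, as in Pascal VOC) are assumptions for illustration:

```python
# Sketch: class-id groups for each step of an X-Y continual setting.
def class_splits(num_classes, first, increment):
    """Return one list of class ids per training step."""
    splits = [list(range(1, first + 1))]  # step 0: the first X classes
    nxt = first + 1
    while nxt <= num_classes:
        splits.append(list(range(nxt, min(nxt + increment, num_classes + 1))))
        nxt += increment
    return splits

# 15-1 on the 20 VOC foreground classes:
# one initial step of 15 classes, then 5 steps of 1 class each.
print(class_splits(20, 15, 1))
```

For example, `class_splits(20, 5, 3)` yields the 5-3 setting: one step of 5 classes followed by five steps of 3 classes each.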

Performance

Pascal VOC 2012 (mIoU, %):

| Method | Pub. | 15-5 disjoint | 15-5 overlapped | 15-1 disjoint | 15-1 overlapped | 10-1 disjoint | 10-1 overlapped | 5-3 overlapped | 5-3 disjoint |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LWF | TPAMI 2017 | 54.9 | 55.0 | 5.3 | 5.5 | 4.3 | 4.8 | - | - |
| ILT | ICCVW 2019 | 58.9 | 61.3 | 7.9 | 9.2 | 5.4 | 5.5 | - | - |
| MiB | CVPR 2020 | 65.9 | 70.0 | 39.9 | 32.2 | 6.9 | 20.1 | - | - |
| SDR | CVPR 2021 | 67.3 | 70.1 | 48.7 | 39.5 | 14.3 | 25.1 | - | - |
| PLOP | CVPR 2021 | 64.3 | 70.1 | 46.5 | 54.6 | 8.4 | 30.5 | - | - |
| Ours | CVPR 2022 | 67.3 | 72.4 | 54.7 | 59.4 | 18.2 | 34.3 | 42.88 | - |
ADE20K (mIoU, %):

| Method | Pub. | 100-50 overlapped | 100-10 overlapped | 50-50 overlapped | 100-5 overlapped |
| --- | --- | --- | --- | --- | --- |
| ILT | ICCVW 2019 | 17.0 | 1.1 | 9.7 | 0.5 |
| MiB | CVPR 2020 | 32.8 | 29.2 | 29.3 | 25.9 |
| PLOP | CVPR 2021 | 32.9 | 31.6 | 30.4 | 28.7 |
| Ours | CVPR 2022 | 34.5 | 32.1 | 32.5 | 29.6 |
Cityscapes, domain continual learning (mIoU, %):

| Method | Pub. | 11-5 | 11-1 | 1-1 |
| --- | --- | --- | --- | --- |
| LWF | TPAMI 2017 | 59.7 | 57.3 | 33.0 |
| LWF-MC | CVPR 2017 | 58.7 | 57.0 | 31.4 |
| ILT | ICCVW 2019 | 59.1 | 57.8 | 30.1 |
| MiB | CVPR 2020 | 61.5 | 60.0 | 42.2 |
| PLOP | CVPR 2021 | 63.5 | 62.1 | 45.2 |
| Ours | CVPR 2022 | 64.3 | 63.0 | 48.9 |

Dataset Prepare

Environment

  1. conda install --yes --file requirements.txt (a newer PyTorch version should also work)
  2. Install inplace-abn
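The two steps above can be sketched as follows; the environment name and Python version are assumptions, and inplace-abn can alternatively be built from source per its own repository:

```shell
# Hypothetical environment name and Python version; adjust to your setup.
conda create -n rcil python=3.8 -y
conda activate rcil

# Step 1: install the pinned dependencies from the repo.
conda install --yes --file requirements.txt

# Step 2: install inplace-abn (In-Place Activated BatchNorm),
# which provides the normalization layers used by the backbone.
pip install inplace-abn
```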

Training

  1. Download the pretrained model from ResNet-101_iabn and place it in pretrained/
  2. We have prepared some training scripts in scripts/. You can train the model by
sh scripts/voc/rcil_10-1-overlap.sh

Inference

To run inference, simply add --test to the bash script, like

CUDA_VISIBLE_DEVICES=${GPU} python3 -m torch.distributed.launch --master_port ${PORT} --nproc_per_node=${NB_GPU} run.py --data xxx ... --test

Reference

If this work is useful to you, please cite our paper:

@inproceedings{zhang2022representation,
  title={Representation Compensation Networks for Continual Semantic Segmentation},
  author={Zhang, Chang-Bin and Xiao, Jia-Wen and Liu, Xialei and Chen, Ying-Cong and Cheng, Ming-Ming},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={7053--7064},
  year={2022}
}

Contact

If you have any questions about this work, please feel free to contact us (zhangchbin ^ mail.nankai.edu.cn or zhangchbin ^ gmail.com).

Thanks

This code is heavily borrowed from [MiB] and [PLOP].

Awesome Continual Segmentation

This is a collection of AWESOME things about continual semantic segmentation, including papers, code, demos, etc. Feel free to open a pull request and star the repo.

2022

2021

2020

2019