
ECLIPSE (CVPR 2024)

ECLIPSE: Efficient Continual Learning in Panoptic Segmentation with Visual Prompt Tuning <br /> Beomyoung Kim<sup>1,2</sup>, Joonsang Yu<sup>1</sup>, Sung Ju Hwang<sup>2</sup><br />

<sup>1</sup> <sub>NAVER Cloud, ImageVision</sub><br /> <sup>2</sup> <sub>KAIST</sub><br />

Paper



Introduction

Panoptic segmentation, combining semantic and instance segmentation, stands as a cutting-edge computer vision task. Despite recent progress with deep learning models, the dynamic nature of real-world applications necessitates continual learning, where models adapt to new classes (plasticity) over time without forgetting old ones (catastrophic forgetting). Current continual segmentation methods often rely on distillation-based strategies such as knowledge distillation and pseudo-labeling, which are effective but increase training complexity and computational overhead. In this paper, we introduce a novel and efficient method for continual panoptic segmentation based on Visual Prompt Tuning, dubbed ECLIPSE. Our approach freezes the base model parameters and fine-tunes only a small set of prompt embeddings, addressing both catastrophic forgetting and plasticity while significantly reducing the number of trainable parameters. To mitigate inherent challenges such as error propagation and semantic drift in continual segmentation, we propose logit manipulation to effectively leverage common knowledge across the classes. Experiments on the ADE20K continual panoptic segmentation benchmark demonstrate the superiority of ECLIPSE, notably its robustness against catastrophic forgetting and its reasonable plasticity, achieving a new state of the art.
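
In short, everything learned at step t = 0 is frozen, and each incremental step trains only a small set of new prompt embeddings. The snippet below is a minimal PyTorch sketch of that idea, provided for illustration only; the class and method names (`PromptTunedSegmenter`, `add_step`) are hypothetical and do not correspond to the code in this repository, and the logit manipulation described above is omitted.

```python
import torch
import torch.nn as nn


class PromptTunedSegmenter(nn.Module):
    """Minimal sketch of visual prompt tuning for continual segmentation.

    A base segmenter trained at step t = 0 is kept frozen; each incremental
    step only adds a small set of trainable prompt embeddings for the novel
    classes. Names here are illustrative, not the actual ECLIPSE code.
    """

    def __init__(self, base_model: nn.Module, embed_dim: int = 256):
        super().__init__()
        self.base_model = base_model
        self.embed_dim = embed_dim
        self.step_prompts = nn.ParameterList()  # one prompt tensor per step

        # Freeze every parameter of the base model; only prompts added
        # later will receive gradients.
        for p in self.base_model.parameters():
            p.requires_grad = False

    def add_step(self, num_new_classes: int) -> nn.Parameter:
        """Register trainable prompt embeddings for a new incremental step."""
        # Prompts from earlier steps are frozen as well, so previously
        # learned classes are not overwritten during later steps.
        for prev in self.step_prompts:
            prev.requires_grad = False
        new_prompts = nn.Parameter(0.02 * torch.randn(num_new_classes, self.embed_dim))
        self.step_prompts.append(new_prompts)
        return new_prompts


# Usage sketch: at step t >= 1, only the new prompts are optimized.
# segmenter = PromptTunedSegmenter(base_model)
# new_prompts = segmenter.add_step(num_new_classes=5)   # e.g., the 100-5 setting
# optimizer = torch.optim.AdamW([new_prompts], lr=1e-4)
```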

Updates

2024-04-29: First commit. We release the official implementation of ECLIPSE.

Installation

Our implementation is based on CoMFormer and Mask2Former.

Please check the installation instructions and dataset preparation.

You can find our core implementation in this repository.

Quick Start

  1. Step t=0: Train the model on the base classes (you can skip this step if you use pre-trained weights).
  2. Step t≥1: Train the model on the novel classes with ECLIPSE (see the workflow sketch after the table below).
| Scenario | Script | Step-0 Weight | Final Weight |
| --- | --- | --- | --- |
| ADE20K-Panoptic 100-5 | `bash script/ade_ps/100_5.sh` | step0 | step10 |
| ADE20K-Panoptic 100-10 | `bash script/ade_ps/100_10.sh` | step0 | step5 |
| ADE20K-Panoptic 100-50 | `bash script/ade_ps/100_50.sh` | step0 | step1 |
| ADE20K-Panoptic 50-10 | `bash script/ade_ps/50_10.sh` | step0 | step10 |
| ADE20K-Panoptic 50-20 | `bash script/ade_ps/50_20.sh` | step0 | step5 |
| ADE20K-Panoptic 50-50 | `bash script/ade_ps/50_50.sh` | step0 | step2 |
| ADE20K-Semantic 100-5 | `bash script/ade_ss/100_5.sh` | step0 | step10 |
| ADE20K-Semantic 100-10 | `bash script/ade_ss/100_10.sh` | step0 | step5 |
| ADE20K-Semantic 100-50 | reproduce error | step0 | step1 |
| COCO-Panoptic 83-5 | `bash script/coco_ps/83_5.sh` | step0 | step10 |
| COCO-Panoptic 83-10 | `bash script/coco_ps/83_10.sh` | step0 | step5 |
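
As a rough illustration of the schedule these scripts follow, the hypothetical driver loop below walks through the ADE20K-Panoptic 100-5 setting (100 base classes at step 0, then 5 novel classes per step for 10 steps). The helpers `build_base_model`, `train_one_step`, and `save_checkpoint`, as well as the hyperparameters, are placeholders, not the repository's actual entry points; see the scripts under `script/` for the real commands and configurations.

```python
import torch
import torch.nn as nn

# Hypothetical schedule for the ADE20K-Panoptic 100-5 setting:
# 100 base classes at step t = 0, then 5 novel classes per step for 10 steps.
NUM_BASE_CLASSES = 100
CLASSES_PER_STEP = 5
NUM_STEPS = 10
EMBED_DIM = 256


def run_continual_training(build_base_model, train_one_step, save_checkpoint):
    """Placeholder driver; the real entry points live in the scripts above."""
    # Step t = 0: train the base model, or load the released step-0 weights.
    model = build_base_model(num_classes=NUM_BASE_CLASSES)
    for p in model.parameters():
        p.requires_grad = False  # the base model stays frozen from here on

    # Steps t = 1 ... 10: each step adds and trains only new prompt embeddings.
    prompts_per_step = []
    for t in range(1, NUM_STEPS + 1):
        new_prompts = nn.Parameter(0.02 * torch.randn(CLASSES_PER_STEP, EMBED_DIM))
        prompts_per_step.append(new_prompts)

        optimizer = torch.optim.AdamW([new_prompts], lr=1e-4)
        train_one_step(model, prompts_per_step, optimizer, step=t)
        save_checkpoint({"prompts": prompts_per_step}, f"step{t}.pth")
```
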
<div align="center"> <img src="https://github.com/clovaai/ECLIPSE/releases/download/assets/adps.png" width="100%"/> <br /> <br /> <img src="https://github.com/clovaai/ECLIPSE/releases/download/assets/cocops.png" width="100%"/> <br /> <br /> <img src="https://github.com/clovaai/ECLIPSE/releases/download/assets/adss.png" width="100%"/> </div>

How to Cite

@InProceedings{Kim_2024_CVPR,
    author    = {Kim, Beomyoung and Yu, Joonsang and Hwang, Sung Ju},
    title     = {ECLIPSE: Efficient Continual Learning in Panoptic Segmentation with Visual Prompt Tuning},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {3346-3356}
}

License

ECLIPSE
Copyright (c) 2024-present NAVER Cloud Corp.
CC BY-NC 4.0 (https://creativecommons.org/licenses/by-nc/4.0/)