Per-Clip Video Object Segmentation

by Kwanyong Park, Sanghyun Woo, Seoung Wug Oh, In So Kweon, and Joon-Young Lee

CVPR 2022

[arXiv] [PDF] [YouTube] [Poster]

Introduction

PCVOS Intro

Recently, memory-based approaches have shown promising results on semi-supervised video object segmentation. These methods predict object masks frame by frame, aided by a frequently updated memory of previous masks. Departing from this per-frame inference, we investigate an alternative perspective that treats video object segmentation as clip-wise mask propagation. In this per-clip inference scheme, we update the memory at a fixed interval and simultaneously process the set of consecutive frames (i.e., a clip) between memory updates. The scheme offers two potential benefits: an accuracy gain from clip-level optimization and an efficiency gain from parallel computation over multiple frames. To this end, we propose a new method tailored for per-clip inference, namely PCVOS.
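The scheduling difference between per-frame and per-clip inference can be sketched in a few lines. This is only an illustration of the schedule, not the actual PCVOS model code; `per_clip_schedule` is a hypothetical helper:

```python
def per_clip_schedule(num_frames, clip_length):
    """Return (clips, num_memory_updates) for a per-clip schedule.

    Frames inside each clip are processed together (and are therefore
    parallelizable), and the memory is updated once per clip instead
    of once per frame.
    """
    clips = [list(range(start, min(start + clip_length, num_frames)))
             for start in range(0, num_frames, clip_length)]
    memory_updates = len(clips)  # one memory update per clip
    return clips, memory_updates

# Per-frame inference is the special case clip_length == 1.
frames = 30
_, per_frame_updates = per_clip_schedule(frames, 1)
_, per_clip_updates = per_clip_schedule(frames, 5)
print(per_frame_updates)  # 30 memory updates
print(per_clip_updates)   # 6 memory updates
```

Longer clips mean fewer memory updates and more frames processed in parallel, which is the source of the speed/accuracy trade-off shown in the tables below.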

Results

The following tables summarize the results of PCVOS under different clip lengths. Inference speed (FPS) was measured on a single NVIDIA RTX A6000 GPU. We also provide a YouTube video for a visual comparison between PCVOS and other methods.

YouTube-VOS 2019 val

| Model | Clip Length | FPS | Mean | J Seen | F Seen | J Unseen | F Unseen | Pre-computed Results |
|-------|------------:|----:|-----:|-------:|-------:|---------:|---------:|----------------------|
| PCVOS | 5  | 11.5 | 84.6 | 82.6 | 87.3 | 80.0 | 88.3 | Google Drive |
| PCVOS | 10 | 24.4 | 84.1 | 82.3 | 87.0 | 79.5 | 87.5 | Google Drive |
| PCVOS | 15 | 30.7 | 83.6 | 81.9 | 86.4 | 79.1 | 87.1 | Google Drive |
| PCVOS | 25 | 33.8 | 83.0 | 81.4 | 85.8 | 78.6 | 86.2 | Google Drive |

YouTube-VOS 2018 val

| Model | Clip Length | FPS | Mean | J Seen | F Seen | J Unseen | F Unseen | Pre-computed Results |
|-------|------------:|----:|-----:|-------:|-------:|---------:|---------:|----------------------|
| PCVOS | 5  | 13.4 | 84.6 | 83.0 | 88.0 | 79.6 | 87.9 | Google Drive |
| PCVOS | 10 | 27.7 | 84.0 | 82.7 | 87.7 | 78.7 | 86.8 | Google Drive |
| PCVOS | 15 | 33.9 | 83.8 | 82.6 | 87.4 | 78.4 | 86.6 | Google Drive |
| PCVOS | 25 | 36.9 | 83.3 | 82.2 | 86.9 | 78.1 | 85.9 | Google Drive |

Reproducing the Results

Requirements

This repository is tested in the following environment:

Data preparation

Download the validation splits of YouTube-VOS 2018/2019 and place them under ./data/. You can either download them manually from the official website or use the download_datasets.py script provided in the STCN repository. The resulting folder structure should look like this:

```
PCVOS
├── ...
├── data
│   ├── YouTube
│   │   ├── all_frames
│   │   │   ├── valid_all_frames
│   │   ├── valid
│   ├── YouTube2018
│   │   ├── all_frames
│   │   │   ├── valid_all_frames
│   │   ├── valid
├── ...
```
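A quick sanity check of the layout above can save a failed run later. This is a hypothetical helper (not part of the repository); the paths simply mirror the tree shown here:

```python
from pathlib import Path

# Directories expected under the repository root, mirroring the tree above.
EXPECTED = [
    "data/YouTube/all_frames/valid_all_frames",
    "data/YouTube/valid",
    "data/YouTube2018/all_frames/valid_all_frames",
    "data/YouTube2018/valid",
]

def check_layout(root="."):
    """Return the list of expected directories that are missing under root."""
    return [p for p in EXPECTED if not (Path(root) / p).is_dir()]

if __name__ == "__main__":
    missing = check_layout()
    if missing:
        print("Missing directories:", ", ".join(missing))
    else:
        print("Data layout looks good.")
```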

Inference

Please download the pre-trained weights and place them under ./saves/. Then run the provided inference script (inference_pretrained_pcvos.py); it will produce predictions under the different clip lengths.
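To reproduce all rows of the tables above, one would invoke the script once per clip length. The sketch below only builds the command lines; the flag names (--model, --clip_length, --output) and the weights filename are assumptions for illustration, so check inference_pretrained_pcvos.py for the actual arguments:

```python
# Clip lengths evaluated in the results tables.
CLIP_LENGTHS = [5, 10, 15, 25]

def build_commands(weights="./saves/pcvos.pth"):
    """Build one hypothetical command line per clip length (not executed here)."""
    return [
        ["python", "inference_pretrained_pcvos.py",
         "--model", weights,
         "--clip_length", str(length),
         "--output", f"./results/clip_{length}"]
        for length in CLIP_LENGTHS
    ]

for cmd in build_commands():
    print(" ".join(cmd))
```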

Other Results

We also provide other pre-computed results.

Citation

If you find our work or code useful for your research, please cite our paper.

```
@inproceedings{park2022per,
  title={Per-Clip Video Object Segmentation},
  author={Park, Kwanyong and Woo, Sanghyun and Oh, Seoung Wug and Kweon, In So and Lee, Joon-Young},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={1352--1361},
  year={2022}
}
```

Acknowledgment

This repository builds on the following code bases. We thank all their contributors.

License

The source code is released under the GNU General Public License v3.0 (please refer here for details).