
<br /> <p align="center"> <h1 align="center">Tube-Link: A Flexible Cross Tube Framework for Universal Video Segmentation</h1> <p align="center"> ICCV, 2023 <br /> <a href="https://lxtgh.github.io/"><strong>Xiangtai Li</strong></a> · <a href="https://yuanhaobo.me/"><strong>Haobo Yuan</strong></a> · <a href="https://zhangwenwei.cn/"><strong>Wenwei Zhang</strong></a> · <a href="https://sites.google.com/view/guangliangcheng"><strong>Guangliang Cheng</strong></a> <br /> <a href="https://oceanpang.github.io/"><strong>Jiangmiao Pang</strong></a> · <a href="https://www.mmlab-ntu.com/person/ccloy/"><strong>Chen Change Loy*</strong></a> </p> <p align="center"> <a href='https://arxiv.org/pdf/2303.12782'> <img src='https://img.shields.io/badge/Paper-PDF-green?style=flat&logo=arXiv&logoColor=green' alt='arXiv PDF'> </a> <a href='' style='padding-left: 0.5rem;'> <img src='https://img.shields.io/badge/Project-Page-blue?style=flat&logo=Google%20chrome&logoColor=blue' alt='Project Page'> </a> </p> <br />

Universal Video Segmentation Model for VSS, VPS, and VIS

[Teaser figure: Tube-Link overview]

News

[Paper] [CODE]

Features

$\color{#2F6EBA}{Universal\ Video\ Segmentation\ Model}$

$\color{#2F6EBA}{Explore\ the\ Cross-Tube\ Relation}$

$\color{#2F6EBA}{Strong\ Performance}$

Dataset

See Dataset.md

Install

See Install.md

Training, Evaluation, and Models

See Train.md

Visualization Results

[VIS] Youtube-VIS 2019

<details open> <summary>Demo</summary>

vis_demo_1

vis_demo_2

</details>

[VPS] VIP-Seg

<details open> <summary>Demo</summary>

vps_demo_1

vps_demo_2

</details>

[VSS] VSPW

<details open> <summary>Demo</summary>

vss_demo

</details>

[VPS] KITTI-STEP

<details open> <summary>Demo</summary>

vps_demo_3

</details>

Citation

If you find Tube-Link and its codebase useful for your research, please consider citing Tube-Link:


```bibtex
@inproceedings{li2023tube,
  title={Tube-link: A flexible cross tube baseline for universal video segmentation},
  author={Li, Xiangtai and Yuan, Haobo and Zhang, Wenwei and Cheng, Guangliang and Pang, Jiangmiao and Loy, Chen Change},
  booktitle={ICCV},
  year={2023}
}

@inproceedings{li2022videoknet,
  title={Video k-net: A simple, strong, and unified baseline for video segmentation},
  author={Li, Xiangtai and Zhang, Wenwei and Pang, Jiangmiao and Chen, Kai and Cheng, Guangliang and Tong, Yunhai and Loy, Chen Change},
  booktitle={CVPR},
  year={2022}
}
```

License

MIT