
VITA: Video Instance Segmentation via Object Token Association (NeurIPS 2022)

Miran Heo<sup>*</sup>, Sukjun Hwang<sup>*</sup>, Seoung Wug Oh, Joon-Young Lee, Seon Joo Kim (<sup>*</sup>equal contribution)

[arXiv] [BibTeX]

<div align="center"> <img src="vita_teaser.png" width="100%" height="100%"/> </div><br/>

Updates

Installation

See installation instructions.

Getting Started

We provide a script, train_net_vita.py, that can train all the configs provided in VITA.

To train a model with "train_net_vita.py" on VIS, first set up the corresponding datasets following Preparing Datasets for VITA.

Then run the following with COCO pretrained weights from the Model Zoo:

```sh
python train_net_vita.py --num-gpus 8 \
  --config-file configs/youtubevis_2019/vita_R50_bs8.yaml \
  MODEL.WEIGHTS vita_r50_coco.pth
```
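Note that trailing "KEY VALUE" pairs such as MODEL.WEIGHTS vita_r50_coco.pth are config overrides in the Detectron2 launcher style. As a rough illustration (the function below is a hypothetical sketch, not VITA's actual parser), such pairs map onto nested config entries like so:

```python
# Minimal sketch of how Detectron2-style launchers interpret trailing
# "KEY VALUE" pairs (e.g. "MODEL.WEIGHTS vita_r50_coco.pth") as nested
# config overrides. Illustrative only; not VITA's actual API.

def parse_opts(opts):
    """Turn a flat [KEY, VALUE, KEY, VALUE, ...] list into a nested dict."""
    cfg = {}
    for key, value in zip(opts[0::2], opts[1::2]):
        node = cfg
        *parents, leaf = key.split(".")
        for parent in parents:
            node = node.setdefault(parent, {})
        node[leaf] = value
    return cfg

print(parse_opts(["MODEL.WEIGHTS", "vita_r50_coco.pth"]))
# {'MODEL': {'WEIGHTS': 'vita_r50_coco.pth'}}
```

Any dotted config key can be overridden this way without editing the YAML file.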

To evaluate a model's performance, use

```sh
python train_net_vita.py \
  --config-file configs/youtubevis_2019/vita_R50_bs8.yaml \
  --eval-only MODEL.WEIGHTS /path/to/checkpoint_file
```
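If you want to evaluate several checkpoints in a row, it can help to assemble the command programmatically. The helper below is a hypothetical sketch (the checkpoint path stays a placeholder); it only builds the argument list as it would be passed to subprocess:

```python
# Illustrative only: build the evaluation command as an argument list,
# e.g. for use with subprocess.run(). The checkpoint path is a placeholder.

def eval_command(config_file, checkpoint):
    """Return the argv list for an --eval-only run of train_net_vita.py."""
    return [
        "python", "train_net_vita.py",
        "--config-file", config_file,
        "--eval-only",
        "MODEL.WEIGHTS", checkpoint,
    ]

cmd = eval_command("configs/youtubevis_2019/vita_R50_bs8.yaml",
                   "/path/to/checkpoint_file")
```

Passing each list to subprocess.run(cmd) then launches one evaluation per checkpoint.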

<a name="ModelZoo"></a>Model Zoo

Pretrained weights on COCO

| Name | R-50 | R-101 | Swin-L |
| :---: | :---: | :---: | :---: |
| VITA | model | model | model |

YouTubeVIS-2019

| Name | Backbone | AP | AP50 | AP75 | AR1 | AR10 | Download |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| VITA | R-50 | 49.8 | 72.6 | 54.5 | 49.4 | 61.0 | model |
| VITA | Swin-L | 63.0 | 86.9 | 67.9 | 56.3 | 68.1 | model |

YouTubeVIS-2021

| Name | Backbone | AP | AP50 | AP75 | AR1 | AR10 | Download |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| VITA | R-50 | 45.7 | 67.4 | 49.5 | 40.9 | 53.6 | model |
| VITA | Swin-L | 57.5 | 80.6 | 61.0 | 47.7 | 62.6 | model |

OVIS

| Name | Backbone | AP | AP50 | AP75 | AR1 | AR10 | Download |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| VITA | R-50 | 19.6 | 41.2 | 17.4 | 11.7 | 26.0 | model |
| VITA | Swin-L | 27.7 | 51.9 | 24.9 | 14.9 | 33.0 | model |
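As a quick summary of the tables above, the snippet below transcribes the AP numbers from the Model Zoo (values copied verbatim from the tables; the dict layout itself is just for illustration) and compares backbones per benchmark:

```python
# AP numbers transcribed from the Model Zoo tables above.
model_zoo_ap = {
    "YouTubeVIS-2019": {"R-50": 49.8, "Swin-L": 63.0},
    "YouTubeVIS-2021": {"R-50": 45.7, "Swin-L": 57.5},
    "OVIS":            {"R-50": 19.6, "Swin-L": 27.7},
}

# Report the stronger backbone on each benchmark.
for dataset, results in model_zoo_ap.items():
    best = max(results, key=results.get)
    print(f"{dataset}: best backbone {best} (AP {results[best]})")
```

On every benchmark the Swin-L backbone leads, with the largest absolute gap (13.2 AP) on YouTubeVIS-2019.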

License

The majority of VITA is licensed under the Apache-2.0 License. However, portions of the project are available under separate license terms: Detectron2 (Apache-2.0 License), IFC (Apache-2.0 License), Mask2Former (MIT License), and Deformable-DETR (Apache-2.0 License).

<a name="CitingVITA"></a>Citing VITA

If you use VITA in your research or wish to refer to the baseline results published in the Model Zoo, please use the following BibTeX entry.

```BibTeX
@inproceedings{VITA,
  title={VITA: Video Instance Segmentation via Object Token Association},
  author={Heo, Miran and Hwang, Sukjun and Oh, Seoung Wug and Lee, Joon-Young and Kim, Seon Joo},
  booktitle={Advances in Neural Information Processing Systems},
  year={2022}
}
```

Acknowledgement

Our code is largely based on Detectron2, IFC, Mask2Former, and Deformable DETR. We are truly grateful for their excellent work.