
MeViS: A Large-scale Benchmark for Video Segmentation with Motion Expressions


🏠[Project page]   📄[arXiv]   📄[PDF]   🔥[Dataset Download]   🔥[Evaluation Server]

This repository contains the code for the ICCV 2023 paper:

MeViS: A Large-scale Benchmark for Video Segmentation with Motion Expressions
Henghui Ding, Chang Liu, Shuting He, Xudong Jiang, Chen Change Loy
ICCV 2023

<table border=1 frame=void> <tr> <td><img src="https://github.com/henghuiding/MeViS/blob/page/GIF/bird.gif" width="245"></td> <td><img src="https://github.com/henghuiding/MeViS/blob/page/GIF/Cat.gif" width="245"></td> <td><img src="https://github.com/henghuiding/MeViS/blob/page/GIF/coin.gif" width="245"></td> </tr> </table>

Abstract

This work strives for motion-expression-guided video segmentation, which focuses on segmenting objects in video content based on a sentence describing the motion of the objects. Existing referring video object segmentation datasets downplay the importance of motion in video content for language-guided video object segmentation. To investigate the feasibility of using motion expressions to ground and segment objects in videos, we propose a large-scale dataset called MeViS, which contains numerous motion expressions to indicate target objects in complex environments. The goal of the MeViS benchmark is to provide a platform that enables the development of effective language-guided video segmentation algorithms that leverage motion expressions as a primary cue for object segmentation in complex video scenes.

<div align="center"> <img src="https://github.com/henghuiding/MeViS/blob/page/static/DemoImages/teaser.png?raw=true" width="100%" height="100%"/> </div> <p style="text-align:justify; text-justify:inter-ideograph;width:100%">Figure 1. Examples of video clips from <b>M</b>otion <b>e</b>xpressions <b>Vi</b>deo <b>S</b>egmentation (<b>MeViS</b>) are provided to illustrate the dataset's nature and complexity. <font color="#FF6403">The expressions in MeViS primarily focus on motion attributes and the referred target objects that cannot be identified by examining a single frame solely</font>. For instance, the first example features three parrots with similar appearances, and the target object is identified as <i>"The bird flying away"</i>. This object can only be recognized by capturing its motion throughout the video.</p> <table border="0.6"> <div align="center"> <caption><b>TABLE 1. Scale comparison between MeViS and existing language-guided video segmentation datasets. </div> <tbody> <tr> <th align="right" bgcolor="BBBBBB">Dataset</th> <th align="center" bgcolor="BBBBBB">Pub.&Year</th> <th align="center" bgcolor="BBBBBB">Videos</th> <th align="center" bgcolor="BBBBBB">Object</th> <th align="center" bgcolor="BBBBBB">Expression</th> <th align="center" bgcolor="BBBBBB">Mask</th> <th align="center" bgcolor="BBBBBB">Obj/Video</th> <th align="center" bgcolor="BBBBBB">Obj/Expn</th> <th align="center" bgcolor="BBBBBB">Target</th> </tr> <tr> <td align="right"><a href="https://kgavrilyuk.github.io/publication/actor_action/" target="_blank">A2D&nbsp;Sentence</a></td> <td align="center">CVPR&nbsp;2018</td> <td align="center">3,782</td> <td align="center">4,825</td> <td align="center">6,656</td> <td align="center">58k</td> <td align="center">1.28</td> <td align="center">1</td> <td align="center">Actor</td> </tr> <tr> <td align="right" bgcolor="ECECEC"><a href="https://www.mpi-inf.mpg.de/departments/computer-vision-and-machine-learning/research/video-segmentation/video-object-segmentation-with-language-referring-expressions" target="_blank">DAVIS17-RVOS</a></td> <td align="center" bgcolor="ECECEC">ACCV&nbsp;2018</td> <td align="center" bgcolor="ECECEC">90</td> <td align="center" bgcolor="ECECEC">205</td> <td align="center" bgcolor="ECECEC">205</td> <td align="center" bgcolor="ECECEC">13.5k</td> <td align="center" bgcolor="ECECEC">2.27</td> <td align="center" bgcolor="ECECEC">1</td> <td align="center" bgcolor="ECECEC">Object</td> </tr> <tr> <td align="right"><a href="https://youtube-vos.org/dataset/rvos/" target="_blank">ReferYoutubeVOS</a></td> <td align="center">ECCV&nbsp;2020</td> <td align="center">3,978</td> <td align="center">7,451</td> <td align="center">15,009</td> <td align="center">131k</td> <td align="center">1.86</td> <td align="center">1</td> <td align="center">Object</td> </tr> <tr> <td align="right" bgcolor="E5E5E5"><b>MeViS (ours)</b></td> <td align="center" bgcolor="E5E5E5"><b>ICCV&nbsp;2023</b></td> <td align="center" bgcolor="E5E5E5"><b>2,006</b></td> <td align="center" bgcolor="E5E5E5"><b>8,171</b></td> <td align="center" bgcolor="E5E5E5"><b>28,570</b></td> <td align="center" bgcolor="E5E5E5"><b>443k</b></td> <td align="center" bgcolor="E5E5E5"><b>4.28</b></td> <td align="center" bgcolor="E5E5E5"><b>1.59</b></td> <td align="center" bgcolor="E5E5E5"><b>Object(s)</b></td> </tr> </tbody> <colgroup> <col> <col> <col> <col> <col> <col> <col> <col> <col> </colgroup> </table>

MeViS Dataset Download

⬇️ Download the dataset from here ☁️.

Dataset Split

Online Evaluation

Please submit your results on the Val set to the online evaluation server (CodaLab).

It is strongly suggested that you first evaluate your model locally on the Val<sup>u</sup> set before submitting your results on the Val set to the online evaluation system.

File Structure

The dataset follows a structure similar to Refer-YouTube-VOS. Each split consists of three parts: JPEGImages, which holds the frame images; meta_expressions.json, which provides the referring expressions and video metadata; and mask_dict.json, which contains the ground-truth object masks. The ground-truth segmentation masks are stored in COCO RLE format, and the expressions are organized similarly to Refer-YouTube-VOS (a minimal mask-decoding sketch is shown after the directory tree below).

Please note that while annotations for all frames are provided for the Train and Val<sup>u</sup> sets, the Val set only provides frame images and referring expressions for inference.

mevis
├── train                       // Split Train
│   ├── JPEGImages
│   │   ├── <video #1  >
│   │   ├── <video #2  >
│   │   └── <video #...>
│   │
│   ├── mask_dict.json
│   └── meta_expressions.json
│
├── valid_u                     // Split Val^u
│   ├── JPEGImages
│   │   └── <video ...>
│   │
│   ├── mask_dict.json
│   └── meta_expressions.json
│
└── valid                       // Split Val
    ├── JPEGImages
    │   └── <video ...>
    │
    └── meta_expressions.json
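Since the ground-truth masks are stored as COCO RLE, they can be decoded with pycocotools. Below is a minimal sketch, assuming mask_dict.json maps an annotation ID to a list of per-frame RLE dictionaries (with None where the object is absent); please verify the exact keys against the downloaded files.

import json

from pycocotools import mask as mask_utils  # pip install pycocotools

# Assumed path; adjust to wherever the Train split was extracted.
with open("mevis/train/mask_dict.json") as f:
    mask_dict = json.load(f)

# Assumption: each entry is a list of per-frame COCO RLE dicts,
# with None for frames where the object does not appear.
anno_id = next(iter(mask_dict))
for frame_idx, rle in enumerate(mask_dict[anno_id]):
    if rle is None:
        continue
    if isinstance(rle["counts"], str):
        rle["counts"] = rle["counts"].encode("utf-8")
    binary_mask = mask_utils.decode(rle)  # (H, W) uint8 array of 0/1
    print(frame_idx, binary_mask.shape, int(binary_mask.sum()))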

Method Code Installation

Please see INSTALL.md

Inference

1. Val<sup>u</sup> set

Obtain the output masks of the Val<sup>u</sup> set:

python train_net_lmpm.py \
    --config-file configs/lmpm_SWIN_bs8.yaml \
    --num-gpus 8 --dist-url auto --eval-only \
    MODEL.WEIGHTS [path_to_weights] \
    OUTPUT_DIR [output_dir]

Obtain the J&F results on the Val<sup>u</sup> set:

python tools/eval_mevis.py
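For reference, J is the region similarity (Jaccard index / IoU) between predicted and ground-truth masks and F is the contour accuracy; J&F is their mean. The function below only illustrates the J term and is not a substitute for the official tools/eval_mevis.py, which also computes F and aggregates scores over frames and expressions.

import numpy as np

def jaccard(pred: np.ndarray, gt: np.ndarray) -> float:
    """Region similarity J (IoU) between two binary masks of the same shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        # Both masks empty: counted as a perfect match, following the DAVIS convention.
        return 1.0
    return float(np.logical_and(pred, gt).sum() / union)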

2. Val set

Obtain the output masks of the Val set for CodaLab online evaluation:

python train_net_lmpm.py \
    --config-file configs/lmpm_SWIN_bs8.yaml \
    --num-gpus 8 --dist-url auto --eval-only \
    MODEL.WEIGHTS [path_to_weights] \
    OUTPUT_DIR [output_dir] DATASETS.TEST '("mevis_test",)'

CodaLab Evaluation Submission Guideline

The submission format should be a .zip file containing the predicted .png results of the Val set (for the current competition stage).

You can use the following command to prepare the .zip submission file:

cd [output_dir]
zip -r ../xxx.zip *

A submission example named sample_submission_valid.zip can be found on CodaLab.

sample_submission_valid.zip       // .zip file, which directly packages 140 val video folders
├── 0ab4afe7fb46                  // video folder name
│   ├── 0                         // expression_id folder name
│   │   ├── 00000.png             // .png files
│   │   ├── 00001.png
│   │   └── ....
│   │
│   ├── 1
│   │   ├── 00000.png
│   │   └── ....
│   │
│   └── ....
│
├── 0fea0cb75a25
│   ├── 0
│   │   ├── 00000.png
│   │   └── ....
│   │
│   └── ....
│
└── ....
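If your model produces binary masks in memory, the helper below is one hypothetical way to write them into the video/expression_id/frame.png layout shown above (the function and its arguments are illustrative, not part of the released code; check sample_submission_valid.zip on CodaLab for the exact pixel format expected).

import os

import numpy as np
from PIL import Image

def save_prediction(output_dir, video, exp_id, frame_names, masks):
    """Write per-frame binary masks as PNGs under output_dir/video/exp_id/.

    `masks` is assumed to have shape (num_frames, H, W) with values in {0, 1}.
    """
    exp_dir = os.path.join(output_dir, video, exp_id)
    os.makedirs(exp_dir, exist_ok=True)
    for name, mask in zip(frame_names, masks):
        # Saved as 0/255 so the PNGs are also viewable in an image viewer.
        Image.fromarray((mask > 0).astype(np.uint8) * 255).save(
            os.path.join(exp_dir, f"{name}.png"))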

Training

First, download the backbone weights (model_final_86143f.pkl) and convert them using the script:

wget https://dl.fbaipublicfiles.com/maskformer/mask2former/coco/instance/maskformer2_swin_tiny_bs16_50ep/model_final_86143f.pkl
python tools/process_ckpt.py

Then start training:

python train_net_lmpm.py \
    --config-file configs/lmpm_SWIN_bs8.yaml \
    --num-gpus 8 --dist-url auto \
    MODEL.WEIGHTS [path_to_weights] \
    OUTPUT_DIR [output_dir]

Note: We also support training ReferFormer by providing ReferFormer_dataset.py

Models

Our results on the Val<sup>u</sup> and Val sets of the MeViS dataset.

<table border="0.6"> <tbody> <tr> <th rowspan="2" align="center" bgcolor="BBBBBB">Backbone</th> <th colspan="3" align="center" bgcolor="BBBBBB">Val<sup>u</sup></th> <th colspan="3" align="center" bgcolor="BBBBBB">Val</th> </tr> <tr> <td align="center" bgcolor="E5E5E5">J&F</td> <td align="center" bgcolor="E5E5E5">J</td> <td align="center" bgcolor="E5E5E5">F</td> <td align="center" bgcolor="E5E5E5">J&F</td> <td align="center" bgcolor="E5E5E5">J</td> <td align="center" bgcolor="E5E5E5">F</td> </tr> <tr> <td align="center" bgcolor="E5E5E5">Swin-Tiny & RoBERTa</td> <td align="center" bgcolor="E5E5E5">40.23</td> <td align="center" bgcolor="E5E5E5">36.51</td> <td align="center" bgcolor="E5E5E5">43.90</td> <td align="center" bgcolor="E5E5E5">37.21</td> <td align="center" bgcolor="E5E5E5">34.25</td> <td align="center" bgcolor="E5E5E5">40.17</td> </tr> </tbody> <colgroup> <col> <col> <col> <col> <col> <col> <col> <col> <col> </colgroup> </table>

☁️ Google Drive

Acknowledgement

This project is based on VITA, GRES, Mask2Former, and VLT. Many thanks to the authors for their great work!

BibTeX

Please consider citing MeViS if it helps your research.

@inproceedings{MeViS,
  title={{MeViS}: A Large-scale Benchmark for Video Segmentation with Motion Expressions},
  author={Ding, Henghui and Liu, Chang and He, Shuting and Jiang, Xudong and Loy, Chen Change},
  booktitle={ICCV},
  year={2023}
}
@inproceedings{GRES,
  title={{GRES}: Generalized Referring Expression Segmentation},
  author={Liu, Chang and Ding, Henghui and Jiang, Xudong},
  booktitle={CVPR},
  year={2023}
}
@article{VLT,
  title={{VLT}: Vision-language transformer and query generation for referring segmentation},
  author={Ding, Henghui and Liu, Chang and Wang, Suchen and Jiang, Xudong},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2023},
  publisher={IEEE}
}

The majority of the videos in MeViS are from MOSE: Complex Video Object Segmentation Dataset.

@inproceedings{MOSE,
  title={{MOSE}: A New Dataset for Video Object Segmentation in Complex Scenes},
  author={Ding, Henghui and Liu, Chang and He, Shuting and Jiang, Xudong and Torr, Philip HS and Bai, Song},
  booktitle={ICCV},
  year={2023}
}

MeViS is licensed under a CC BY-NC-SA 4.0 License. The data of MeViS is released for non-commercial research purposes only.