MUTR: A Unified Temporal Transformer for Multi-Modal Video Object Segmentation
Official implementation of 'Referred by Multi-Modality: A Unified Temporal Transformer for Video Object Segmentation'.
The paper has been accepted by AAAI 2024 🔥.
Introduction
We propose MUTR, a Multi-modal Unified Temporal transformer for Referring video object segmentation. For the first time within a unified framework, MUTR adopts a DETR-style transformer and is capable of segmenting video objects designated by either text or audio references. Specifically, we introduce two strategies to fully explore the temporal relations between videos and multi-modal signals: low-level temporal aggregation (MTA) and high-level temporal interaction (MTI). On Ref-YouTube-VOS and AVSBench with text and audio references respectively, MUTR achieves +4.2% and +4.2% J&F improvements over state-of-the-art methods, demonstrating the significance of our method for unified multi-modal VOS.
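As a rough illustration of this pipeline, the sketch below wires up the same high-level flow with standard PyTorch layers: the multi-modal reference first aggregates temporal visual cues from all frames (standing in for MTA), a DETR-style decoder then produces per-frame object embeddings, and those embeddings finally interact across frames (standing in for MTI). All module names, shapes, and layer choices here are illustrative assumptions, not the actual implementation in `models/`.

```python
import torch
import torch.nn as nn


class MUTRSketch(nn.Module):
    """Illustrative-only sketch of the MUTR data flow; not the real model."""

    def __init__(self, dim=256, num_queries=5):
        super().__init__()
        # Low-level temporal aggregation: the reference attends to visual
        # features from consecutive frames (plain cross-attention stands in for MTA).
        self.mta = nn.MultiheadAttention(dim, num_heads=8)
        # DETR-style decoder: learnable object queries attend to per-frame features.
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model=dim, nhead=8), num_layers=3)
        self.queries = nn.Embedding(num_queries, dim)
        # High-level temporal interaction: embeddings of the same object
        # communicate across frames (plain self-attention stands in for MTI).
        self.mti = nn.MultiheadAttention(dim, num_heads=8)

    def forward(self, frame_feats, ref_tokens):
        # frame_feats: (T, HW, C) flattened per-frame visual features
        # ref_tokens:  (L, C)     text or audio reference tokens
        T, HW, C = frame_feats.shape
        # MTA: enrich the reference with visual cues gathered over all frames.
        ref = ref_tokens.unsqueeze(1)                      # (L, 1, C)
        vis = frame_feats.reshape(T * HW, 1, C)            # (T*HW, 1, C)
        ref, _ = self.mta(ref, vis, vis)                   # (L, 1, C)
        # DETR-style decoding per frame, conditioned on the aggregated reference.
        q = self.queries.weight.unsqueeze(1).repeat(1, T, 1)      # (N, T, C)
        memory = torch.cat([frame_feats.transpose(0, 1),          # (HW, T, C)
                            ref.repeat(1, T, 1)], dim=0)          # (HW+L, T, C)
        obj = self.decoder(q, memory)                             # (N, T, C)
        # MTI: per-object embeddings interact along the temporal axis.
        obj = obj.transpose(0, 1)                          # (T, N, C)
        obj, _ = self.mti(obj, obj, obj)
        return obj                                         # per-frame object embeddings


if __name__ == "__main__":
    feats = torch.randn(4, 196, 256)       # 4 frames, 14x14 feature map, 256-d features
    ref = torch.randn(10, 256)             # 10 reference tokens (text or audio)
    print(MUTRSketch()(feats, ref).shape)  # torch.Size([4, 5, 256])
```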
<p align="center"><img src="docs/network.png" width="800"/></p>

Update
- TODO: Release the code and checkpoints on AV-VOS with audio reference.
- We release the code and checkpoints of MUTR on RVOS with language reference 🔥.
Requirements
We tested the code in the following environment; other versions may also be compatible:
- CUDA 11.1
- Python 3.7
- PyTorch 1.8.1
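To compare your local setup against the tested versions before installing, a small snippet like the following prints them (it only reads version information and changes nothing):

```python
# Print the local Python / PyTorch / CUDA versions to compare against the
# tested environment above (Python 3.7, PyTorch 1.8.1, CUDA 11.1).
import sys
import torch

print("Python :", sys.version.split()[0])
print("PyTorch:", torch.__version__)
print("CUDA   :", torch.version.cuda, "| GPU available:", torch.cuda.is_available())
```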
Installation
Please refer to install.md for installation.
Data Preparation
Please refer to data.md for data preparation.
After the organization, we expect the directory structure to be the following:
```
MUTR/
├── data/
│   ├── ref-youtube-vos/
│   └── ref-davis/
├── davis2017/
├── datasets/
├── models/
├── scripts/
├── tools/
├── util/
├── train.py
├── engine.py
├── inference_ytvos.py
├── inference_davis.py
├── opts.py
...
```
Get Started
Please see Ref-YouTube-VOS and Ref-DAVIS 2017 for details.
Model Zoo and Results
Note:
- `--backbone` denotes the different backbones (see here).
- `--backbone_pretrained` denotes the path of the backbone's pretrained weights (see here).

A minimal illustration of how these two options are declared and passed is sketched below.
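The snippet below is illustrative only: the real argument definitions live in `opts.py`, and the valid backbone names and default paths may differ from these placeholder values.

```python
# Illustrative only: see opts.py for the authoritative argument definitions.
import argparse

parser = argparse.ArgumentParser("MUTR arguments (illustrative subset)")
parser.add_argument("--backbone", type=str,
                    help="which visual backbone to use (see the tables below)")
parser.add_argument("--backbone_pretrained", type=str, default=None,
                    help="path to the backbone's pretrained weights")

# Placeholder values, not verified against the repository's actual naming.
args = parser.parse_args(["--backbone", "resnet50",
                          "--backbone_pretrained", "path/to/pretrained_weights.pth"])
print(args.backbone, args.backbone_pretrained)
```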
Ref-YouTube-VOS
To evaluate the results, please upload the zip file to the competition server.
Backbone | J&F | J | F | Model | Submission |
---|---|---|---|---|---|
ResNet-50 | 61.9 | 60.4 | 63.4 | model | link |
ResNet-101 | 63.6 | 61.8 | 65.4 | model | link |
Swin-L | 68.4 | 66.4 | 70.4 | model | link |
Video-Swin-T | 64.0 | 62.2 | 65.8 | model | link |
Video-Swin-S | 65.1 | 63.0 | 67.1 | model | link |
Video-Swin-B | 67.5 | 65.4 | 69.6 | model | link |
ConvNext-L | 66.7 | 64.8 | 68.7 | model | link |
ConvMAE-B | 66.9 | 64.7 | 69.1 | model | link |
Ref-DAVIS17
As described in the paper, we report the results of the models trained on Ref-YouTube-VOS without fine-tuning.
Backbone | J&F | J | F | Model |
---|---|---|---|---|
ResNet-50 | 65.3 | 62.4 | 68.2 | model |
ResNet-101 | 65.3 | 61.9 | 68.6 | model |
Swin-L | 68.0 | 64.8 | 71.3 | model |
Video-Swin-T | 66.5 | 63.0 | 70.0 | model |
Video-Swin-S | 66.1 | 62.6 | 69.8 | model |
Video-Swin-B | 66.4 | 62.8 | 70.0 | model |
ConvNext-L | 69.0 | 65.6 | 72.4 | model |
ConvMAE-B | 69.2 | 65.6 | 72.8 | model |
Acknowledgement
This repo is based on ReferFormer. We also refer to the repositories Deformable DETR and MTTR. Thanks for their wonderful work.
Citation
```bibtex
@inproceedings{yan2024referred,
  title={Referred by multi-modality: A unified temporal transformer for video object segmentation},
  author={Yan, Shilin and Zhang, Renrui and Guo, Ziyu and Chen, Wenchao and Zhang, Wei and Li, Hongyang and Qiao, Yu and Dong, Hao and He, Zhongjiang and Gao, Peng},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={38},
  number={6},
  pages={6449--6457},
  year={2024}
}
```
Contact
If you have any questions about this project, please feel free to contact tattoo.ysl@gmail.com.