
Language-Bridged Spatial-Temporal Interaction for Referring Video Object Segmentation

Language-Bridged Spatial-Temporal Interaction for Referring Video Object Segmentation, <br> Zihan Ding, Tianrui Hui, Junshi Huang, Xiaoming Wei, Jizhong Han, and Si Liu <br> CVPR 2022 (arXiv 2206.03789) (Demo)

Abstract

Referring video object segmentation aims to predict foreground labels for objects referred to by natural language expressions in videos. Previous methods either depend on 3D ConvNets or incorporate additional 2D ConvNets as encoders to extract mixed spatial-temporal features. However, these methods suffer from spatial misalignment or false distractors due to the delayed and implicit spatial-temporal interaction occurring in the decoding phase. To tackle these limitations, we propose a Language-Bridged Duplex Transfer (LBDT) module which utilizes language as an intermediary bridge to accomplish explicit and adaptive spatial-temporal interaction earlier, in the encoding phase. Concretely, cross-modal attention is performed among the temporal encoder, referring words and the spatial encoder to aggregate and transfer language-relevant motion and appearance information. In addition, we propose a Bilateral Channel Activation (BCA) module in the decoding phase to further denoise and highlight spatial-temporal-consistent features via channel-wise activation. Extensive experiments show that our method achieves new state-of-the-art performance on four popular benchmarks, with 6.8% and 6.9% absolute AP gains on A2D Sentences and J-HMDB Sentences respectively, while consuming around 7× less computational overhead.
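To make the bridging idea concrete, below is a minimal PyTorch sketch of one direction of language-bridged transfer: the words first attend to the temporal stream to aggregate language-relevant motion, and the spatial stream then attends to those motion-aware words. This is an illustrative, assumption-laden sketch (module and variable names such as `LanguageBridgedTransfer` are hypothetical), not the repository's implementation; the reverse (appearance-to-temporal) direction of the duplex transfer would be symmetric and is omitted.

```python
import torch
import torch.nn as nn

class LanguageBridgedTransfer(nn.Module):
    """Hypothetical sketch of one direction of language-bridged duplex
    transfer: words pool language-relevant motion from the temporal
    stream, then inject it into the spatial stream. Names and shapes
    are illustrative assumptions, not the paper's exact design."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        # Step 1: words attend to temporal features (aggregation).
        self.aggregate = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Step 2: spatial features attend to motion-aware words (transfer).
        self.transfer = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, words, temporal_feat, spatial_feat):
        # words:         (B, L, C)     word embeddings of the expression
        # temporal_feat: (B, T*H*W, C) flattened temporal-encoder features
        # spatial_feat:  (B, H*W, C)   flattened spatial-encoder features
        motion_words, _ = self.aggregate(words, temporal_feat, temporal_feat)
        motion_cues, _ = self.transfer(spatial_feat, motion_words, motion_words)
        return spatial_feat + motion_cues  # residual injection of motion cues

# Smoke test with toy shapes.
if __name__ == "__main__":
    m = LanguageBridgedTransfer(dim=256)
    out = m(torch.randn(2, 10, 256),
            torch.randn(2, 8 * 16 * 16, 256),
            torch.randn(2, 16 * 16, 256))
    print(out.shape)  # torch.Size([2, 256, 256])
```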

Installation

Requirements

Run `pip install -r requirements.txt` to install the dependencies.

Data Preparation

A2D Sentences

Download this dataset from here. Extract frames and masks from clips320H/ and Annotations/col. Text annotations can be downloaded from here. We expect the directory structure to be as follows:

LBDT
├── datasets
│   ├── a2d
│   │   ├── images
│   │   ├── masks
│   │   ├── train.txt
│   │   ├── val.txt

Refer-Youtube-VOS

Download the dataset from the competition's website here. Text annotations can be downloaded from here. Then extract and organize the files. We expect the following directory structure (a small layout checker is sketched after the tree):

LBDT
├── datasets
│   ├── ytvos
│   │   ├── train
│   │   ├── val
│   │   ├── train.txt
│   │   ├── val.txt
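Before launching training, it can help to verify that both dataset layouts above match the trees exactly. The script below is a hypothetical convenience (not shipped with this repo) that only checks that the listed paths exist; `EXPECTED` mirrors the two trees and assumes nothing beyond them.

```python
import os

# Expected entries under each dataset root, mirroring the trees above.
# Illustrative convenience script, not part of the LBDT repository.
EXPECTED = {
    "datasets/a2d": ["images", "masks", "train.txt", "val.txt"],
    "datasets/ytvos": ["train", "val", "train.txt", "val.txt"],
}

def check_layout(root: str = ".") -> None:
    # Print one status line per expected path.
    for base, entries in EXPECTED.items():
        for entry in entries:
            path = os.path.join(root, base, entry)
            status = "ok" if os.path.exists(path) else "MISSING"
            print(f"{status:>7}  {path}")

if __name__ == "__main__":
    check_layout()
```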

Training

sh scripts/train_a2d.sh
sh scripts/train_ytvos.sh

Evaluation

sh scripts/eval.sh

Performance (w/o RefCOCO Pretraining)

A2D Sentences

| Method | P@0.5 | P@0.6 | P@0.7 | P@0.8 | P@0.9 | AP | Overall IoU | Mean IoU |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LBDT-1 | 71.1 | 66.1 | 57.8 | 41.6 | 12.0 | 46.1 | 70.1 | 61.2 |
| LBDT-4 | 73.0 | 67.4 | 59.0 | 42.1 | 13.2 | 47.2 | 70.4 | 62.1 |

J-HMDB Sentences

| Method | P@0.5 | P@0.6 | P@0.7 | P@0.8 | P@0.9 | AP | Overall IoU | Mean IoU |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LBDT-1 | 86.4 | 75.1 | 50.7 | 11.6 | 0.1 | 40.3 | 64.6 | 65.2 |
| LBDT-4 | 86.4 | 74.4 | 53.3 | 12.2 | 0.0 | 41.1 | 64.5 | 65.8 |
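For reading these two tables: P@K is the fraction of test samples whose predicted mask reaches an IoU of at least K with the ground truth, AP averages that precision over IoU thresholds 0.50:0.05:0.95, Overall IoU is the total intersection over total union accumulated across the dataset, and Mean IoU averages the per-sample IoU. The sketch below illustrates this standard protocol; it is an assumption-level illustration, and the repo's eval script remains authoritative.

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    # pred, gt: boolean mask arrays of identical shape.
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 0.0
    return np.logical_and(pred, gt).sum() / union

def summarize(preds, gts):
    """Compute P@K, AP, Overall IoU and Mean IoU over paired mask lists.
    Simplified illustration of the usual A2D/J-HMDB protocol; the
    repository's eval script is the reference implementation."""
    ious = np.array([mask_iou(p, g) for p, g in zip(preds, gts)])
    inter = sum(np.logical_and(p, g).sum() for p, g in zip(preds, gts))
    union = sum(np.logical_or(p, g).sum() for p, g in zip(preds, gts))
    metrics = {f"P@{t:.1f}": float((ious >= t).mean())
               for t in (0.5, 0.6, 0.7, 0.8, 0.9)}
    metrics["AP"] = float(np.mean([(ious >= t).mean()
                                   for t in np.arange(0.50, 0.96, 0.05)]))
    metrics["Overall IoU"] = float(inter / union)
    metrics["Mean IoU"] = float(ious.mean())
    return metrics
```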

Refer-Youtube-VOS

| Method | J | F | J&F |
| --- | --- | --- | --- |
| LBDT-4 | 48.18 | 50.57 | 49.38 |

Here J denotes region similarity (average IoU), F denotes contour accuracy, and J&F is their mean.

Citation

@inproceedings{ding2022language,
  title={Language-Bridged Spatial-Temporal Interaction for Referring Video Object Segmentation},
  author={Ding, Zihan and Hui, Tianrui and Huang, Junshi and Wei, Xiaoming and Han, Jizhong and Liu, Si},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={4964--4973},
  year={2022}
}