<p align="center"> <a href="https://layer6.ai/"><img src="https://github.com/layer6ai-labs/DropoutNet/blob/master/logs/logobox.jpg" width="180"></a> </p> <div align="center"> <h1> <b> X-Pool: Cross-Modal Language-Video Attention for Text-Video Retrieval </b> </h1> <h4> <b> <a href="https://www.cs.toronto.edu/~satyag/">Satya Krishna Gorti*</a>, <a href="https://www.cs.toronto.edu/~nvouitsis/">Noël Vouitsis*</a>, <a href="https://www.linkedin.com/in/jeremy-ma/">Junwei Ma*</a>, <a href="https://www.linkedin.com/in/keyvangolestan/">Keyvan Golestan</a>, <a href="https://www.cs.toronto.edu/~mvolkovs/">Maksims Volkovs</a>, <a href="https://animesh.garg.tech/">Animesh Garg</a>, <a href="http://www.cs.toronto.edu/~guangweiyu/">Guangwei Yu</a> </b> </h4>

Paper | Project Page & Demo

</div> <a name="intro"/>

## Introduction

This repository contains the official implementation of our CVPR 2022 paper *X-Pool: Cross-Modal Language-Video Attention for Text-Video Retrieval*, including both training and evaluation code.

<a name="depend"/>

## Dependencies

Our model was developed and evaluated using the following package dependencies:

<a name="datasets"/>

## Datasets

We trained models on the MSR-VTT, MSVD and LSMDC datasets. To download the datasets, refer to this repository.

For LSMDC, you must obtain permission from MPII to download and use the data, so we do not provide the split and caption files in the `data/` directory.

<a name="eval"/>

## Evaluation

The following commands reproduce the main results of our paper using the supplied checkpoint files for each dataset. By default, the commands generate results for text-to-video retrieval (t2v); for video-to-text retrieval (v2t) results, add the argument `--metric=v2t` to the command.

If the `outputs/` directory does not exist, first run `mkdir outputs` to create it. For each dataset, create a subdirectory in `outputs/` and store the corresponding checkpoint file there. In each command below, replace `{exp_name}` with the name of that subdirectory.

Also, replace `{videos_dir}` with the path to the dataset's videos.

For evaluation, you can change the `batch_size` without affecting results.
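
As a concrete sketch, setting up for the MSR-VTT-9k checkpoint might look like the following; the subdirectory name `msrvtt-9k` and the checkpoint filename are illustrative assumptions, not names the code requires:

```bash
# Minimal sketch of the expected layout; the subdirectory name and
# checkpoint filename below are placeholders for illustration only.
mkdir -p outputs/msrvtt-9k                          # {exp_name} = msrvtt-9k
cp /path/to/downloaded/checkpoint.pth outputs/msrvtt-9k/
```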

<a name="eval-commands"/>
| Dataset | Command | Checkpoint File | t2v R@1 Result |
|---|---|---|---|
| MSR-VTT-9k | `python test.py --exp_name={exp_name} --videos_dir={videos_dir} --batch_size=32 --huggingface --load_epoch=-1 --dataset_name=MSRVTT --msrvtt_train_file=9k` | Link | 46.9 |
| MSR-VTT-7k | `python test.py --exp_name={exp_name} --videos_dir={videos_dir} --batch_size=32 --huggingface --load_epoch=-1 --dataset_name=MSRVTT --msrvtt_train_file=7k` | Link | 43.9 |
| MSVD | `python test.py --exp_name={exp_name} --videos_dir={videos_dir} --batch_size=32 --huggingface --load_epoch=-1 --dataset_name=MSVD` | Link | 47.2 |
| LSMDC | `python test.py --exp_name={exp_name} --videos_dir={videos_dir} --batch_size=32 --huggingface --load_epoch=-1 --dataset_name=LSMDC` | Link | 25.2 |
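
For instance, a v2t evaluation on MSR-VTT-9k would look like the sketch below; the experiment name and videos path are placeholder assumptions you must replace with your own:

```bash
# Sketch: video-to-text retrieval evaluation on MSR-VTT-9k.
# "msrvtt-9k" and the videos path are placeholders.
python test.py --exp_name=msrvtt-9k --videos_dir=/path/to/MSRVTT/videos \
  --batch_size=32 --huggingface --load_epoch=-1 \
  --dataset_name=MSRVTT --msrvtt_train_file=9k --metric=v2t
```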
<a name="train"/>

## Training

The following commands can be used to train our X-Pool model on each dataset. Again, evaluation defaults to text-to-video retrieval (t2v); for video-to-text retrieval (v2t) results, add the argument `--metric=v2t` to the command.

For each command below, replace `{exp_name}` with a name of your choice for the experiment. Also, replace `{videos_dir}` with the path to the dataset's videos.

<a name="train-commands"/>
| Dataset | Command |
|---|---|
| MSR-VTT-9k | `python train.py --exp_name={exp_name} --videos_dir={videos_dir} --batch_size=32 --noclip_lr=3e-5 --transformer_dropout=0.3 --huggingface --dataset_name=MSRVTT --msrvtt_train_file=9k` |
| MSR-VTT-7k | `python train.py --exp_name={exp_name} --videos_dir={videos_dir} --batch_size=32 --noclip_lr=1e-5 --transformer_dropout=0.4 --huggingface --dataset_name=MSRVTT --msrvtt_train_file=7k` |
| MSVD | `python train.py --exp_name={exp_name} --videos_dir={videos_dir} --batch_size=32 --noclip_lr=1e-5 --transformer_dropout=0.4 --huggingface --dataset_name=MSVD` |
| LSMDC | `python train.py --exp_name={exp_name} --videos_dir={videos_dir} --batch_size=32 --noclip_lr=1e-5 --transformer_dropout=0.3 --huggingface --dataset_name=LSMDC` |
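
Putting it together, a hypothetical end-to-end run on MSR-VTT-9k might train a model and then evaluate it with the matching test command from the table above; the experiment name and paths below are placeholder assumptions:

```bash
# Sketch: train X-Pool on MSR-VTT-9k, then evaluate the resulting run.
# "my_xpool_9k" and the videos path are placeholders.
python train.py --exp_name=my_xpool_9k --videos_dir=/path/to/MSRVTT/videos \
  --batch_size=32 --noclip_lr=3e-5 --transformer_dropout=0.3 \
  --huggingface --dataset_name=MSRVTT --msrvtt_train_file=9k

# Evaluate the run above (t2v by default; add --metric=v2t for v2t).
python test.py --exp_name=my_xpool_9k --videos_dir=/path/to/MSRVTT/videos \
  --batch_size=32 --huggingface --load_epoch=-1 \
  --dataset_name=MSRVTT --msrvtt_train_file=9k
```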
<a name="citation"/>

## Citation

If you find this work useful in your research, please cite the following paper:

```bibtex
@inproceedings{gorti2022xpool,
  title={X-Pool: Cross-Modal Language-Video Attention for Text-Video Retrieval},
  author={Gorti, Satya Krishna and Vouitsis, No{\"e}l and Ma, Junwei and Golestan, Keyvan and Volkovs, Maksims and Garg, Animesh and Yu, Guangwei},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2022}
}
```