# Partially Relevant Video Retrieval
Source code of our ACM MM'2022 paper *Partially Relevant Video Retrieval*.

Homepage of our paper: http://danieljf24.github.io/prvr/
<img src="https://github.com/HuiGuanLab/ms-sl/blob/main/figures/pvr_model.png" width="1100px">
## Environments
- python 3.8
- pytorch 1.9.0
- torchvision 0.10.0
- tensorboard 2.6.0
- tqdm 4.62.0
- easydict 1.9
- h5py 2.10.0
- cuda 11.1
We used Anaconda to set up a deep learning workspace that supports PyTorch. Run the following script to install the required packages.
```sh
conda create --name ms_sl python=3.8
conda activate ms_sl
git clone https://github.com/HuiGuanLab/ms-sl.git
cd ms-sl
pip install -r requirements.txt
conda deactivate
```
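After installation, a quick sanity check can confirm that the packages listed above are importable and that the GPU is visible (a minimal sketch, assuming the `ms_sl` environment is active):

```python
# Minimal environment sanity check (run inside the ms_sl environment).
import torch
import torchvision
import h5py
import tqdm      # imported only to verify the installation
import easydict  # imported only to verify the installation

print(torch.__version__)          # expected: 1.9.0
print(torchvision.__version__)    # expected: 0.10.0
print(h5py.__version__)           # expected: 2.10.0
print(torch.cuda.is_available())  # True if the CUDA 11.1 runtime is visible
```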
## MS-SL on TVR
### Required Data
The data can be downloaded from Baidu pan or Google drive. Please refer to here for a more detailed description of the dataset. Run the following script to place the data in the specified path.
```sh
# download the data of TVR
ROOTPATH=$HOME/VisualSearch
mkdir -p $ROOTPATH && cd $ROOTPATH
unzip tvr.zip -d $ROOTPATH
```
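Since h5py is among the requirements, the video features are presumably packaged as HDF5 files. The sketch below lists the datasets inside one such file to verify the download unpacked correctly (the file path is an assumption; substitute any `.h5` file found under `$ROOTPATH/tvr`):

```python
# List the datasets inside an HDF5 feature file (a sketch; the actual file
# names under $ROOTPATH/tvr depend on the downloaded archive).
import sys

import h5py

def show(name, obj):
    # Print each dataset's name, shape and dtype, e.g. (num_frames, feat_dim).
    if isinstance(obj, h5py.Dataset):
        print(name, obj.shape, obj.dtype)

# Usage: python peek.py path/to/features.h5
with h5py.File(sys.argv[1], "r") as f:
    f.visititems(show)
```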
### Training
Run the following script to train the MS-SL network on TVR. It will save the checkpoint that performs best on the validation set as the final model.
```sh
# Add the project root to PYTHONPATH (note that you need to do this each time you start a new session)
source setup.sh

conda activate ms_sl
ROOTPATH=$HOME/VisualSearch
RUN_ID=runs_0
GPU_DEVICE_ID=0

./do_tvr.sh $RUN_ID $ROOTPATH $GPU_DEVICE_ID
```
`$RUN_ID` is the name of the folder where the trained model is saved.

`$GPU_DEVICE_ID` is the index of the GPU to train on.
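`$GPU_DEVICE_ID` typically takes effect by restricting which device is visible to PyTorch. The sketch below shows the underlying mechanism (an illustration only; `do_tvr.sh` may wire the id through differently):

```python
# Restrict the process to a single GPU, playing the same role as
# $GPU_DEVICE_ID (illustrative; not the repository's own code).
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # set before torch initializes CUDA

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)  # "cuda" -> GPU 0 is now the only visible device
```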
### Evaluation
The model is placed in the directory `$ROOTPATH/$DATASET/results/$MODELDIR` after training. To evaluate it, please run the following script:
```sh
DATASET=tvr
FEATURE=i3d_resnet
ROOTPATH=$HOME/VisualSearch
MODELDIR=tvr-runs_0-2022_07_11_20_27_02

./do_test.sh $DATASET $FEATURE $ROOTPATH $MODELDIR
```
We also provide the trained checkpoint on TVR; run the following script to evaluate it. The checkpoint can also be downloaded from here or Google drive.
```sh
DATASET=tvr
FEATURE=i3d_resnet
ROOTPATH=$HOME/VisualSearch
MODELDIR=checkpoint_tvr

tar -xvf checkpoint_tvr.tar -C $ROOTPATH/$DATASET/results
./do_test.sh $DATASET $FEATURE $ROOTPATH $MODELDIR
```
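To inspect the provided checkpoint without running `do_test.sh`, you can load it directly with PyTorch (a sketch; the file name inside `checkpoint_tvr` is an assumption, substitute the one found after extraction):

```python
# Load the extracted checkpoint on CPU and look at its contents
# (the file name below is hypothetical; use the one produced by tar -xvf).
import torch

ckpt = torch.load("model.ckpt", map_location="cpu")  # hypothetical file name
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))  # e.g. model weights, epoch, config
```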
`$DATASET` is the dataset that the model is trained and evaluated on.

`$FEATURE` is the video feature corresponding to the dataset.

`$MODELDIR` is the directory where the checkpoint is saved.
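Under this convention, the three arguments locate the checkpoint directory as follows (a sketch reconstructing the `$ROOTPATH/$DATASET/results/$MODELDIR` layout described above):

```python
# Reconstruct the checkpoint directory from the three script arguments.
import os

rootpath = os.path.expanduser("~/VisualSearch")  # $ROOTPATH
dataset = "tvr"                                  # $DATASET
modeldir = "tvr-runs_0-2022_07_11_20_27_02"      # $MODELDIR: <dataset>-<run_id>-<timestamp>

print(os.path.join(rootpath, dataset, "results", modeldir))
# e.g. /home/<user>/VisualSearch/tvr/results/tvr-runs_0-2022_07_11_20_27_02
```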
### Expected performance
|               | R@1  | R@5  | R@10 | R@100 | SumR  |
| :-----------: | :--: | :--: | :--: | :---: | :---: |
| Text-to-Video | 13.5 | 32.1 | 43.4 | 83.4  | 172.3 |
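SumR is the sum of the four recall values (the TVR row sums to 172.4 rather than 172.3 only because each recall is rounded to one decimal before being reported):

```python
# SumR = R@1 + R@5 + R@10 + R@100, using the TVR row above.
recalls = [13.5, 32.1, 43.4, 83.4]
print(round(sum(recalls), 1))  # 172.4, vs. the reported 172.3 (rounding)
```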
## MS-SL on ActivityNet
### Required Data
The data can be downloaded from Baidu pan or Google drive. Please refer to here for a more detailed description of the dataset. Run the following script to place the data in the specified path.
```sh
ROOTPATH=$HOME/VisualSearch
mkdir -p $ROOTPATH && cd $ROOTPATH
unzip activitynet.zip -d $ROOTPATH
```
### Training
Run the following script to train the MS-SL network on ActivityNet.
```sh
# Add the project root to PYTHONPATH (note that you need to do this each time you start a new session)
source setup.sh

conda activate ms_sl
ROOTPATH=$HOME/VisualSearch
RUN_ID=runs_0
GPU_DEVICE_ID=0

./do_activitynet.sh $RUN_ID $ROOTPATH $GPU_DEVICE_ID
```
### Evaluation
The model is placed in the directory `$ROOTPATH/$DATASET/results/$MODELDIR` after training. To evaluate it, please run the following script:
```sh
DATASET=activitynet
FEATURE=i3d
ROOTPATH=$HOME/VisualSearch
MODELDIR=activitynet-runs_0-2022_07_11_20_27_02

./do_test.sh $DATASET $FEATURE $ROOTPATH $MODELDIR
```
We also provide the trained checkpoint on ActivityNet; run the following script to evaluate it. The checkpoint can also be downloaded from here or Google drive.
```sh
DATASET=activitynet
FEATURE=i3d
ROOTPATH=$HOME/VisualSearch
MODELDIR=checkpoint_activitynet

tar -xvf checkpoint_activitynet.tar -C $ROOTPATH/$DATASET/results
./do_test.sh $DATASET $FEATURE $ROOTPATH $MODELDIR
```
### Expected performance
|               | R@1  | R@5  | R@10 | R@100 | SumR  |
| :-----------: | :--: | :--: | :--: | :---: | :---: |
| Text-to-Video | 7.1  | 22.5 | 34.7 | 75.8  | 140.1 |
## MS-SL on Charades-STA
### Required Data
The data can be downloaded from Baidu pan or Google drive. Please refer to here for a more detailed description of the dataset. Run the following script to place the data in the specified path.
```sh
ROOTPATH=$HOME/VisualSearch
mkdir -p $ROOTPATH && cd $ROOTPATH
unzip charades.zip -d $ROOTPATH
```
### Training
Run the following script to train the MS-SL network on Charades-STA.
```sh
# Add the project root to PYTHONPATH (note that you need to do this each time you start a new session)
source setup.sh

conda activate ms_sl
ROOTPATH=$HOME/VisualSearch
RUN_ID=runs_0
GPU_DEVICE_ID=0

./do_charades.sh $RUN_ID $ROOTPATH $GPU_DEVICE_ID
```
### Evaluation
The model is placed in the directory `$ROOTPATH/$DATASET/results/$MODELDIR` after training. To evaluate it, please run the following script:
```sh
DATASET=charades
FEATURE=i3d_rgb_lgi
ROOTPATH=$HOME/VisualSearch
MODELDIR=charades-runs_0-2022_07_11_20_27_02

./do_test.sh $DATASET $FEATURE $ROOTPATH $MODELDIR
```
We also provide the trained checkpoint on Charades-STA; run the following script to evaluate it. The checkpoint can also be downloaded from here or Google drive.
```sh
DATASET=charades
FEATURE=i3d_rgb_lgi
ROOTPATH=$HOME/VisualSearch
MODELDIR=checkpoint_charades

tar -xvf checkpoint_charades.tar -C $ROOTPATH/$DATASET/results
./do_test.sh $DATASET $FEATURE $ROOTPATH $MODELDIR
```
### Expected performance
|               | R@1  | R@5  | R@10 | R@100 | SumR  |
| :-----------: | :--: | :--: | :--: | :---: | :---: |
| Text-to-Video | 1.8  | 7.1  | 11.8 | 47.7  | 68.4  |
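For reference, R@K in text-to-video retrieval counts a query as a hit when its ground-truth video appears among the top-K ranked videos. A generic sketch of the metric (illustrative only, not the evaluation code invoked by `do_test.sh`):

```python
# Generic R@K for text-to-video retrieval (illustrative, not the repo's code).
import numpy as np

def recall_at_k(sim, gt_index, k):
    """sim: (num_queries, num_videos) query-to-video similarity matrix;
    gt_index: index of the ground-truth video for each query."""
    topk = np.argsort(-sim, axis=1)[:, :k]          # top-K video ids per query
    hits = (topk == gt_index[:, None]).any(axis=1)  # ground truth among them?
    return 100.0 * hits.mean()                      # reported as a percentage

sim = np.random.rand(4, 10)            # toy similarity scores
gt = np.random.randint(0, 10, size=4)  # toy ground-truth indices
for k in (1, 5, 10):
    print(k, recall_at_k(sim, gt, k))
```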
## Reference
```
@inproceedings{dong2022prvr,
  title = {Partially Relevant Video Retrieval},
  author = {Jianfeng Dong and Xianke Chen and Minsong Zhang and Xun Yang and Shujie Chen and Xirong Li and Xun Wang},
  booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
  year = {2022},
}
```
## Acknowledgement
The code is modified from TVRetrieval and ReLoCLNet.
This work was supported by the National Key R&D Program of China (2018YFB1404102), NSFC (62172420, 61902347, 61976188, 62002323), the Public Welfare Technology Research Project of Zhejiang Province (LGF21F020010), the Open Projects Program of the National Laboratory of Pattern Recognition, the Fundamental Research Funds for the Provincial Universities of Zhejiang, and the Public Computing Cloud of RUC.