
<div align="center">

CoVR: Composed Video Retrieval

Learning Composed Video Retrieval from Web Video Captions

<a href="http://lucasventura.com/"><strong>Lucas Ventura</strong></a> Β· <a href="https://antoyang.github.io/"><strong>Antoine Yang</strong></a> Β· <a href="https://www.di.ens.fr/willow/people_webpages/cordelia/"><strong>Cordelia Schmid</strong></a> Β· <a href="https://imagine.enpc.fr/~varolg"><strong>GΓΌl Varol</strong></a>

AAAI 2024 · TPAMI 2024


CoVR teaser gif

</div> <div align="justify">

Composed Image Retrieval (CoIR) has recently gained popularity as a task that considers both text and image queries together, to search for relevant images in a database. Most CoIR approaches require manually annotated datasets, comprising image-text-image triplets, where the text describes a modification from the query image to the target image. However, manual curation of CoIR triplets is expensive and prevents scalability. In this work, we instead propose a scalable automatic dataset creation methodology that generates triplets given video-caption pairs, while also expanding the scope of the task to include composed video retrieval (CoVR). To this end, we mine paired videos with a similar caption from a large database, and leverage a large language model to generate the corresponding modification text. Applying this methodology to the extensive WebVid2M collection, we automatically construct our WebVid-CoVR dataset, resulting in 1.6 million triplets. Moreover, we introduce a new benchmark for CoVR with a manually annotated evaluation set, along with baseline results. Our experiments further demonstrate that training a CoVR model on our dataset effectively transfers to CoIR, leading to improved state-of-the-art performance in the zero-shot setup on both the CIRR and FashionIQ benchmarks. Our code, datasets, and models are publicly available.

</div>

Description

This repository contains the code for the paper "CoVR: Learning Composed Video Retrieval from Web Video Captions".

Please visit our webpage for more details.

This repository contains:

📦 covr
 ┣ 📂 configs                 # Hydra config files
 ┣ 📂 src                     # PyTorch datamodules
 ┣ 📂 tools                   # scripts and notebooks
 ┣ 📜 .gitignore
 ┣ 📜 LICENSE
 ┣ 📜 README.md
 ┣ 📜 test.py
 ┗ 📜 train.py

Installation :construction_worker:

<details><summary>Create environment</summary> &emsp;
conda create --name covr
conda activate covr

To install the necessary packages, you can use the provided requirements.txt file:

python -m pip install -r requirements.txt

The code was tested on Python 3.10 and PyTorch 2.4.
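
As a quick optional sanity check (not part of the original instructions), you can confirm the installed PyTorch version and GPU visibility inside the environment:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"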

</details> <details><summary>Download the datasets</summary>

WebVid-CoVR

To use the WebVid-CoVR dataset, you will have to download the WebVid videos and the WebVid-CoVR annotations.

To download the annotations, run:

bash tools/scripts/download_annotation.sh covr

To download the videos, install mpi4py (conda install -c conda-forge mpi4py) and run:

ln -s /path/to/your/datasets/folder datasets
python tools/scripts/download_covr.py --split=<train, val or test>
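
After the download finishes, a quick way to check that everything is in place is to peek at the annotation file and at the video folder (datasets/WebVid/2M/train is the location used by the embedding scripts below; adjust the path if your layout differs):

head -n 3 annotation/webvid-covr/webvid2m-covr_train.csv
ls datasets/WebVid/2M/train | head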

CC-CoIR

To use the CC-CoIR dataset, you will have to download the Conceptual Caption images and the CC-CoIR annotations.

To download the annotations, run:

bash tools/scripts/download_annotation.sh coir

CIRR

To use the CIRR dataset, you will have to download the CIRR images and the CIRR annotations.

To download the annotations, run:

bash tools/scripts/download_annotation.sh cirr

To download the images, follow the instructions in the CIRR repository. The default folder structure is the following:

📦 CoVR
 ┣ 📂 datasets
 ┃ ┣ 📂 CIRR
 ┃ ┃ ┣ 📂 images
 ┃ ┃ ┃ ┣ 📂 train
 ┃ ┃ ┃ ┣ 📂 dev
 ┃ ┃ ┃ ┗ 📂 test1

FashionIQ

To use the FashionIQ dataset, you will have to download the FashionIQ images and the FashionIQ annotations.

To download the annotations, run:

bash tools/scripts/download_annotation.sh fiq

To download the images, the URLs are listed in the FashionIQ repository. You can use this script to download the images. Some missing images can also be found here. All images should be placed in the same folder (datasets/fashion-iq/images).
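
If you prefer to script the download yourself, the sketch below is one possible way to do it; the file name urls.txt and the <image_id><TAB><url> line format are assumptions, so adapt them to the URL files actually provided in the FashionIQ repository.

# Hypothetical download loop: assumes each line of urls.txt is "<image_id>\t<url>"
mkdir -p datasets/fashion-iq/images
while IFS=$'\t' read -r img_id url; do
  wget -q -O "datasets/fashion-iq/images/${img_id}.jpg" "$url" || echo "failed: ${img_id}"
done < urls.txt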

CIRCO

To use the CIRCO dataset, download both the CIRCO images and the CIRCO annotations. Follow the structure provided in the CIRCO repository and place the files in the datasets/ directory.

</details> <details><summary>(Optional) Download pre-trained models</summary>

To download the checkpoints, run:

bash tools/scripts/download_pretrained_models.sh
</details>

Usage :computer:

<details><summary>Computing BLIP embeddings</summary> &emsp;

Before training, you will need to compute the BLIP embeddings for the videos/images. To do so, run:

# This will compute the BLIP embeddings for the WebVid-CoVR videos. 
# Note that you can use multiple GPUs with --num_shards and --shard_id
python tools/embs/save_blip_embs_vids.py --video_dir datasets/WebVid/2M/train --todo_ids annotation/webvid-covr/webvid2m-covr_train.csv 

# This will compute the BLIP embeddings for the WebVid-CoVR-Test videos.
python tools/embs/save_blip_embs_vids.py --video_dir datasets/WebVid/8M/train --todo_ids annotation/webvid-covr/webvid8m-covr_test.csv 

# This will compute the BLIP embeddings for the CIRR images.
python tools/embs/save_blip_embs_imgs.py --image_dir datasets/CIRR/images/

# This will compute the BLIP embeddings for FashionIQ images.
python tools/embs/save_blip_embs_imgs.py --image_dir datasets/fashion-iq/images/

# This will compute the BLIP embeddings for the WebVid-CoVR modification texts. Only needed if using the caption retrieval loss (model/loss_terms=si_ti+si_tc).
python tools/embs/save_blip_embs_txts.py annotation/webvid-covr/webvid2m-covr_train.csv datasets/WebVid/2M/blip-vid-embs-large-all
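
Since the scripts accept --num_shards and --shard_id, one possible multi-GPU pattern is to launch one shard per GPU, as sketched below; this assumes each process selects its GPU via CUDA_VISIBLE_DEVICES, which may differ from how the scripts actually assign devices. The same pattern applies to the BLIP-2 scripts in the next section.

# Hypothetical 4-GPU launch: one shard of the WebVid-CoVR videos per GPU
for i in 0 1 2 3; do
  CUDA_VISIBLE_DEVICES=$i python tools/embs/save_blip_embs_vids.py \
    --video_dir datasets/WebVid/2M/train \
    --todo_ids annotation/webvid-covr/webvid2m-covr_train.csv \
    --num_shards 4 --shard_id $i &
done
wait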

 

</details> <details><summary>Computing BLIP-2 embeddings</summary> &emsp;

Before training, you will need to compute the BLIP-2 embeddings for the videos/images. To do so, run:

# This will compute the BLIP-2 embeddings for the WebVid-CoVR videos. 
# Note that you can use multiple GPUs with --num_shards and --shard_id
python tools/embs/save_blip2_embs_vids.py --video_dir datasets/WebVid/2M/train --todo_ids annotation/webvid-covr/webvid2m-covr_train.csv 

# This will compute the BLIP-2 embeddings for the WebVid-CoVR-Test videos.
python tools/embs/save_blip2_embs_vids.py --video_dir datasets/WebVid/8M/train --todo_ids annotation/webvid-covr/webvid8m-covr_test.csv 

# This will compute the BLIP-2 embeddings for the CIRR images.
python tools/embs/save_blip2_embs_imgs.py --image_dir datasets/CIRR/images/

# This will compute the BLIP-2 embeddings for FashionIQ images.
python tools/embs/save_blip2_embs_imgs.py --image_dir datasets/fashion-iq/images/

# This will compute the BLIP-2 embeddings for the WebVid-CoVR modification texts. Only needed if using the caption retrieval loss (model/loss_terms=si_ti+si_tc).
python tools/embs/save_blip2_embs_txts.py annotation/webvid-covr/webvid2m-covr_train.csv datasets/WebVid/2M/blip2-vid-embs-large-all

 

</details> <details><summary>Training</summary> &emsp;

The command to launch a training experiment is the following:

python train.py [OPTIONS]

Argument parsing is handled by the Hydra library. You can override anything in the configuration by passing arguments such as foo=value or foo.bar=value; see the Options parameters section at the end of this README for more details.
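
For example, the loss-term option mentioned in the embedding section above can be combined with other overrides; note that trainer.max_epochs is only an illustrative key and may be named differently in this repository's configs:

python train.py model/loss_terms=si_ti+si_tc trainer.max_epochs=10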

 

</details> <details><summary>Evaluating</summary> &emsp;

The command to evaluate is the following:

python test.py test=<test> [OPTIONS]

 

</details> <details><summary>Options parameters</summary>

The configuration is organized into the following option groups:

- Datasets
- Models
- Tests
- Checkpoints
- Training
- Logging
- Machine
- Experiment

There are many pre-defined experiments from the paper in configs/experiment and configs/experiment2. Simply add experiment=<experiment> or experiment2=<experiment> to the command line to use them.
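
For example, you can list the available experiment configs and pass one of the file names (without extension) on the command line; the placeholder below is not an actual experiment name:

ls configs/experiment configs/experiment2
python train.py experiment=<one_of_the_names_listed_above>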

 

</details>

Citation

If you use this dataset and/or this code in your work, please cite our paper:

@article{ventura24covr,
  title   = {{CoVR}: Learning Composed Video Retrieval from Web Video Captions},
  author  = {Lucas Ventura and Antoine Yang and Cordelia Schmid and G{\"u}l Varol},
  journal = {AAAI},
  year    = {2024}
}

@article{ventura24covr2,
  title   = {{CoVR-2}: Automatic Data Construction for Composed Video Retrieval},
  author  = {Lucas Ventura and Antoine Yang and Cordelia Schmid and G{\"u}l Varol},
  journal = {IEEE TPAMI},
  year    = {2024}
}

Acknowledgements

Based on BLIP and lightning-hydra-template.