<div align="center">CoVR: Composed Video Retrieval
Learning Composed Video Retrieval from Web Video Captions
<a href="http://lucasventura.com/"><strong>Lucas Ventura</strong></a> · <a href="https://antoyang.github.io/"><strong>Antoine Yang</strong></a> · <a href="https://www.di.ens.fr/willow/people_webpages/cordelia/"><strong>Cordelia Schmid</strong></a> · <a href="https://imagine.enpc.fr/~varolg"><strong>Gül Varol</strong></a>
</div>
<div align="justify">
Composed Image Retrieval (CoIR) has recently gained popularity as a task that considers both text and image queries together, to search for relevant images in a database. Most CoIR approaches require manually annotated datasets, comprising image-text-image triplets, where the text describes a modification from the query image to the target image. However, manual curation of CoIR triplets is expensive and prevents scalability. In this work, we instead propose a scalable automatic dataset creation methodology that generates triplets given video-caption pairs, while also expanding the scope of the task to include composed video retrieval (CoVR). To this end, we mine paired videos with a similar caption from a large database, and leverage a large language model to generate the corresponding modification text. Applying this methodology to the extensive WebVid2M collection, we automatically construct our WebVid-CoVR dataset, resulting in 1.6 million triplets. Moreover, we introduce a new benchmark for CoVR with a manually annotated evaluation set, along with baseline results. Our experiments further demonstrate that training a CoVR model on our dataset effectively transfers to CoIR, leading to improved state-of-the-art performance in the zero-shot setup on both the CIRR and FashionIQ benchmarks. Our code, datasets, and models are publicly available.
</div>
Description
This repository contains the code for the paper "CoVR: Learning Composed Video Retrieval from Web Video Captions".
Please visit our webpage for more details.
This repository contains:
📦 covr
 ┣ 📂 configs   # hydra config files
 ┣ 📂 src       # PyTorch datamodules
 ┣ 📂 tools     # scripts and notebooks
 ┣ 📜 .gitignore
 ┣ 📜 LICENSE
 ┣ 📜 README.md
 ┣ 📜 test.py
 ┗ 📜 train.py
Installation :construction_worker:
<details><summary>Create environment</summary>  conda create --name covr
conda activate covr
To install the necessary packages, you can use the provided requirements.txt file:
python -m pip install -r requirements.txt
The code was tested on Python 3.10 and PyTorch 2.4.
</details> <details><summary>Download the datasets</summary>
WebVid-CoVR
To use the WebVid-CoVR dataset, you will have to download the WebVid videos and the WebVid-CoVR annotations.
To download the annotations, run:
bash tools/scripts/download_annotation.sh covr
To download the videos, install mpi4py (conda install -c conda-forge mpi4py) and run:
ln -s /path/to/your/datasets/folder datasets
python tools/scripts/download_covr.py --split=<train, val or test>
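For example, to download only the training split (the val and test splits are fetched the same way):
# Downloads the WebVid-CoVR training-split videos into the linked datasets folder
python tools/scripts/download_covr.py --split=train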
CC-CoIR
To use the CC-CoIR dataset, you will have to download the Conceptual Captions images and the CC-CoIR annotations.
To download the annotations, run:
bash tools/scripts/download_annotation.sh coir
CIRR
To use the CIRR dataset, you will have to download the CIRR images and the CIRR annotations.
To download the annotations, run:
bash tools/scripts/download_annotation.sh cirr
To download the images, follow the instructions in the CIRR repository. The default folder structure is the following:
📦 CoVR
 ┣ 📂 datasets
 ┃ ┣ 📂 CIRR
 ┃ ┃ ┣ 📂 images
 ┃ ┃ ┃ ┣ 📂 train
 ┃ ┃ ┃ ┣ 📂 dev
 ┃ ┃ ┃ ┗ 📂 test1
FashionIQ
To use the FashionIQ dataset, you will have to download the FashionIQ images and the FashionIQ annotations.
To download the annotations, run:
bash tools/scripts/download_annotation.sh fiq
To download the images, the URLs are in the FashionIQ repository. You can use this script to download the images. Some missing images can also be found here. All the images should be placed in the same folder (datasets/fashion-iq/images).
CIRCO
To use the CIRCO dataset, download both the CIRCO images and the CIRCO annotations. Follow the structure provided in the CIRCO repository and place the files in the datasets/ directory.
To download the checkpoints, run:
bash tools/scripts/download_pretrained_models.sh
</details>
Usage :computer:
<details><summary>Computing BLIP embeddings</summary>  Before training, you will need to compute the BLIP embeddings for the videos/images. To do so, run:
# This will compute the BLIP embeddings for the WebVid-CoVR videos.
# Note that you can use multiple GPUs with --num_shards and --shard_id
python tools/embs/save_blip_embs_vids.py --video_dir datasets/WebVid/2M/train --todo_ids annotation/webvid-covr/webvid2m-covr_train.csv
# This will compute the BLIP embeddings for the WebVid-CoVR-Test videos.
python tools/embs/save_blip_embs_vids.py --video_dir datasets/WebVid/8M/train --todo_ids annotation/webvid-covr/webvid8m-covr_test.csv
# This will compute the BLIP embeddings for the CIRR images.
python tools/embs/save_blip_embs_imgs.py --image_dir datasets/CIRR/images/
# This will compute the BLIP embeddings for FashionIQ images.
python tools/embs/save_blip_embs_imgs.py --image_dir datasets/fashion-iq/images/
# This will compute the BLIP embeddings for the WebVid-CoVR modification texts. Only needed if using the caption retrieval loss (model/loss_terms=si_ti+si_tc).
python tools/embs/save_blip_embs_txts.py annotation/webvid-covr/webvid2m-covr_train.csv datasets/WebVid/2M/blip-vid-embs-large-all
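For large video collections, the --num_shards and --shard_id flags mentioned above let you split the extraction across processes. A minimal sketch, assuming four GPUs pinned via CUDA_VISIBLE_DEVICES (only the two sharding flags are documented; the GPU pinning and backgrounding are assumptions):
# Sketch: run the WebVid-CoVR embedding extraction as 4 parallel shards, one per GPU (assumed setup)
for i in 0 1 2 3; do
  CUDA_VISIBLE_DEVICES=$i python tools/embs/save_blip_embs_vids.py --video_dir datasets/WebVid/2M/train --todo_ids annotation/webvid-covr/webvid2m-covr_train.csv --num_shards 4 --shard_id $i &
done
wait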
</details> <details><summary>Computing BLIP-2 embeddings</summary>  Before training, you will need to compute the BLIP-2 embeddings for the videos/images. To do so, run:
# This will compute the BLIP-2 embeddings for the WebVid-CoVR videos.
# Note that you can use multiple GPUs with --num_shards and --shard_id
python tools/embs/save_blip2_embs_vids.py --video_dir datasets/WebVid/2M/train --todo_ids annotation/webvid-covr/webvid2m-covr_train.csv
# This will compute the BLIP-2 embeddings for the WebVid-CoVR-Test videos.
python tools/embs/save_blip2_embs_vids.py --video_dir datasets/WebVid/8M/train --todo_ids annotation/webvid-covr/webvid8m-covr_test.csv
# This will compute the BLIP-2 embeddings for the CIRR images.
python tools/embs/save_blip2_embs_imgs.py --image_dir datasets/CIRR/images/
# This will compute the BLIP-2 embeddings for FashionIQ images.
python tools/embs/save_blip2_embs_imgs.py --image_dir datasets/fashion-iq/images/
# This will compute the BLIP-2 embeddings for the WebVid-CoVR modification texts. Only needed if using the caption retrieval loss (model/loss_terms=si_ti+si_tc).
python tools/embs/save_blip2_embs_txts.py annotation/webvid-covr/webvid2m-covr_train.csv datasets/WebVid/2M/blip2-vid-embs-large-all
</details> <details><summary>Training</summary>  The command to launch a training experiment is the following:
python train.py [OPTIONS]
The parsing is done using the powerful Hydra library. You can override anything in the configuration by passing arguments like foo=value or foo.bar=value. See the Options parameters section at the end of this README for more details.
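For instance, a minimal sketch of a training run that overrides the dataset, model, checkpoint and trainer (all option values are listed in the Options parameters section; this particular combination is only illustrative):
# Illustrative: train the BLIP-L model from the COCO checkpoint on WebVid-CoVR with CUDA
python train.py data=webvid-covr model=blip-large model/ckpt=blip-l-coco trainer=gpu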
</details> <details><summary>Evaluating</summary>  The command to evaluate is the following:
python test.py test=<test> [OPTIONS]
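For example, to evaluate the WebVid-CoVR checkpoint on the WebVid-CoVR test set (both option values come from the Options parameters section; the pairing is only an example):
# Illustrative: evaluate the CoVR checkpoint finetuned on WebVid-CoVR
python test.py test=webvid-covr model/ckpt=webvid-covr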
</details> <details><summary>Options parameters</summary>
Datasets:
- data=webvid-covr: WebVid-CoVR dataset.
- data=cirr: CIRR dataset.
- data=fashioniq: FashionIQ dataset.
- data=cc-coir: CC-CoIR dataset.
- data=cc-coir+webvid-covr: WebVid-CoVR and CC-CoIR datasets.
Models:
- model=blip-large: BLIP model.
- model=blip2-coco: BLIP-2 model. Needs to be used in conjunction with model/ckpt=blip2-l-coco or another BLIP-2 checkpoint.
Tests:
- test=all: Test on WebVid-CoVR, CIRR and all three Fashion-IQ test sets.
- test=webvid-covr: Test on WebVid-CoVR.
- test=cirr: Test on CIRR.
- test=fashioniq: Test on all three Fashion-IQ test sets (dress, shirt and toptee).
- test=circo: Test on CIRCO.
Checkpoints:
- model/ckpt=blip-l-coco: Default checkpoint for BLIP-L finetuned on COCO.
- model/ckpt=webvid-covr: Default checkpoint for CoVR finetuned on WebVid-CoVR.
- model/ckpt=fashioniq-all-ft_covr: Default checkpoint pretrained on WebVid-CoVR and finetuned on FashionIQ.
- model/ckpt=cirr_ft-covr+gt: Default checkpoint pretrained on WebVid-CoVR and finetuned on CIRR.
- model/ckpt=blip2-l-coco: Default checkpoint for BLIP-2 L finetuned on COCO.
- model/ckpt=blip2-l-coco_coir: Default checkpoint for BLIP-2 L pretrained on COCO and finetuned on CC-CoIR.
- model/ckpt=blip2-l-coco_coir+covr: Default checkpoint for BLIP-2 L pretrained on COCO, finetuned on CC-CoIR and WebVid-CoVR.
Training:
- trainer=gpu: training with CUDA, change devices to the number of GPUs you want to use.
- trainer=ddp: training with Distributed Data Parallel (DDP), change devices and num_nodes to the number of GPUs and number of nodes you want to use.
- trainer=cpu: training on the CPU (not recommended).
Logging:
- trainer/logger=csv: log the results in a CSV file. Very basic functionality.
- trainer/logger=wandb: log the results in wandb. This requires installing wandb and setting up your wandb account. This is what we used to log our experiments.
- trainer/logger=<other>: Other loggers (not tested).
Machine:
- machine=server: You can change the default path to the dataset folder and the batch size. You can create your own machine configuration by adding a new file in configs/machine.
Experiment:
There are many pre-defined experiments from the paper in configs/experiment and configs/experiment2. Simply add experiment=<experiment> or experiment2=<experiment> to the command line to use them.
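Putting several of these options together, a hedged sketch of a fine-tuning run and its evaluation (the trainer.devices and trainer.num_nodes override paths are assumptions based on the trainer=ddp description above, and the option combination is only illustrative):
# Sketch: fine-tune on CIRR from the WebVid-CoVR checkpoint with DDP on 4 GPUs of one node, logging to wandb
# (the trainer.devices / trainer.num_nodes override paths are assumed, not documented verbatim)
python train.py data=cirr model/ckpt=webvid-covr trainer=ddp trainer.devices=4 trainer.num_nodes=1 trainer/logger=wandb
# Then evaluate the released CIRR-finetuned checkpoint on CIRR
python test.py test=cirr model/ckpt=cirr_ft-covr+gt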
</details>
Citation
If you use this dataset and/or this code in your work, please cite our paper:
@article{ventura24covr,
title = {{CoVR}: Learning Composed Video Retrieval from Web Video Captions},
author = {Lucas Ventura and Antoine Yang and Cordelia Schmid and G{\"u}l Varol},
journal = {AAAI},
year = {2024}
}
@article{ventura24covr2,
title = {{CoVR-2}: Automatic Data Construction for Composed Video Retrieval},
author = {Lucas Ventura and Antoine Yang and Cordelia Schmid and G{\"u}l Varol},
journal = {IEEE TPAMI},
year = {2024}
}
Acknowledgements
Based on BLIP and lightning-hydra-template.