# Vision Document Retrieval (ViDoRe): Benchmark 👀
[Model card] [ViDoRe Leaderboard] [Demo] [Blog Post]
## Approach
The Visual Document Retrieval Benchmark (ViDoRe) is introduced to evaluate the performance of document retrieval systems on visually rich documents across various tasks, domains, languages, and settings. It was used to evaluate the ColPali model, a VLM-powered retriever that efficiently retrieves documents based on their visual content and textual queries using a late-interaction mechanism.
> [!TIP]
> If you want to fine-tune ColPali for your specific use case, you should check the `colpali` repository. It contains the whole codebase used to train the model presented in our paper.
## Setup
We used Python 3.11.6 and PyTorch 2.2.2 to train and test our models, but the codebase is expected to be compatible with Python >=3.9 and recent PyTorch versions.
The eval codebase depends on a few Python packages, which can be installed using the following command:
```bash
pip install vidore-benchmark
```
> [!TIP]
> By default, the `vidore-benchmark` package already includes the dependencies for the ColVision models (e.g. ColPali, ColQwen2, ...).

To keep the package lightweight, only the essential dependencies are installed by default. In particular, you must install the extra dependencies for any specific non-Transformers model you want to run (see the list in `pyproject.toml`). For instance, to evaluate the BGE-M3 retriever:
pip install "vidore-benchmark[bge-m3]"
Or if you want to evaluate all the off-the-shelf retrievers:
pip install "vidore-benchmark[all-retrievers]"
## Available retrievers
The list of available retrievers can be found here. Read this section to learn how to create, use, and evaluate your own retriever.
## Command-line usage
### Evaluate a retriever on ViDoRe
You can evaluate any off-the-shelf retriever on the ViDoRe benchmark. For instance, you can evaluate the ColPali model on the ViDoRe benchmark to reproduce the results from our paper.
```bash
vidore-benchmark evaluate-retriever \
    --model-class colpali \
    --model-name vidore/colpali-v1.2 \
    --collection-name "vidore/vidore-benchmark-667173f98e70a1c0fa4db00d" \
    --split test
```
> [!NOTE]
> You should get a warning about some non-initialized weights. This is a known issue in ColPali and will cause the metrics to be slightly different from the ones reported in the paper. We are working on fixing this issue.
Alternatively, you can evaluate your model on a single dataset. If your retriever uses visual embeddings, you can use any dataset path from the ViDoRe Benchmark collection, e.g.:
```bash
vidore-benchmark evaluate-retriever \
    --model-class colpali \
    --model-name vidore/colpali-v1.2 \
    --dataset-name vidore/docvqa_test_subsampled \
    --split test
```
If you want to evaluate a retriever that relies on pure-text retrieval (no visual embeddings), you should use the datasets from the ViDoRe Chunk OCR (baseline) collection instead:
```bash
vidore-benchmark evaluate-retriever \
    --model-class bge-m3 \
    --model-name BAAI/bge-m3 \
    --dataset-name vidore/docvqa_test_subsampled_tesseract \
    --split test
```
Each of these commands generates a JSON file with all the metrics in `outputs/{model_name}_all_metrics.json`. Follow the instructions on the ViDoRe Leaderboard to compare your model with the others.
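After a run, you can inspect the metrics file directly in Python. Here is a minimal sketch, assuming a file matching the `outputs/{model_name}_all_metrics.json` pattern (the exact filename depends on the model name you passed to the CLI):

```python
import json
from pathlib import Path

# Hypothetical filename: adjust to the file the CLI actually wrote to outputs/.
metrics_path = Path("outputs/colpali-v1.2_all_metrics.json")

with metrics_path.open() as f:
    metrics = json.load(f)

# Pretty-print the per-dataset metrics (e.g. NDCG@5) for a quick sanity check.
print(json.dumps(metrics, indent=2))
```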
### Evaluate a retriever using token pooling
You can use token pooling to reduce the length of the document embeddings. In production, this significantly reduces the memory footprint of the retriever, thus lowering costs and increasing speed. Use the `--use-token-pooling` flag to enable this feature:
```bash
vidore-benchmark evaluate-retriever \
    --model-class colpali \
    --model-name vidore/colpali-v1.2 \
    --dataset-name vidore/docvqa_test_subsampled \
    --split test \
    --use-token-pooling \
    --pool-factor 3
```
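For intuition, token pooling shrinks a document's multi-vector embedding by merging similar token embeddings, so a pool factor of 3 keeps roughly one vector out of every three. The sketch below is not the package's implementation; it is an illustrative stand-in that simply averages consecutive tokens to show the length reduction:

```python
import torch

def naive_token_pooling(embeddings: torch.Tensor, pool_factor: int = 3) -> torch.Tensor:
    """Illustrative only: average groups of `pool_factor` consecutive token embeddings.

    In:  (num_tokens, dim) multi-vector document embedding.
    Out: (ceil(num_tokens / pool_factor), dim) pooled embedding.
    """
    groups = torch.split(embeddings, pool_factor, dim=0)  # chunks of <= pool_factor tokens
    return torch.stack([g.mean(dim=0) for g in groups])

doc_embeddings = torch.randn(1030, 128)       # e.g. a ColPali-style page embedding
pooled = naive_token_pooling(doc_embeddings)  # ~3x fewer vectors to store and score
print(doc_embeddings.shape, "->", pooled.shape)
```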
### Retrieve the top-k documents from a HuggingFace dataset
```bash
vidore-benchmark retrieve-on-dataset \
    --model-class colpali \
    --model-name vidore/colpali-v1.2 \
    --query "Which hour of the day had the highest overall electricity generation in 2019?" \
    --k 5 \
    --dataset-name vidore/syntheticDocQA_energy_test \
    --split test
```
### Retrieve the top-k documents from a collection of PDF documents
```bash
vidore-benchmark retrieve-on-pdfs \
    --model-class siglip \
    --model-name google/siglip-so400m-patch14-384 \
    --query "Which hour of the day had the highest overall electricity generation in 2019?" \
    --k 5 \
    --data-dirpath data/my_folder_with_pdf_documents/
```
### Documentation
To get more information about the available options, run:
```bash
vidore-benchmark --help
```
## Python usage
### Quickstart example
```python
from datasets import load_dataset
from dotenv import load_dotenv
from vidore_benchmark.evaluation import evaluate_dataset
from vidore_benchmark.retrievers.jina_clip_retriever import JinaClipRetriever

load_dotenv(override=True)


def main():
    """
    Example script for a Python usage of the ViDoRe Benchmark.
    """
    my_retriever = JinaClipRetriever("jinaai/jina-clip-v1")
    dataset = load_dataset("vidore/syntheticDocQA_dummy", split="test")
    metrics = evaluate_dataset(my_retriever, dataset, batch_query=4, batch_passage=4)
    print(metrics)


if __name__ == "__main__":
    main()
```
### Implement your own retriever
If you need to evaluate your own model on the ViDoRe benchmark, you can create your own instance of `VisionRetriever` and use it with the evaluation scripts in this package. You can find the detailed instructions here.
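As a rough sketch of what such a subclass can look like, consider the skeleton below. The import path, method names, and signatures are illustrative assumptions based on how the package's retrievers are typically structured, so double-check them against the actual `VisionRetriever` interface from the linked instructions:

```python
from typing import Any, List

import torch

from vidore_benchmark.retrievers.vision_retriever import VisionRetriever  # assumed import path


class MyRetriever(VisionRetriever):
    """Hypothetical skeleton; align the overrides with the real VisionRetriever interface."""

    @property
    def use_visual_embedding(self) -> bool:
        # True for image-based retrievers, False for pure-text (OCR) retrievers.
        return True

    def forward_queries(self, queries: List[str], batch_size: int, **kwargs: Any) -> List[torch.Tensor]:
        # Embed the text queries with your model.
        raise NotImplementedError

    def forward_passages(self, passages: List[Any], batch_size: int, **kwargs: Any) -> List[torch.Tensor]:
        # Embed the document pages (PIL images or OCR text) with your model.
        raise NotImplementedError

    def get_scores(self, query_embeddings, passage_embeddings, batch_size=None) -> torch.Tensor:
        # Return a (num_queries, num_passages) similarity matrix.
        raise NotImplementedError
```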
### Compare retrievers using the `EvalManager`
To easily process, visualize, and compare the evaluation metrics of multiple retrievers, you can use the `EvalManager` class. Assume you have a list of previously generated JSON metric files, e.g.:

```
data/metrics/
├── bisiglip.json
└── colpali.json
```
The data is stored in `eval_manager.data` as a multi-column DataFrame. Use the `get_df_for_metric`, `get_df_for_dataset`, and `get_df_for_model` methods to get the subset of the data you are interested in. For instance:
```python
from vidore_benchmark.evaluation import EvalManager

eval_manager = EvalManager.from_dir("data/metrics/")
df = eval_manager.get_df_for_metric("ndcg_at_5")
```
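Since the result is a regular pandas DataFrame, standard pandas operations work for side-by-side comparisons. The exact index and column layout depends on your metric files, so treat the snippet below as a hedged example and inspect the frame first:

```python
# Inspect the layout before aggregating; index/column names depend on your files.
print(df.head())

# If rows index models and columns index datasets, averaging across columns
# yields one summary score per model:
summary = df.mean(axis=1, numeric_only=True)
print(summary.sort_values(ascending=False))
```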
## Citation
**ColPali: Efficient Document Retrieval with Vision Language Models**

Authors: Manuel Faysse\*, Hugues Sibille\*, Tony Wu\*, Bilel Omrani, Gautier Viaud, Céline Hudelot, Pierre Colombo (\* denotes equal contribution)
```bibtex
@misc{faysse2024colpaliefficientdocumentretrieval,
      title={ColPali: Efficient Document Retrieval with Vision Language Models},
      author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and Céline Hudelot and Pierre Colombo},
      year={2024},
      eprint={2407.01449},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2407.01449},
}
```
If you want to reproduce the results from the ColPali paper, please read the `REPRODUCIBILITY.md` file for more information.