
<b>A Toolkit for Evaluating Large Vision-Language Models. </b>

English | 简体中文 | 日本語

<a href="https://rank.opencompass.org.cn/leaderboard-multimodal">🏆 OC Learderboard </a><a href="#%EF%B8%8F-quickstart">🏗️Quickstart </a><a href="#-datasets-models-and-evaluation-results">📊Datasets & Models </a><a href="#%EF%B8%8F-development-guide">🛠️Development </a><a href="#-the-goal-of-vlmevalkit">🎯Goal </a><a href="#%EF%B8%8F-citation">🖊️Citation </a>

<a href="https://huggingface.co/spaces/opencompass/open_vlm_leaderboard">🤗 HF Leaderboard</a><a href="https://huggingface.co/datasets/VLMEval/OpenVLMRecords">🤗 Evaluation Records</a><a href="https://huggingface.co/spaces/opencompass/openvlm_video_leaderboard">🤗 HF Video Leaderboard</a><a href="https://discord.gg/evDT4GZmxN">🔊 Discord</a><a href="https://www.arxiv.org/abs/2407.11691">📝 Report</a>


VLMEvalKit (the Python package name is vlmeval) is an open-source evaluation toolkit for large vision-language models (LVLMs). It enables one-command evaluation of LVLMs on various benchmarks, without the heavy workload of data preparation across multiple repositories. In VLMEvalKit, we adopt generation-based evaluation for all LVLMs and provide evaluation results obtained with both exact matching and LLM-based answer extraction.

🆕 News

We have presented a comprehensive survey on the evaluation of large multi-modality models, jointly with the MME Team and LMMs-Lab. 🔥🔥🔥

🏗️ QuickStart

See [QuickStart] for a quick start guide.

📊 Datasets, Models, and Evaluation Results

Evaluation Results

The performance numbers on our official multi-modal leaderboards can be downloaded from here!

OpenVLM Leaderboard: Download All DETAILED Results.

Supported Benchmarks

Supported Image Understanding Datasets

| Dataset | Dataset Names (for run.py) | Task | Dataset | Dataset Names (for run.py) | Task |
| ------- | -------------------------- | ---- | ------- | -------------------------- | ---- |
| MMBench Series: <br>MMBench, MMBench-CN, CCBench | MMBench_DEV_[EN/CN]<br>MMBench_TEST_[EN/CN]<br>MMBench_DEV_[EN/CN]_V11<br>MMBench_TEST_[EN/CN]_V11<br>CCBench | MCQ | MMStar | MMStar | MCQ |
| MME | MME | Y/N | SEEDBench Series | SEEDBench_IMG<br>SEEDBench2<br>SEEDBench2_Plus | MCQ |
| MM-Vet | MMVet | VQA | MMMU | MMMU_[DEV_VAL/TEST] | MCQ |
| MathVista | MathVista_MINI | VQA | ScienceQA_IMG | ScienceQA_[VAL/TEST] | MCQ |
| COCO Caption | COCO_VAL | Caption | HallusionBench | HallusionBench | Y/N |
| OCRVQA* | OCRVQA_[TESTCORE/TEST] | VQA | TextVQA* | TextVQA_VAL | VQA |
| ChartQA* | ChartQA_TEST | VQA | AI2D | AI2D_[TEST/TEST_NO_MASK] | MCQ |
| LLaVABench | LLaVABench | VQA | DocVQA+ | DocVQA_[VAL/TEST] | VQA |
| InfoVQA+ | InfoVQA_[VAL/TEST] | VQA | OCRBench | OCRBench | VQA |
| RealWorldQA | RealWorldQA | MCQ | POPE | POPE | Y/N |
| Core-MM- | CORE_MM (MTI) | VQA | MMT-Bench | MMT-Bench_[VAL/ALL]<br>MMT-Bench_[VAL/ALL]_MI | MCQ (MTI) |
| MLLMGuard- | MLLMGuard_DS | VQA | AesBench+ | AesBench_[VAL/TEST] | MCQ |
| VCR-wiki+ | VCR_[EN/ZH]_[EASY/HARD]_[ALL/500/100] | VQA | MMLongBench-Doc+ | MMLongBench_DOC | VQA (MTI) |
| BLINK | BLINK | MCQ (MTI) | MathVision+ | MathVision<br>MathVision_MINI | VQA |
| MT-VQA | MTVQA_TEST | VQA | MMDU+ | MMDU | VQA (MTT, MTI) |
| Q-Bench1 | Q-Bench1_[VAL/TEST] | MCQ | A-Bench | A-Bench_[VAL/TEST] | MCQ |
| DUDE+ | DUDE | VQA (MTI) | SlideVQA+ | SLIDEVQA<br>SLIDEVQA_MINI | VQA (MTI) |
| TaskMeAnything ImageQA Random+ | TaskMeAnything_v1_imageqa_random | MCQ | MMMB and Multilingual MMBench+ | MMMB_[ar/cn/en/pt/ru/tr]<br>MMBench_dev_[ar/cn/en/pt/ru/tr]<br>MMMB<br>MTL_MMBench_DEV<br>PS: MMMB & MTL_MMBench_DEV<br>are all-in-one names for 6 langs | MCQ |
| A-OKVQA+ | A-OKVQA | MCQ | MuirBench+ | MUIRBench | MCQ |
| GMAI-MMBench+ | GMAI-MMBench_VAL | MCQ | TableVQABench+ | TableVQABench | VQA |
| MME-RealWorld+ | MME-RealWorld[-CN]<br>MME-RealWorld-Lite | MCQ | HRBench+ | HRBench[4K/8K] | MCQ |
| MathVerse+ | MathVerse_MINI<br>MathVerse_MINI_Vision_Only<br>MathVerse_MINI_Vision_Dominant<br>MathVerse_MINI_Vision_Intensive<br>MathVerse_MINI_Text_Lite<br>MathVerse_MINI_Text_Dominant | VQA | AMBER+ | AMBER | Y/N |
| CRPE+ | CRPE_[EXIST/RELATION] | VQA | MMSearch$^1$ | - | - |
| R-Bench+ | R-Bench-[Dis/Ref] | MCQ | WorldMedQA-V+ | WorldMedQA-V | MCQ |
| GQA+ | GQA_TestDev_Balanced | VQA | MIA-Bench+ | MIA-Bench | VQA |
| WildVision+ | WildVision | VQA | OlympiadBench+ | OlympiadBench | VQA |
| MM-Math+ | MM-Math | VQA | DynaMath | DynaMath | VQA |
| MMGenBench- | MMGenBench-Test<br>MMGenBench-Domain | - | QSpatial+ | QSpatial_[plus/scannet] | VQA |
| VizWiz+ | VizWiz | VQA | VisOnlyQA+ | VisOnlyQA-VLMEvalKit | MCQ |

\* We only provide a subset of the evaluation results, since some VLMs do not yield reasonable results under the zero-shot setting.

\+ The evaluation results are not available yet.

\- Only inference is supported in VLMEvalKit (this includes the TEST splits of some benchmarks whose ground-truth answers are not released).

$^1$ VLMEvalKit is integrated into its official repository.

If you set the corresponding API key, VLMEvalKit uses a judge LLM to extract the answer from the model output; otherwise it falls back to exact matching (searching for "Yes", "No", "A", "B", "C", ... in the output string). Exact matching can only be applied to Yes-or-No and multiple-choice tasks.
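
For intuition, the sketch below shows what such exact matching might look like for multiple-choice and Yes-or-No outputs. This is a simplified illustration, not the toolkit's actual extraction code; the function names and matching rules are assumptions made for the example.

```python
import re
from typing import Optional

def exact_match_mcq(prediction: str, choices=("A", "B", "C", "D")) -> Optional[str]:
    """Return the first standalone option letter found in the output, if any (illustrative)."""
    for letter in choices:
        # Match "B", "B.", "(B)", etc., but not letters embedded inside words.
        if re.search(rf"(?<![A-Za-z]){letter}(?![A-Za-z])", prediction):
            return letter
    return None

def exact_match_yes_no(prediction: str) -> Optional[str]:
    """Map a free-form answer to Yes/No via simple keyword search (illustrative)."""
    text = prediction.strip().lower()
    if text.startswith("yes") or " yes" in text:
        return "Yes"
    if text.startswith("no") or " no " in text:
        return "No"
    return None

print(exact_match_mcq("The answer is (B)."))              # -> B
print(exact_match_yes_no("No, there is no apple here."))  # -> No
```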

Supported Video Understanding Datasets

| Dataset | Dataset Names (for run.py) | Task | Dataset | Dataset Names (for run.py) | Task |
| ------- | -------------------------- | ---- | ------- | -------------------------- | ---- |
| MMBench-Video | MMBench-Video | VQA | Video-MME | Video-MME | MCQ |
| MVBench | MVBench/MVBench_MP4 | MCQ | MLVU | MLVU | MCQ & VQA |
| TempCompass | TempCompass | MCQ & Y/N & Caption | LongVideoBench | LongVideoBench | MCQ |

Supported Models

Supported API Models

| GPT-4v (20231106, 20240409) 🎞️🚅 | GPT-4o 🎞️🚅 | Gemini-1.0-Pro 🎞️🚅 | Gemini-1.5-Pro 🎞️🚅 | Step-1V 🎞️🚅 |
| --- | --- | --- | --- | --- |
| Reka-[Edge / Flash / Core]🚅 | Qwen-VL-[Plus / Max] 🎞️🚅<br>Qwen-VL-[Plus / Max]-0809 🎞️🚅 | Claude3-[Haiku / Sonnet / Opus] 🎞️🚅 | GLM-4v 🚅 | CongRong 🎞️🚅 |
| Claude3.5-Sonnet (20240620, 20241022) 🎞️🚅 | GPT-4o-Mini 🎞️🚅 | Yi-Vision 🎞️🚅 | Hunyuan-Vision 🎞️🚅 | BlueLM-V 🎞️🚅 |
| TeleMM 🎞️🚅 | | | | |

Supported PyTorch / HF Models

| IDEFICS-[9B/80B/v2-8B/v3-8B]-Instruct🚅🎞️ | InstructBLIP-[7B/13B] | LLaVA-[v1-7B/v1.5-7B/v1.5-13B] | MiniGPT-4-[v1-7B/v1-13B/v2-7B] |
| --- | --- | --- | --- |
| mPLUG-Owl[2/3]🎞️ | OpenFlamingo-v2🎞️ | PandaGPT-13B | Qwen-VL🚅🎞️<br>Qwen-VL-Chat🚅🎞️ |
| VisualGLM-6B🚅 | InternLM-XComposer-[1/2]🚅 | ShareGPT4V-[7B/13B]🚅 | TransCore-M |
| LLaVA (XTuner)🚅 | CogVLM-[Chat/Llama3]🚅 | ShareCaptioner🚅 | CogVLM-Grounding-Generalist🚅 |
| Monkey🚅<br>Monkey-Chat🚅 | EMU2-Chat🚅🎞️ | Yi-VL-[6B/34B] | MMAlaya🚅 |
| InternLM-XComposer-2.5🚅🎞️ | MiniCPM-[V1/V2/V2.5/V2.6]🚅🎞️ | OmniLMM-12B | InternVL-Chat-[V1-1/V1-2/V1-5/V2]🚅🎞️ |
| DeepSeek-VL🎞️ | LLaVA-NeXT🚅🎞️ | Bunny-Llama3🚅 | XVERSE-V-13B |
| PaliGemma-3B🚅 | 360VL-70B🚅 | Phi-3-Vision🚅🎞️<br>Phi-3.5-Vision🚅🎞️ | WeMM🚅 |
| GLM-4v-9B🚅 | Cambrian-[8B/13B/34B] | LLaVA-Next-[Qwen-32B]🎞️ | Chameleon-[7B/30B]🚅🎞️ |
| Video-LLaVA-7B-[HF]🎬 | VILA1.5-[3B/8B/13B/40B]🎞️ | Ovis[1.5-Llama3-8B/1.5-Gemma2-9B/1.6-Gemma2-9B/1.6-Llama3.2-3B/1.6-Gemma2-27B]🚅🎞️ | Mantis-8B-[siglip-llama3/clip-llama3/Idefics2/Fuyu]🎞️ |
| Llama-3-MixSenseV1_1🚅 | Parrot-7B🚅 | OmChat-v2.0-13B-single-beta🚅 | Video-ChatGPT🎬 |
| Chat-UniVi-7B[-v1.5]🎬 | LLaMA-VID-7B🎬 | VideoChat2-HD🎬 | PLLaVA-[7B/13B/34B]🎬 |
| RBDash_72b🚅🎞️ | xgen-mm-phi3-[interleave/dpo]-r-v1.5🚅🎞️ | Qwen2-VL-[2B/7B/72B]🚅🎞️ | slime_[7b/8b/13b]🎞️ |
| Eagle-X4-[8B/13B]🚅🎞️<br>Eagle-X5-[7B/13B/34B]🚅🎞️ | Moondream1🚅<br>Moondream2🚅 | XinYuan-VL-2B-Instruct🚅🎞️ | Llama-3.2-[11B/90B]-Vision-Instruct🚅 |
| Kosmos2🚅 | H2OVL-Mississippi-[0.8B/2B]🚅🎞️ | Pixtral-12B🎞️ | Falcon2-VLM-11B🚅 |
| MiniMonkey🚅🎞️ | LLaVA-OneVision🚅🎞️ | LLaVA-Video🚅🎞️ | Aquila-VL-2B🚅🎞️ |
| Mini-InternVL-Chat-[2B/4B]-V1-5🚅🎞️ | InternVL2 Series🚅🎞️ | Janus-1.3B🚅🎞️ | molmoE-1B/molmo-7B/molmo-72B🚅 |
| Points-[Yi-1.5-9B/Qwen-2.5-7B]🚅 | NVLM🚅 | VIntern🚅🎞️ | Aria🚅🎞️ |

🎞️: Support multiple images as inputs.

🚅: Models can be used without any additional configuration/operation.

🎬: Support Video as inputs.

Transformers Version Recommendation:

Note that some VLMs may not be able to run under certain transformers versions; we recommend the following settings to evaluate each VLM:

Torchvision Version Recommendation:

Note that some VLMs may not be able to run under certain torchvision versions; we recommend the following settings to evaluate each VLM:

Flash-attn Version Recommendation:

Note that some VLMs may not be able to run under certain flash-attention versions; we recommend the following settings to evaluate each VLM:
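
With a compatible environment in place, you can load a supported VLM from `vlmeval.config.supported_VLM` and run single-image and multi-image inference, as in the demo below: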

```python
# Demo
from vlmeval.config import supported_VLM
model = supported_VLM['idefics_9b_instruct']()
# Forward Single Image
ret = model.generate(['assets/apple.jpg', 'What is in this image?'])
print(ret)  # The image features a red apple with a leaf on it.
# Forward Multiple Images
ret = model.generate(['assets/apple.jpg', 'assets/apple.jpg', 'How many apples are there in the provided images? '])
print(ret)  # There are two apples in the provided images.
```

🛠️ Development Guide

To develop custom benchmarks or VLMs, or simply to contribute other code to VLMEvalKit, please refer to the [Development Guide].

Call for contributions

To promote contributions from the community and share the corresponding credit (in the next report update):

Here is a contributor list we curated based on the records.

🎯 The Goal of VLMEvalKit

The codebase is designed to:

  1. Provide an easy-to-use, open-source evaluation toolkit that makes it convenient for researchers and developers to evaluate existing LVLMs and makes evaluation results easy to reproduce.
  2. Make it easy for VLM developers to evaluate their own models. To evaluate a VLM on multiple supported benchmarks, one only needs to implement a single generate_inner() function; all other workloads (data downloading, data preprocessing, prediction inference, metric calculation) are handled by the codebase, as sketched below.
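
For orientation, a custom wrapper can look roughly like the following minimal sketch. The attribute names (`INSTALL_REQ`, `INTERLEAVE`) and the interleaved `{'type': ..., 'value': ...}` message format are assumptions made for illustration; the authoritative base class, interface, and registration steps are described in the Development Guide.

```python
# A hypothetical, minimal model wrapper: only generate_inner() carries model-specific logic.
# Names and the message format are assumptions for illustration; see the Development Guide
# for the real interface and how to register the model in vlmeval.config.

class MyToyVLM:
    INSTALL_REQ = False   # assumed flag: does the model need extra installation steps?
    INTERLEAVE = True     # assumed flag: does the model accept interleaved image-text input?

    def __init__(self, model_path='my-org/my-toy-vlm', **kwargs):
        # Load your model / processor here; `model_path` is a placeholder.
        self.model_path = model_path

    def generate_inner(self, message, dataset=None):
        # `message` is assumed to be a list of dicts such as
        # [{'type': 'image', 'value': 'path/to/img.jpg'}, {'type': 'text', 'value': 'Question?'}]
        images = [item['value'] for item in message if item['type'] == 'image']
        prompt = '\n'.join(item['value'] for item in message if item['type'] == 'text')
        # Replace the line below with the model's actual forward pass.
        return f"[toy answer to {prompt!r}, given {len(images)} image(s)]"
```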

The codebase is not designed to:

  1. Reproduce the exact accuracy numbers reported in the original papers of all third-party benchmarks. The reason is two-fold:
    1. VLMEvalKit uses generation-based evaluation for all VLMs (optionally with LLM-based answer extraction), while some benchmarks use different approaches (e.g., SEEDBench uses PPL-based evaluation). For those benchmarks, we compare both scores in the corresponding results. We encourage developers to support other evaluation paradigms in the codebase.
    2. By default, we use the same prompt template for all VLMs when evaluating on a benchmark, while some VLMs may have their own specific prompt templates (which may not yet be covered by the codebase). We encourage VLM developers to implement their own prompt templates in VLMEvalKit if they are not currently covered; that will help improve reproducibility.

🖊️ Citation

If you find this work helpful, please consider starring 🌟 this repo. Thanks for your support!


If you use VLMEvalKit in your research or wish to refer to the published open-source evaluation results, please use the following BibTeX entry, as well as the BibTeX entries corresponding to the specific VLMs / benchmarks you used.

```bib
@inproceedings{duan2024vlmevalkit,
  title={Vlmevalkit: An open-source toolkit for evaluating large multi-modality models},
  author={Duan, Haodong and Yang, Junming and Qiao, Yuxuan and Fang, Xinyu and Chen, Lin and Liu, Yuan and Dong, Xiaoyi and Zang, Yuhang and Zhang, Pan and Wang, Jiaqi and others},
  booktitle={Proceedings of the 32nd ACM International Conference on Multimedia},
  pages={11198--11201},
  year={2024}
}
```
<p align="right"><a href="#top">🔝Back to top</a></p>