Best Practice

We strongly recommend using VLMEvalKit for its useful features and ready-to-use LVLM implementations.

MMIU

<p align="left"> <a href="#🚀-quick-start"><b>Quick Start</b></a> | <a href="https://mmiu-bench.github.io/"><b>HomePage</b></a> | <a href="https://arxiv.org/abs/2408.02718"><b>arXiv</b></a> | <a href="https://huggingface.co/datasets/FanqingM/MMIU-Benchmark"><b>Dataset</b></a> | <a href="#🖊️-citation"><b>Citation</b></a> <br> </p>

This repository is the official implementation of MMIU.

MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models
Fanqing Meng<sup>*</sup>, Jin Wang<sup>*</sup>, Chuanhao Li<sup>*</sup>, Quanfeng Lu, Hao Tian, Jiaqi Liao, Xizhou Zhu, Jifeng Dai, Yu Qiao, Ping Luo, Kaipeng Zhang<sup>#</sup>, Wenqi Shao<sup>#</sup>
<sup>*</sup> MFQ, WJ, and LCH contributed equally.
<sup>#</sup> SWQ (shaowenqi@pjlab.org.cn) and ZKP (zhangkaipeng@pjlab.org.cn) are corresponding authors.

💡 News

Introduction

We present the Multimodal Multi-image Understanding (MMIU) benchmark, a comprehensive evaluation suite designed to assess LVLMs across a wide range of multi-image tasks. MMIU encompasses 7 types of multi-image relationships, 52 tasks, 77K images, and 11K meticulously curated multiple-choice questions, making it the most extensive benchmark of its kind.
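As a quick orientation, below is a minimal sketch of how the benchmark data might be loaded from the Hugging Face dataset linked above. The split name and field names used here (e.g. `question`, `options`, `answer`, `input_image_path`) are assumptions for illustration only; please check the dataset card for the actual schema.

```python
# Minimal sketch: pull the MMIU multiple-choice questions from the Hugging Face Hub.
# NOTE: the split and column names below are assumptions; consult the dataset card at
# https://huggingface.co/datasets/FanqingM/MMIU-Benchmark for the real schema.
from datasets import load_dataset

dataset = load_dataset("FanqingM/MMIU-Benchmark", split="test")

sample = dataset[0]
print(sample["question"])          # question text
print(sample["options"])           # candidate answers (A/B/C/D)
print(sample["answer"])            # ground-truth option letter
print(sample["input_image_path"])  # image(s) associated with this question
```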

Evaluation Results Overview

🏆 Leaderboard

| Rank | Model | Score |
| --- | --- | --- |
| 1 | GPT4o | 55.72 |
| 2 | Gemini | 53.41 |
| 3 | Claude3 | 53.38 |
| 4 | InternVL2 | 50.30 |
| 5 | Mantis | 45.58 |
| 6 | Gemini1.0 | 40.25 |
| 7 | internvl1.5-chat | 37.39 |
| 8 | Llava-interleave | 32.37 |
| 9 | idefics2_8b | 27.80 |
| 10 | glm-4v-9b | 27.02 |
| 11 | deepseek_vl_7b | 24.64 |
| 12 | XComposer2_1.8b | 23.46 |
| 13 | deepseek_vl_1.3b | 23.21 |
| 14 | flamingov2 | 22.26 |
| 15 | llava_next_vicuna_7b | 22.25 |
| 16 | XComposer2 | 21.91 |
| 17 | MiniCPM-Llama3-V-2_5 | 21.61 |
| 18 | llava_v1.5_7b | 19.19 |
| 19 | sharegpt4v_7b | 18.52 |
| 20 | sharecaptioner | 16.10 |
| 21 | qwen_chat | 15.92 |
| 22 | monkey-chat | 13.74 |
| 23 | idefics_9b_instruct | 12.84 |
| 24 | qwen_base | 5.16 |
| - | Frequency Guess | 31.5 |
| - | Random Guess | 27.4 |

🚀 Quick Start

We mainly use the VLMEvalKit framework for testing, with a few models tested separately. Specifically, for multi-image models, we include models built against the following transformers versions:

- transformers == 4.33.0
- transformers == 4.37.0
- transformers == 4.40.0

For single-image models, we include the following:

- transformers == 4.33.0
- transformers == 4.37.0
- transformers == 4.40.0

We use the VLMEvalKit framework for testing; you can refer to the code in VLMEvalKit/test_models.py. Additionally, for closed-source models, please replace the following part of the code, following the example here:

response = model.generate(tmp) # tmp = image_paths + [question]
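For context, here is a minimal sketch of how the call above might sit inside an evaluation loop. The loop structure, sample field names, and the exact `model.generate` signature are assumptions for illustration; adapt them to your data loading code and to the model wrapper you use in VLMEvalKit.

```python
# Sketch of an evaluation loop around the `model.generate(tmp)` call above.
# The sample fields ("image_paths", "question") are hypothetical placeholders.
def evaluate(model, samples):
    predictions = []
    for sample in samples:
        image_paths = sample["image_paths"]  # local paths to this question's images
        question = sample["question"]        # multiple-choice question text with options
        tmp = image_paths + [question]       # images first, then the text prompt
        response = model.generate(tmp)       # closed-source models: swap in an API call here
        predictions.append(response)
    return predictions
```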

For other open-source models, we have provided reference code for Mantis and InternVL1.5-chat. For LLaVA-Interleave, please refer to the original repository.
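Once predictions are collected, scoring reduces to plain multiple-choice accuracy against the ground-truth option letters. A minimal sketch, assuming answers are single letters A-D and that model replies contain the chosen letter (the regex below is a simplification of whatever answer extraction you actually use):

```python
import re

def score(predictions, answers):
    """Compute multiple-choice accuracy.

    predictions: raw model responses (strings)
    answers: ground-truth option letters, e.g. "A", "B", "C", "D"
    """
    correct = 0
    for pred, answer in zip(predictions, answers):
        match = re.search(r"\b([A-D])\b", pred)  # naive extraction of the chosen option
        choice = match.group(1) if match else None
        correct += int(choice == answer.strip().upper())
    return correct / len(answers) if answers else 0.0
```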

💐 Acknowledgement

We express our sincere gratitude to the following projects:

📧 Contact

If you have any questions, feel free to contact Fanqing Meng at mengfanqing33@gmail.com.

🖊️ Citation

If you find MMIU useful in your project or research, please use the following BibTeX entry to cite our paper. Thanks!

@article{meng2024mmiu,
  title={MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models},
  author={Meng, Fanqing and Wang, Jin and Li, Chuanhao and Lu, Quanfeng and Tian, Hao and Liao, Jiaqi and Zhu, Xizhou and Dai, Jifeng and Qiao, Yu and Luo, Ping and others},
  journal={arXiv preprint arXiv:2408.02718},
  year={2024}
}