Awesome Benchmarks of MLLMs: A Survey
<sup>1</sup>Tencent, <sup>2</sup>PKU, <sup>3</sup>NUS, <sup>4</sup>SEU, <sup>5</sup>NJU
⚡We will actively maintain this repository and incorporate new research as it emerges. If you have any questions, please contact swordli@tencent.com. We welcome collaboration on academic research and co-authoring papers.
📌 What is This Survey About?
<p align="center"> <img src="BMLLM_statistic.png" width="100%" height="100%"> </p>

Multimodal Large Language Models (MLLMs) are gaining increasing popularity in both academia and industry due to their remarkable performance in various applications such as visual question answering, visual perception, understanding, and reasoning. Over the past few years, significant efforts have been made to examine MLLMs from multiple perspectives. This paper presents a comprehensive review of 200+ benchmarks and evaluations for MLLMs, focusing on (1) perception and understanding, (2) cognition and reasoning, (3) specific domains, (4) key capabilities, and (5) other modalities. Finally, we discuss the limitations of current evaluation methods for MLLMs and explore promising future directions. Our key argument is that evaluation should be regarded as a crucial discipline to better support the development of MLLMs.
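For quick programmatic navigation, the minimal Python sketch below mirrors the survey's five-category taxonomy as a plain mapping. The category and benchmark names are taken from the lists in this repository; the data structure itself is only an illustrative assumption, not an artifact shipped with the survey.

```python
# Minimal sketch (an assumption, not an official artifact of this survey): the five
# top-level evaluation categories mapped to a few representative benchmarks listed below.
TAXONOMY = {
    "Perception & Understanding": ["MME", "MMBench", "SEED-Bench", "Q-Bench"],
    "Cognition & Reasoning": ["ScienceQA", "MathVista", "MMMU", "M3CoT"],
    "Specific Domains": ["ChartQA", "OCRBench", "NuScenes-QA", "GMAI-MMBench"],
    "Key Capabilities": ["POPE", "HallusionBench", "MM-SafetyBench", "MIA-Bench"],
    "Other Modalities": ["Video-MME", "AIR-Bench", "ScanQA", "MMT-Bench"],
}

def benchmarks_for(category: str) -> list[str]:
    """Return the example benchmarks recorded for a category (empty list if unknown)."""
    return TAXONOMY.get(category, [])

if __name__ == "__main__":
    # Print each category with its example benchmarks.
    for category, benchmarks in TAXONOMY.items():
        print(f"{category}: {', '.join(benchmarks)}")
```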
Summary of 200 MLLM Benchmarks
Perception & Understanding
Comprehensive Evaluation
- <mark>MDVP-Bench</mark> "Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want". Lin W, Wei X, An R, et al.. arXiv 2024. [Paper] [Github].
- <mark>ChEF</mark> "ChEF: A Comprehensive Evaluation Framework for Standardized Assessment of Multimodal Large Language Models". Shi Z, Wang Z, Fan H, et al.. arXiv 2023. [Paper] [Github].
- <mark>UniBench</mark> "UniBench: Visual Reasoning Requires Rethinking Vision-Language Beyond Scaling". Al-Tahan H, Garrido Q, Balestriero R, et al.. arXiv 2024. [Paper] [Github].
- <mark>MME</mark> "MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models". Fu C, Chen P, Shen Y, et al.. arXiv 2024. [Paper] [Github].
- <mark>MM-Vet</mark> "MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities". Yu W, Yang Z, Li L, et al.. arXiv 2023. [Paper] [Github].
- <mark>TouchStone</mark> "TouchStone: Evaluating Vision-Language Models by Language Models". Bai S, Yang S, Bai J, et al.. arXiv 2023. [Paper] [Github].
- <mark>MMBench</mark> "MMBench: Is Your Multi-modal Model an All-around Player?". Liu Y, Duan H, Zhang Y, et al.. arXiv 2024. [Paper] [Github].
- <mark>OwlEval</mark> "mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality". Ye Q, Xu H, Xu G, et al.. arXiv 2024. [Paper] [Github].
- <mark>Open-VQA</mark> "What Matters in Training a GPT4-Style Language Model with Multimodal Inputs?". Zeng Y, Zhang H, Zheng J, et al.. arXiv 2023. [Paper] [Github].
- <mark>SEED-Bench</mark> "SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension". Li B, Wang R, Wang G, et al.. arXiv 2023. [Paper] [Github].
- <mark>SEED-Bench-2</mark> "SEED-Bench-2: Benchmarking Multimodal Large Language Models". Li B, Ge Y, Ge Y, et al.. arXiv 2023. [Paper] [Github].
- <mark>LLaVA-Bench</mark> "Visual Instruction Tuning". Liu H, Li C, Wu Q, et al.. arXiv 2023. [Paper] [Github].
- <mark>LAMM</mark> "LAMM: Language-Assisted Multi-Modal Instruction-Tuning Dataset, Framework, and Benchmark". Yin Z, Wang J, Cao J, et al.. arXiv 2023. [Paper] [Github].
Fine-grained Perception
Visual Grounding and Object Detection
- <mark>CODE</mark> "Contextual Object Detection with Multimodal Large Language Models". Zang Y, Li W, Han J, et al.. arXiv 2023. [Paper] [Github].
- <mark>Flickr30k Entities</mark> "Flickr30k Entities: Collecting Region-to-Phrase Correspondences for Richer Image-to-Sentence Models". Plummer B. A, Wang L, Cervantes C. M, et al.. arXiv 2016. [Paper] [Github].
- <mark>Visual7W</mark> "Visual7W: Grounded Question Answering in Images". Zhu Y, Groth O, Bernstein M, et al.. CVPR 2016. [Paper] [Github].
- <mark>V*Bench</mark> "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs". Wu P, Xie S, et al.. arXiv 2023. [Paper] [Github].
- <mark>Grounding-Bench</mark> "LLaVA-Grounding: Grounded Visual Chat with Large Multimodal Models". Zhang H, Li H, Li F, et al.. arXiv 2023. [Paper] [Github].
Fine-grained Identification and Recognition
- <mark>GVT-Bench</mark> "What Makes for Good Visual Tokenizers for Large Language Models?". Wang G, Ge Y, Ding X, et al.. arXiv 2023. [Paper] [Github].
- <mark>V* Bench</mark> "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs". Wu P, Xie S.. arXiv 2023. [Paper] [Github].
- <mark>MMVP</mark> "Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs". Tong S, Liu Z, Zhai Y, et al.. arXiv 2024. [Paper] [Github].
- <mark>CV-Bench</mark> "Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs". Tong S, Brown E, Wu P, et al.. arXiv 2024. [Paper] [Github].
- <mark>P2GB</mark> "Plug-and-Play Grounding of Reasoning in Multimodal Large Language Models". Chen J, Liu Y, Li D, et al.. arXiv 2024. [Paper] [Github].
- <mark>Visual CoT</mark> "Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought Reasoning". Shao H, Qian S, Xiao H, et al.. arXiv 2024. [Paper] [Github].
- <mark>MagnifierBench</mark> "OtterHD: A High-Resolution Multi-modality Model". Li B, Zhang P, Yang J, et al.. arXiv 2023. [Paper] [Github].
- <mark>HR-Bench</mark> "Divide, Conquer and Combine: A Training-Free Framework for High-Resolution Image Perception in Multimodal Large Language Models". Wang W, Ding L, Zeng M, et al.. arXiv 2024. [Paper] [Github].
- <mark>SPARK</mark> "SPARK: Multi-Vision Sensor Perception and Reasoning Benchmark for Large-scale Vision-Language Models". Yu Y, Chung S, Lee B, et al.. arXiv 2024. [Paper] [Github].
Nuanced Vision-language Alignment
- <mark>Eqben</mark> "Equivariant Similarity for Vision-Language Foundation Models". Wang T, Lin K, Li L, Lin C, et al.. ICCV 2023. [Paper] [Github].
- <mark>SPEC</mark> "Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding". Peng W, Xie S, You Z, et al.. CVPR 2024. [Paper] [Github].
- <mark>VALSE</mark> "VALSE: A Task-Independent Benchmark for Vision and Language Models Centred on Linguistic Phenomena". Parcalabescu L, Cafagna M, Muradjan L, et al.. ACL 2022. [Paper] [Github].
- <mark>VL-Checklist</mark> "VL-CheckList: Evaluating Pre-trained Vision-Language Models with Objects, Attributes and Relations". Zhao T, Zhang T, Zhu M, et al.. arXiv 2023. [Paper] [Github].
- <mark>Winoground</mark> "Winoground: Probing Vision and Language Models for Visio-Linguistic Compositionality". Thrush T, Jiang R, Bartolo M, et al.. CVPR 2022. [Paper] [Github].
- <mark>ARO</mark> "When and why vision-language models behave like bags-of-words, and what to do about it?". Yuksekgonul M, Bianchi F, Kalluri P, et al.. ICLR 2023. [Paper] [Github].
Image Understanding
Multi-image Understanding
- <mark>Mementos</mark> "Mementos: A Comprehensive Benchmark for Multimodal Large Language Model Reasoning over Image Sequences". Wang X, Zhou Y, Liu X, et al.. arXiv 2024. [Paper] [Github].
- <mark>MileBench</mark> "MileBench: Benchmarking MLLMs in Long Context". Song D, Chen S, Chen G, et al.. arXiv 2024. [Paper] [Github].
- <mark>MuirBench</mark> "MuirBench: A Comprehensive Benchmark for Robust Multi-image Understanding". Wang F, Fu X, Huang J, et al.. arXiv 2024. [Paper] [Github].
- <mark>CompBench</mark> "CompBench: A Comparative Reasoning Benchmark for Multimodal LLMs". Kil J, Mai Z, Lee J, et al.. arXiv 2024. [Paper] [Github].
- <mark>MMIU</mark> "MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models". Meng F, Wang J, Li C, et al.. arXiv 2024. [Paper] [Github].
Implication Understanding
- <mark>II-Bench</mark> "II-Bench: An Image Implication Understanding Benchmark for Multimodal Large Language Models". Liu Z, Fang F, Feng X, et al.. arXiv 2024. [Paper] [Github].
- <mark>ImplicitAVE</mark> "ImplicitAVE: An Open-Source Dataset and Multimodal LLMs Benchmark for Implicit Attribute Value Extraction". Zou H, Samuel V, Zhou Y, et al.. ACL 2024. [Paper] [Github].
- <mark>FABA-Bench</mark> "Facial Affective Behavior Analysis with Instruction Tuning". Li Y, Dao A, Bao W, et al.. arXiv 2024. [Paper] [Github].
Image Quality and Aesthetics Perception
- <mark>AesBench</mark> "AesBench: An Expert Benchmark for Multimodal Large Language Models on Image Aesthetics Perception". Huang Y, Yuan Q, Sheng X, et al.. arXiv 2024. [Paper] [Github].
- <mark>UNIAA</mark> "UNIAA: A Unified Multi-modal Image Aesthetic Assessment Baseline and Benchmark". Zhou Z, Wang Q, Lin B, et al.. arXiv 2024. [Paper] [Github].
- <mark>DesignProbe</mark> "DesignProbe: A Graphic Design Benchmark for Multimodal Large Language Models". Lin J, Huang D, Zhao T, et al.. arXiv 2024. [Paper] [Github].
- <mark>Q-Bench</mark> "Q-Bench: A Benchmark for General-Purpose Foundation Models on Low-level Vision". Wu H, Zhang Z, Zhang E, et al.. arXiv 2024. [Paper] [Github].
- <mark>Q-Bench+</mark> "A Benchmark for Multi-modal Foundation Models on Low-level Vision: from Single Images to Pairs". Zhang Z, Wu H, Zhang E, et al.. TPAMI. [Paper] [Github].
Cognition & Reasoning
General Reasoning
Visual Relation
- <mark>MMRel</mark> "MMRel: A Relation Understanding Dataset and Benchmark in the MLLM Era". Nie J, Zhang G, An W, et al.. arXiv 2024. [Paper] [Github].
- <mark>What’sUp</mark> "What's "up" with vision-language models? Investigating their struggle with spatial reasoning". Kamath A, Hessel J, Chang K. EMNLP 2023. [Paper] [Github].
- <mark>GSR-BENCH</mark> "GSR-BENCH: A Benchmark for Grounded Spatial Reasoning Evaluation via Multimodal LLMs". Rajabi N, Kosecka J. arXiv 2024. [Paper] [Github].
- <mark>CRPE</mark> "The All-Seeing Project V2: Towards General Relation Comprehension of the Open World". Wang W, Ren Y, Luo H, et al.. ECCV 2024. [Paper] [Github].
- <mark>VSR</mark> "Visual Spatial Reasoning". Liu F, Emerson G, Collier N, et al.. arXiv 2022. [Paper] [Github].
- <mark>SpatialRGPT</mark> "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Model". Cheng A, Yin H, Fu Y, et al.. arXiv 2024. [Paper] [Github].
- <mark>MuCR</mark> "Multimodal Causal Reasoning Benchmark: Challenging Vision Large Language Models to Infer Causal Links Between Siamese Images". Li Z, Wang H, Liu D, et al.. arXiv 2024. [Paper] [Github].
Context-dependent Reasoning
- <mark>CODIS</mark> "CODIS: Benchmarking Context-Dependent Visual Comprehension for Multimodal Large Language Models". Luo F, Chen C, Wan Z, et al.. arXiv 2024. [Paper] [Github].
- <mark>CFMM</mark> "Eyes Can Deceive: Benchmarking Counterfactual Reasoning Abilities of Multi-modal Large Language Models". Li Y, Tian W, Jiao Y, et al.. arXiv 2024. [Paper] [Github].
- <mark>VL-ICLBench</mark> "VL-ICL Bench: The Devil in the Details of Benchmarking Multimodal In-Context Learning". Zong Y, Bohdal O, Hospedales T, et al.. arXiv 2023. [Paper] [Github].
CoT Reasoning
- <mark>ScienceQA</mark> "Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering". Lu P, Mishra S, Xia T, et al.. NeurIPS 2022. [Paper] [Github].
- <mark>VisualCoT</mark> "Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought Reasoning". Shao H, Qian S, Xiao H, et al.. arXiv 2024. [Paper] [Github].
- <mark>M3CoT</mark> "M3CoT: A Novel Benchmark for Multi-Domain Multi-step Multi-modal Chain-of-Thought". Chen Q, Qin L, Zhang J, et al.. ACL 2024. [Paper] [Github].
Vision-Indispensable Capabilities
- <mark>CLEVR</mark> "CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning". Johnson J, Hariharan B, Maaten L, et al.. arXiv 2016. [Paper] [Github].
- <mark>VQAv2</mark> "Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering". Goyal Y, Khot T, Summers-Stay D, et al.. CVPR 2017. [Paper] [Github].
- <mark>GQA</mark> "GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering". Hudson D, Manning C. CVPR 2019. [Paper] [Github].
- <mark>MMStar</mark> "Are We on the Right Way for Evaluating Large Vision-Language Models?". Chen L, Li J, Dong X, et al.. arXiv 2024. [Paper] [Github].
Knowledge-based Reasoning
Knowledge-based Visual Question Answering
- <mark>KB-VQA</mark> "Explicit Knowledge-based Reasoning for Visual Question Answering". Wang P, Wu Q, Shen C, et al.. arXiv 2015. [Paper] [Github].
- <mark>FVQA</mark> "FVQA: Fact-based Visual Question Answering". Wang P, Wu Q, Shen C, et al.. arXiv 2016. [Paper] [Github].
- <mark>OK-VQA</mark> "OK-VQA: A Visual Question Answering Benchmark Requiring External Knowledge". Marino K, Rastegari M, Farhadi A, et al.. CVPR 2019. [Paper] [Github].
- <mark>A-OKVQA</mark> "A-OKVQA: A Benchmark for Visual Question Answering using World Knowledge". Schwenk D, Khandelwal A, Clark C, et al.. arXiv 2022. [Paper] [Github].
- <mark>SOK-Bench</mark> "SOK-Bench: A Situated Video Reasoning Benchmark with Aligned Open-World Knowledge". Wang A, Wu B, Chen S, et al.. CVPR 2024. [Paper] [Github].
Knowledge Editing
- <mark>MMEdit</mark> "Can We Edit Multimodal Large Language Models?". Cheng S, Tian B, Liu Q, et al.. EMNLP 2023. [Paper] [Github].
- <mark>MIKE</mark> "MIKE: A New Benchmark for Fine-grained Multimodal Entity Knowledge Editing". Li J, Du M, Zhang C, et al.. arXiv 2024. [Paper] [Github].
- <mark>VLKEB</mark> "VLKEB: A Large Vision-Language Model Knowledge Editing Benchmark". Huang H, Zhong H, Yu T, et al.. arXiv 2024. [Paper] [Github].
- <mark>MC-MKE</mark> "MC-MKE: A Fine-Grained Multimodal Knowledge Editing Benchmark Emphasizing Modality Consistency". Zhang J, Zhang H, Yin X, et al.. arXiv 2024. [Paper] [Github].
Intelligence & Cognition
Intelligent Question Answering
- <mark>RAVEN</mark> "RAVEN: A Dataset for Relational and Analogical Visual rEasoNing". Zhang C, Gao F, Jia B, et al.. CVPR 2019. [Paper] [Github].
- <mark>MARVEL</mark> "MARVEL: Multidimensional Abstraction and Reasoning through Visual Evaluation and Learning". Jiang Y, Zhang J, Sun K, et al.. arXiv 2024. [Paper] [Github].
- <mark>VCog-Bench</mark> "What is the Visual Cognition Gap between Humans and Multimodal LLMs?". Cao X, Lai B, Ye W, et al.. arXiv 2024. [Paper] [Github].
- <mark>M3GIA</mark> "M3GIA: A Cognition Inspired Multilingual and Multimodal General Intelligence Ability Benchmark". Song W, Li Y, Xu J, et al.. arXiv 2024. [Paper] [Github].
Mathematical Question Answering
- <mark>MathVista</mark> "MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts". Lu P, Bansal H, Xia T, et al.. ICLR 2024. [Paper] [Github].
- <mark>MathVerse</mark> "MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?". Zhang R, Jiang D, Zhang Y, et al.. ECCV 2024. [Paper] [Github].
- <mark>NPHardEval4V</mark> "NPHardEval4V: A Dynamic Reasoning Benchmark of Multimodal Large Language Models". Fan L, Hua W, Li X, et al.. arXiv 2024. [Paper] [Github].
- <mark>Math-Vision</mark> "Measuring Multimodal Mathematical Reasoning with MATH-Vision Dataset". Wang K, Pan J, Shi W, et al.. arXiv 2024. [Paper] [Github].
- <mark>MATHCHECK-GEO</mark> "Is Your Model Really A Good Math Reasoner? Evaluating Mathematical Reasoning with Checklist". Zhou Z, Liu S, Ning M, et al.. arXiv 2024. [Paper] [Github].
- <mark>Geometry3K</mark> "Inter-GPS: Interpretable Geometry Problem Solving with Formal Language and Symbolic Reasoning". Lu P, Gong R, Jiang S, et al.. ACL 2021. [Paper] [Github].
Multidisciplinary Question Answering
- <mark>M3Exam</mark> "M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models". Zhang W, Aljunied S, Gao C, et al.. NeurIPS 2023. [Paper] [Github].
- <mark>CMMMU</mark> "CMMMU: A Chinese Massive Multi-discipline Multimodal Understanding Benchmark". Zhang G, Du X, Chen B, et al.. arXiv 2024. [Paper] [Github].
- <mark>ScienceQA</mark> "Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering". Lu P, Mishra S, Xia T, et al.. NeurIPS 2022. [Paper] [Github].
- <mark>MMMU</mark> "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI". Yue X, Ni Y, Zhang K, et al.. CVPR 2024. [Paper] [Github].
- <mark>CMMU</mark> "CMMU: A Benchmark for Chinese Multi-modal Multi-type Question Understanding and Reasoning". He Z, Wu X, Zhou P, et al.. arXiv 2024. [Paper] [Github].
- <mark>SceMQA</mark> "SceMQA: A Scientific College Entrance Level Multimodal Question Answering Benchmark". Liang Z, Guo K, Liu G, et al.. arXiv 2024. [Paper] [Github].
- <mark>MULTI</mark> "MULTI: Multimodal Understanding Leaderboard with Text and Images". Zhu Z, Xu Y, Chen L, et al.. arXiv 2024. [Paper] [Github].
Specific Domains
Text-rich VQA
Text-oriented Question Answering
- <mark>OCRBench</mark> "On the Hidden Mystery of OCR in Large Multimodal Models". Liu Y, Li Z, Huang M, et al.. arXiv 2024. [Paper] [Github].
- <mark>P2GB</mark> "Plug-and-Play Grounding of Reasoning in Multimodal Large Language Models". Chen J, Liu Y, Li D, et al.. arXiv 2024. [Paper] [Github].
- <mark>TextVQA</mark> "Towards VQA Models That Can Read". Singh A, Natarajan V, Shah M, et al.. CVPR 2019. [Paper] [Github].
- <mark>TextCaps</mark> "TextCaps: a Dataset for Image Captioning with Reading Comprehension". Sidorov O, Hu R, Rohrbach M, et al.. ECCV 2020. [Paper] [Github].
- <mark>SEED-Bench-2-Plus</mark> "SEED-Bench-2-Plus: Benchmarking Multimodal Large Language Models with Text-Rich Visual Comprehension". Bohao Li, Yuying Ge, Yi Chen, et al.. arXiv 2024. [Paper] [Github].
Document-oriented Question Answering
- <mark>SPDocVQA</mark> "Document Visual Question Answering Challenge 2020". Minesh Mathew, Ruben Tito, Dimosthenis Karatzas, et al.. DAS 2020. [Paper] [Github].
- <mark>MPDocVQA</mark> "Hierarchical multimodal transformers for Multi-Page DocVQA". Rubèn Tito, Dimosthenis Karatzas, Ernest Valveny. arXiv 2022. [Paper] [Github].
- <mark>InfographicVQA</mark> "InfographicVQA". Minesh Mathew, Viraj Bagal, Rubèn Pérez Tito, et al.. arXiv 2021. [Paper] [Github].
- <mark>DUDE</mark> "Document Understanding Dataset and Evaluation (DUDE)". Jordy Van Landeghem, Rubén Tito, Łukasz Borchmann, et al.. ICCV 2023. [Paper] [Github].
- <mark>MM-NIAH</mark> "Needle In A Multimodal Haystack". Weiyun Wang, Shuibo Zhang, Yiming Ren, et al.. arXiv 2024. [Paper] [Github].
Chart-oriented Question Answering
- <mark>ChartQA</mark> "ChartQA: A Benchmark for Question Answering about Charts with Visual and Logical Reasoning". Ahmed Masry, Do Xuan Long, Jia Qing Tan, et al.. ACL 2022. [Paper] [Github].
- <mark>ChartX</mark> "ChartX and ChartVLM: A Versatile Benchmark and Foundation Model for Complicated Chart Reasoning". Renqiu Xia, Bo Zhang, Hancheng Ye, et al.. arXiv 2024. [Paper] [Github].
- <mark>ChartBench</mark> "ChartBench: A Benchmark for Complex Visual Reasoning in Charts". Zhengzhuo Xu, Sinan Du, Yiyan Qi, et al.. arXiv 2023. [Paper] [Github].
- <mark>SciGraphQA</mark> "SciGraphQA: A Large-Scale Synthetic Multi-Turn Question-Answering Dataset for Scientific Graphs". Shengzhi Li, Nima Tajbakhsh. arXiv 2023. [Paper] [Github].
- <mark>MMC-Benchmark</mark> "MMC: Advancing Multimodal Chart Understanding with Large-scale Instruction Tuning". Fuxiao Liu, Xiaoyang Wang, Wenlin Yao, et al.. NAACL 2024. [Paper] [Github].
- <mark>CharXiv</mark> "CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs". Zirui Wang, Mengzhou Xia, Luxi He, et al.. arXiv 2024. [Paper] [Github].
- <mark>CHOPINLLM</mark> "On Pre-training of Multimodal Language Models Customized for Chart Understanding". Wan-Cyuan Fan, Yen-Chun Chen, Mengchen Liu, et al.. arXiv 2024. [Paper] [Github].
- <mark>SciFIBench</mark> "SciFIBench: Benchmarking Large Multimodal Models for Scientific Figure Interpretation". Jonathan Roberts, Kai Han, Neil Houlsby, et al.. arXiv 2024. [Paper] [Github].
HTML-oriented Question Answering
- <mark>Web2Code</mark> "Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs". Sukmin Yun, Haokun Lin, Rusiru Thushara, et al.. arXiv 2024. [Paper] [Github].
- <mark>VisualWebBench</mark> "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?". Junpeng Liu, Yifan Song, Bill Yuchen Lin, et al.. arXiv 2024. [Paper] [Github].
- <mark>Plot2Code</mark> "Plot2Code: A Comprehensive Benchmark for Evaluating Multi-modal Large Language Models in Code Generation from Scientific Plots". Chengyue Wu, Yixiao Ge, Qiushan Guo, et al.. arXiv 2024. [Paper] [Github].
Decision-making Agents
Embodied Decision-making
- <mark>VisualAgentBench</mark> "VisualAgentBench: Towards Large Multimodal Models as Visual Foundation Agents". Xiao Liu, Tianjie Zhang, Yu Gu, et al.. arXiv 2024. [Paper] [Github].
- <mark>EgoPlan-Bench</mark> "EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning". Yi Chen, Yuying Ge, Yixiao Ge, et al.. arXiv 2023. [Paper] [Github].
- <mark>PCA-EVAL</mark> "Towards End-to-End Embodied Decision Making via Multi-modal Large Language Model: Explorations with GPT4-Vision and Beyond". Liang Chen, Yichi Zhang, Shuhuai Ren, et al.. arXiv 2023. [Paper] [Github].
- <mark>OpenEQA</mark> "OpenEQA: Embodied Question Answering in the Era of Foundation Models". Majumdar, Arjun and Ajay, Anurag and Zhang, et al.. CVPR 2024. [Paper] [Github].
- <mark>OSWorld</mark> "OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments". Tianbao Xie, Danyang Zhang, Jixuan Chen, et al.. NeurIPS 2024. [Paper] [Github].
Mobile Agency
- <mark>Mobile-Eval</mark> "Mobile-Agent: Autonomous Multi-Modal Mobile Device Agent with Visual Perception". Junyang Wang, Haiyang Xu, Jiabo Ye, et al.. ICLR 2024. [Paper] [Github].
- <mark>Ferret-UI</mark> "Ferret-UI: Grounded Mobile UI Understanding with Multimodal LLMs". You K, Zhang H, Schoop E, et al.. arXiv 2024. [Paper] [Github].
- <mark>CRAB</mark> "CRAB: Cross-environment Agent Benchmark for Multimodal Language Model Agents". Tianqi Xu, Linyao Chen, Dai-Jie Wu, et al.. arXiv 2024. [Paper] [Github].
Diverse Cultures & Languages
- <mark>CMMU</mark> "CMMU: A Benchmark for Chinese Multi-modal Multi-type Question Understanding and Reasoning". Zheqi He, Xinya Wu, Pengfei Zhou, et al.. arXiv 2024. [Paper] [Github].
- <mark>Henna</mark> "Peacock: A Family of Arabic Multimodal Large Language Models and Benchmarks". Fakhraddin Alwajih, El Moatez Billah Nagoudi, Gagan Bhatia, et al.. arXiv 2024. [Paper] [Github].
- <mark>LaVy-Bench</mark> "LaVy: Vietnamese Multimodal Large Language Model". Chi Tran, Huong Le Thanh. arXiv 2024. [Paper] [Github].
- <mark>MTVQA</mark> "MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering". Jingqun Tang, Qi Liu, Yongjie Ye, et al.. arXiv 2024. [Paper] [Github].
- <mark>CVQA</mark> "CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark". David Romero, Chenyang Lyu, Haryo Akbarianto Wibowo, et al.. arXiv 2024. [Paper] [Github].
- <mark>CMMMU</mark> "CMMMU: A Chinese Massive Multi-discipline Multimodal Understanding Benchmark". Ge Zhang, Xinrun Du, Bei Chen, et al.. arXiv 2024. [Paper] [Github].
- <mark>MULTI</mark> "MULTI: Multimodal Understanding Leaderboard with Text and Images". Zichen Zhu, Yang Xu, Lu Chen, et al.. arXiv 2024. [Paper] [Github].
Other Applications
Geography and Remote Sensing
- <mark>LHRS-Bench</mark> "LHRS-Bot: Empowering Remote Sensing with VGI-Enhanced Large Multimodal Language Model". Dilxat Muhtar, Zhenshi Li, Feng Gu, et al.. arXiv 2024. [Paper] [Github].
- <mark>ChartingNewTerritories</mark> "Charting New Territories: Exploring the Geographic and Geospatial Capabilities of Multimodal LLMs". Jonathan Roberts, Timo Lüddecke, Rehan Sheikh, et al.. arXiv 2023. [Paper] [Github].
Medicine
- <mark>GMAI-MMBench</mark> "GMAI-MMBench: A Comprehensive Multimodal Evaluation Benchmark Towards General Medical AI". Pengcheng Chen, Jin Ye, Guoan Wang, et al.. arXiv 2024. [Paper] [Github].
- <mark>M3D</mark> "M3D: Advancing 3D Medical Image Analysis with Multi-modal Large Language Models". Bai F, Du Y, Huang T, et al.. arXiv 2024. [Paper] [Github].
- <mark>Asclepius</mark> "Asclepius: A Spectrum Evaluation Benchmark for Medical Multi-Modal Large Language Models". Wenxuan Wang, Yihang Su, Jingyuan Huan, et al.. arXiv 2024. [Paper] [Github].
- <mark>MultiMed</mark> "MultiMed: Massively Multimodal and Multitask Medical Understanding". Shentong Mo, Paul Pu Liang. arXiv 2024. [Paper] [Github].
Society
- <mark>VizWiz</mark> "VizWiz Grand Challenge: Answering Visual Questions from Blind People". Danna Gurari, Qing Li, Abigale J. Stangl, et al.. arXiv 2018. [Paper] [Github].
- <mark>MM-Soc</mark> "MM-Soc: Benchmarking Multimodal Large Language Models in Social Media Platforms". Yiqiao Jin, Minje Choi, Gaurav Verma, et al.. ACL 2024. [Paper] [Github].
- <mark>TransportationGames</mark> "TransportationGames: Benchmarking Transportation Knowledge of (Multimodal) Large Language Models". Xue Zhang, Xiangyu Shi, Xinyue Lou, et al.. arXiv 2024. [Paper] [Github].
Industry
- <mark>MMRo</mark> "MMRo: Are Multimodal LLMs Eligible as the Brain for In-Home Robotics?". Jinming Li, Yichen Zhu, Zhiyuan Xu, et al.. arXiv 2024. [Paper] [Github].
- <mark>DesignQA</mark> "DesignQA: A Multimodal Benchmark for Evaluating Large Language Models' Understanding of Engineering Documentation". Anna C. Doris, Daniele Grandi, Ryan Tomich, et al.. arXiv 2024. [Paper] [Github].
Autonomous Driving
- <mark>NuScenes-QA</mark> "NuScenes-QA: A Multi-modal Visual Question Answering Benchmark for Autonomous Driving Scenario". Tianwen Qian, Jingjing Chen, Linhai Zhuo, et al.. AAAI 2024. [Paper] [Github].
- <mark>DriveLM-DATA</mark> "DriveLM: Driving with Graph Visual Question Answering". Chonghao Sima, Katrin Renz, Kashyap Chitta, et al.. ECCV 2024. [Paper] [Github].
Key Capabilities
Conversation Abilities
Long-context
- <mark>Mile-Bench</mark> "MileBench: Benchmarking MLLMs in Long Context". Song D, Chen S, Chen G H, et al.. arXiv 2024. [Paper] [Github].
- <mark>MMNeedle</mark> "Multimodal Needle in a Haystack: Benchmarking Long-Context Capability of Multimodal Large Language Models". Wang H, Shi H, Tan S, et al.. arXiv 2024. [Paper] [Github].
- <mark>MLVU</mark> "MLVU: A Comprehensive Benchmark for Multi-Task Long Video Understanding". Zhou J, Shu Y, Zhao B, et al.. arXiv 2024. [Paper] [Github].
Instruction Following
- <mark>CoIN</mark> "CoIN: A Benchmark of Continual Instruction tuNing for Multimodel Large Language Model". Chen C, Zhu J, Luo X, et al.. arXiv 2024. [Paper] [Github].
- <mark>MIA-Bench</mark> "MIA-Bench: Towards Better Instruction Following Evaluation of Multimodal LLMs". Qian Y, Ye H, Fauconnier J P, et al.. arXiv 2024. [Paper] [Github].
- <mark>DEMON</mark> "Fine-tuning Multimodal LLMs to Follow Zero-shot Demonstrative Instructions". Li J, Pan K, Ge Z, et al.. ICLR 2023. [Paper] [Github].
- <mark>VisIT-Bench</mark> "VisIT-Bench: A Benchmark for Vision-Language Instruction Following Inspired by Real-World Use". Bitton Y, Bansal H, Hessel J, et al.. NeurIPS 2023. [Paper] [Github].
Hallucination
- <mark>POPE</mark> "Evaluating Object Hallucination in Large Vision-Language Models". Li Y, Du Y, Zhou K, et al.. EMNLP 2023. [Paper] [Github].
- <mark>GAVIE</mark> "Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning". Liu F, Lin K, Li L, et al.. ICLR 2023. [Paper] [Github].
- <mark>HaELM</mark> "Evaluation and Analysis of Hallucination in Large Vision-Language Models". Wang J, Zhou Y, Xu G, et al.. arXiv 2023. [Paper] [Github].
- <mark>M-HalDetect</mark> "Detecting and Preventing Hallucinations in Large Vision Language Models". Gunjal A, Yin J, Bas E.. AAAI 2024. [Paper] [Github].
- <mark>Bingo</mark> "Holistic Analysis of Hallucination in GPT-4V(ision): Bias and Interference Challenges". Cui C, Zhou Y, Yang X, et al.. arXiv 2023. [Paper] [Github].
- <mark>HallusionBench</mark> "HallusionBench: An Advanced Diagnostic Suite for Entangled Language Hallucination and Visual Illusion in Large Vision-Language Models". Guan T, Liu F, Wu X, et al.. CVPR 2024. [Paper] [Github].
- <mark>VHTest</mark> "Visual Hallucinations of Multi-modal Large Language Models". Huang W, Liu H, Guo M, et al.. arXiv 2024. [Paper] [Github].
- <mark>CorrelationQA</mark> "The Instinctive Bias: Spurious Images lead to Hallucination in MLLMs". Han T, Lian Q, Pan R, et al.. arXiv 2024. [Paper] [Github].
- <mark>CHAIR</mark> "Object Hallucination in Image Captioning". Rohrbach A, Hendricks L A, Burns K, et al.. EMNLP 2018. [Paper] [Github].
- <mark>MHaluBench</mark> "Unified Hallucination Detection for Multimodal Large Language Models". Chen X, Wang C, Xue Y, et al.. arXiv 2024. [Paper] [Github].
- <mark>VideoHallucer</mark> "VideoHallucer: Evaluating Intrinsic and Extrinsic Hallucinations in Large Video-Language Models". Wang Y, Wang Y, Zhao D, et al.. arXiv 2024. [Paper] [Github].
- <mark>MMHAL-BENCH</mark> "Aligning Large Multimodal Models with Factually Augmented RLHF". Sun Z, Shen S, Cao S, et al.. arXiv 2023. [Paper] [Github].
- <mark>AMBER</mark> "AMBER: An LLM-free Multi-dimensional Benchmark for MLLMs Hallucination Evaluation". Wang J, Wang Y, Xu G, et al.. arXiv 2023. [Paper] [Github].
- <mark>MMECeption</mark> "GenCeption: Evaluate Multimodal LLMs with Unlabeled Unimodal Data". Cao L, Buchner V, Senane Z, et al.. arXiv 2024. [Paper] [Github].
Trustworthiness
Robustness
- <mark>MAD-Bench</mark> "How Easy is It to Fool Your Multimodal LLMs? An Empirical Analysis on Deceptive Prompts". Qian Y, Zhang H, Yang Y, et al.. arXiv 2024. [Paper] [Github].
- <mark>MMR</mark> "Seeing Clearly, Answering Incorrectly: A Multimodal Robustness Benchmark for Evaluating MLLMs on Leading Questions". Liu Y, Liang Z, Wang Y, et al.. arXiv 2024. [Paper] [Github].
- <mark>MM-SpuBench</mark> "MM-SpuBench: Towards Better Understanding of Spurious Biases in Multimodal LLMs". Ye W, Zheng G, Ma Y, et al.. arXiv 2024. [Paper] [Github].
- <mark>MM-SAP</mark> "MM-SAP: A Comprehensive Benchmark for Assessing Self-Awareness of Multimodal Large Language Models in Perception". Wang Y, Liao Y, Liu H, et al.. arXiv 2024. [Paper] [Github].
- <mark>BenchLMM</mark> "BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models". Cai R, Song Z, Guan D, et al.. arXiv 2023. [Paper] [Github].
- <mark>VQAv2-IDK</mark> "Visually Dehallucinative Instruction Generation: Know What You Don’t Know". Cha S, Lee J, Lee Y, et al.. ICASSP 2024. [Paper] [Github].
Safety
- <mark>MMUBench</mark> "Single Image Unlearning: Efficient Machine Unlearning in Multimodal Large Language Models". Li J, Wei Q, Zhang C, et al.. arXiv 2024. [Paper] [Github].
- <mark>JailBreakV-28K</mark> "JailBreakV-28K: A Benchmark for Assessing the Robustness of MultiModal Large Language Models against Jailbreak Attacks". Luo W, Ma S, Liu X, et al.. arXiv 2024. [Paper] [Github].
- <mark>MultiTrust</mark> "Benchmarking Trustworthiness of Multimodal Large Language Models: A Comprehensive Study". Zhang Y, Huang Y, Sun Y, et al.. arXiv 2024. [Paper] [Github].
- <mark>MM-SafetyBench</mark> "MM-SafetyBench: A Benchmark for Safety Evaluation of Multimodal Large Language Models". Liu X, Zhu Y, Gu J, et al.. ECCV 2024. [Paper] [Github].
- <mark>SHIELD</mark> "SHIELD: An Evaluation Benchmark for Face Spoofing and Forgery Detection with Multimodal Large Language Models". Shi Y, Gao Y, Lai Y, et al.. arXiv 2024. [Paper] [Github].
- <mark>RTVLM</mark> "Red Teaming Visual Language Models". Li M, Li L, Yin Y, et al.. arXiv 2024. [Paper] [Github].
Other Modalities
Videos
Temporal Perception
- <mark>MVBench</mark> "MVBench: A Comprehensive Multi-modal Video Understanding Benchmark". Li K, Wang Y, He Y, et al.. CVPR 2024. [Paper] [Github].
- <mark>TimeIT</mark> "Timechat: A time-sensitive multimodal large language model for long video understanding". Ren S, Yao L, Li S, et al.. CVPR 2024. [Paper] [Github].
- <mark>ViLMA</mark> "ViLMA: A Zero-Shot Benchmark for Linguistic and Temporal Grounding in Video-Language Models". Kesen I, Pedrotti A, Dogan M, et al.. ICLR 2024. [Paper] [Github].
- <mark>VITATECS</mark> "VITATECS: A Diagnostic Dataset for Temporal Concept Understanding of Video-Language Models". Li S, Li L, Ren S, et al.. arXiv 2023. [Paper] [Github].
- <mark>TempCompass</mark> "TempCompass: Do Video LLMs Really Understand Videos?". Liu Y, Li S, Liu Y, et al.. arXiv 2024. [Paper] [Github].
- <mark>OSCaR</mark> "OSCaR: Object State Captioning and State Change Representation". Nguyen N, Bi J, Vosoughi A, et al.. arXiv 2024. [Paper] [Github].
- <mark>ADLMCQ</mark> "LLAVIDAL: Benchmarking Large Language Vision Models for Daily Activities of Living". Chakraborty R, Sinha A, Reilly D, et al.. arXiv 2024. [Paper] [Github].
- <mark>Perception Test</mark> "Perception Test: A Diagnostic Benchmark for Multimodal Video Models". Patraucean V, Smaira L, Gupta A, et al.. NeurIPS 2024. [Paper] [Github].
Long Video Understanding
- <mark>MovieChat-1k</mark> "MovieChat: From Dense Token to Sparse Memory for Long Video Understanding". Song E, Chai W, Wang G, et al.. CVPR 2024. [Paper] [Github].
- <mark>EgoSchema</mark> "EgoSchema: A Diagnostic Benchmark for Very Long-form Video Language Understanding". Mangalam K, Akshulakov R, Malik J.. NeurIPS 2023. [Paper] [Github].
- <mark>Event-Bench</mark> "Towards Event-oriented Long Video Understanding". arXiv 2024. [Paper] [Github].
- <mark>MLVU</mark> "MLVU: A Comprehensive Benchmark for Multi-Task Long Video Understanding". Zhou J, Shu Y, Zhao B, et al.. arXiv 2024. [Paper] [Github].
Comprehensive Evaluation
- <mark>Video-Bench</mark> "Video-Bench: A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models". Ning M, Zhu B, Xie Y, et al.. arXiv 2023. [Paper] [Github].
- <mark>MMBench-Video</mark> "MMBench-Video: A Long-Form Multi-Shot Benchmark for Holistic Video Understanding". Fang X, Mao K, Duan H, et al.. arXiv 2024. [Paper] [Github].
- <mark>Video-MME</mark> "Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis". Fu C, Dai Y, Luo Y, et al.. arXiv 2024. [Paper] [Github].
- <mark>AutoEval-Video</mark> "AutoEval-Video: An Automatic Benchmark for Assessing Large Vision Language Models in Open-Ended Video Question Answering". Chen X, Lin Y, Zhang Y, et al.. arXiv 2023. [Paper] [Github].
- <mark>MMWorld</mark> "MMWorld: Towards Multi-discipline Multi-faceted World Model Evaluation in Videos". He X, Feng W, Zheng K, et al.. arXiv 2024. [Paper] [Github].
- <mark>WorldNet</mark> "WorldGPT: Empowering LLM as Multimodal World Model". Ge Z, Huang H, Zhou M, et al.. arXiv 2024. [Paper] [Github].
Audio
- <mark>Dynamic-SUPERB</mark> "Dynamic-superb: Towards a dynamic, collaborative, and comprehensive instruction-tuning benchmark for speech". Huang C, Lu K H, Wang S H, et al.. ICASSP 2024. [Paper] [Github].
- <mark>MuChoMusic</mark> "MuChoMusic: Evaluating Music Understanding in Multimodal Audio-Language Models". Weck B, Manco I, Benetos E, et al.. arXiv 2024. [Paper] [Github].
- <mark>AIR-Bench</mark> "AIR-Bench: Benchmarking Large Audio-Language Models via Generative Comprehension". Yang Q, Xu J, Liu W, et al.. arXiv 2024. [Paper] [Github].
3D Points
- <mark>ScanQA</mark> "ScanQA: 3D Question Answering for Spatial Scene Understanding". Azuma D, Miyanishi T, Kurita S, et al.. CVPR 2022. [Paper] [Github].
- <mark>ScanReason</mark> "ScanReason: Empowering 3D Visual Grounding with Reasoning Capabilities". Zhu C, Wang T, Zhang W, et al.. arXiv 2024. [Paper] [Github].
- <mark>LAMM</mark> "LAMM: Language-Assisted Multi-Modal Instruction-Tuning Dataset, Framework, and Benchmark". Yin Z, Wang J, Cao J, et al.. NeurIPS 2024. [Paper] [Github].
- <mark>SpatialRGPT</mark> "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Model". Cheng A C, Yin H, Fu Y, et al.. arXiv 2024. [Paper] [Github].
- <mark>M3DBench</mark> "M3DBench: Let’s Instruct Large Models with Multi-modal 3D Prompts". Li M, Chen X, Zhang C, et al.. arXiv 2023. [Paper] [Github].
Omni-modal
- <mark>MCUB</mark> "Model Composition for Multimodal Large Language Models". Chen C, Du Y, Fang Z, et al.. arXiv 2024. [Paper] [Github].
- <mark>AVQA</mark> "AVQA: A Dataset for Audio-Visual Question Answering on Videos". Yang P, Wang X, Duan X, et al.. MM 2022. [Paper] [Github].
- <mark>MusicAVQA</mark> "Learning to Answer Questions in Dynamic Audio-Visual Scenarios". Li G, Wei Y, Tian Y, et al.. CVPR 2022. [Paper] [Github].
- <mark>MMT-Bench</mark> "MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI". Ying K, Meng F, Wang J, et al.. arXiv 2024. [Paper] [Github].