Awesome-Multimodal-Large-Language-Models
Our MLLM works
🔥🔥🔥 A Survey on Multimodal Large Language Models
Project Page [This Page] | Paper
The first comprehensive survey for Multimodal Large Language Models (MLLMs). :sparkles:
Welcome to add WeChat ID (wmd_ustc) to join our MLLM communication group! :star2:
🔥🔥🔥 VITA: Towards Open-Source Interactive Omni Multimodal LLM
<p align="center">
<img src="./images/vita.png" width="80%" height="80%">
</p>
<font size=7><div align='center' > [Project Page] [arXiv Paper] [GitHub] </div></font>
[2024.08.12] We are announcing VITA, the first-ever open-source Multimodal LLM that can process Video, Image, Text, and Audio, while also offering an advanced multimodal interactive experience.
<b>Omni Multimodal Understanding</b>. VITA demonstrates robust foundational capabilities in multilingual, vision, and audio understanding, as evidenced by its strong performance across a range of both unimodal and multimodal benchmarks. ✨
<b>Non-awakening Interaction</b>. VITA can be activated by user audio questions in the environment and respond to them without the need for a wake-up word or button. ✨
<b>Audio Interrupt Interaction</b>. VITA can simultaneously track and filter external queries in real time, allowing users to interrupt the model's generation at any time with new questions, to which VITA responds accordingly. ✨
🔥🔥🔥 Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis
<p align="center">
<img src="./images/videomme.jpg" width="80%" height="80%">
</p>
<font size=7><div align='center' > [Project Page] [arXiv Paper] [Dataset] [Leaderboard] </div></font>
[2024.06.03] We are very proud to launch Video-MME, the first-ever comprehensive evaluation benchmark of MLLMs in Video Analysis!
It applies to both <b>image MLLMs</b>, i.e., those generalizing to multiple images, and <b>video MLLMs</b>. Our leaderboard involves SOTA models like Gemini 1.5 Pro, GPT-4o, GPT-4V, LLaVA-NeXT-Video, InternVL-Chat-V1.5, and Qwen-VL-Max.
It includes <b>short- (< 2min)</b>, <b>medium- (4min~15min)</b>, and <b>long-term (30min~60min)</b> videos, ranging from <b>11 seconds to 1 hour</b>. ✨
<b>All data are newly collected and annotated by humans, not from any existing video dataset</b>. ✨
🔥🔥🔥 MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Project Page [Leaderboards] | Paper | :black_nib: Citation
A comprehensive evaluation benchmark for MLLMs. Now the leaderboards include 50+ advanced models, such as Qwen-VL-Max, Gemini Pro, and GPT-4V. :sparkles:
If you want to add your model to our leaderboards, please feel free to email bradyfu24@gmail.com. We will update the leaderboards promptly. :sparkles:
<details><summary>Download MME :star2::star2: </summary>
The benchmark dataset is collected by Xiamen University for academic research only. You can email yongdongluo@stu.xmu.edu.cn to obtain the dataset, subject to the following requirement.
Requirement: A real-name system is encouraged for better academic communication. Your email suffix needs to match your affiliation, such as xx@stu.xmu.edu.cn for Xiamen University. Otherwise, you need to explain why. Please include the information below when sending your application email.
Name: (tell us who you are.)
Affiliation: (the name/URL of your university or company)
Job Title: (e.g., professor, PhD student, or researcher)
Email: (your email address)
How to use: (only for non-commercial use)
</details>
<br> If you find our projects helpful to your research, please consider citing: <br>
@article{fu2023mme,
title={MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models},
author={Fu, Chaoyou and Chen, Peixian and Shen, Yunhang and Qin, Yulei and Zhang, Mengdan and Lin, Xu and Yang, Jinrui and Zheng, Xiawu and Li, Ke and Sun, Xing and others},
journal={arXiv preprint arXiv:2306.13394},
year={2023}
}
@article{fu2024vita,
title={VITA: Towards Open-Source Interactive Omni Multimodal LLM},
author={Fu, Chaoyou and Lin, Haojia and Long, Zuwei and Shen, Yunhang and Zhao, Meng and Zhang, Yifan and Wang, Xiong and Yin, Di and Ma, Long and Zheng, Xiawu and He, Ran and Ji, Rongrong and Wu, Yunsheng and Shan, Caifeng and Sun, Xing},
journal={arXiv preprint arXiv:2408.05211},
year={2024}
}
@article{fu2024video,
title={Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis},
author={Fu, Chaoyou and Dai, Yuhan and Luo, Yongdong and Li, Lei and Ren, Shuhuai and Zhang, Renrui and Wang, Zihan and Zhou, Chenyu and Shen, Yunhang and Zhang, Mengdan and others},
journal={arXiv preprint arXiv:2405.21075},
year={2024}
}
@article{yin2023survey,
title={A survey on multimodal large language models},
author={Yin, Shukang and Fu, Chaoyou and Zhao, Sirui and Li, Ke and Sun, Xing and Xu, Tong and Chen, Enhong},
journal={arXiv preprint arXiv:2306.13549},
year={2023}
}
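As a small usage sketch (the `references.bib` file name below is only an assumption for illustration), these entries can be saved to a BibTeX file and referenced by their keys in a LaTeX manuscript:

```latex
% Minimal citation sketch; assumes the BibTeX entries above are saved in references.bib.
\documentclass{article}
\begin{document}
MLLMs have been surveyed comprehensively~\cite{yin2023survey} and evaluated on
image~\cite{fu2023mme} and video~\cite{fu2024video} benchmarks; VITA~\cite{fu2024vita}
extends them toward omni-modal interaction.

\bibliographystyle{plain}
\bibliography{references}
\end{document}
```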
<font size=5><center><b> Table of Contents </b> </center></font>
Awesome Papers
Multimodal Instruction Tuning
Title | Venue | Date | Code | Demo |
---|---|---|---|---|
<br> mPLUG-Owl3: Towards Long Image-Sequence Understanding in Multi-Modal Large Language Models <br> | arXiv | 2024-08-09 | Github | - |
<br> VITA: Towards Open-Source Interactive Omni Multimodal LLM <br> | arXiv | 2024-08-09 | Github | - |
<br> LLaVA-OneVision: Easy Visual Task Transfer <br> | arXiv | 2024-08-06 | Github | Demo |
<br> MiniCPM-V: A GPT-4V Level MLLM on Your Phone <br> | arXiv | 2024-08-03 | Github | Demo |
VILA^2: VILA Augmented VILA | arXiv | 2024-07-24 | - | - |
SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models | arXiv | 2024-07-22 | - | - |
EVLM: An Efficient Vision-Language Model for Visual Understanding | arXiv | 2024-07-19 | - | - |
<br> InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output <br> | arXiv | 2024-07-03 | Github | Demo |
<br> OMG-LLaVA: Bridging Image-level, Object-level, Pixel-level Reasoning and Understanding <br> | arXiv | 2024-06-27 | Github | Local Demo |
<br> Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs <br> | arXiv | 2024-06-24 | Github | Local Demo |
<br> Long Context Transfer from Language to Vision <br> | arXiv | 2024-06-24 | Github | Local Demo |
<br> Unveiling Encoder-Free Vision-Language Models <br> | arXiv | 2024-06-17 | Github | Local Demo |
<br> Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models <br> | arXiv | 2024-06-12 | Github | - |
<br> VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs <br> | arXiv | 2024-06-11 | Github | Local Demo |
<br> Parrot: Multilingual Visual Instruction Tuning <br> | arXiv | 2024-06-04 | Github | - |
<br> Ovis: Structural Embedding Alignment for Multimodal Large Language Model <br> | arXiv | 2024-05-31 | Github | - |
<br> Matryoshka Query Transformer for Large Vision-Language Models <br> | arXiv | 2024-05-29 | Github | Demo |
<br> ConvLLaVA: Hierarchical Backbones as Visual Encoder for Large Multimodal Models <br> | arXiv | 2024-05-24 | Github | - |
<br> Meteor: Mamba-based Traversal of Rationale for Large Language and Vision Models <br> | arXiv | 2024-05-24 | Github | Demo |
<br> Libra: Building Decoupled Vision System on Large Language Models <br> | ICML | 2024-05-16 | Github | Local Demo |
<br> CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts <br> | arXiv | 2024-05-09 | Github | Local Demo |
<br> How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites <br> | arXiv | 2024-04-25 | Github | Demo |
<br> Graphic Design with Large Multimodal Model <br> | arXiv | 2024-04-22 | Github | - |
<br> InternLM-XComposer2-4KHD: A Pioneering Large Vision-Language Model Handling Resolutions from 336 Pixels to 4K HD <br> | arXiv | 2024-04-09 | Github | Demo |
Ferret-UI: Grounded Mobile UI Understanding with Multimodal LLMs | arXiv | 2024-04-08 | - | - |
<br> MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding <br> | CVPR | 2024-04-08 | Github | - |
TOMGPT: Reliable Text-Only Training Approach for Cost-Effective Multi-modal Large Language Model | ACM TKDD | 2024-03-28 | - | - |
<br> Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models <br> | arXiv | 2024-03-27 | Github | Demo |
MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training | arXiv | 2024-03-14 | - | - |
<br> MoAI: Mixture of All Intelligence for Large Language and Vision Models <br> | arXiv | 2024-03-12 | Github | Local Demo |
<br> TextMonkey: An OCR-Free Large Multimodal Model for Understanding Document <br> | arXiv | 2024-03-07 | Github | Demo |
<br> The All-Seeing Project V2: Towards General Relation Comprehension of the Open World | arXiv | 2024-02-29 | Github | - |
GROUNDHOG: Grounding Large Language Models to Holistic Segmentation | CVPR | 2024-02-26 | Coming soon | Coming soon |
<br> AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling <br> | arXiv | 2024-02-19 | Github | - |
<br> Momentor: Advancing Video Large Language Model with Fine-Grained Temporal Reasoning <br> | arXiv | 2024-02-18 | Github | - |
<br> ALLaVA: Harnessing GPT4V-synthesized Data for A Lite Vision-Language Model <br> | arXiv | 2024-02-18 | Github | Demo |
<br> CoLLaVO: Crayon Large Language and Vision mOdel <br> | arXiv | 2024-02-17 | Github | - |
<br> CogCoM: Train Large Vision-Language Models Diving into Details through Chain of Manipulations <br> | arXiv | 2024-02-06 | Github | - |
<br> MobileVLM V2: Faster and Stronger Baseline for Vision Language Model <br> | arXiv | 2024-02-06 | Github | - |
Enhancing Multimodal Large Language Models with Vision Detection Models: An Empirical Study | arXiv | 2024-01-31 | Coming soon | - |
<br> LLaVA-NeXT: Improved reasoning, OCR, and world knowledge | Blog | 2024-01-30 | Github | Demo |
<br> MoE-LLaVA: Mixture of Experts for Large Vision-Language Models <br> | arXiv | 2024-01-29 | Github | Demo |
<br> InternLM-XComposer2: Mastering Free-form Text-Image Composition and Comprehension in Vision-Language Large Model <br> | arXiv | 2024-01-29 | Github | Demo |
<br> Yi-VL <br> | - | 2024-01-23 | Github | Local Demo |
SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities | arXiv | 2024-01-22 | - | - |
<br> MobileVLM : A Fast, Reproducible and Strong Vision Language Assistant for Mobile Devices <br> | arXiv | 2023-12-28 | Github | - |
<br> InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks <br> | CVPR | 2023-12-21 | Github | Demo |
<br> Osprey: Pixel Understanding with Visual Instruction Tuning <br> | CVPR | 2023-12-15 | Github | Demo |
<br> CogAgent: A Visual Language Model for GUI Agents <br> | arXiv | 2023-12-14 | Github | Coming soon |
Pixel Aligned Language Models | arXiv | 2023-12-14 | Coming soon | - |
See, Say, and Segment: Teaching LMMs to Overcome False Premises | arXiv | 2023-12-13 | Coming soon | - |
<br> Vary: Scaling up the Vision Vocabulary for Large Vision-Language Models <br> | arXiv | 2023-12-11 | Github | Demo |
<br> Honeybee: Locality-enhanced Projector for Multimodal LLM <br> | arXiv | 2023-12-11 | Github | - |
Gemini: A Family of Highly Capable Multimodal Models | Google | 2023-12-06 | - | - |
<br> OneLLM: One Framework to Align All Modalities with Language <br> | arXiv | 2023-12-06 | Github | Demo |
<br> Lenna: Language Enhanced Reasoning Detection Assistant <br> | arXiv | 2023-12-05 | Github | - |
VaQuitA: Enhancing Alignment in LLM-Assisted Video Understanding | arXiv | 2023-12-04 | - | - |
<br> TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding <br> | arXiv | 2023-12-04 | Github | Local Demo |
<br> Making Large Multimodal Models Understand Arbitrary Visual Prompts <br> | CVPR | 2023-12-01 | Github | Demo |
<br> Dolphins: Multimodal Language Model for Driving <br> | arXiv | 2023-12-01 | Github | - |
<br> LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning <br> | arXiv | 2023-11-30 | Github | Coming soon |
<br> VTimeLLM: Empower LLM to Grasp Video Moments <br> | arXiv | 2023-11-30 | Github | Local Demo |
<br> mPLUG-PaperOwl: Scientific Diagram Analysis with the Multimodal Large Language Model <br> | arXiv | 2023-11-30 | Github | - |
<br> LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models <br> | arXiv | 2023-11-28 | Github | Coming soon |
<br> LLMGA: Multimodal Large Language Model based Generation Assistant <br> | arXiv | 2023-11-27 | Github | Demo |
<br> ChartLlama: A Multimodal LLM for Chart Understanding and Generation <br> | arXiv | 2023-11-27 | Github | - |
<br> ShareGPT4V: Improving Large Multi-Modal Models with Better Captions <br> | arXiv | 2023-11-21 | Github | Demo |
<br> LION : Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge <br> | arXiv | 2023-11-20 | Github | - |
<br> An Embodied Generalist Agent in 3D World <br> | arXiv | 2023-11-18 | Github | Demo |
<br> Video-LLaVA: Learning United Visual Representation by Alignment Before Projection <br> | arXiv | 2023-11-16 | Github | Demo |
<br> Chat-UniVi: Unified Visual Representation Empowers Large Language Models with Image and Video Understanding <br> | CVPR | 2023-11-14 | Github | - |
<br> To See is to Believe: Prompting GPT-4V for Better Visual Instruction Tuning <br> | arXiv | 2023-11-13 | Github | - |
<br> SPHINX: The Joint Mixing of Weights, Tasks, and Visual Embeddings for Multi-modal Large Language Models <br> | arXiv | 2023-11-13 | Github | Demo |
<br> Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models <br> | CVPR | 2023-11-11 | Github | Demo |
<br> LLaVA-Plus: Learning to Use Tools for Creating Multimodal Agents <br> | arXiv | 2023-11-09 | Github | Demo |
<br> NExT-Chat: An LMM for Chat, Detection and Segmentation <br> | arXiv | 2023-11-08 | Github | Local Demo |
<br> mPLUG-Owl2: Revolutionizing Multi-modal Large Language Model with Modality Collaboration <br> | arXiv | 2023-11-07 | Github | Demo |
<br> OtterHD: A High-Resolution Multi-modality Model <br> | arXiv | 2023-11-07 | Github | - |
CoVLM: Composing Visual Entities and Relationships in Large Language Models Via Communicative Decoding | arXiv | 2023-11-06 | Coming soon | - |
<br> GLaMM: Pixel Grounding Large Multimodal Model <br> | CVPR | 2023-11-06 | Github | Demo |
<br> What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Instruction Tuning <br> | arXiv | 2023-11-02 | Github | - |
<br> MiniGPT-v2: large language model as a unified interface for vision-language multi-task learning <br> | arXiv | 2023-10-14 | Github | Local Demo |
<br> Ferret: Refer and Ground Anything Anywhere at Any Granularity <br> | arXiv | 2023-10-11 | Github | - |
<br> CogVLM: Visual Expert For Large Language Models <br> | arXiv | 2023-10-09 | Github | Demo |
<br> Improved Baselines with Visual Instruction Tuning <br> | arXiv | 2023-10-05 | Github | Demo |
<br> LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment <br> | ICLR | 2023-10-03 | Github | Demo |
<br> Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs | arXiv | 2023-10-01 | Github | - |
<br> Reformulating Vision-Language Foundation Models and Datasets Towards Universal Multimodal Assistants <br> | arXiv | 2023-10-01 | Github | Local Demo |
AnyMAL: An Efficient and Scalable Any-Modality Augmented Language Model | arXiv | 2023-09-27 | - | - |
<br> InternLM-XComposer: A Vision-Language Large Model for Advanced Text-image Comprehension and Composition <br> | arXiv | 2023-09-26 | Github | Local Demo |
<br> DreamLLM: Synergistic Multimodal Comprehension and Creation <br> | ICLR | 2023-09-20 | Github | Coming soon |
An Empirical Study of Scaling Instruction-Tuned Large Multimodal Models | arXiv | 2023-09-18 | Coming soon | - |
<br> TextBind: Multi-turn Interleaved Multimodal Instruction-following <br> | arXiv | 2023-09-14 | Github | Demo |
<br> NExT-GPT: Any-to-Any Multimodal LLM <br> | arXiv | 2023-09-11 | Github | Demo |
<br> Sight Beyond Text: Multi-Modal Training Enhances LLMs in Truthfulness and Ethics <br> | arXiv | 2023-09-13 | Github | - |
<br> ImageBind-LLM: Multi-modality Instruction Tuning <br> | arXiv | 2023-09-07 | Github | Demo |
Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning | arXiv | 2023-09-05 | - | - |
<br> PointLLM: Empowering Large Language Models to Understand Point Clouds <br> | arXiv | 2023-08-31 | Github | Demo |
<br> ✨Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models <br> | arXiv | 2023-08-31 | Github | Local Demo |
<br> MLLM-DataEngine: An Iterative Refinement Approach for MLLM <br> | arXiv | 2023-08-25 | Github | - |
<br> Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models <br> | arXiv | 2023-08-25 | Github | Demo |
<br> Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities <br> | arXiv | 2023-08-24 | Github | Demo |
<br> Large Multilingual Models Pivot Zero-Shot Multimodal Learning across Languages <br> | ICLR | 2023-08-23 | Github | Demo |
<br> StableLLaVA: Enhanced Visual Instruction Tuning with Synthesized Image-Dialogue Data <br> | arXiv | 2023-08-20 | Github | - |
<br> BLIVA: A Simple Multimodal LLM for Better Handling of Text-rich Visual Questions <br> | arXiv | 2023-08-19 | Github | Demo |
<br> Fine-tuning Multimodal LLMs to Follow Zero-shot Demonstrative Instructions <br> | arXiv | 2023-08-08 | Github | - |
<br> The All-Seeing Project: Towards Panoptic Visual Recognition and Understanding of the Open World <br> | ICLR | 2023-08-03 | Github | Demo |
<br> LISA: Reasoning Segmentation via Large Language Model <br> | arXiv | 2023-08-01 | Github | Demo |
<br> MovieChat: From Dense Token to Sparse Memory for Long Video Understanding <br> | arXiv | 2023-07-31 | Github | Local Demo |
<br> 3D-LLM: Injecting the 3D World into Large Language Models <br> | arXiv | 2023-07-24 | Github | - |
ChatSpot: Bootstrapping Multimodal LLMs via Precise Referring Instruction Tuning <br> | arXiv | 2023-07-18 | - | Demo |
<br> BuboGPT: Enabling Visual Grounding in Multi-Modal LLMs <br> | arXiv | 2023-07-17 | Github | Demo |
<br> SVIT: Scaling up Visual Instruction Tuning <br> | arXiv | 2023-07-09 | Github | - |
<br> GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest <br> | arXiv | 2023-07-07 | Github | Demo |
<br> What Matters in Training a GPT4-Style Language Model with Multimodal Inputs? <br> | arXiv | 2023-07-05 | Github | - |
<br> mPLUG-DocOwl: Modularized Multimodal Large Language Model for Document Understanding <br> | arXiv | 2023-07-04 | Github | Demo |
<br> Visual Instruction Tuning with Polite Flamingo <br> | arXiv | 2023-07-03 | Github | Demo |
<br> LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding <br> | arXiv | 2023-06-29 | Github | Demo |
<br> Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic <br> | arXiv | 2023-06-27 | Github | Demo |
<br> MotionGPT: Human Motion as a Foreign Language <br> | arXiv | 2023-06-26 | Github | - |
<br> Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration <br> | arXiv | 2023-06-15 | Github | Coming soon |
<br> LAMM: Language-Assisted Multi-Modal Instruction-Tuning Dataset, Framework, and Benchmark <br> | arXiv | 2023-06-11 | Github | Demo |
<br> Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models <br> | arXiv | 2023-06-08 | Github | Demo |
<br> MIMIC-IT: Multi-Modal In-Context Instruction Tuning <br> | arXiv | 2023-06-08 | Github | Demo |
M<sup>3</sup>IT: A Large-Scale Dataset towards Multi-Modal Multilingual Instruction Tuning | arXiv | 2023-06-07 | - | - |
<br> Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding <br> | arXiv | 2023-06-05 | Github | Demo |
<br> LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day <br> | arXiv | 2023-06-01 | Github | - |
<br> GPT4Tools: Teaching Large Language Model to Use Tools via Self-instruction <br> | arXiv | 2023-05-30 | Github | Demo |
<br> PandaGPT: One Model To Instruction-Follow Them All <br> | arXiv | 2023-05-25 | Github | Demo |
<br> ChatBridge: Bridging Modalities with Large Language Model as a Language Catalyst <br> | arXiv | 2023-05-25 | Github | - |
<br> Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models <br> | arXiv | 2023-05-24 | Github | Local Demo |
<br> DetGPT: Detect What You Need via Reasoning <br> | arXiv | 2023-05-23 | Github | Demo |
<br> Pengi: An Audio Language Model for Audio Tasks <br> | NeurIPS | 2023-05-19 | Github | - |
<br> VisionLLM: Large Language Model is also an Open-Ended Decoder for Vision-Centric Tasks <br> | arXiv | 2023-05-18 | Github | - |
<br> Listen, Think, and Understand <br> | arXiv | 2023-05-18 | Github | Demo |
<br> VisualGLM-6B <br> | - | 2023-05-17 | Github | Local Demo |
<br> PMC-VQA: Visual Instruction Tuning for Medical Visual Question Answering <br> | arXiv | 2023-05-17 | Github | - |
<br> InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning <br> | arXiv | 2023-05-11 | Github | Local Demo |
<br> VideoChat: Chat-Centric Video Understanding <br> | arXiv | 2023-05-10 | Github | Demo |
<br> MultiModal-GPT: A Vision and Language Model for Dialogue with Humans <br> | arXiv | 2023-05-08 | Github | Demo |
<br> X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages <br> | arXiv | 2023-05-07 | Github | - |
<br> LMEye: An Interactive Perception Network for Large Language Models <br> | arXiv | 2023-05-05 | Github | Local Demo |
<br> LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model <br> | arXiv | 2023-04-28 | Github | Demo |
<br> mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality <br> | arXiv | 2023-04-27 | Github | Demo |
<br> MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models <br> | arXiv | 2023-04-20 | Github | - |
<br> Visual Instruction Tuning <br> | NeurIPS | 2023-04-17 | GitHub | Demo |
<br> LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention <br> | ICLR | 2023-03-28 | Github | Demo |
<br> MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning <br> | ACL | 2022-12-21 | Github | - |
Multimodal Hallucination
Multimodal In-Context Learning
Multimodal Chain-of-Thought
LLM-Aided Visual Reasoning
Foundation Models
Evaluation
Multimodal RLHF
Others
Awesome Datasets
Datasets of Pre-Training for Alignment
Datasets of Multimodal Instruction Tuning
Name | Paper | Link | Notes |
---|---|---|---|
VEGA | VEGA: Learning Interleaved Image-Text Comprehension in Vision-Language Large Models | Link | A dataset for enhancing model capabilities in the comprehension of interleaved information |
ALLaVA-4V | ALLaVA: Harnessing GPT4V-synthesized Data for A Lite Vision-Language Model | Link | Vision and language caption and instruction dataset generated by GPT4V |
IDK | Visually Dehallucinative Instruction Generation: Know What You Don't Know | Link | Dehallucinative visual instruction for "I Know" hallucination |
CAP2QA | Visually Dehallucinative Instruction Generation | Link | Image-aligned visual instruction dataset |
M3DBench | M3DBench: Let's Instruct Large Models with Multi-modal 3D Prompts | Link | A large-scale 3D instruction tuning dataset |
ViP-LLaVA-Instruct | Making Large Multimodal Models Understand Arbitrary Visual Prompts | Link | A mixture of LLaVA-1.5 instruction data and the region-level visual prompting data |
LVIS-Instruct4V | To See is to Believe: Prompting GPT-4V for Better Visual Instruction Tuning | Link | A visual instruction dataset via self-instruction from GPT-4V |
ComVint | What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Instruction Tuning | Link | A synthetic instruction dataset for complex visual reasoning |
SparklesDialogue | ✨Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models | Link | A machine-generated dialogue dataset tailored for word-level interleaved multi-image and text interactions to augment the conversational competence of instruction-following LLMs across multiple images and dialogue turns. |
StableLLaVA | StableLLaVA: Enhanced Visual Instruction Tuning with Synthesized Image-Dialogue Data | Link | A cheap and effective approach to collect visual instruction tuning data |
M-HalDetect | Detecting and Preventing Hallucinations in Large Vision Language Models | Coming soon | A dataset used to train and benchmark models for hallucination detection and prevention |
MGVLID | ChatSpot: Bootstrapping Multimodal LLMs via Precise Referring Instruction Tuning | - | A high-quality instruction-tuning dataset including image-text and region-text pairs |
BuboGPT | BuboGPT: Enabling Visual Grounding in Multi-Modal LLMs | Link | A high-quality instruction-tuning dataset including audio-text audio caption data and audio-image-text localization data |
SVIT | SVIT: Scaling up Visual Instruction Tuning | Link | A large-scale dataset with 4.2M informative visual instruction tuning data, including conversations, detailed descriptions, complex reasoning and referring QAs |
mPLUG-DocOwl | mPLUG-DocOwl: Modularized Multimodal Large Language Model for Document Understanding | Link | An instruction tuning dataset featuring a wide range of visual-text understanding tasks including OCR-free document understanding |
PF-1M | Visual Instruction Tuning with Polite Flamingo | Link | A collection of 37 vision-language datasets with responses rewritten by Polite Flamingo. |
ChartLlama | ChartLlama: A Multimodal LLM for Chart Understanding and Generation | Link | A multi-modal instruction-tuning dataset for chart understanding and generation |
LLaVAR | LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | Link | A visual instruction-tuning dataset for Text-rich Image Understanding |
MotionGPT | MotionGPT: Human Motion as a Foreign Language | Link | An instruction-tuning dataset including multiple human motion-related tasks |
LRV-Instruction | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Link | A visual instruction tuning dataset for addressing the hallucination issue |
Macaw-LLM | Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration | Link | A large-scale multi-modal instruction dataset in terms of multi-turn dialogue |
LAMM-Dataset | LAMM: Language-Assisted Multi-Modal Instruction-Tuning Dataset, Framework, and Benchmark | Link | A comprehensive multi-modal instruction tuning dataset |
Video-ChatGPT | Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models | Link | 100K high-quality video instruction dataset |
MIMIC-IT | MIMIC-IT: Multi-Modal In-Context Instruction Tuning | Link | Multimodal in-context instruction tuning |
M<sup>3</sup>IT | M<sup>3</sup>IT: A Large-Scale Dataset towards Multi-Modal Multilingual Instruction Tuning | Link | Large-scale, broad-coverage multimodal instruction tuning dataset |
LLaVA-Med | LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | Coming soon | A large-scale, broad-coverage biomedical instruction-following dataset |
GPT4Tools | GPT4Tools: Teaching Large Language Model to Use Tools via Self-instruction | Link | Tool-related instruction datasets |
MULTIS | ChatBridge: Bridging Modalities with Large Language Model as a Language Catalyst | Coming soon | Multimodal instruction tuning dataset covering 16 multimodal tasks |
DetGPT | DetGPT: Detect What You Need via Reasoning | Link | Instruction-tuning dataset with 5000 images and around 30000 query-answer pairs |
PMC-VQA | PMC-VQA: Visual Instruction Tuning for Medical Visual Question Answering | Coming soon | Large-scale medical visual question-answering dataset |
VideoChat | VideoChat: Chat-Centric Video Understanding | Link | Video-centric multimodal instruction dataset |
X-LLM | X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages | Link | Chinese multimodal instruction dataset |
LMEye | LMEye: An Interactive Perception Network for Large Language Models | Link | A multi-modal instruction-tuning dataset |
cc-sbu-align | MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models | Link | A multimodal aligned dataset for improving the model's usability and generation fluency |
LLaVA-Instruct-150K | Visual Instruction Tuning | Link | Multimodal instruction-following data generated by GPT |
MultiInstruct | MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning | Link | The first multimodal instruction tuning benchmark dataset |
Datasets of In-Context Learning
Datasets of Multimodal Chain-of-Thought
Datasets of Multimodal RLHF
Benchmarks for Evaluation
Name | Paper | Link | Notes |
---|---|---|---|
CharXiv | CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs | Link | Chart understanding benchmark curated by human experts |
Video-MME | Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis | Link | A comprehensive evaluation benchmark of Multi-modal LLMs in video analysis |
VL-ICL Bench | VL-ICL Bench: The Devil in the Details of Benchmarking Multimodal In-Context Learning | Link | A benchmark for M-ICL evaluation, covering a wide spectrum of tasks |
TempCompass | TempCompass: Do Video LLMs Really Understand Videos? | Link | A benchmark to evaluate the temporal perception ability of Video LLMs |
CoBSAT | Can MLLMs Perform Text-to-Image In-Context Learning? | Link | A benchmark for text-to-image ICL |
VQAv2-IDK | Visually Dehallucinative Instruction Generation: Know What You Don't Know | Link | A benchmark for assessing "I Know" visual hallucination |
Math-Vision | Measuring Multimodal Mathematical Reasoning with MATH-Vision Dataset | Link | A diverse mathematical reasoning benchmark |
CMMMU | CMMMU: A Chinese Massive Multi-discipline Multimodal Understanding Benchmark | Link | A Chinese benchmark involving reasoning and knowledge across multiple disciplines |
MMCBench | Benchmarking Large Multimodal Models against Common Corruptions | Link | A benchmark for examining self-consistency under common corruptions |
MMVP | Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs | Link | A benchmark for assessing visual capabilities |
TimeIT | TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding | Link | A video instruction-tuning dataset with timestamp annotations, covering diverse time-sensitive video-understanding tasks. |
ViP-Bench | Making Large Multimodal Models Understand Arbitrary Visual Prompts | Link | A benchmark for visual prompts |
M3DBench | M3DBench: Let's Instruct Large Models with Multi-modal 3D Prompts | Link | A 3D-centric benchmark |
Video-Bench | Video-Bench: A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models | Link | A benchmark for video-MLLM evaluation |
Charting-New-Territories | Charting New Territories: Exploring the Geographic and Geospatial Capabilities of Multimodal LLMs | Link | A benchmark for evaluating geographic and geospatial capabilities |
MLLM-Bench | MLLM-Bench, Evaluating Multi-modal LLMs using GPT-4V | Link | GPT-4V evaluation with per-sample criteria |
BenchLMM | BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models | Link | A benchmark for assessment of the robustness against different image styles |
MMC-Benchmark | MMC: Advancing Multimodal Chart Understanding with Large-scale Instruction Tuning | Link | A comprehensive human-annotated benchmark with distinct tasks evaluating reasoning capabilities over charts |
MVBench | MVBench: A Comprehensive Multi-modal Video Understanding Benchmark | Link | A comprehensive multimodal benchmark for video understanding |
Bingo | Holistic Analysis of Hallucination in GPT-4V(ision): Bias and Interference Challenges | Link | A benchmark for hallucination evaluation that focuses on two common types |
MagnifierBench | OtterHD: A High-Resolution Multi-modality Model | Link | A benchmark designed to probe models' ability of fine-grained perception |
HallusionBench | HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models | Link | An image-context reasoning benchmark for evaluation of hallucination |
PCA-EVAL | Towards End-to-End Embodied Decision Making via Multi-modal Large Language Model: Explorations with GPT4-Vision and Beyond | Link | A benchmark for evaluating multi-domain embodied decision-making. |
MMHal-Bench | Aligning Large Multimodal Models with Factually Augmented RLHF | Link | A benchmark for hallucination evaluation |
MathVista | MathVista: Evaluating Math Reasoning in Visual Contexts with GPT-4V, Bard, and Other Large Multimodal Models | Link | A benchmark that challenges both visual and math reasoning capabilities |
SparklesEval | ✨Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models | Link | A GPT-assisted benchmark for quantitatively assessing a model's conversational competence across multiple images and dialogue turns based on three distinct criteria. |
ISEKAI | Link-Context Learning for Multimodal LLMs | Link | A benchmark comprising exclusively of unseen generated image-label pairs designed for link-context learning |
M-HalDetect | Detecting and Preventing Hallucinations in Large Vision Language Models | Coming soon | A dataset used to train and benchmark models for hallucination detection and prevention |
I4 | Empowering Vision-Language Models to Follow Interleaved Vision-Language Instructions | Link | A benchmark to comprehensively evaluate the instruction following ability on complicated interleaved vision-language instructions |
SciGraphQA | SciGraphQA: A Large-Scale Synthetic Multi-Turn Question-Answering Dataset for Scientific Graphs | Link | A large-scale chart-visual question-answering dataset |
MM-Vet | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities | Link | An evaluation benchmark that examines large multimodal models on complicated multimodal tasks |
SEED-Bench | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Link | A benchmark for evaluation of generative comprehension in MLLMs |
MMBench | MMBench: Is Your Multi-modal Model an All-around Player? | Link | A systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models |
Lynx | What Matters in Training a GPT4-Style Language Model with Multimodal Inputs? | Link | A comprehensive evaluation benchmark including both image and video tasks |
GAVIE | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Link | A benchmark to evaluate the hallucination and instruction following ability |
MME | MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models | Link | A comprehensive MLLM Evaluation benchmark |
LVLM-eHub | LVLM-eHub: A Comprehensive Evaluation Benchmark for Large Vision-Language Models | Link | An evaluation platform for MLLMs |
LAMM-Benchmark | LAMM: Language-Assisted Multi-Modal Instruction-Tuning Dataset, Framework, and Benchmark | Link | A benchmark for evaluating the quantitative performance of MLLMs on various 2D/3D vision tasks |
M3Exam | M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models | Link | A multilingual, multimodal, multilevel benchmark for evaluating MLLM |
OwlEval | mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality | Link | Dataset for evaluation on multiple capabilities |
Others