MoVA: Adapting Mixture of Vision Experts to Multimodal Context

Official repository for the paper "MoVA: Adapting Mixture of Vision Experts to Multimodal Context".

[πŸ“– Paper] [πŸ€— Huggingface Model]

πŸ’₯ News

πŸ‘€ About MoVA

To alleviate the bias of the CLIP vision encoder, we first delve into the inherent behavior of different pre-trained vision encoders and then propose MoVA, a powerful and novel MLLM that adaptively routes and fuses task-specific vision experts with a coarse-to-fine mechanism.

[demo figure]

MoVA consists of two stages: coarse-grained context-aware expert routing and fine-grained expert fusion with the MoV-Adapter.

  1. Coarse-grained context-aware expert routing: First, MoVA leverages the tool-use capability of the LLM to select vision experts with strong relevance to the user's image and instruction from the expert model pool. Thanks to the strong generalization ability of the LLM, we can also perform model routing for vision experts in open scenarios.

  2. Fine-grained expert fusion with MoV-Adapter: In the second stage, we enhance the visual representation in a fine-grained manner with the novel MoV-Adapter module. More specifically, we leverage a cross-attention mechanism to extract task-specific knowledge from the representations of the chosen experts. Meanwhile, the dynamic gating network in the MoV-Adapter allocates soft weights to the extracted knowledge of each expert according to the input image and instruction. The extracted knowledge is then effectively integrated into the foundational representation of the base vision encoder (see the sketch after this list).

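For illustration, the snippet below is a minimal PyTorch sketch of the stage-2 fusion idea: cross-attention extracts knowledge from each selected expert, and a gating network conditioned on the image and instruction assigns soft weights before the knowledge is merged back into the base features. Module and parameter names (e.g., `MoVAdapterSketch`, `num_experts`) are our own placeholders, not the repository's actual implementation.

```python
# A minimal, illustrative sketch of fine-grained expert fusion (stage 2).
# Names such as MoVAdapterSketch, dim, and num_experts are placeholders,
# not the repository's actual implementation.
import torch
import torch.nn as nn


class MoVAdapterSketch(nn.Module):
    def __init__(self, dim: int, num_experts: int, num_heads: int = 8):
        super().__init__()
        # One cross-attention block per expert: base vision tokens act as
        # queries, the expert's representation provides keys and values.
        self.cross_attn = nn.ModuleList(
            nn.MultiheadAttention(dim, num_heads, batch_first=True)
            for _ in range(num_experts)
        )
        # Dynamic gating network: pooled image feature + instruction embedding
        # -> soft weights over the selected experts.
        self.gate = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, num_experts)
        )

    def forward(self, base_feat, expert_feats, text_feat):
        # base_feat:    (B, N, dim) tokens from the base vision encoder
        # expert_feats: list of tensors, each (B, M_i, dim), one per expert
        # text_feat:    (B, dim) pooled instruction embedding
        context = torch.cat([base_feat.mean(dim=1), text_feat], dim=-1)
        weights = torch.softmax(self.gate(context), dim=-1)  # (B, num_experts)

        fused = base_feat
        for i, feat in enumerate(expert_feats):
            # Extract task-specific knowledge from this expert via cross-attention.
            knowledge, _ = self.cross_attn[i](base_feat, feat, feat)
            # Integrate it into the base representation, scaled by its gate weight.
            fused = fused + weights[:, i, None, None] * knowledge
        return fused
```

This sketch only conveys the cross-attention-plus-soft-gating idea in isolation; the actual MoV-Adapter design in the paper is more elaborate.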
MoVA with Vicuna-7B, Llama3-8B, and Hermes-Yi-34B achieves significant performance gains over current state-of-the-art methods on a wide range of challenging benchmarks.

πŸ€– Model Zoo

Multimodal Benchmarks

| Name | LLM | #Tokens | MME | MMBench | MMBench-CN | QBench<br>(dev) | MathVista | MathVerse | POPE |
|------|-----|---------|-----|---------|------------|-----------------|-----------|-----------|------|
| MoVA-8B | Llama3-8B | 576 | 1595.8 / 347.5 | 75.3 | 67.7 | 70.8 | 37.7 | 21.4 | 89.3 |

General & Text-oriented VQA

| Name | LLM | #Tokens | VQAv2 | GQA | SQA | TextVQA | ChartQA | DocVQA<br>(val) | DocVQA<br>(test) | AI2D |
|------|-----|---------|-------|-----|-----|---------|---------|-----------------|------------------|------|
| MoVA-8B | Llama3-8B | 576 | 83.5 | 65.2 | 74.7 | 77.1 | 70.5 | 83.8 | 83.4 | 77.0 |

Visual Grounding

| Name | LLM | #Tokens | RefCOCO<br>(val) | RefCOCO<br>(testA) | RefCOCO<br>(testB) | RefCOCO+<br>(val) | RefCOCO+<br>(testA) | RefCOCO+<br>(testB) | RefCOCOg<br>(val) | RefCOCOg<br>(test) |
|------|-----|---------|------------------|--------------------|--------------------|-------------------|---------------------|---------------------|-------------------|--------------------|
| MoVA-8B | Llama3-8B | 576 | 92.18 | 94.75 | 88.24 | 88.45 | 92.21 | 82.82 | 90.05 | 90.23 |

πŸ’‘ Evaluation

To ensure reproducibility, we evaluate the models with greedy decoding. We do not use beam search, so that the inference process is consistent with the real-time outputs of the chat demo.

We follow the evaluation settings of LLaVA. Please see Evaluation.md.
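In Hugging Face terms, greedy decoding amounts to disabling sampling and beam search in `generate`. The snippet below is only a hedged illustration with a placeholder text-only checkpoint; it is not MoVA's evaluation code (see Evaluation.md for that).

```python
# Hedged example of greedy decoding with Hugging Face transformers.
# The checkpoint name is a placeholder; MoVA's evaluation scripts may differ.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-8B-Instruct"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Describe the chart in one sentence.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=False,   # greedy decoding for reproducibility
    num_beams=1,       # no beam search, matching the real-time chat demo
    max_new_tokens=64,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```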

🧠 Acknowledgement

We would like to thank the following repos for their great work:

βœ… Citation

If you find MoVA useful for your research and applications, please kindly cite using this BibTeX:

@article{zong2024mova,
  title={MoVA: Adapting Mixture of Vision Experts to Multimodal Context},
  author={Zong, Zhuofan and Ma, Bingqi and Shen, Dazhong and Song, Guanglu and Shao, Hao and Jiang, Dongzhi and Li, Hongsheng and Liu, Yu},
  journal={arXiv preprint arXiv:2404.13046},
  year={2024}
}