MAVIS 🔥: Mathematical Visual Instruction Tuning

Official repository for the paper "MAVIS: Mathematical Visual Instruction Tuning".

[📖 Paper] [🤗 MAVIS-Caption] [🤗 MAVIS-Instruct] [🏆 Leaderboard]

🌟 Our model is mainly evaluated on MathVerse, a comprehensive visual mathematics benchmark for MLLMs

💥 News

📌 ToDo

👀 About MAVIS

We identify three key areas in which Multi-modal Large Language Models (MLLMs) need to be improved for visual math problem-solving: visual encoding of math diagrams, diagram-language alignment, and mathematical reasoning skills.

In this paper, we propose MAVIS, the first MAthematical VISual instruction tuning paradigm for MLLMs, including two newly curated datasets, a mathematical vision encoder, and a mathematical MLLM:

<p align="center"> <img src="figs/fig1.jpg" width="70%"> <br> </p>
<p align="center"> <img src="figs/fig2.jpg" width="70%"> <br> </p>
<p align="center"> <img src="figs/fig3.jpg" width="50%"> <br> </p>
<p align="center"> <img src="figs/fig4.jpg" width="90%"> <br> </p>

💪 Get Started

Coming in a week!

Data Usage

A temporary version of the data is released on Google Drive.

We will soon release the final data with much higher quality.
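If you prefer the Hugging Face copies linked at the top of this README over the Google Drive version, they can be loaded with the `datasets` library. The snippet below is only a minimal sketch: the repository IDs are placeholders (not the official ones), and the field names depend on the released schema.

```python
from datasets import load_dataset

# Minimal sketch of loading the released data with the Hugging Face `datasets` library.
# The repository IDs below are placeholders; substitute the MAVIS-Caption /
# MAVIS-Instruct links given at the top of this README.
caption_data = load_dataset("YOUR_ORG/MAVIS-Caption", split="train")    # hypothetical repo ID
instruct_data = load_dataset("YOUR_ORG/MAVIS-Instruct", split="train")  # hypothetical repo ID

# Field names depend on the released schema; print one sample to inspect it.
print(caption_data[0])
```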

Training

Inference

✅ Citation

If you find MAVIS useful for your research and applications, please kindly cite using this BibTeX:

@misc{zhang2024mavismathematicalvisualinstruction,
      title={MAVIS: Mathematical Visual Instruction Tuning}, 
      author={Renrui Zhang and Xinyu Wei and Dongzhi Jiang and Yichi Zhang and Ziyu Guo and Chengzhuo Tong and Jiaming Liu and Aojun Zhou and Bin Wei and Shanghang Zhang and Peng Gao and Hongsheng Li},
      year={2024},
      eprint={2407.08739},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2407.08739}, 
}

🧠 Related Work

Explore our additional research on Large Vision-Language Models, focusing on multi-modal LLMs and mathematical reasoning: