Mantis: Interleaved Multi-Image Instruction Tuning (TMLR 2024)
This repository contains the code for our TMLR 2024 paper, Mantis: Interleaved Multi-Image Instruction Tuning (https://arxiv.org/abs/2405.01483).
<a target="_blank" href="https://arxiv.org/abs/2405.01483"> <img style="height:22pt" src="https://img.shields.io/badge/-Paper-black?style=flat&logo=arxiv"></a> <a target="_blank" href="https://github.com/TIGER-AI-Lab/Mantis"> <img style="height:22pt" src="https://img.shields.io/badge/-Code-green?style=flat&logo=github"></a> <a target="_blank" href="https://tiger-ai-lab.github.io/Mantis/"> <img style="height:22pt" src="https://img.shields.io/badge/-🌐%20Website-red?style=flat"></a> <a target="_blank" href="https://huggingface.co/datasets/TIGER-Lab/Mantis-Instruct"> <img style="height:22pt" src="https://img.shields.io/badge/-🤗%20Dataset-red?style=flat"></a> <a target="_blank" href="https://huggingface.co/spaces/TIGER-Lab/Mantis"> <img style="height:22pt" src="https://img.shields.io/badge/-🤗%20Demo-red?style=flat"></a> <a target="_blank" href="https://huggingface.co/collections/TIGER-Lab/mantis-6619b0834594c878cdb1d6e4"> <img style="height:22pt" src="https://img.shields.io/badge/-🤗%20Models-red?style=flat"></a> <a target="_blank" href="https://twitter.com/DongfuJiang/status/1786552974598078677"> <img style="height:22pt" src="https://img.shields.io/badge/-Tweet-blue?style=flat&logo=twitter"></a> <br>
Recent years have witnessed a wave of large multimodal models (LMMs) that effectively solve single-image vision-language tasks. However, their ability to solve multi-image vision-language tasks remains limited.
Existing multi-image LMMs (e.g. OpenFlamingo, Emu, Idefics) mostly gain their multi-image ability by pre-training on hundreds of millions of noisy interleaved image-text examples from the web, which is neither efficient nor effective.
Therefore, we present Mantis, an LLaMA-3 based LMM that takes interleaved text and images as input, trained on Mantis-Instruct under academic-level resources (i.e. 36 hours on 16xA100-40G).
Mantis achieves state-of-the-art performance on 5 multi-image benchmarks (NLVR2, Q-Bench, BLINK, MVBench, Mantis-Eval) while maintaining strong single-image performance on par with CogVLM and Emu2.
🔥 News
- [2024-11-18] We add support for training 🤗 Qwen2-VL; script: train_qwen2_vl.sh (liger-kernel is supported).
- [2024-11-15] Mantis is accepted to TMLR 2024!
- [2024-08-22] We add support for training 🤗 Idefics-3; script: train_idefics3.sh
- [2024-08-05] VLMEvalKit now supports the evaluation of Mantis model. Thanks to the efforts of BrenchCC
- [2024-08-05] We release the Wandb training curves of Mantis-8B-CLIP-LLaMA-3, Mantis-8B-SigLIP-LLaMA-3, and Mantis-8B-Idefics2 for training reproduction.
- [2024-07-23] lmms-eval now supports the evaluation of Mantis model. Thanks to the efforts of EvolvingLMMs-Lab Team.
- [2024-05-23] 🔥 Excited to announce our current SoTA model, Mantis-8B-Idefics2! Check out the model and demo now!
- [2024-05-03] We have released our training code, dataset, and evaluation code to the community! Check the following sections for more details.
- [2024-05-02] We release Mantis-8B, the first multi-image-capable LMM based on LLaMA-3! Interact with Mantis-8B-SigLIP on Hugging Face Spaces or in the Colab demo.
- [2024-05-02] Mantis's technical report is now available on arXiv. Kudos to the team!
Installation
conda create -n mantis python=3.10
conda activate mantis
pip install -e .
# install flash-attention
pip install flash-attn --no-build-isolation
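After installation, you can sanity-check the environment with a short Python snippet. This is a minimal sketch that only assumes the `mantis` package installed above, PyTorch, and transformers' flash-attention probe:

```python
# Quick environment check after installation (illustrative, not part of the repo).
import torch
import mantis  # installed by `pip install -e .`
from transformers.utils import is_flash_attn_2_available

print("mantis package location:", mantis.__file__)
print("CUDA available:", torch.cuda.is_available())
print("flash-attention 2 available:", is_flash_attn_2_available())
```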
Inference
You can run inference with the following command:
cd examples
python run_mantis.py
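If you prefer not to use the example script, the Idefics2-based checkpoint can also be loaded with plain Hugging Face transformers. The snippet below is a minimal sketch assuming the `TIGER-Lab/Mantis-8B-Idefics2` checkpoint and the standard Idefics2 `AutoProcessor`/`AutoModelForVision2Seq` interface; `examples/run_mantis.py` remains the canonical inference script.

```python
# Minimal multi-image inference sketch via plain transformers (Idefics2-based Mantis).
# Assumes the TIGER-Lab/Mantis-8B-Idefics2 checkpoint follows standard Idefics2 processing.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "TIGER-Lab/Mantis-8B-Idefics2"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Replace with your own images; use one {"type": "image"} entry per image below.
images = [Image.open("image1.jpg"), Image.open("image2.jpg")]
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "image"},
            {"type": "text", "text": "What are the differences between these two images?"},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=images, return_tensors="pt").to(model.device)

generated = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Strip the prompt tokens so only the newly generated answer is decoded.
answer = processor.batch_decode(
    generated[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```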
Training
Install the requirements with the following command:
pip install -e .[train,eval]
cd mantis/train
Our training scripts follow the coding format and model structure of Hugging Face. Unlike the LLaVA GitHub repo, you can load our models directly from the Hugging Face model hub.
Training examples with different data formats
(These example datasets are pre-prepared in the data/examples/ folder, so you can inspect the data format and debug the training scripts directly; a rough sketch of a record's layout is shown after the list below. Set CUDA_VISIBLE_DEVICES to the GPU you want to use.)
- training with text-image interleaved data (see example data)
cd mantis/train
bash scripts/train_example_chat.sh # Q-lora, 1 GPU required
- training with video-text interleaved data (see example data)
cd mantis/train
bash scripts/train_example_video.sh # Q-lora, 1 GPU required
- training with classification data (see example data)
cd mantis/train
bash scripts/train_example_classification.sh # full-finetune, might need 8 GPUs or more
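For a rough idea of what an interleaved record looks like, here is an illustrative sketch; the field names (`id`, `images`, `conversations`, the `<image>` placeholder) follow a hypothetical LLaVA-style convention, and the JSON files under `data/examples/` remain the authoritative reference for the actual format.

```python
# Illustrative sketch of one interleaved training record.
# Field names are hypothetical; see data/examples/ for the authoritative format.
import json

example_record = {
    "id": "example-0",
    "images": ["images/cat_1.jpg", "images/cat_2.jpg"],  # paths relative to the data root
    "conversations": [
        {
            # <image> placeholders mark where each image is interleaved with the text
            "from": "human",
            "value": "What changed between <image> and <image>?",
        },
        {
            "from": "gpt",
            "value": "The second photo is taken closer to the cat and at night.",
        },
    ],
}

print(json.dumps(example_record, indent=2))
```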
Training examples with different models
We support training Mantis on top of both the Fuyu architecture and the LLaVA architecture. You can train the models with the following commands:
Training Mantis based on LLaMA3 with CLIP/SigLIP encoder:
- Pretrain Mantis-LLaMA3 Multimodal projector on pretrain data (Stage 1):
bash scripts/pretrain_mllava.sh
- Fine-tune the pretrained Mantis-LLaMA3 on Mantis-Instruct (Stage 2):
bash scripts/train_mllava.sh
Training Mantis based on Fuyu-8B:
- Fine-tune Fuyu-8B on Mantis-Instruct to get Mantis-Fuyu:
bash scripts/train_fuyu.sh
Note:
- Our training scripts automatically infer the number of GPUs and GPU nodes to use for training, so you only need to modify the data config path and the base model.
- The training data will be automatically downloaded from Hugging Face when you run the training scripts.
See mantis/train/README.md for more details.
Check all the training scripts in mantis/train/scripts.
Evaluation
To reproduce our evaluation results, please check mantis/benchmark/README.md
Data
- 🤗 Mantis-Instruct: a 721K-example text-image interleaved dataset for multi-image instruction tuning
- 🤗 Mantis-Eval: 217 high-quality examples for evaluating an LMM's multi-image skills
Downloading
You can easily download and prepare Mantis-Instruct with the following command (downloading and extracting may take about an hour):
python data/download_mantis_instruct.py --max_workers 8
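If you only need a single subset, you can also pull it straight from the Hub with the `datasets` library. This is a minimal sketch; the subset name used below is illustrative, so check the dataset card for the available configs:

```python
# Load one Mantis-Instruct subset directly from the Hugging Face Hub.
# The subset name "nlvr2" is illustrative; see the dataset card for available configs.
from datasets import load_dataset

subset = load_dataset("TIGER-Lab/Mantis-Instruct", "nlvr2", split="train")
print(subset)            # number of rows and column names
print(subset[0].keys())  # inspect the fields of a single record
```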
Model Zoo
Mantis Models
We provide the following models in the 🤗 Hugging Face model hub:
- TIGER-Lab/Mantis-8B-Idefics2
- TIGER-Lab/Mantis-8B-clip-llama3
- TIGER-Lab/Mantis-8B-siglip-llama3
- TIGER-Lab/Mantis-8B-Fuyu
Run models
- Run Mantis-8B-Idefics2:
cd examples && python run_mantis_idefics2.py
- Mantis-8B-siglip-llama3:
cd examples && python run_mantis.py
- Mantis-8B-Fuyu:
cd examples && python run_mantis_fuyu.py
Chat CLI
We provide a simple chat CLI for Mantis models. You can run the following command to chat with Mantis-8B-siglip-llama3:
python examples/chat_mantis.py
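For programmatic multi-turn use (instead of the interactive CLI), the same chat-template interface can carry the conversation history. The loop below is a rough sketch that assumes `model`, `processor`, and `images` loaded as in the transformers inference sketch earlier in this README:

```python
# Rough multi-turn sketch: fold each assistant reply back into `messages` so history is kept.
# Assumes `model`, `processor`, and `images` from the earlier inference sketch.
questions = [
    "Compare these two images.",
    "Which scene looks older?",
    "Summarize the comparison in one sentence.",
]
messages = []
for i, question in enumerate(questions):
    content = [{"type": "text", "text": question}]
    if i == 0:
        # Attach both images to the first user turn only.
        content = [{"type": "image"}, {"type": "image"}] + content
    messages.append({"role": "user", "content": content})

    prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
    inputs = processor(text=prompt, images=images, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
    # Decode only the newly generated tokens, then append the reply to the history.
    reply = processor.batch_decode(
        output[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )[0]
    print("Assistant:", reply)
    messages.append({"role": "assistant", "content": [{"type": "text", "text": reply}]})
```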
Intermediate Checkpoints
The following intermediate checkpoints, taken after pre-training the multimodal projectors, are also available for reproducing our experiments. (Please note that these checkpoints still need further fine-tuning on Mantis-Instruct; they are not working models on their own.)
Acknowledgement
- Thanks to the LLaVA and LLaVA-hf teams for providing the LLaVA codebase and Hugging Face compatibility!
- Thanks to Haoning Wu for providing the MVBench evaluation code!
Star History
Citation
@article{Jiang2024MANTISIM,
title={MANTIS: Interleaved Multi-Image Instruction Tuning},
author={Dongfu Jiang and Xuan He and Huaye Zeng and Cong Wei and Max W.F. Ku and Qian Liu and Wenhu Chen},
journal={Transactions on Machine Learning Research},
year={2024},
volume={2024},
url={https://openreview.net/forum?id=skLtdUVaJa}
}