
Ovis: Structural Embedding Alignment for Multimodal Large Language Model

Ovis (Open VISion) is a novel Multimodal Large Language Model (MLLM) architecture, designed to structurally align visual and textual embeddings. For a comprehensive introduction, please refer to the Ovis paper.

<div style="text-align: center;"> <img style="max-width: 100%;" src="docs/ovis-illustration.png" alt="Ovis Illustration"/> </div>


Install

Ovis has been tested with Python 3.10, Torch 2.1.2, Transformers 4.43.2, and DeepSpeed 0.14.0. For a comprehensive list of package dependencies, please consult the requirements.txt file. Before training or inference, please install Ovis as follows.

git clone git@github.com:AIDC-AI/Ovis.git
conda create -n ovis python=3.10 -y
conda activate ovis
cd Ovis
pip install -r requirements.txt
pip install -e .
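
After installation, a quick import check can confirm that the core dependencies resolved to the versions Ovis was tested with:

python -c "import torch, transformers, deepspeed; print(torch.__version__, transformers.__version__, deepspeed.__version__)"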

Model

Ovis can be instantiated with popular LLMs (e.g., Llama3, Gemma2). We provide the following pretrained Ovis MLLMs:

| Ovis MLLMs | ViT | LLM | Model Weights |
|---|---|---|---|
| Ovis1.5-Llama3-8B | Siglip-400M | Llama3-8B-Instruct | Huggingface |
| Ovis1.5-Gemma2-9B | Siglip-400M | Gemma2-9B-It | Huggingface |
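
These checkpoints can typically be loaded straight from Huggingface via the standard transformers remote-code path. The sketch below is an assumption based on that pattern rather than the documented entry point (the Inference section describes the provided OvisRunner wrapper); the repository id and dtype may need adjusting:

import torch
from transformers import AutoModelForCausalLM

# Load a pretrained Ovis checkpoint; trust_remote_code pulls in Ovis' custom modeling code.
model = AutoModelForCausalLM.from_pretrained(
    'AIDC-AI/Ovis1.5-Llama3-8B',   # adjust to the checkpoint you want
    torch_dtype=torch.bfloat16,    # assumes a GPU with bfloat16 support
    trust_remote_code=True,
).cuda()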

Performance

We evaluate Ovis1.5 across various multimodal benchmarks using VLMEvalKit and compare its performance to leading MLLMs with similar parameter scales.

| Benchmark | MiniCPM-Llama3-V2.5 | GLM-4V-9B | Ovis1.5-Llama3-8B | Ovis1.5-Gemma2-9B |
|---|---|---|---|---|
| Open Weights | | | | |
| Open Datasets | | | | |
| MMTBench-VAL | 57.6 | 48.8 | 60.7 | 62.7 |
| MMBench-EN-V1.1 | 74 | 68.7 | 78.2 | 78.0 |
| MMBench-CN-V1.1 | 70.1 | 67.1 | 75.2 | 75.1 |
| MMStar | 51.8 | 54.8 | 57.2 | 58.7 |
| MMMU-Val | 45.8 | 46.9 | 48.6 | 49.8 |
| MathVista-Mini | 54.3 | 51.1 | 62.4 | 65.7 |
| HallusionBench Avg | 42.4 | 45 | 44.5 | 48.0 |
| AI2D | 78.4 | 71.2 | 82.5 | 84.7 |
| OCRBench | 725 | 776 | 743 | 756 |
| MMVet | 52.8 | 58 | 52.2 | 56.5 |
| RealWorldQA | 63.5 | 66 | 64.6 | 66.9 |
| CharXiv Reasoning | 24.9 | - | 28.2 | 28.4 |
| CharXiv Descriptive | 59.3 | - | 60.2 | 62.6 |
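
For reference, a typical VLMEvalKit invocation looks like the following; the --model identifier must match the name registered in your VLMEvalKit configuration, so the one below is illustrative:

python run.py --data MMStar --model Ovis1.5-Llama3-8B --verbose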

Dataset

All training datasets are summarized in the JSON file located at ovis/train/dataset/dataset_info_v1_5.json, which records each dataset's attributes (such as its meta file and image directory).

We provide the meta_file for each training dataset at Huggingface. The images can be downloaded from their respective sources listed below.

| dataset name | image dir | image source |
|---|---|---|
| pixelprose-14m | pixelprose-14m | image_url of each sample in pixelprose-14m.parquet |
| wikipedia-348k | wikipedia-348k | image_url of each sample in wikipedia-348k.parquet |
| ocr-469k | ocr-469k | image_url of each sample in ocr-469k.parquet |
| allava-* | allava | https://huggingface.co/datasets/FreedomIntelligence/ALLaVA-4V |
| cc12m-description-387k / cc12m-qa-387k | ovis_cc12m | https://huggingface.co/datasets/AIDC-AI/Ovis-dataset |
| A-OKVQA-18k / vsr-train-dev-12k | COCO | https://cocodataset.org |
| CLEVR-MATH-85k | CLEVR-MATH | https://github.com/dali-does/clevr-math |
| FigureQA-100k | FigureQA | https://www.microsoft.com/en-us/research/project/figureqa-dataset |
| Geometry-2k | Geometry3K | https://github.com/lupantech/InterGPS |
| IAM-7k | IAM-line | https://huggingface.co/datasets/Teklia/IAM-line |
| InfographicVQA-24k / infovqa-multi-conversation-5k | InfoVQA | https://rrc.cvc.uab.es/?ch=17&com=downloads |
| MathQA-395k | - | - |
| MathV-360k | MathV-360K | https://huggingface.co/datasets/Zhiqiang007/MathV360K |
| MathV-CoT-360k | MathV-360K | https://huggingface.co/datasets/Zhiqiang007/MathV360K |
| OpenCQA-5k | OpenCQA | https://github.com/vis-nlp/OpenCQA |
| PlotQA-157k | PlotQA | https://github.com/NiteshMethani/PlotQA |
| Super-CLEVR-30k | Super-CLEVR | https://github.com/Lizw14/Super-CLEVR |
| Symbolic-Reasoning-TabMW-31k | Symbolic-Reasoning-TabMWP | https://promptpg.github.io |
| ViQuAE-2k | ViQuAE | https://github.com/PaulLerner/ViQuAE |
| ai2d-mc-15k | AI2D | https://huggingface.co/datasets/AIDC-AI/Ovis-dataset |
| c7s-* | Cambrian_10M | https://huggingface.co/datasets/nyu-visionx/Cambrian-10M |
| doclaynet-65k | DocLayNet | https://huggingface.co/datasets/ds4sd/DocLayNet |
| doclie-real-100k | ovis-docile | https://huggingface.co/datasets/AIDC-AI/Ovis-dataset |
| docmatix-si-900k | docmatix | https://huggingface.co/datasets/HuggingFaceM4/Docmatix |
| dtvqa-27k | DT-VQA | https://github.com/ShuoZhang2003/DT-VQA |
| funsd-1k | funsd | https://guillaumejaume.github.io/FUNSD |
| hme-74k | HME | https://github.com/Phymond/HME100K |
| hwl-eng-10k | HWL_OCR_ENG | https://ai.100tal.com/openData/ocr |
| icqa-train-val-40k | iconqa | https://iconqa.github.io/ |
| kvqa-25k | KVQA | http://malllabiisc.github.io/resources/kvqa |
| lrv-instruct-and-chart-343k | LRV | https://github.com/FuxiaoLiu/LRV-Instruction |
| mmc-base-410k | MMC | https://huggingface.co/datasets/xywang1/MMC |
| mmmath-6k | mmmath | https://huggingface.co/datasets/THU-KEG/MM_Math |
| ocr-vqa-multi-conversation-207k | ocr-vqa | https://ocr-vqa.github.io/ |
| okvqa-14k | OK-VQA | https://okvqa.allenai.org/index.html |
| orandCAR-5k | ORAND-CAR | https://www.orand.cl/icfhr2014-hdsr |
| poie-9k | poie | https://github.com/jfkuang/cfam |
| sroie-3k | sroie | https://www.kaggle.com/datasets/urbikn/sroie-datasetv2/data |
| stvqa-78k | ST-VQA | https://rrc.cvc.uab.es/?ch=11 |
| tqa-train-34k | textvqa | https://textvqa.org/dataset |
| tqa-train-val-20k | TQA | https://allenai.org/data/tqa |
| visualdialog-125k | VisualDialog | https://visualdialog.org |
| vqa-v2-multi-conversation-184k | VQA | https://visualqa.org |

Below is an example of the folder structure consistent with dataset_info_v1_5.json. You can alter the folder structure as needed and modify dataset_info_v1_5.json accordingly.

|-- mllm_datasets
    |-- meta_files
        |-- v1
        |-- v1_5
            |-- pixelprose-14m.parquet
            |-- cc12m-description-387k.json
            |-- A-OKVQA-18k.json
            |-- CLEVR-MATH-85k.json
            ...
    |-- images
        |-- pixelprose-14m
        |-- ovis_cc12m
        |-- COCO
        |-- CLEVR-MATH
        ...
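
If you rearrange the folders, a quick way to sanity-check the registry afterwards is to load it and preview a few entries. The sketch below only assumes the file is valid JSON and does not rely on a particular schema:

import json

# Load the dataset registry and print the beginning of it for inspection.
with open('ovis/train/dataset/dataset_info_v1_5.json') as f:
    dataset_info = json.load(f)
print(json.dumps(dataset_info, indent=2)[:800])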

Train

Ovis is trained in three stages, with each stage's training scripts located in the scripts directory. Before starting the training, ensure you properly set the ROOT variable in the scripts. Below are the commands to train Ovis1.5-Llama3-8B:

bash scripts/v1_5/Ovis1.5-Llama3-8B-S1.sh
bash scripts/v1_5/Ovis1.5-Llama3-8B-S2.sh
bash scripts/v1_5/Ovis1.5-Llama3-8B-S3.sh

Inference

We provide an inference wrapper in ovis/serve/runner.py, which can be used as follows:

from PIL import Image
from ovis.serve.runner import RunnerArguments, OvisRunner
image = Image.open('IMAGE_PATH')
text = 'PROMPT'
runner_args = RunnerArguments(model_path='MODEL_PATH')
runner = OvisRunner(runner_args)
generation = runner.run([image, text])

Based on Gradio, Ovis can also be accessed via a web user interface:

python ovis/serve/server.py --model_path MODEL_PATH --port PORT

Citation

If you find Ovis useful, please cite the paper:

@article{lu2024ovis,
  title={Ovis: Structural Embedding Alignment for Multimodal Large Language Model}, 
  author={Shiyin Lu and Yang Li and Qing-Guo Chen and Zhao Xu and Weihua Luo and Kaifu Zhang and Han-Jia Ye},
  year={2024},
  journal={arXiv:2405.20797}
}

Team

This work is a collaborative effort by the MarcoVL team. We would also like to provide links to the following MLLM papers from our team:

License

The project is licensed under the Apache 2.0 License and is restricted to uses that comply with the license agreements of Qwen, Llama3, Gemma2, Clip, and Siglip.