<p align="center"> <img src="demo_images/vila-logo.jpg" width="20%"/> </p>

VILA: On Pre-training for Visual Language Models

Code License Model License Python 3.10+

VILA arxiv / VILA Demo / VILA Huggingface

💡 Introduction

VILA is a visual language model (VLM) pretrained with interleaved image-text data at scale, enabling video understanding and multi-image understanding capabilities. VILA is deployable on the edge via AWQ 4-bit quantization and the TinyChat framework. We find that: (1) image-text pairs are not enough; interleaved image-text data is essential; (2) unfreezing the LLM during interleaved image-text pre-training enables in-context learning; (3) re-blending text-only instruction data is crucial to boost both VLM and text-only performance; (4) token compression extends the number of video frames. VILA unveils appealing capabilities, including video reasoning, in-context learning, visual chain-of-thought, and better world knowledge.

💡 News

Performance

Image QA Benchmarks

| | Prec. | VQAv2 | GQA | VizWiz | SQA-I | VQA-T | POPE | MME | MMB | MMB-CN | SEED | SEED-I | MMMU (val) | MMMU (test) | llava-bench | MM-Vet | Average |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| VILA1.5-3B | fp16 | 80.4 | 61.5 | 53.5 | 69.0 | 60.4 | 85.9 | 1442.44 | 63.4 | 52.7 | 60.9 | 67.9 | 33.3 | 30.8 | 75.9 | 35.4 | 60.2 |
| VILA1.5-3B-AWQ | int4 | 80.0 | 61.1 | 53.8 | 67.8 | 60.4 | 85.9 | 1437.34 | 63.3 | 51.4 | 59.8 | 66.6 | 32.7 | 31.1 | 75.0 | 37.3 | 59.9 |
| VILA1.5-3B-S2 | fp16 | 79.8 | 61.4 | 61.3 | 69.6 | 63.4 | 85.3 | 1431.65 | 62.8 | 52.2 | 60.0 | 66.4 | 32.8 | 31.3 | 76.7 | 38.6 | 60.9 |
| VILA1.5-3B-S2-AWQ | int4 | 79.4 | 61.3 | 62.3 | 69.2 | 63.0 | 85.8 | 1417.06 | 61.6 | 51.5 | 59.1 | 65.7 | 33.4 | 30.4 | 77.1 | 36.7 | 60.5 |
| Llama-3-VILA1.5-8B | fp16 | 83.0 | 63.5 | 63.2 | 82.0 | 68.5 | 85.6 | 1634.91 | 75.3 | 69.9 | 66.4 | 73.8 | 38.6 | 32.7 | 71.9 | 43.2 | 66.6 |
| Llama-3-VILA1.5-8B-AWQ | int4 | 80.3 | 61.7 | 59.3 | 79.0 | 65.4 | 82.9 | 1593.65 | 71.0 | 64.9 | 64.0 | 71.1 | 36.0 | 36.1 | 79.0 | 37.2 | 64.5 |
| VILA1.5-13B | fp16 | 82.8 | 64.3 | 62.6 | 80.1 | 65.0 | 86.3 | 1569.55 | 74.9 | 66.3 | 65.1 | 72.6 | 37.9 | 33.6 | 80.8 | 44.3 | 66.3 |
| VILA1.5-13B-AWQ | int4 | 82.7 | 64.5 | 63.3 | 79.7 | 64.7 | 86.7 | 1531.35 | 74.7 | 66.7 | 65.1 | 72.6 | 37.8 | 34.0 | 81.9 | 46.4 | 66.5 |
| VILA1.5-40B | fp16 | 84.3 | 64.6 | 62.2 | 87.2 | 73.6 | 87.3 | 1726.82 | 82.4 | 80.2 | 69.1 | 75.8 | 51.9 | 46.9 | 81.3 | 53.0 | 72.4 |
| VILA1.5-40B-AWQ | int4 | 84.1 | 64.4 | 61.3 | 86.7 | 73.2 | 88.2 | 1714.79 | 83.2 | 79.6 | 68.9 | 75.6 | 49.3 | 46.2 | 83.0 | 51.4 | 72.1 |

<sup>NOTE: VQAv2 and VizWiz results are on test-dev; the average accuracy is calculated over all datasets, with MME scores divided by 20.</sup>
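The averaging convention in the note can be checked against the table itself; a minimal sketch using the VILA1.5-3B row:

```python
# Sketch: reproduce the reported average for VILA1.5-3B from the table above.
# Per the note, the MME score (1442.44) is divided by 20 before averaging.
scores = [80.4, 61.5, 53.5, 69.0, 60.4, 85.9, 1442.44 / 20,
          63.4, 52.7, 60.9, 67.9, 33.3, 30.8, 75.9, 35.4]
average = sum(scores) / len(scores)
print(round(average, 1))  # 60.2, matching the table's Average column
```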

Video QA Benchmarks

| | Prec. | Perception Test | ActivityNet | MSVD | MSRVTT | TGIF | EgoSchema (test) | CinePile |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| VILA1.5-3B | fp16 | 47 | 50.2 | 76.6 | 57.5 | 51.7 | 42.6 | 37.9 |
| VILA1.5-3B-S2 | fp16 | 49.7 | 50.7 | 76.9 | 57.6 | 51.7 | | |
| Llama-3-VILA1.5-8B | fp16 | 54.1 | 54.3 | 78.3 | 60.1 | 54.1 | 50.4 | 48.7 |
| VILA1.5-13B | fp16 | 53.6 | 54.7 | 77.9 | 60.2 | 56 | 52.2 | 50.1 |
| VILA1.5-40B | fp16 | 54 | 58 | 80.1 | 63 | 58.2 | 58.7 | 51.3 |

Inference Speed (tokens/sec)

| | Precision | A100 | 4090 | Orin |
| --- | --- | --- | --- | --- |
| VILA1.5-3B | fp16 | 104.6 | 137.6 | 25.4 |
| VILA1.5-3B-AWQ | int4 | 182.8 | 215.5 | 42.5 |
| VILA1.5-3B-S2 | fp16 | 104.3 | 137.2 | 24.6 |
| VILA1.5-3B-S2-AWQ | int4 | 180.2 | 219.3 | 40.1 |
| Llama-3-VILA1.5-8B | fp16 | 74.9 | 57.4 | 10.2 |
| Llama-3-VILA1.5-8B-AWQ | int4 | 168.9 | 150.2 | 28.7 |
| VILA1.5-13B | fp16 | 50.9 | OOM | 6.1 |
| VILA1.5-13B-AWQ | int4 | 115.9 | 105.7 | 20.6 |
| VILA1.5-40B | fp16 | OOM | OOM | -- |
| VILA1.5-40B-AWQ | int4 | 57.0 | OOM | -- |

<sup>NOTE: Measured using the TinyChat backend at batch size = 1.</sup>
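The AWQ speedup implied by these numbers can be computed directly from the table; a minimal sketch for the VILA1.5-3B row:

```python
# Sketch: AWQ int4 speedup over fp16 (tokens/sec) for VILA1.5-3B, per GPU.
fp16 = {"A100": 104.6, "4090": 137.6, "Orin": 25.4}  # VILA1.5-3B fp16
int4 = {"A100": 182.8, "4090": 215.5, "Orin": 42.5}  # VILA1.5-3B-AWQ int4
speedups = {gpu: round(int4[gpu] / fp16[gpu], 2) for gpu in fp16}
print(speedups)  # roughly 1.6-1.8x on every platform
```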

VILA Examples

Video captioning

https://github.com/Efficient-Large-Model/VILA/assets/156256291/c9520943-2478-4f97-bc95-121d625018a6

Prompt: Elaborate on the visual and narrative elements of the video in detail.

Caption: The video shows a person's hands working on a white surface. They are folding a piece of fabric with a checkered pattern in shades of blue and white. The fabric is being folded into a smaller, more compact shape. The person's fingernails are painted red, and they are wearing a black and red garment. There are also a ruler and a pencil on the surface, suggesting that measurements and precision are involved in the process.

In context learning

<img src="demo_images/demo_img_1.png" height="239"> <img src="demo_images/demo_img_2.png" height="250">

Multi-image reasoning

<img src="demo_images/demo_img_3.png" height="193">

VILA on Jetson Orin

https://github.com/Efficient-Large-Model/VILA/assets/7783214/6079374c-0787-4bc4-b9c6-e1524b4c9dc4

VILA on RTX 4090

https://github.com/Efficient-Large-Model/VILA/assets/7783214/80c47742-e873-4080-ad7d-d17c4700539f


Installation

./environment_setup.sh vila

Training

VILA training consists of three steps; for the specific hyperparameters, please check out the scripts/v1_5 folder:

Step-1: Alignment

We utilize the LLaVA-CC3M-Pretrain-595K dataset to align the textual and visual modalities.

The stage 1 script takes two parameters and can run on a single 8xA100 node. BASE_MODEL_PATH points to an online or local Hugging Face repository, such as NousResearch/Llama-2-7b-hf. OUTPUT_NAME points to a target directory under checkpoints, where the trained multimodal projector will be saved.

bash scripts/v1_5/paper/1_mm_align.sh [BASE_MODEL_PATH] [OUTPUT_NAME]

Step-2: Pretraining

We use the MMC4 and Coyo datasets to train the VLM with interleaved image-text pairs.

bash scripts/v1_5/paper/2_pretrain_mmc4_coyo.sh [CODE_PATH] [BASE_MODEL_PATH] [STAGE1_PATH] [OUTPUT_NAME]

The stage 2 script takes four arguments. CODE_PATH is the absolute path to our VILA codebase, and BASE_MODEL_PATH has the same meaning as in the stage 1 script. STAGE1_PATH points to the OUTPUT_NAME of stage 1 (i.e., where the stage 1 checkpoint is stored). OUTPUT_NAME is the desired folder name under checkpoints that saves the pretraining checkpoint. The script we provide for this stage runs on SLURM, and we expect it to execute on 16 nodes (128 GPUs).

Step-3: Supervised fine-tuning

This is the last stage of VILA training, in which we tune the model to follow multimodal instructions on a subset of M3IT, FLAN, and ShareGPT4V. This stage runs on an 8xA100 node.

bash scripts/v1_5/paper/3_sft.sh [STAGE2_PATH] [OUTPUT_NAME]

The stage 3 script takes in two arguments. STAGE2_PATH points to the OUTPUT_NAME of the stage 2 script (i.e. where the stage 2 checkpoint is stored). OUTPUT_NAME is the desired folder name under checkpoints that stores the final checkpoint.
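Putting the three stages together, each stage's OUTPUT_NAME feeds the next stage's input path. A minimal sketch of the chain (the output names `stage1`/`stage2`/`stage3` and the code path here are illustrative placeholders, not fixed values):

```python
import subprocess  # used when actually launching the scripts

# Sketch: chain the three training stages described above.
base_model = "NousResearch/Llama-2-7b-hf"  # example base LLM from the text
code_path = "/path/to/VILA"                # absolute path to this codebase

stages = [
    ["bash", "scripts/v1_5/paper/1_mm_align.sh", base_model, "stage1"],
    ["bash", "scripts/v1_5/paper/2_pretrain_mmc4_coyo.sh",
     code_path, base_model, "stage1", "stage2"],
    ["bash", "scripts/v1_5/paper/3_sft.sh", "stage2", "stage3"],
]
for cmd in stages:
    print(" ".join(cmd))  # replace with subprocess.run(cmd, check=True)
```

Note that stage 2 is intended for a 16-node SLURM cluster, so in practice it would be submitted through the scheduler rather than run directly.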

Evaluations

Image Benchmarks

You can follow the LLaVA-1.5 evaluation instructions to download all datasets. After downloading, please put them under playground/data/eval.

Please make the following change to the MME evaluation script: search for

data_path = "MME_Benchmark_release_version"

and replace it with:

data_path = os.path.join(script_dir, "MME_Benchmark_release_version")

We provide a push-the-button script to perform evaluation on all 10 datasets that do not require GPT-assisted evaluation:

./scripts/v1_5/eval/eval_all.sh [CHECKPOINT_PATH] [MODEL_NAME] [CONV_MODE]

This script takes three parameters: CHECKPOINT_PATH points to the stage 3 model checkpoint, MODEL_NAME names the evaluation results, and CONV_MODE selects the conversation template.

VQAv2 and VizWiz evaluations are hosted on eval.ai. You need to register an account and create a team before you can submit evaluations.

MMBench and MMBench_CN evaluations are hosted on a separate evaluation server. Make sure you change the name of the file before submitting; otherwise the server caches results and will keep returning the wrong ones.

We provide a quick script to automatically organize the prediction files that need to be submitted to servers:

python scripts/v1_5/eval/copy_predictions.py [MODEL_NAME]

You will be able to find the predictions under playground/data/predictions_upload/[MODEL_NAME] after executing this script.

Video Benchmarks

Please follow the evaluation steps in Video-LLaVA for dataset preparation.

./scripts/v1_5/eval/video_chatgpt/run_all.sh [CHECKPOINT_PATH] [MODEL_NAME] [CONV_MODE]
./scripts/v1_5/eval/video_chatgpt/eval_all.sh [MODEL_NAME]

Inference

We provide snippets for quick inference with user prompts and images.

Llama-3-VILA1.5-8B inference:

python -W ignore llava/eval/run_vila.py \
    --model-path Efficient-Large-Model/Llama-3-VILA1.5-8b-Fix \
    --conv-mode llama_3 \
    --query "<image>\n Please describe the traffic condition." \
    --image-file "av.png"

VILA1.5-40B inference:

python -W ignore llava/eval/run_vila.py \
    --model-path Efficient-Large-Model/VILA1.5-40b \
    --conv-mode hermes-2 \
    --query "<image>\n Please describe the traffic condition." \
    --image-file "av.png"

VILA1.5-3B video inference:

python -W ignore llava/eval/run_vila.py \
    --model-path Efficient-Large-Model/VILA1.5-3b \
    --conv-mode vicuna_v1 \
    --query "<video>\n Please describe this video." \
    --video-file "demo.mp4"

Quantization and Deployment

Our VILA models are quantized to 4 bits with AWQ for efficient inference on the edge. We provide a push-the-button script to quantize VILA with AWQ.

Running VILA on desktop GPUs and edge GPUs

We support AWQ-quantized 4-bit VILA on GPU platforms via TinyChat. We provide a tutorial to run the model with TinyChat after quantization, as well as instructions for launching a Gradio server (powered by TinyChat and AWQ) to serve 4-bit quantized VILA models.

Running VILA on laptops

We further support our AWQ-quantized 4-bit VILA models on various CPU platforms, covering both x86 and ARM architectures, with our TinyChatEngine. We also provide a detailed tutorial to help users deploy VILA on different CPUs.

Running VILA API server

A simple API server has been provided to serve VILA models. The server is built on top of FastAPI and Hugging Face Transformers. It can be launched from the CLI or with Docker:

With CLI

python -W ignore server.py \
    --port 8000 \
    --model-path Efficient-Large-Model/VILA1.5-3B \
    --conv-mode vicuna_v1

With Docker

docker build -t vila-server:latest .
docker run --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 \
    -v ./hub:/root/.cache/huggingface/hub \
    -it --rm -p 8000:8000 \
    -e VILA_MODEL_PATH=Efficient-Large-Model/VILA1.5-3B \
    -e VILA_CONV_MODE=vicuna_v1 \
    vila-server:latest

Then you can call the endpoint with the OpenAI SDK as follows:

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000",
    api_key="fake-key",
)
response = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://blog.logomyway.com/wp-content/uploads/2022/01/NVIDIA-logo.jpg",
                        # Or you can pass in a base64 encoded image
                        # "url": "data:image/png;base64,<base64_encoded_image>",
                    },
                },
            ],
        }
    ],
    max_tokens=300,
    model="VILA1.5-3B",
    # You can pass in extra parameters as follows
    extra_body={"num_beams": 1, "use_cache": False},
)
print(response.choices[0].message.content)

<sup>NOTE: This API server is intended for evaluation purposes only and has not been optimized for production use. It has only been tested on A100 and H100 GPUs.</sup>
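The commented-out base64 option in the snippet above needs a data URL; a minimal sketch of building one (the `to_data_url` helper is hypothetical, not part of the server API):

```python
import base64

# Sketch: build a base64 data URL for the "image_url" field.
# Assumption: the server accepts standard data:<mime>;base64,<payload> URLs.
def to_data_url(image_bytes: bytes, mime: str = "image/png") -> str:
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# e.g. to_data_url(open("av.png", "rb").read()); PNG magic bytes shown here
url = to_data_url(b"\x89PNG\r\n\x1a\n")
print(url[:22])  # data:image/png;base64,
```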

Checkpoints

We release VILA1.5-3B, VILA1.5-3B-S2, Llama-3-VILA1.5-8B, VILA1.5-13B, VILA1.5-40B and the 4-bit AWQ-quantized models VILA1.5-3B-AWQ, VILA1.5-3B-S2-AWQ, Llama-3-VILA1.5-8B-AWQ, VILA1.5-13B-AWQ, VILA1.5-40B-AWQ.

🔒 License

Team

| | | |
| --- | --- | --- |
| Yao Lu: Nvidia | Hongxu Yin: Nvidia | Ji Lin: OpenAI (work done at Nvidia and MIT) |
| Wei Ping: Nvidia | Pavlo Molchanov: Nvidia | Andrew Tao: Nvidia |
| Haotian Tang: MIT | Shang Yang: MIT | Ligeng Zhu: Nvidia, MIT |
| Wei-Chen Wang: MIT | Fuzhao Xue: Nvidia, NUS | Yunhao Fang: Nvidia, UCSD |
| Yukang Chen: Nvidia | Zhuoyang Zhang: Nvidia | Yue Shen: Nvidia |
| Wei-Ming Chen: Nvidia | Huizi Mao: Nvidia | Baifeng Shi: Nvidia, UC Berkeley |
| Jan Kautz: Nvidia | Mohammad Shoeybi: Nvidia | Song Han: Nvidia, MIT |

Citations

@misc{lin2023vila,
      title={VILA: On Pre-training for Visual Language Models},
      author={Ji Lin and Hongxu Yin and Wei Ping and Yao Lu and Pavlo Molchanov and Andrew Tao and Huizi Mao and Jan Kautz and Mohammad Shoeybi and Song Han},
      year={2023},
      eprint={2312.07533},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Acknowledgement