<img src="figures/crayon_emoji.png" style="vertical-align: -10px;" height="50px" width="50px"> CoLLaVO: Crayon Large Language and Vision mOdel [arxiv](https://arxiv.org/abs/2402.11248)

📰 News

(Demo GIF: crayon_demo)

🎨 In-Progress


Official PyTorch implementation of the technical components of Crayon Large Language and Vision mOdel (CoLLaVO), which improves performance on numerous zero-shot vision-language tasks. This code builds on two baselines: XDecoder (Generalized Decoding for Pixel, Image, and Language, accepted at CVPR 2023) and InternLM (technical report).

🏝️ Summary

The remarkable success of Large Language Models (LLMs) and instruction tuning drives the evolution of Vision Language Models (VLMs) towards a versatile general-purpose model. Yet, it remains unexplored whether current VLMs genuinely possess quality object-level image understanding capabilities, determined from questions such as 'what objects are in the image?' or 'which object corresponds to a specified bounding box?'. Our findings reveal that the image understanding capabilities of current VLMs are strongly correlated with their zero-shot performance on vision language (VL) tasks. This suggests that prioritizing basic image understanding is crucial for VLMs to excel at VL tasks. To enhance object-level image understanding, we propose Crayon Large Language and Vision mOdel (<img src="figures/crayon_emoji.png" style="vertical-align: -5px;" height="20px" width="20px"> CoLLaVO), which incorporates instruction tuning with the Crayon Prompt, a new visual prompt tuning scheme based on panoptic color maps. Furthermore, we present a learning strategy of Dual QLoRA to preserve object-level image understanding without forgetting it during visual instruction tuning, thereby achieving a significant leap in numerous VL benchmarks in a zero-shot setting.
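
The Crayon Prompt is described above only at a high level (a visual prompt built from panoptic color maps). As a rough, hypothetical sketch of that idea, not the repository's actual implementation, the snippet below colors each panoptic segment with a fixed palette so the resulting map can be supplied alongside the image; the id format, palette, and function name are assumptions made purely for illustration.

```python
# Hypothetical sketch of a "crayon" (panoptic color-map) prompt.
# The real Crayon Prompt is defined in the paper/repo; this only
# illustrates the general idea of coloring each panoptic segment.
import torch

def make_crayon_map(panoptic_ids: torch.Tensor, num_entities: int = 134) -> torch.Tensor:
    """panoptic_ids: (H, W) integer map of panoptic segment ids.
    Returns a (3, H, W) color map with a fixed pseudo-random color per id."""
    torch.manual_seed(0)                      # fixed palette across calls
    palette = torch.rand(num_entities, 3)     # one RGB color per entity id
    colors = palette[panoptic_ids.clamp(max=num_entities - 1)]  # (H, W, 3)
    return colors.permute(2, 0, 1)            # (3, H, W)

# Toy example: a 4x4 "panoptic" map with three segments.
toy_ids = torch.tensor([[0, 0, 1, 1],
                        [0, 0, 1, 1],
                        [2, 2, 2, 2],
                        [2, 2, 2, 2]])
crayon_map = make_crayon_map(toy_ids)
print(crayon_map.shape)  # torch.Size([3, 4, 4])
```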

🚀 Highlights

<img src="figures/fig_close.png" width="511" height="350">
<figcaption> Figure. Zero-shot performance of CoLLaVO-7B on challenging VL datasets compared with closed-source VLMs: GPT-4V, Gemini-Pro, Qwen-VL-Plus. Note: the MME scores are rescaled by 1/20 to match the scale of the other accuracies. </figcaption>

<img src="figures/fig_seed.png" width="532" height="350">
<figcaption> Figure. Efficiency and effectiveness of CoLLaVO compared with other VLMs. Note that accuracy is measured on SEED-IMG. </figcaption>

<img src="figures/fig_pope.png" width="661" height="350">
<figcaption> Table. Four metrics (Accuracy, Precision, Recall, F1-score) on the three POPE question types (Adversarial, Random, Popular), used to evaluate hallucination of vision language models. </figcaption>
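
For reference, the 1/20 rescaling mentioned in the first caption is simple arithmetic on the raw MME scores (see the table in the download section below); for example:

```python
# MME scores are divided by 20 so they plot on the same 0-100 axis as
# accuracy-style metrics. CoLLaVO-7B's MME-P score from the table below:
mme_p_collavo = 1689.7
print(mme_p_collavo / 20)  # 84.485, i.e. ~84.5 on the rescaled axis
```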

📖 Citation

@article{lee2024collavo,
  title={CoLLaVO: Crayon Large Language and Vision mOdel},
  author={Lee, Byung-Kwan and Park, Beomchan and Kim, Chae Won and Ro, Yong Man},
  journal={arXiv preprint arXiv:2402.11248},
  year={2024}
}

<img src="figures/crayon_emoji.png" style="vertical-align: -2px;" height="20px" width="20px"> Download CoLLaVO-7B (Under Preparation)

| Model | GQA | SQA-IMG | TextVQA | POPE | MME-P | MME-C | MM-Bench | MMB-CN | MM-Vet | Q-Bench |
|---|---|---|---|---|---|---|---|---|---|---|
| BLIP2-13B | 42.4 | 61.0 | 42.5 | 85.3 | 1293.8 | 290.0 | - | - | 22.4 | - |
| InstructBLIP-7B | 49.5 | 49.2 | 60.5 | 50.1 | - | - | 36.0 | 23.7 | 25.6 | 56.7 |
| Qwen-VL-Chat-7B | 57.5 | 68.2 | 61.5 | - | 1487.5 | 360.7 | 60.6 | 56.7 | - | - |
| LLaVA1.5-7B | 62.0 | 66.8 | 58.2 | 85.9 | 1510.7 | 293.8 | 64.3 | 58.3 | 30.5 | 58.7 |
| CoLLaVO-7B | 61.4 | 80.7 | 64.2 | 87.2 | 1689.7 | 525.0 | 83.0 | 82.1 | 40.3 | 67.6 |

📂 Directory Layout

.
├── asset                           # Required package lists (Important)
├── trainer                         # Training CoLLaVO and initializing the optimizer (not supported yet)
├── utils                           # Miscellaneous util files (Not important)
├── collavo                         # CoLLaVO architecture & loading CoLLaVO (Important)
├── pipeline                        # Evaluating zero-shot vision language tasks (Important)
│
├── datasets                        # Important
│   ├── dataset_mappers             # Data parsing, including augmentation for the loader
│   ├── evaluation                  # Evaluation measures for each dataset
│   └── registration                # Dataset registration
│
├── configs
│   ├── accel                       # Accelerate config files (supports DDP)
│   └── collavo_eval.yaml           # Config for evaluating CoLLaVO
│
├── modeling                        # Not important
│   ├── architectures               # Training the prototype of CoLLaVO (not supported yet)
│   ├── utils                       # Utils for modeling (Not important)
│   └── BaseModel                   # Loading and saving the model
│
├── lbk_entry.py                    # Main entry point / control tower (Important)
├── run                             # Bash file for running the evaluation (Important)
│
├── install                         # Installs required packages (Important)
└── README.md

💔 How to Run?

In the bash file 'install', you should first run the following lines.

conda create -n collavo python=3.9
conda activate collavo
conda clean -a && pip cache purge
conda install pytorch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install -r assets/requirements/requirements.txt
pip install -r assets/requirements/requirements_custom.txt
pip install flash-attn --no-build-isolation
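
After installation, an optional sanity check (a suggestion, not part of the official setup) is to confirm that the pinned PyTorch build sees your GPUs and that flash-attn imports cleanly:

```python
# Optional environment sanity check (not part of the official install steps).
import torch
import flash_attn  # installed above via `pip install flash-attn`

print(torch.__version__)          # expected 2.0.1 per the conda command above
print(torch.cuda.is_available())  # should be True on a CUDA 11.8 machine
print(flash_attn.__version__)
```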

In addition, you should set the following environment variables to point to your dataset path.

export DETECTRON2_DATASETS=/path/to/dataset
export DATASET=/path/to/dataset
export DATASET2=/path/to/dataset
export VLDATASET=/path/to/dataset
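
These variables are presumably read at runtime by the dataset registration code; the snippet below only illustrates, under that assumption, how such roots would typically be resolved in Python. It is not the repository's exact logic.

```python
# Illustrative only: resolving dataset roots from the variables exported above.
import os

dataset_root = os.environ.get("DATASET", "/path/to/dataset")
detectron2_root = os.environ.get("DETECTRON2_DATASETS", dataset_root)
print(dataset_root, detectron2_root)
```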

Download the CoLLaVO-7B model, and then you can run demo.py:

"""
CoLLaVO-7B

Simple Six Steps
"""

# [1] Loading Image
from PIL import Image
from torchvision.transforms import Resize
from torchvision.transforms.functional import pil_to_tensor
image_path = "figures/crayon_image.jpg"
image = Resize(size=(490, 490), antialias=False)(pil_to_tensor(Image.open(image_path)))

# [2] Instruction Prompt
prompt = "Describe this image in detail"

# [3] Loading CoLLaVO
from collavo.load_collavo import prepare_collavo
collavo_model, collavo_processor, seg_model, seg_processor = prepare_collavo(collavo_path='BK-Lee/CoLLaVO-7B', bits=4, dtype='fp16')

# [4] Pre-processing for CoLLaVO
collavo_inputs = collavo_model.demo_process(image=image, 
                                    prompt=prompt, 
                                    processor=collavo_processor,
                                    seg_model=seg_model,
                                    seg_processor=seg_processor,
                                    device='cuda:0')

# [5] Generate
import torch
with torch.inference_mode():
    generate_ids = collavo_model.generate(**collavo_inputs, do_sample=True, temperature=0.9, top_p=0.95, max_new_tokens=256, use_cache=True)

# [6] Decoding
answer = collavo_processor.batch_decode(generate_ids, skip_special_tokens=True)[0].split('[U')[0]
print(answer)
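
If you want to ask several questions about the same image, the six steps above can be wrapped into a small helper. This is only a convenience sketch that reuses the objects and calls already loaded in the demo (demo_process, generate, batch_decode); it is not an official CoLLaVO API.

```python
# Convenience sketch: reuse the models loaded in the demo for multiple prompts.
import torch

def ask_collavo(image, prompt, max_new_tokens=256):
    inputs = collavo_model.demo_process(image=image,
                                        prompt=prompt,
                                        processor=collavo_processor,
                                        seg_model=seg_model,
                                        seg_processor=seg_processor,
                                        device='cuda:0')
    with torch.inference_mode():
        ids = collavo_model.generate(**inputs, do_sample=True, temperature=0.9,
                                     top_p=0.95, max_new_tokens=max_new_tokens,
                                     use_cache=True)
    return collavo_processor.batch_decode(ids, skip_special_tokens=True)[0].split('[U')[0]

print(ask_collavo(image, "What objects are in the image?"))
```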

If you want to validate zero-shot performance on numerous datasets, run the bash file 'run'.

# CoLLaVO-Experiment
GPU_DEVICE="0,1,2,3,4,5"
length=${#GPU_DEVICE}
n_gpu=$(((length+1)/2))
main_port=10000
test_batch=1

CUDA_VISIBLE_DEVICES=$GPU_DEVICE \
accelerate launch --config_file configs/accel/ddp_accel.yaml \
    --num_processes=$n_gpu \
    --main_process_port=$main_port \
    lbk_entry.py eval \
    --conf_files configs/collavo_eval.yaml \
    --overrides \
    WANDB False \
    DATASETS.TEST mme \
    PIPELINE MMEPipeline \
    MME.TEST.BATCH_SIZE_TOTAL $((n_gpu * test_batch)) \
    SCIENCEQA.TEST.BATCH_SIZE_TOTAL $((n_gpu * test_batch)) \
    POPE.TEST.BATCH_SIZE_TOTAL $((n_gpu * test_batch)) \
    MMBENCH.TEST.BATCH_SIZE_TOTAL $((n_gpu * test_batch)) \
    MMVET.TEST.BATCH_SIZE_TOTAL $((n_gpu * test_batch)) \
    AI2D.TEST.BATCH_SIZE_TOTAL $((n_gpu * test_batch)) \
    HALLUSIONBENCH.TEST.BATCH_SIZE_TOTAL $((n_gpu * test_batch)) \
    MATHVISTA.TEST.BATCH_SIZE_TOTAL $((n_gpu * test_batch)) \
    QBENCH.TEST.BATCH_SIZE_TOTAL $((n_gpu * test_batch)) \
    SEED.TEST.BATCH_SIZE_TOTAL $((n_gpu * test_batch)) \
    SAVE_DIR /path/to/CoLLaVO_DIR \
    WEIGHT True \
    RESUME_FROM /path/to/CoLLaVO_WEIGHT

Note that you should change the following two parts to evaluate the dataset you want (this is very important!):

DATASETS.TEST

PIPELINE

For example, to evaluate POPE instead of MME, change DATASETS.TEST mme to DATASETS.TEST pope and replace PIPELINE MMEPipeline with the corresponding POPE pipeline (presumably named analogously to MMEPipeline).

GPT-4 Aided Evaluation for AI2D, MM-Vet, SEED-IMG

This code will be made public soon!

🏅 Download Datasets

📂 Dataset Directory (/path/to/dataset)

.
├── GQA                             # GQA
├── ScienceQA                       # SQA-IMG
├── TextVQA                         # TextVQA
├── POPE                            # POPE
├── MME_Benchmark_release_version   # MME
├── MMBench                         # MM-Bench
├── mm-vet                          # MM-Vet
├── LLVisionQA-QBench               # Q-Bench
├── MathVista                       # MathVista
├── SEED-Bench                      # SEED-IMG
├── ai2d                            # AI2D
└── HallusionBench                  # HallusionBench
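
As an optional sanity check (illustration only, not part of the repository), you can verify that the dataset root exported in DATASET contains the folders listed above:

```python
# Optional check that the dataset root contains the folders listed above.
import os

root = os.environ.get("DATASET", "/path/to/dataset")
expected = ["GQA", "ScienceQA", "TextVQA", "POPE", "MME_Benchmark_release_version",
            "MMBench", "mm-vet", "LLVisionQA-QBench", "MathVista", "SEED-Bench",
            "ai2d", "HallusionBench"]
missing = [d for d in expected if not os.path.isdir(os.path.join(root, d))]
print("Missing dataset folders:", missing or "none")
```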