
Enabling the finetuning of the latest Large Multimodal Models

:exclamation: This codebase will NOT be under active maintenance/update after November 2024, as the main contributor/maintainer, Jingyang Zhang, will be graduating.

About

New large multimodal models (LMMs) are released frequently, but finetuning them is not always straightforward. This codebase aims to provide a unified, minimal structure for LMM finetuning.

The codebase is quite flexible and supports the finetuning of various types of LMMs.

See supported_models.md for the full list of supported models. For the training strategy, 1) full finetuning, 2) LoRA, and 3) Q-LoRA are supported for the LLM component, while 1) full finetuning and 2) LoRA are supported for the vision encoder/backbone.

<!--- *TODOS:*
- [x] Support training with text-only data.
- [x] Support tuning vision models and projectors.
- [ ] Add more models, including llava-onevision, idefics2, glm4-v, minicpm, etc.
:raising_hand: If you would like to have a model available, feel free to open an issue. -->

<details> <summary>What's different from other training frameworks, e.g., LLaMA-Factory, xtuner, swift?</summary>

These are great projects/frameworks with large scale and a high degree of optimization. However, due to their scale and complexity, they can be less transparent and harder to get started with (e.g., I personally felt quite lost when trying to use those frameworks, with questions like "how should I format my data?"). This codebase (lmms-finetune) is instead designed to be lightweight and simple: you can quickly get started and, if you want, understand almost every detail of the training process. In other words, it is a minimal workable codebase that supports LMM finetuning, while facilitating quick experiments, flexible modifications, and easy integration of new models.

</details>

News

Installation

# clone this repo
git clone https://github.com/zjysteven/lmms-finetune.git

# set up a conda environment
conda create -n lmms-finetune python=3.10 -y
conda activate lmms-finetune
## this will install the latest version of torch
## feel free to change it to a specific version
python -m pip install -r requirements.txt

## optionally install flash attention
python -m pip install --no-cache-dir --no-build-isolation flash-attn

Usage

A workable example training run (of LLaVA-NeXT-Video-7B) is showcased in this colab notebook, which is a good starting point to get a sense of how to use this codebase. The following sections provide a more detailed guide on how to finetune a model.

<details> <summary><b>0. See if the model you want to finetune is supported</b></summary>

Browse supported_models.md, or run python supported_models.py, which will print something like

Supported models:
  Model ID                      : HuggingFace Path
  ------------------------------------------------
  llava-1.5-7b                  : llava-hf/llava-1.5-7b-hf
  llava-1.5-13b                 : llava-hf/llava-1.5-13b-hf
  llava-next-video-7b           : llava-hf/LLaVA-NeXT-Video-7B-hf
  llava-next-video-7b-32k       : llava-hf/LLaVA-NeXT-Video-7B-32K-hf
  llava-next-video-34b          : llava-hf/LLaVA-NeXT-Video-34B-hf
  llava-interleave-qwen-0.5b    : llava-hf/llava-interleave-qwen-0.5b-hf
  llava-interleave-qwen-7b      : llava-hf/llava-interleave-qwen-7b-hf
  llava-onevision-0.5b-ov       : llava-hf/llava-onevision-qwen2-0.5b-ov-hf
  llava-onevision-7b-ov         : llava-hf/llava-onevision-qwen2-7b-ov-hf
  llava-onevision-72b-ov        : llava-hf/llava-onevision-qwen2-72b-ov-hf
  qwen-vl-chat                  : Qwen/Qwen-VL-Chat
  phi3-v                        : microsoft/Phi-3-vision-128k-instruct
  qwen2-vl-2b-instruct          : Qwen/Qwen2-VL-2B-Instruct
  qwen2-vl-7b-instruct          : Qwen/Qwen2-VL-7B-Instruct
  llama-3.2-11b-vision-instruct : meta-llama/Llama-3.2-11B-Vision-Instruct
  llama-3.2-90b-vision-instruct : meta-llama/Llama-3.2-90B-Vision-Instruct

:raised_hand: Don't see the one you want? Check out this guide for step-by-step instructions on how to add a new model.

</details> <details> <summary><b>1. Prepare your finetuning data</b></summary>

Similar to LLaVA, we expect the data to be in a JSON file containing a list of dictionaries, where each dictionary is one sample.

[
    {
        "system_prompt": "You are a helpful assistant.",
        "video": "path/to/video1.mp4",
        "conversations": [
            {
                "from": "human",
                "value": "<video>What is this video about?"
            },
            {
                "from": "gpt",
                "value": "This video shows a baby crying."
            }
        ]
    }
]

The image and video tokens are assumed to be <image> and <video>. We adopt this format for its readability. Our dataset implementation is general enough to support variations within this format, e.g., multiple image/video inputs in a sample, text-only samples, etc. For more details, see the dataset documentation and find out how flexible this json file can be. There are also multiple example json files under example_data for reference.

Besides this json file, the actual videos and images are by default assumed to be stored in their corresponding folders, with the paths in the json file given relative to the video/image root folder. Alternatively, the paths can simply be absolute.
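The path convention above can be sketched in a few lines. The root folder name here is a hypothetical placeholder; the actual argument names are defined in the training script:

```python
import json
import os

# Hypothetical root folder for videos (and analogously for images);
# the real value is passed as a training-script argument.
VIDEO_ROOT = "data/videos"

def resolve_media_path(path: str, root: str) -> str:
    """Absolute paths are used as-is; relative paths are joined with the root folder."""
    return path if os.path.isabs(path) else os.path.join(root, path)

samples = json.loads("""
[
    {"video": "clip1.mp4"},
    {"video": "/abs/path/clip2.mp4"}
]
""")

resolved = [resolve_media_path(s["video"], VIDEO_ROOT) for s in samples]
print(resolved)
```

Mixing relative and absolute paths in the same json file is fine under this convention, since each path is resolved independently.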

:warning: If you have text-only entries in your training dataset: the training is likely to fail at some point if 1) your per_device_batch_size is 1, or 2) the number of text-only instances dominates the number of multi-modal instances. This is due to a limitation/bug of deepspeed. If neither condition is met, no worries, we've got you covered.

</details> <details> <summary><b>2. Perform finetuning</b></summary>

Modify the sample training bash script, example_video.sh or example_image.sh (they differ only in the model ID and dataset filepath), to specify arguments including the target model, data path, etc. Comments in the scripts explain each argument's meaning. Then simply kick off the training by running bash example_scripts/example_video.sh or bash example_scripts/example_image.sh. Note that to run the provided example_video.sh exactly as-is, you will need to download the video clips from ShareGPT4Video; see here for instructions.

:chart_with_upwards_trend: If you prefer a graphical interface, simply run python webui.py to launch the gradio interface for finetuning.

</details> <details> <summary><b>3. Inference with finetuned model</b></summary>

The key here is to correctly load the finetuned model; after that, everything is the same as doing inference with the corresponding model from huggingface. Refer to the inference documentation for more details, including how to use merge_lora_weights.py to easily obtain a standalone model. Again, you can refer to this colab for a complete example.

</details>

Acknowledgements

We want to thank the huggingface team for actively integrating the newest models into the transformers library. The example finetuning scripts (e.g., this, this, and this) made by HF staff Niels Rogge and Raushan Turganbay are also very helpful and laid the foundation for this codebase. We especially thank Raushan Turganbay for her generous discussions and feedback on this project.

The codebase borrows from, is inspired by, or builds upon the following code, repos, and/or libraries: LLaVA, Qwen, transformers, etc.

Citation

If you use lmms-finetune in your research/project, we'd be very happy if you could 1) give us a star, 2) share this repo with others, or 3) cite this codebase:

@software{Zhang_lmms-finetune,
  author = {Zhang, Jingyang and Lin, Yueqian},
  license = {Apache-2.0},
  title = {{lmms-finetune}},
  url = {https://github.com/zjysteven/lmms-finetune}
}