Molmo-Finetune

This repository contains a script for training the Molmo series of models using HuggingFace.

However, the model uploaded to the Hugging Face Hub is a preview version with a few limitations.

Note: This repo will not be maintained until the official version (not the preview one) is released.

Other projects

[Phi3-Vision Finetuning]<br> [Qwen2-VL Finetuning]<br> [LLAMA3.2-Vision Finetuning]<br> [Pixtral Finetune]

Update

Table of Contents

Supported Features

Installation

Install the required packages using environment.yaml.

Using environment.yaml

```bash
conda env create -f environment.yaml
conda activate molmo
```

Dataset Preparation

The script requires a dataset formatted according to the LLaVA specification. The dataset should be a JSON file where each entry contains information about conversations and images. Ensure that the image paths in the dataset match the provided `--image_folder`.

When using a multi-image dataset, every image token should be `<image>`, and the image file names should be given as a list. Please see the examples below and format your data accordingly.

<details> <summary>Example for single image dataset</summary>
```json
[
  {
    "id": "000000033471",
    "image": "000000033471.jpg",
    "conversations": [
      {
        "from": "human",
        "value": "<image>\nWhat are the colors of the bus in the image?"
      },
      {
        "from": "gpt",
        "value": "The bus in the image is white and red."
      },
      {
        "from": "human",
        "value": "What feature can be seen on the back of the bus?"
      },
      {
        "from": "gpt",
        "value": "The back of the bus features an advertisement."
      },
      {
        "from": "human",
        "value": "Is the bus driving down the street or pulled off to the side?"
      },
      {
        "from": "gpt",
        "value": "The bus is driving down the street, which is crowded with people and other vehicles."
      }
    ]
  }
  ...
]
```
</details>
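
Since the multi-image layout is described above but not shown, here is an illustrative entry (the file names and conversation text are made up) using the same schema with "image" as a list:

<details> <summary>Example for multi image dataset</summary>

```json
[
  {
    "id": "000000033472",
    "image": ["000000033472_0.jpg", "000000033472_1.jpg"],
    "conversations": [
      {
        "from": "human",
        "value": "<image>\n<image>\nWhat is the difference between these two images?"
      },
      {
        "from": "gpt",
        "value": "The first image shows the bus from the front, while the second one shows it from the back."
      }
    ]
  }
]
```

</details>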

Training

Note: The model was updated to support bf16 and fp16; however, the outputs may differ slightly from fp32.
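
For illustration, this is the usual way to load a HuggingFace model in reduced precision; a minimal sketch, where the model id is just an example and not necessarily the checkpoint used by the scripts:

```python
# A minimal sketch of loading in bf16; the model id is an example.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "allenai/Molmo-7B-D-0924",     # example checkpoint
    torch_dtype=torch.bfloat16,    # bf16; use torch.float16 for fp16
    trust_remote_code=True,        # Molmo ships custom modeling code
)
```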

To run the training script, use the following command:

Full Finetuning

```bash
bash scripts/finetune.sh
```

Finetune with LoRA

IMPORTANT: As the model is a preview version, it can be unstable when trained with LoRA. Full fine-tuning is preferred for now.

If you want to train only the language model with LoRA and perform full training for the vision model:

```bash
bash scripts/finetune_lora.sh
```

If you want to train both the language model and the vision model with LoRA:

```bash
bash scripts/finetune_lora_vision.sh
```

IMPORTANT: If you want to tune wte with LoRA, you need to tune ff_out (the equivalent of lm_head in other models) together. NOTE: I could not locate a separate embedding layer that holds the weight, so the ff_out layer should be fine-tuned as a temporary workaround.
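
For reference, one way this pairing could be expressed with PEFT's LoraConfig; a minimal sketch, where the projection module names are placeholders and the rank/alpha values are illustrative, not the values used by finetune_lora.sh:

```python
# A minimal sketch; not the exact config used by the training scripts.
from peft import LoraConfig

config = LoraConfig(
    r=64,                    # illustrative rank
    lora_alpha=128,          # illustrative scaling
    lora_dropout=0.05,
    # "wte" receives a LoRA adapter (PEFT supports LoRA on embedding layers);
    # the other names are placeholders -- inspect the model for the real ones.
    target_modules=["wte", "att_proj", "ff_proj"],
    # ff_out (Molmo's lm_head) is trained fully alongside the adapted embedding.
    modules_to_save=["ff_out"],
    task_type="CAUSAL_LM",
)
```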

<details> <summary>Training arguments</summary>

Note: The learning rate of the vision_model should be 5x to 10x smaller than that of the language_model; see the optimizer sketch after this section.

</details>
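
To make the learning-rate note concrete, separate rates can be assigned with optimizer parameter groups; a minimal sketch, where the submodule attribute names (vision_backbone, transformer) are assumptions about Molmo's layout rather than verified names:

```python
# A minimal sketch; submodule names are assumptions, inspect the model first.
import torch

# model: the Molmo model loaded as in the earlier bf16 example.
lm_lr = 1e-5              # illustrative language-model learning rate
vision_lr = lm_lr / 10    # 5x-10x smaller for the vision tower

optimizer = torch.optim.AdamW([
    {"params": model.vision_backbone.parameters(), "lr": vision_lr},  # hypothetical attribute
    {"params": model.transformer.parameters(), "lr": lm_lr},          # hypothetical attribute
])
```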

If you run out of VRAM, you can use zero3_offload instead of zero3. However, using zero3 without offload is preferred when it fits, since CPU offloading is slower.
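
For orientation, stage-3 CPU offload in a DeepSpeed config looks roughly like the fragment below; this is an illustrative snippet in the standard DeepSpeed schema, not necessarily the exact contents of the repo's zero3_offload config:

```json
{
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": { "device": "cpu", "pin_memory": true },
    "offload_param": { "device": "cpu", "pin_memory": true }
  }
}
```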

Merge LoRA Weights

```bash
bash scripts/merge_lora.sh
```
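
Conceptually, merging folds the low-rank update back into the base weights (W ← W + (alpha/r)·BA). Below is a minimal PEFT sketch of the idea with placeholder paths; the actual script may handle Molmo-specific details beyond this:

```python
# A minimal sketch of LoRA merging with PEFT; paths and ids are placeholders.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "allenai/Molmo-7B-D-0924", trust_remote_code=True  # example base checkpoint
)
model = PeftModel.from_pretrained(base, "output/lora_checkpoint")  # placeholder adapter path
merged = model.merge_and_unload()             # fold LoRA deltas into the base weights
merged.save_pretrained("output/merged_model") # placeholder output path
```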

Note: Remember to replace the paths in finetune.sh or finetune_lora.sh with your specific paths. (Also in merge_lora.sh when using LoRA.)

Issue for libcudnn error

```
Could not load library libcudnn_cnn_train.so.8. Error: /usr/local/cuda-12.1/lib/libcudnn_cnn_train.so.8: undefined symbol: _ZN5cudnn3cnn34layerNormFwd_execute_internal_implERKNS_7backend11VariantPackEP11CUstream_stRNS0_18LayerNormFwdParamsERKNS1_20NormForwardOperationEmb, version libcudnn_cnn_infer.so.8
```

You can run `unset LD_LIBRARY_PATH` to work around this error. See this issue for more details.
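
For example, clearing the variable in the same shell before launching training:

```bash
unset LD_LIBRARY_PATH
bash scripts/finetune.sh
```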

TODO

Known Issues

License

This project is licensed under the Apache-2.0 License. See the LICENSE file for details.

Citation

If you find this repository useful in your project, please consider giving a :star: and citing:

```bibtex
@misc{Molmo-Finetuning,
  author = {Yuwon Lee},
  title = {Molmo-Finetune},
  year = {2024},
  publisher = {GitHub},
  url = {https://github.com/2U1/Molmo-Finetune}
}
```

Acknowledgement

This project is based on