<h1 align="center">πŸ§˜πŸ»β€β™‚οΈ KarmaVLM (η›Έη”Ÿ) </h1> <div align=center><img src ="./images/logo-github.png"/></div> <p align="center"> <a href="https://github.com/X-D-Lab/KarmaVLM"><img src="https://img.shields.io/badge/GitHub-24292e" alt="github"></a> <a href="https://huggingface.co/X-D-Lab"><img src="https://img.shields.io/badge/-HuggingFace-yellow" alt="HuggingFace"></a> <a href="https://modelscope.cn/organization/X-D-Lab"><img src="https://img.shields.io/badge/ModelScope-blueviolet" alt="modelscope"></a> <a href="https://openi.pcl.ac.cn/XD-LAB/KarmaVLM"><img src="https://img.shields.io/badge/-OpenI-337AFF" alt="OpenI"></a> <a href="https://WiseModel.cn/models/X-D%20Lab"><img src="https://img.shields.io/badge/WiseModel-561253" alt="WiseModel"></a> </p> <div align="center">


</div>

πŸ‘ Introduction

KarmaVLM is a family of highly efficient and powerful visual language models (VLMs) pre-trained at scale on interleaved image-text data, enabling content comprehension, recognition, and multi-round conversations about images.

## πŸŽ‰ News

## ⚑️ Features

KarmaVLM offers the following features:

## πŸ”₯ Model Zoo

| Name | Download | Language | Vision Encoder | LLM | MMBench | LLaVA-Bench-Wild | ScienceQA | TextVQA |
|------|----------|----------|----------------|-----|---------|------------------|-----------|---------|
| KarmaVLM-Qwen1.5-0_5B | πŸ€— / πŸ€– / ✑️ | EN | openai/clip-vit-large-patch14-336 | Qwen/Qwen1.5-0.5B | 53.5 | 40.4 | 43.22 | 36.1 |
| KarmaVLM-Qwen1.5-0_5B_Siglip | πŸ€— / πŸ€– / ✑️ | EN | google/siglip-so400m-patch14-384 | Qwen/Qwen1.5-0.5B | 54.6 | 47.5 | 53.81 | 44.98 |
| KarmaVLM-Qwen1.5-4B_Siglip | πŸ€— / πŸ€– / ✑️ | EN | google/siglip-so400m-patch14-384 | Qwen/Qwen1.5-4B | 62.3 | 50.4 | 74.98 | 49.99 |
| KarmaVLM-Qwen1.5-7B | πŸ€— / πŸ€– / ✑️ | EN/CN | openai/clip-vit-large-patch14-336 | Qwen/Qwen1.5-7B | 69.9 | 57.9 | 76.59 | 56.32 |
| KarmaVLM-Qwen1.5-0_5B_Siglip_moe | πŸ€— / πŸ€– / ✑️ | EN | google/siglip-so400m-patch14-384 | Qwen/Qwen1.5-0.5B | 55.8 | 47.5 | 53.86 | 45.25 |

We achieve state-of-the-art results among models of the same parameter scale, and even surpass some models with more parameters. More benchmark evaluations are in progress!

πŸ‘¨β€πŸ’» Quick Start

### Requirements and Installation

  git clone https://github.com/X-D-Lab/KarmaVLM.git
  cd KarmaVLM

  conda create -n karmavlm python=3.10 -y
  conda activate karmavlm

  pip install --upgrade pip  # enable PEP 660 support
  pip install modelscope
  pip install -e .
  pip install -e ".[train]"
  pip install flash-attn --no-build-isolation
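
After installation, you can optionally run a quick environment check. The snippet below is a minimal sketch (not part of the official repository): it only confirms that the packages installed above are importable and that CUDA is visible to PyTorch.

```python
# Optional environment check (a sketch; not shipped with KarmaVLM).
import importlib.util

import torch

print("CUDA available:", torch.cuda.is_available())
for pkg in ("llava", "modelscope", "flash_attn"):
    spec = importlib.util.find_spec(pkg)
    print(f"{pkg}: {'found' if spec is not None else 'missing'}")
```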

You can download the model `X-D-Lab/KarmaVLM-Qwen1.5-0_5B` and the vision tower `openai/clip-vit-large-patch14` by running `download.py`:

python download.py

Change the cache paths in `download.py` to your own local paths, and also update the vision tower path in the model's `config.json` to point to your local vision tower directory.

  from modelscope import snapshot_download

  model_dir = snapshot_download('X-D-Lab/KarmaVLM-Qwen1.5-0_5B', cache_dir="your_path/KarmaVLM-Qwen1.5-0_5B/")           # change to your local path
  vision_tower_dir = snapshot_download('thomas/clip-vit-large-patch14', cache_dir="your_path/clip-vit-large-patch14/")  # change to your local path
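
Alternatively, if you prefer the Hugging Face Hub (linked in the badges above), a similar download can be done with `huggingface_hub`. This is a sketch that assumes the weights are mirrored on the Hub under the same `X-D-Lab` repository name; adjust repo ids and local paths to your setup.

```python
# Hypothetical Hub download; repo ids and local paths are placeholders to adapt.
from huggingface_hub import snapshot_download

model_dir = snapshot_download("X-D-Lab/KarmaVLM-Qwen1.5-0_5B",
                              local_dir="your_path/KarmaVLM-Qwen1.5-0_5B/")
vision_tower_dir = snapshot_download("openai/clip-vit-large-patch14",
                                     local_dir="your_path/clip-vit-large-patch14/")
```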

## 🌏 Demo

  1. CLI Inference (a programmatic Python variant is sketched after this list)
    python -m llava.serve.cli \
        --model-path /path/to/karmavlm/model \
        --image-file /path/to/the/test/image
    
  2. Gradio Web UI
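
The CLI above can also be driven from Python for scripted use. The sketch below assumes KarmaVLM keeps LLaVA's `eval_model` helper (`llava.eval.run_llava`) unchanged, since the project is built on LLaVA; the model path, image path, and prompt are placeholders to replace.

```python
# Programmatic inference sketch, assuming LLaVA's eval_model interface is preserved.
from llava.eval.run_llava import eval_model
from llava.mm_utils import get_model_name_from_path

model_path = "/path/to/karmavlm/model"   # placeholder: your local KarmaVLM checkpoint
image_file = "/path/to/the/test/image"   # placeholder: the image to describe
prompt = "Describe this image."

args = type("Args", (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": get_model_name_from_path(model_path),
    "query": prompt,
    "conv_mode": None,
    "image_file": image_file,
    "sep": ",",
    "temperature": 0,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512,
})()

eval_model(args)
```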

## πŸ“‹ License

This project uses certain datasets and checkpoints that are subject to their respective original licenses; users must comply with all terms and conditions of those licenses. The content of this project itself is licensed under the Apache License 2.0.

## πŸ™‡ Architecture

We build our project on top of LLaVA: Large Language and Vision Assistant.

## πŸ’ͺ Contributors

<a href="https://github.com/X-D-Lab/KarmaVLM/graphs/contributors"> <img src="https://contrib.rocks/image?repo=X-D-Lab/KarmaVLM" /> </a>

πŸ™ Acknowledgement

@misc{liu2023llava,
      title={Visual Instruction Tuning}, 
      author={Liu, Haotian and Li, Chunyuan and Wu, Qingyang and Lee, Yong Jae},
      publisher={NeurIPS},
      year={2023},
}

@article{qwen,
  title={Qwen Technical Report},
  author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
  journal={arXiv preprint arXiv:2309.16609},
  year={2023}
}

@misc{2023internlm,
    title={InternLM: A Multilingual Language Model with Progressively Enhanced Capabilities},
    author={InternLM Team},
    howpublished = {\url{https://github.com/InternLM/InternLM-techreport}},
    year={2023}
}

## 🌟 Star History

Star History Chart