CogVLM2 & CogVLM2-Video

README in Chinese (δΈ­ζ–‡η‰ˆ)

<div align="center"> <img src=resources/logo.svg width="40%"/> </div> <p align="center"> πŸ‘‹ Join our <a href="resources/WECHAT.md" target="_blank">Wechat</a> Β· πŸ’‘ Try CogVLM2 <a href="http://cogvlm2-online.cogviewai.cn:7861/" target="_blank">Online</a> Β· πŸ’‘ Try CogVLM2-Video <a href="http://cogvlm2-online.cogviewai.cn:7868/" target="_blank">Online</a> </p> <p align="center"> πŸ“ Experience the larger-scale CogVLM model on the <a href="https://open.bigmodel.cn/?utm_campaign=open&_channel_track_key=OWTVNma9">ZhipuAI Open Platform</a>. </p>

Recent updates

Model introduction

We have launched a new generation of models, the CogVLM2 series, and open-sourced a family of models built on Meta-Llama-3-8B-Instruct. Compared with the previous generation of open-source CogVLM models, the CogVLM2 series brings the following improvements:

  1. Significant improvements on many benchmarks such as TextVQA and DocVQA.
  2. Support for content lengths up to 8K.
  3. Support for image resolutions up to 1344 * 1344.
  4. An open-source model version that supports both Chinese and English.

You can see the details of the CogVLM2 family of open source models in the table below:

| Model Name | cogvlm2-llama3-chat-19B | cogvlm2-llama3-chinese-chat-19B | cogvlm2-video-llama3-chat | cogvlm2-video-llama3-base |
|---|---|---|---|---|
| Base Model | Meta-Llama-3-8B-Instruct | Meta-Llama-3-8B-Instruct | Meta-Llama-3-8B-Instruct | Meta-Llama-3-8B-Instruct |
| Language | English | Chinese, English | English | English |
| Task | Image understanding, multi-turn dialogue model | Image understanding, multi-turn dialogue model | Video understanding, single-turn dialogue model | Video understanding, base model, no dialogue |
| Model Link | πŸ€— Huggingface Β· πŸ€– ModelScope Β· πŸ’« Wise Model | πŸ€— Huggingface Β· πŸ€– ModelScope Β· πŸ’« Wise Model | πŸ€— Huggingface Β· πŸ€– ModelScope | πŸ€— Huggingface Β· πŸ€– ModelScope |
| Experience Link | πŸ“™ Official Page | πŸ“™ Official Page Β· πŸ€– ModelScope | πŸ“™ Official Page Β· πŸ€– ModelScope | / |
| Int4 Model | πŸ€— Huggingface Β· πŸ€– ModelScope Β· πŸ’« Wise Model | πŸ€— Huggingface Β· πŸ€– ModelScope Β· πŸ’« Wise Model | / | / |
| Text Length | 8K | 8K | 2K | 2K |
| Image Resolution | 1344 * 1344 | 1344 * 1344 | 224 * 224 (video; first 24 frames) | 224 * 224 (video; 24 evenly sampled frames) |
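
For reference, the sketch below shows one way to run single-image inference with cogvlm2-llama3-chat-19B through Hugging Face `transformers`. This is a minimal sketch rather than the maintained demo: it assumes a CUDA GPU with bfloat16 support, and the `build_conversation_input_ids` helper (loaded via `trust_remote_code=True`) is assumed to follow the published model card. See the basic_demo folder below for the official scripts.

```python
# Minimal single-image chat sketch (assumes a CUDA GPU with bf16 support).
# `build_conversation_input_ids` comes from the model's remote code; its exact
# signature is assumed to match the published model card.
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "THUDM/cogvlm2-llama3-chat-19B"  # model name from the table above

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, trust_remote_code=True
).to("cuda").eval()

image = Image.open("example.jpg").convert("RGB")
query = "Describe this image."

# Pack the prompt and image into model inputs using the model's own helper.
packed = model.build_conversation_input_ids(
    tokenizer, query=query, images=[image], template_version="chat"
)
inputs = {
    "input_ids": packed["input_ids"].unsqueeze(0).to("cuda"),
    "token_type_ids": packed["token_type_ids"].unsqueeze(0).to("cuda"),
    "attention_mask": packed["attention_mask"].unsqueeze(0).to("cuda"),
    "images": [[packed["images"][0].to("cuda").to(torch.bfloat16)]],
}

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256)
    outputs = outputs[:, inputs["input_ids"].shape[1]:]  # keep only new tokens
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```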

Benchmark

Image Understanding

Compared with the previous generation of open-source CogVLM models, our open-source models achieve strong results on many benchmark leaderboards. Their performance is competitive with some closed-source models, as shown in the table below:

| Model | Open Source | LLM Size | TextVQA | DocVQA | ChartQA | OCRbench | VCR_EASY | VCR_HARD | MMMU | MMVet | MMBench |
|---|---|---|---|---|---|---|---|---|---|---|---|
| CogVLM1.1 | βœ… | 7B | 69.7 | - | 68.3 | 590 | 73.9 | 34.6 | 37.3 | 52.0 | 65.8 |
| LLaVA-1.5 | βœ… | 13B | 61.3 | - | - | 337 | - | - | 37.0 | 35.4 | 67.7 |
| Mini-Gemini | βœ… | 34B | 74.1 | - | - | - | - | - | 48.0 | 59.3 | 80.6 |
| LLaVA-NeXT-LLaMA3 | βœ… | 8B | - | 78.2 | 69.5 | - | - | - | 41.7 | - | 72.1 |
| LLaVA-NeXT-110B | βœ… | 110B | - | 85.7 | 79.7 | - | - | - | 49.1 | - | 80.5 |
| InternVL-1.5 | βœ… | 20B | 80.6 | 90.9 | 83.8 | 720 | 14.7 | 2.0 | 46.8 | 55.4 | 82.3 |
| QwenVL-Plus | ❌ | - | 78.9 | 91.4 | 78.1 | 726 | - | - | 51.4 | 55.7 | 67.0 |
| Claude3-Opus | ❌ | - | - | 89.3 | 80.8 | 694 | 63.85 | 37.8 | 59.4 | 51.7 | 63.3 |
| Gemini Pro 1.5 | ❌ | - | 73.5 | 86.5 | 81.3 | - | 62.73 | 28.1 | 58.5 | - | - |
| GPT-4V | ❌ | - | 78.0 | 88.4 | 78.5 | 656 | 52.04 | 25.8 | 56.8 | 67.7 | 75.0 |
| CogVLM2-LLaMA3 | βœ… | 8B | 84.2 | 92.3 | 81.0 | 756 | 83.3 | 38.0 | 44.3 | 60.4 | 80.5 |
| CogVLM2-LLaMA3-Chinese | βœ… | 8B | 85.0 | 88.4 | 74.7 | 780 | 79.9 | 25.1 | 42.8 | 60.5 | 78.9 |

All results were obtained without using any external OCR tools ("pixel only").

Video Understanding

CogVLM2-Video achieves state-of-the-art performance on multiple video question answering tasks. The following diagram shows the performance of CogVLM2-Video on MVBench, VideoChatGPT-Bench, and the zero-shot VideoQA datasets (MSVD-QA, MSRVTT-QA, ActivityNet-QA). Here, VCG-* refers to VideoChatGPT-Bench, ZS-* refers to the zero-shot VideoQA datasets, and MV-* refers to the main categories of MVBench.
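
As background on the video input format (224 * 224 frames, 24 frames per clip, per the model table above), the snippet below sketches one simple way to sample frames evenly with OpenCV. It is only an illustration of that preprocessing; the maintained loading code lives in the video_demo folder and may use a different video backend.

```python
# Evenly sample N frames from a video and resize them to 224x224.
# Illustrative only: the official video_demo scripts may use a different
# video backend and preprocessing pipeline.
import cv2
from PIL import Image

def sample_frames(video_path: str, num_frames: int = 24, size: int = 224) -> list[Image.Image]:
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    if total <= 0:
        raise ValueError(f"Could not read frames from {video_path}")

    # Pick num_frames indices spread evenly across the clip.
    indices = [int(i * (total - 1) / max(num_frames - 1, 1)) for i in range(num_frames)]
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if not ok:
            continue
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV decodes as BGR
        frame = cv2.resize(frame, (size, size))
        frames.append(Image.fromarray(frame))
    cap.release()
    return frames

# Example: frames = sample_frames("example.mp4")  # 24 RGB frames at 224x224
```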

Quantitative Evaluation

Detailed performance

Performance on VideoChatGPT-Bench and the zero-shot VideoQA datasets:

| Models | VCG-AVG | VCG-CI | VCG-DO | VCG-CU | VCG-TU | VCG-CO | ZS-AVG |
|---|---|---|---|---|---|---|---|
| IG-VLM GPT4V | 3.17 | 3.40 | 2.80 | 3.61 | 2.89 | 3.13 | 65.70 |
| ST-LLM | 3.15 | 3.23 | 3.05 | 3.74 | 2.93 | 2.81 | 62.90 |
| ShareGPT4Video | N/A | N/A | N/A | N/A | N/A | N/A | 46.50 |
| VideoGPT+ | 3.28 | 3.27 | 3.18 | 3.74 | 2.83 | 3.39 | 61.20 |
| VideoChat2_HD_mistral | 3.10 | 3.40 | 2.91 | 3.72 | 2.65 | 2.84 | 57.70 |
| PLLaVA-34B | 3.32 | 3.60 | 3.20 | 3.90 | 2.67 | 3.25 | 68.10 |
| CogVLM2-Video | 3.41 | 3.49 | 3.46 | 3.87 | 2.98 | 3.23 | 66.60 |

Performance on the MVBench dataset:

| Models | AVG | AA | AC | AL | AP | AS | CO | CI | EN | ER | FA | FP | MA | MC | MD | OE | OI | OS | ST | SC | UA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| IG-VLM GPT4V | 43.7 | 72.0 | 39.0 | 40.5 | 63.5 | 55.5 | 52.0 | 11.0 | 31.0 | 59.0 | 46.5 | 47.5 | 22.5 | 12.0 | 12.0 | 18.5 | 59.0 | 29.5 | 83.5 | 45.0 | 73.5 |
| ST-LLM | 54.9 | 84.0 | 36.5 | 31.0 | 53.5 | 66.0 | 46.5 | 58.5 | 34.5 | 41.5 | 44.0 | 44.5 | 78.5 | 56.5 | 42.5 | 80.5 | 73.5 | 38.5 | 86.5 | 43.0 | 58.5 |
| ShareGPT4Video | 51.2 | 79.5 | 35.5 | 41.5 | 39.5 | 49.5 | 46.5 | 51.5 | 28.5 | 39.0 | 40.0 | 25.5 | 75.0 | 62.5 | 50.5 | 82.5 | 54.5 | 32.5 | 84.5 | 51.0 | 54.5 |
| VideoGPT+ | 58.7 | 83.0 | 39.5 | 34.0 | 60.0 | 69.0 | 50.0 | 60.0 | 29.5 | 44.0 | 48.5 | 53.0 | 90.5 | 71.0 | 44.0 | 85.5 | 75.5 | 36.0 | 89.5 | 45.0 | 66.5 |
| VideoChat2_HD_mistral | 62.3 | 79.5 | 60.0 | 87.5 | 50.0 | 68.5 | 93.5 | 71.5 | 36.5 | 45.0 | 49.5 | 87.0 | 40.0 | 76.0 | 92.0 | 53.0 | 62.0 | 45.5 | 36.0 | 44.0 | 69.5 |
| PLLaVA-34B | 58.1 | 82.0 | 40.5 | 49.5 | 53.0 | 67.5 | 66.5 | 59.0 | 39.5 | 63.5 | 47.0 | 50.0 | 70.0 | 43.0 | 37.5 | 68.5 | 67.5 | 36.5 | 91.0 | 51.5 | 79.0 |
| CogVLM2-Video | 62.3 | 85.5 | 41.5 | 31.5 | 65.5 | 79.5 | 58.5 | 77.0 | 28.5 | 42.5 | 54.0 | 57.0 | 91.5 | 73.0 | 48.0 | 91.0 | 78.0 | 36.0 | 91.5 | 47.0 | 68.5 |

Project structure

This open-source repository helps developers quickly get started with the basic calling methods of the CogVLM2 open-source models, fine-tuning examples, OpenAI-API-format calling examples (a minimal client sketch follows the folder list below), and more. The project structure is as follows; you can click each entry to open the corresponding tutorial:

basic_demo folder includes:

finetune_demo folder includes:

video_demo folder includes:
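
As a companion to the OpenAI-API-format examples mentioned above, the sketch below shows what a client call could look like once a local OpenAI-compatible demo server is running. The base URL, port, and model name are placeholders rather than values taken from this repository; check the basic_demo scripts for the actual server settings.

```python
# Hypothetical client call against a locally served OpenAI-compatible endpoint.
# The base_url, port, and model name are assumptions; see basic_demo for the
# actual server script and its configuration.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:8000/v1", api_key="EMPTY")  # assumed local endpoint

with open("example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="cogvlm2",  # placeholder model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
    max_tokens=256,
)
print(response.choices[0].message.content)
```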

Useful Links

In addition to the official inference code, you can also refer to the following community-provided inference solutions:

License

This model is released under the CogVLM2 LICENSE. For models built with Meta Llama 3, please also comply with the LLAMA3_LICENSE.

Citation

If you find our work helpful, please consider citing the following papers:

@article{hong2024cogvlm2,
  title={CogVLM2: Visual Language Models for Image and Video Understanding},
  author={Hong, Wenyi and Wang, Weihan and Ding, Ming and Yu, Wenmeng and Lv, Qingsong and Wang, Yan and Cheng, Yean and Huang, Shiyu and Ji, Junhui and Xue, Zhao and others},
  journal={arXiv preprint arXiv:2408.16500},
  year={2024}
}

@misc{wang2023cogvlm,
  title={CogVLM: Visual Expert for Pretrained Language Models},
  author={Weihan Wang and Qingsong Lv and Wenmeng Yu and Wenyi Hong and Ji Qi and Yan Wang and Junhui Ji and Zhuoyi Yang and Lei Zhao and Xixuan Song and Jiazheng Xu and Bin Xu and Juanzi Li and Yuxiao Dong and Ming Ding and Jie Tang},
  year={2023},
  eprint={2311.03079},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}