Home

Awesome

CogVLM & CogAgent

📗 中文版README

🌟 Jump to detailed introduction: Introduction to CogVLM, 🆕 Introduction to CogAgent

📔 For more detailed usage information, please refer to: CogVLM & CogAgent's technical documentation (in Chinese)

<table> <tr> <td> <h2> CogVLM </h2> <p> 📖 Paper: <a href="https://arxiv.org/abs/2311.03079">CogVLM: Visual Expert for Pretrained Language Models</a></p> <p><b>CogVLM</b> is a powerful open-source visual language model (VLM). CogVLM-17B has 10 billion visual parameters and 7 billion language parameters, <b>supporting image understanding and multi-turn dialogue with a resolution of 490*490</b>.</p> <p><b>CogVLM-17B achieves state-of-the-art performance on 10 classic cross-modal benchmarks</b>, including NoCaps, Flickr30K captioning, RefCOCO, RefCOCO+, RefCOCOg, Visual7W, GQA, ScienceQA, VizWiz VQA and TDIUC.</p> </td> <td> <h2> CogAgent </h2> <p> 📖 Paper: <a href="https://arxiv.org/abs/2312.08914">CogAgent: A Visual Language Model for GUI Agents </a></p> <p><b>CogAgent</b> is an open-source visual language model improved upon CogVLM. CogAgent-18B has 11 billion visual parameters and 7 billion language parameters, <b>supporting image understanding at a resolution of 1120*1120</b>. <b>On top of the capabilities of CogVLM, it further possesses GUI image Agent capabilities</b>.</p> <p> <b>CogAgent-18B achieves state-of-the-art generalist performance on 9 classic cross-modal benchmarks</b>, including VQAv2, OK-VQA, TextVQA, ST-VQA, ChartQA, InfoVQA, DocVQA, MM-Vet, and POPE. <b>It significantly surpasses existing models on GUI operation datasets</b> including AITW and Mind2Web.</p> </td> </tr> <tr> <td colspan="2" align="center"> <p>🌐 Web Demo for both CogVLM and CogAgent: <a href="http://36.103.203.44:7861">this link</a></p> </td> </tr> </table>

Table of Contents

Release

Get Started

Option 1: Inference Using Web Demo.

If you need to use Agent and Grounding functions, please refer to Cookbook - Task Prompts

Option 2: Deploy CogVLM / CogAgent yourself

We support two interfaces for model inference: a CLI and a web demo. If you want to use the model in your own Python code, it is easy to adapt the CLI scripts to your use case.

First, we need to install the dependencies.

# CUDA >= 11.8
pip install -r requirements.txt
python -m spacy download en_core_web_sm

All code for inference is located under the basic_demo/ directory. Please switch to this directory first before proceeding with further operations.

Situation 2.1 CLI (SAT version)

Run CLI demo via:

# CogAgent
python cli_demo_sat.py --from_pretrained cogagent-chat --version chat --bf16  --stream_chat
python cli_demo_sat.py --from_pretrained cogagent-vqa --version chat_old --bf16  --stream_chat

# CogVLM
python cli_demo_sat.py --from_pretrained cogvlm-chat --version chat_old --bf16  --stream_chat
python cli_demo_sat.py --from_pretrained cogvlm-grounding-generalist --version base --bf16  --stream_chat

The program will automatically download the SAT model and start an interactive command-line session. You can generate replies by entering instructions and pressing Enter. Enter clear to clear the conversation history and stop to stop the program.

We also support model-parallel inference, which splits the model across multiple (2/4/8) GPUs. --nproc-per-node=[n] in the following command controls the number of GPUs used.

torchrun --standalone --nnodes=1 --nproc-per-node=2 cli_demo_sat.py --from_pretrained cogagent-chat --version chat --bf16

Situation 2.2 CLI (Huggingface version)

Run CLI demo via:

# CogAgent
python cli_demo_hf.py --from_pretrained THUDM/cogagent-chat-hf --bf16
python cli_demo_hf.py --from_pretrained THUDM/cogagent-vqa-hf --bf16

# CogVLM
python cli_demo_hf.py --from_pretrained THUDM/cogvlm-chat-hf --bf16
python cli_demo_hf.py --from_pretrained THUDM/cogvlm-grounding-generalist-hf --bf16
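
If you prefer to call the Hugging Face weights directly from Python rather than through the CLI script, the sketch below follows the usage pattern published on the THUDM/cogvlm-chat-hf model card. It assumes the checkpoint's custom code (loaded with trust_remote_code=True) exposes build_conversation_input_ids, and that image.png is a local test image of your own.

import torch
from PIL import Image
from transformers import AutoModelForCausalLM, LlamaTokenizer

# The tokenizer comes from Vicuna; the multimodal weights are loaded via custom model code.
tokenizer = LlamaTokenizer.from_pretrained("lmsys/vicuna-7b-v1.5")
model = AutoModelForCausalLM.from_pretrained(
    "THUDM/cogvlm-chat-hf",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).to("cuda").eval()

query = "Describe this image."
image = Image.open("image.png").convert("RGB")  # replace with your own image path

# build_conversation_input_ids packs the prompt, history, and image into model inputs.
inputs = model.build_conversation_input_ids(tokenizer, query=query, history=[], images=[image])
inputs = {
    "input_ids": inputs["input_ids"].unsqueeze(0).to("cuda"),
    "token_type_ids": inputs["token_type_ids"].unsqueeze(0).to("cuda"),
    "attention_mask": inputs["attention_mask"].unsqueeze(0).to("cuda"),
    "images": [[inputs["images"][0].to("cuda").to(torch.bfloat16)]],
}

with torch.no_grad():
    outputs = model.generate(**inputs, max_length=2048, do_sample=False)
    outputs = outputs[:, inputs["input_ids"].shape[1]:]
    print(tokenizer.decode(outputs[0]))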

Situation 2.3 Web Demo

We also offer a local web demo based on Gradio. First, install Gradio by running pip install gradio. Then download this repository, enter it, and run web_demo.py. See the next section for detailed usage:

python web_demo.py --from_pretrained cogagent-chat --version chat --bf16
python web_demo.py --from_pretrained cogagent-vqa --version chat_old --bf16
python web_demo.py --from_pretrained cogvlm-chat-v1.1 --version chat_old --bf16
python web_demo.py --from_pretrained cogvlm-grounding-generalist --version base --bf16

The GUI of the web demo looks like:

<div align="center"> <img src=assets/web_demo-min.png width=70% /> </div>

Option 3: Finetuning CogAgent / CogVLM

You may want to use CogVLM for your own task, which may require a different output style or domain knowledge. All code for finetuning is located under the finetune_demo/ directory.

Here we provide a finetuning example for Captcha recognition using LoRA.

  1. Start by downloading the Captcha Images dataset. Once downloaded, extract the contents of the ZIP file.

  2. To create a train/validation/test split in the ratio of 80/5/15, execute the following:

    python utils/split_dataset.py
    
  3. Start the fine-tuning process with this command:

    bash finetune_demo/finetune_(cogagent/cogvlm)_lora.sh
    
  4. Merge the model to model_parallel_size=1 (replace the 4 below with the MP_SIZE you used for training):

    torchrun --standalone --nnodes=1 --nproc-per-node=4 utils/merge_model.py --version base --bf16 --from_pretrained ./checkpoints/merged_lora_(cogagent/cogvlm490/cogvlm224)
    
  5. Evaluate the performance of your model.

    bash finetune_demo/evaluate_(cogagent/cogvlm).sh
    

Option 4: OpenAI Vision format

We provide API examples in the same format as GPT-4V, which you can view in openai_demo.

  1. First, start the API server node:

    python openai_demo/openai_api.py

  2. Next, run the request example, which demonstrates a continuous multi-turn dialogue:

    python openai_demo/openai_api_request.py

  3. You will get output similar to the following:

    This image showcases a tranquil natural scene with a wooden pathway leading through a field of lush green grass. In the distance, there are trees and some scattered structures, possibly houses or small buildings. The sky is clear with a few scattered clouds, suggesting a bright and sunny day.
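
As a minimal sketch of calling the server from your own code with the official openai Python client: the base_url, port, and model name below are assumptions on our part, so check openai_demo/openai_api.py for the values the demo server actually uses.

import base64
from openai import OpenAI

# Assumed endpoint and model name; adjust them to match openai_demo/openai_api.py.
client = OpenAI(base_url="http://127.0.0.1:8000/v1", api_key="EMPTY")

with open("demo.jpg", "rb") as f:  # any local test image
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="cogvlm-chat-17b",  # hypothetical name; use the model id the server registers
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)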

Hardware requirement

Model checkpoints

If you run the basic_demo/cli_demo*.py from the code repository, it will automatically download SAT or Hugging Face weights. Alternatively, you can choose to manually download the necessary weights.
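
For manual downloads of the Hugging Face checkpoints (for example, to prefetch weights on a machine with better connectivity), here is a minimal sketch using huggingface_hub; the local directory paths are just examples.

from huggingface_hub import snapshot_download

# Prefetch the Hugging Face weights for offline use; pick only the checkpoints you need.
snapshot_download(repo_id="THUDM/cogagent-chat-hf", local_dir="./weights/cogagent-chat-hf")
snapshot_download(repo_id="THUDM/cogvlm-chat-hf", local_dir="./weights/cogvlm-chat-hf")

The local directory can then typically be passed to --from_pretrained in place of the Hub id.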


Introduction to CogVLM

<div align="center"> <img src=assets/metrics-min.png width=50% /> </div> <details> <summary>Click to view results on MM-VET, POPE, TouchStone. </summary> <table> <tr> <td>Method</td> <td>LLM</td> <td>MM-VET</td> <td>POPE(adversarial)</td> <td>TouchStone</td> </tr> <tr> <td>BLIP-2</td> <td>Vicuna-13B</td> <td>22.4</td> <td>-</td> <td>-</td> </tr> <tr> <td>Otter</td> <td>MPT-7B</td> <td>24.7</td> <td>-</td> <td>-</td> </tr> <tr> <td>MiniGPT4</td> <td>Vicuna-13B</td> <td>24.4</td> <td>70.4</td> <td>531.7</td> </tr> <tr> <td>InstructBLIP</td> <td>Vicuna-13B</td> <td>25.6</td> <td>77.3</td> <td>552.4</td> </tr> <tr> <td>LLaMA-Adapter v2</td> <td>LLaMA-7B</td> <td>31.4</td> <td>-</td> <td>590.1</td> </tr> <tr> <td>LLaVA</td> <td>LLaMA2-7B</td> <td>28.1</td> <td>66.3</td> <td>602.7</td> </tr> <tr> <td>mPLUG-Owl</td> <td>LLaMA-7B</td> <td>-</td> <td>66.8</td> <td>605.4</td> </tr> <tr> <td>LLaVA-1.5</td> <td>Vicuna-13B</td> <td>36.3</td> <td>84.5</td> <td>-</td> </tr> <tr> <td>Emu</td> <td>LLaMA-13B</td> <td>36.3</td> <td>-</td> <td>-</td> </tr> <tr> <td>Qwen-VL-Chat</td> <td>-</td> <td>-</td> <td>-</td> <td>645.2</td> </tr> <tr> <td>DreamLLM</td> <td>Vicuna-7B</td> <td>35.9</td> <td>76.5</td> <td>-</td> </tr> <tr> <td>CogVLM</td> <td>Vicuna-7B</td> <td> <b>52.8</b> </td> <td><b>87.6</b></td> <td><b>742.0</b></td> </tr> </table> </details> <details> <summary>Click to view results of cogvlm-grounding-generalist-v1.1. </summary> <table> <tr> <td></td> <td>RefCOCO</td> <td></td> <td></td> <td>RefCOCO+</td> <td></td> <td></td> <td>RefCOCOg</td> <td></td> <td>Visual7W</td> </tr> <tr> <td></td> <td>val</td> <td>testA</td> <td>testB</td> <td>val</td> <td>testA</td> <td>testB</td> <td>val</td> <td>test</td> <td>test</td> </tr> <tr> <td>cogvlm-grounding-generalist</td> <td>92.51</td> <td>93.95</td> <td>88.73</td> <td>87.52</td> <td>91.81</td> <td>81.43</td> <td>89.46</td> <td>90.09</td> <td>90.96</td> </tr> <tr> <td>cogvlm-grounding-generalist-v1.1</td> <td><b>92.76</b></td> <td><b>94.75</b></td> <td><b>88.99</b></td> <td><b>88.68</b></td> <td><b>92.91</b></td> <td><b>83.39</b></td> <td><b>89.75</b></td> <td><b>90.79</b></td> <td><b>91.05</b></td> </tr> </table> </details>

Examples

<!-- CogVLM is powerful for answering various types of visual questions, including **Detailed Description & Visual Question Answering**, **Complex Counting**, **Visual Math Problem Solving**, **OCR-Free Reasoning**, **OCR-Free Visual Question Answering**, **World Knowledge**, **Referring Expression Comprehension**, **Programming with Visual Input**, **Grounding with Caption**, **Grounding Visual Question Answering**, etc. --> <div align="center"> <img src=assets/pear_grounding.png width=50% /> </div> <br> <div align="center"> <img src=assets/compare-min.png width=50% /> </div> <!-- ![compare](assets/compare.png) --> <br> <details> <summary>Click to expand more examples.</summary>

Chat Examples

</details>

Introduction to CogAgent

CogAgent is an open-source visual language model improved upon CogVLM. CogAgent-18B has 11 billion visual parameters and 7 billion language parameters.

CogAgent-18B achieves state-of-the-art generalist performance on 9 classic cross-modal benchmarks, including VQAv2, OK-VQA, TextVQA, ST-VQA, ChartQA, InfoVQA, DocVQA, MM-Vet, and POPE. It significantly surpasses existing models on GUI operation datasets such as AITW and Mind2Web.

In addition to all the features already present in CogVLM (visual multi-round dialogue, visual grounding), CogAgent:

  1. Supports higher-resolution visual input and dialogue question answering, accepting ultra-high-resolution image inputs of 1120x1120.

  2. Possesses the capabilities of a visual Agent, being able to return a plan, next action, and specific operations with coordinates for any given task on any GUI screenshot.

  3. Enhanced GUI-related question-answering capabilities, allowing it to handle questions about any GUI screenshot, such as web pages, PC apps, mobile applications, etc.

  4. Enhanced capabilities in OCR-related tasks through improved pre-training and fine-tuning.

<div align="center"> <img src=assets/cogagent_function.jpg width=60% /> </div>

GUI Agent Examples

<div align="center"> <img src=assets/cogagent_main_demo.jpg width=90% /> </div>

Cookbook

Task Prompts

  1. General Multi-Round Dialogue: Say whatever you want.

  2. GUI Agent Task: Use the Agent template and replace <TASK> with the task instruction enclosed in double quotes. This query prompts CogAgent to infer a Plan and a Next Action. If you add (with grounding) at the end of the query, the model will also return a formalized action representation with coordinates.

For example, to ask the model how to complete the task "Search for CogVLM" on a current GUI screenshot, follow these steps:

  1. Randomly select a template from the Agent template. Here, we choose What steps do I need to take to <TASK>?.

  2. Replace <TASK> with the task instruction enclosed in double quotes, for example, What steps do I need to take to "Search for CogVLM"?. Inputting this into the model yields:

Plan: 1. Type 'CogVLM' into the Google search bar. 2. Review the search results that appear. 3. Click on a relevant result to read more about CogVLM or access further resources.

Next Action: Move the cursor to the Google search bar, and type 'CogVLM' into it.

  3. If you add (with grounding) at the end, i.e. change the input to What steps do I need to take to "Search for CogVLM"?(with grounding), the output of CogAgent would be:

Plan: 1. Type 'CogVLM' into the Google search bar. 2. Review the search results that appear. 3. Click on a relevant result to read more about CogVLM or access further resources.

Next Action: Move the cursor to the Google search bar, and type 'CogVLM' into it. Grounded Operation:[combobox] Search -> TYPE: CogVLM at the box [[212,498,787,564]]

Tip: For GUI Agent tasks, it is recommended to conduct only single-round dialogues for each image for better results.

  3. Visual Grounding. Three modes of grounding are supported:

    • Image description with grounding coordinates (bounding box). Use any template from the caption_with_box templates as model input. For example:

    Can you provide a description of the image and include the coordinates [[x0,y0,x1,y1]] for each mentioned object?

    • Returning grounding coordinates (bounding box) based on the description of objects. Use any template from the caption2box templates, replacing <expr> with the object's description. For example:

    Can you point out children in blue T-shirts in the image and provide the bounding boxes of their location?

    • Providing a description based on bounding box coordinates. Use a template from the box2caption templates, replacing <objs> with the position coordinates. For example:

    Tell me what you see within the designated area [[086,540,400,760]] in the picture.

Format of coordinates: The bounding box coordinates in the model's input and output use the format [[x1, y1, x2, y2]], with the origin at the top-left corner, the x-axis pointing right, and the y-axis pointing down. (x1, y1) and (x2, y2) are the top-left and bottom-right corners, respectively, with values given as relative coordinates multiplied by 1000 (zero-padded to three digits). A small sketch of the prompt construction and coordinate conversion is given below.
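
To make the prompt construction and coordinate handling concrete, here is a small sketch; the template string and the example box are taken from the text above, while the helper names are our own.

def build_agent_query(task, template="What steps do I need to take to <TASK>?", with_grounding=False):
    # Replace <TASK> with the task instruction enclosed in double quotes.
    query = template.replace("<TASK>", f'"{task}"')
    if with_grounding:
        # Ask the model to also return a formalized action with coordinates.
        query += "(with grounding)"
    return query

def box_to_pixels(box, image_width, image_height):
    # Model boxes are [[x1, y1, x2, y2]] relative coordinates scaled by 1000,
    # with the origin at the top-left corner.
    x1, y1, x2, y2 = box
    return (x1 / 1000 * image_width, y1 / 1000 * image_height,
            x2 / 1000 * image_width, y2 / 1000 * image_height)

print(build_agent_query("Search for CogVLM", with_grounding=True))
# -> What steps do I need to take to "Search for CogVLM"?(with grounding)

print(box_to_pixels([212, 498, 787, 564], image_width=1920, image_height=1080))
# -> pixel-space box for the grounded operation example above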

Which --version to use

Due to differences in model functionalities, different model versions may have distinct --version specifications for the text processor, meaning the format of the prompts used varies.

| model name | --version |
| --- | --- |
| cogagent-chat | chat |
| cogagent-vqa | chat_old |
| cogvlm-chat | chat_old |
| cogvlm-chat-v1.1 | chat_old |
| cogvlm-grounding-generalist | base |
| cogvlm-base-224 | base |
| cogvlm-base-490 | base |

FAQ

License

The code in this repository is open source under the Apache-2.0 license, while the use of the CogVLM model weights must comply with the Model License.

Citation & Acknowledgements

If you find our work helpful, please consider citing the following papers:

@misc{wang2023cogvlm,
      title={CogVLM: Visual Expert for Pretrained Language Models}, 
      author={Weihan Wang and Qingsong Lv and Wenmeng Yu and Wenyi Hong and Ji Qi and Yan Wang and Junhui Ji and Zhuoyi Yang and Lei Zhao and Xixuan Song and Jiazheng Xu and Bin Xu and Juanzi Li and Yuxiao Dong and Ming Ding and Jie Tang},
      year={2023},
      eprint={2311.03079},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@misc{hong2023cogagent,
      title={CogAgent: A Visual Language Model for GUI Agents}, 
      author={Wenyi Hong and Weihan Wang and Qingsong Lv and Jiazheng Xu and Wenmeng Yu and Junhui Ji and Yan Wang and Zihan Wang and Yuxiao Dong and Ming Ding and Jie Tang},
      year={2023},
      eprint={2312.08914},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

The instruction fine-tuning phase of CogVLM uses some English image-text data from the MiniGPT-4, LLaVA, LRV-Instruction, LLaVAR, and Shikra projects, as well as datasets from many classic cross-modal works. We sincerely thank them for their contributions.