CogAgent

📗 Chinese README (中文版)

💡 The official GitHub repository of CogAgent is the CogVLM & CogAgent Official Repository. Please visit that repository for more information about CogAgent, including an introduction, code, and model checkpoints.

<div align="center"> <img src=assets/cogagent_function.jpg width=70% /> </div> <table> <tr> <td> <h2> CogVLM </h2> <p> πŸ“– Paper: <a href="https://arxiv.org/abs/2311.03079">CogVLM: Visual Expert for Pretrained Language Models</a></p> <p><b>CogVLM</b> is a powerful open-source visual language model (VLM). CogVLM-17B has 10 billion visual parameters and 7 billion language parameters, <b>supporting image understanding and multi-turn dialogue with a resolution of 490*490</b>.</p> <p><b>CogVLM-17B achieves state-of-the-art performance on 10 classic cross-modal benchmarks</b>, including NoCaps, Flicker30k captioning, RefCOCO, RefCOCO+, RefCOCOg, Visual7W, GQA, ScienceQA, VizWiz VQA and TDIUC.</p> </td> <td> <h2> CogAgent </h2> <p> πŸ“– Paper: <a href="https://arxiv.org/abs/2312.08914">CogAgent: A Visual Language Model for GUI Agents </a></p> <p><b>CogAgent</b> is an open-source visual language model improved based on CogVLM. CogAgent-18B has 11 billion visual parameters and 7 billion language parameters, <b>supporting image understanding at a resolution of 1120*1120</b>. <b>On top of the capabilities of CogVLM, it further possesses GUI image Agent capabilities</b>.</p> <p> <b>CogAgent-18B achieves state-of-the-art generalist performance on 9 classic cross-modal benchmarks</b>, including VQAv2, OK-VQ, TextVQA, ST-VQA, ChartQA, infoVQA, DocVQA, MM-Vet, and POPE. <b>It significantly surpasses existing models on GUI operation datasets</b> including AITW and Mind2Web.</p> </td> </tr> <tr> <td colspan="2" align="center"> <p>🌐 Web Demo for both CogVLM and CogAgent: <a href="http://36.103.203.44:7861">this link</a></p> </td> </tr> </table>

📔 For more detailed usage information, please refer to the CogAgent & CogVLM technical documentation (Chinese only).