# GLM-4

<p align="center"> 📄<a href="https://arxiv.org/pdf/2406.12793" target="_blank"> Report </a> • 🤗 <a href="https://huggingface.co/collections/THUDM/glm-4-665fcf188c414b03c2f7e3b7" target="_blank">HF Repo</a> • 🤖 <a href="https://modelscope.cn/models/ZhipuAI/glm-4-9b-chat" target="_blank">ModelScope</a> • 🟣 <a href="https://wisemodel.cn/models/ZhipuAI/glm-4-9b-chat" target="_blank">WiseModel</a> • 🐦 <a href="https://twitter.com/thukeg" target="_blank">Twitter</a> • 👋 加入我们的 <a href="https://discord.gg/fK2dz4bg" target="_blank">Discord</a> 和 <a href="resources/WECHAT.md" target="_blank">微信</a> </p> <p align="center"> 📍在 <a href="https://open.bigmodel.cn/?utm_campaign=open&_channel_track_key=OWTVNma9">智谱AI开放平台</a> 体验和使用更大规模的 GLM 商业模型。 </p>


## Project Updates

## Model Introduction

GLM-4-9B is the open-source version of the latest generation of pre-trained models in the GLM-4 series launched by Zhipu AI. On benchmark datasets covering semantics, mathematics, reasoning, code, and knowledge, GLM-4-9B and its human-preference-aligned version GLM-4-9B-Chat both outperform Llama-3-8B. Beyond multi-turn dialogue, GLM-4-9B-Chat offers advanced features such as web browsing, code execution, custom tool calling (Function Call), and long-text reasoning (up to 128K context). This generation adds multilingual support for 26 languages, including Japanese, Korean, and German. We have also released GLM-4-9B-Chat-1M, which supports a 1M context length (about 2 million Chinese characters), and GLM-4V-9B, a multimodal model based on GLM-4-9B. GLM-4V-9B supports Chinese-English bilingual multi-turn dialogue at a high resolution of 1120 * 1120, and on multimodal evaluations covering comprehensive Chinese-English ability, perceptual reasoning, text recognition, and chart understanding, it outperforms GPT-4-turbo-2024-04-09, Gemini 1.0 Pro, Qwen-VL-Max, and Claude 3 Opus.

## Model List

| Model | Type | Seq Length | Transformers Version | Download | Online Demo |
|---|---|---|---|---|---|
| GLM-4-9B | Base | 8K | 4.44.0 - 4.45.0 | 🤗 Huggingface<br> 🤖 ModelScope<br> 🟣 WiseModel | / |
| GLM-4-9B-Chat | Chat | 128K | >= 4.44.0 | 🤗 Huggingface<br> 🤖 ModelScope<br> 🟣 WiseModel | 🤖 ModelScope CPU<br> 🤖 ModelScope vLLM |
| GLM-4-9B-Chat-HF | Chat | 128K | >= 4.46.0 | 🤗 Huggingface<br> 🤖 ModelScope | 🤖 ModelScope CPU<br> 🤖 ModelScope vLLM |
| GLM-4-9B-Chat-1M | Chat | 1M | >= 4.44.0 | 🤗 Huggingface<br> 🤖 ModelScope<br> 🟣 WiseModel | / |
| GLM-4-9B-Chat-1M-HF | Chat | 1M | >= 4.46.0 | 🤗 Huggingface<br> 🤖 ModelScope | / |
| GLM-4V-9B | Chat | 8K | >= 4.46.0 | 🤗 Huggingface<br> 🤖 ModelScope<br> 🟣 WiseModel | 🤖 ModelScope |
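
As the Transformers Version column shows, each checkpoint carries a minimum (or bounded) transformers requirement, and loading with an incompatible release typically fails at model-class resolution. A minimal sketch of a guard you could place before loading, using the >= 4.46.0 bound from the GLM-4-9B-Chat-HF row:

```python
# Minimal version guard before loading an HF-format GLM-4 checkpoint.
# The ">= 4.46.0" bound comes from the Model List table above.
from packaging.version import Version

import transformers

REQUIRED = Version("4.46.0")  # GLM-4-9B-Chat-HF / GLM-4V-9B rows
installed = Version(transformers.__version__)
if installed < REQUIRED:
    raise RuntimeError(
        f"transformers {installed} is too old; this checkpoint needs >= {REQUIRED}"
    )
```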

## Evaluation Results

### Typical Tasks for the Chat Model

| Model | AlignBench | MT-Bench | IFEval | MMLU | C-Eval | GSM8K | MATH | HumanEval | NaturalCodeBench |
|---|---|---|---|---|---|---|---|---|---|
| Llama-3-8B-Instruct | 6.40 | 8.00 | 68.6 | 68.4 | 51.3 | 79.6 | 30.0 | 62.2 | 24.7 |
| ChatGLM3-6B | 5.18 | 5.50 | 28.1 | 61.4 | 69.0 | 72.3 | 25.7 | 58.5 | 11.3 |
| GLM-4-9B-Chat | 7.01 | 8.35 | 69.0 | 72.4 | 75.6 | 79.6 | 50.6 | 71.8 | 32.2 |

### Typical Tasks for the Base Model

| Model | MMLU | C-Eval | GPQA | GSM8K | MATH | HumanEval |
|---|---|---|---|---|---|---|
| Llama-3-8B | 66.6 | 51.2 | - | 45.8 | - | 33.5 |
| Llama-3-8B-Instruct | 68.4 | 51.3 | 34.2 | 79.6 | 30.0 | 62.2 |
| ChatGLM3-6B-Base | 61.4 | 69.0 | 26.8 | 72.3 | 25.7 | 58.5 |
| GLM-4-9B | 74.7 | 77.1 | 34.3 | 84.0 | 30.4 | 70.1 |

Because some math-, reasoning-, and code-related instruction data was added during the pre-training of GLM-4-9B, Llama-3-8B-Instruct is also included in the comparison.

### Long Context

A needle-in-a-haystack experiment was conducted at a context length of 1M, with the following results:

*(Figure: needle-in-a-haystack evaluation results at 1M context)*
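
For readers unfamiliar with the methodology: the test hides a short "needle" sentence at varying depths inside long filler text and asks the model to retrieve it. A minimal illustrative sketch of prompt construction (the actual filler corpus, needle, and scoring protocol behind the figure above are not specified here):

```python
# Hypothetical sketch of needle-in-a-haystack prompt construction; for
# illustration only, not the exact protocol used for the figure above.
def build_haystack_prompt(filler: str, needle: str, depth: float, target_chars: int) -> str:
    """Insert `needle` at fractional `depth` (0.0 = start, 1.0 = end) of a
    haystack of roughly `target_chars` characters, then append the question."""
    haystack = (filler * (target_chars // len(filler) + 1))[:target_chars]
    cut = int(len(haystack) * depth)
    doc = haystack[:cut] + "\n" + needle + "\n" + haystack[cut:]
    return doc + "\n\nQuestion: what does the hidden sentence say? Answer with that sentence only."

prompt = build_haystack_prompt(
    filler="The quick brown fox jumps over the lazy dog. ",
    needle="The secret passphrase is 'blue-harbor-42'.",
    depth=0.5,
    target_chars=200_000,
)
```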

Long-text capability was further evaluated on LongBench-Chat, with the following results:

<p align="center"> <img src="resources/longbench.png" alt="描述文字" style="display: block; margin: auto; width: 65%;"> </p>

### Multilingual Capability

GLM-4-9B-Chat and Llama-3-8B-Instruct were tested on six multilingual datasets. The results, together with the languages selected for each dataset, are shown in the table below:

| Dataset | Llama-3-8B-Instruct | GLM-4-9B-Chat | Languages |
|---|---|---|---|
| M-MMLU | 49.6 | 56.6 | all |
| FLORES | 25.0 | 28.8 | ru, es, de, fr, it, pt, pl, ja, nl, ar, tr, cs, vi, fa, hu, el, ro, sv, uk, fi, ko, da, bg, no |
| MGSM | 54.0 | 65.3 | zh, en, bn, de, es, fr, ja, ru, sw, te, th |
| XWinograd | 61.7 | 73.1 | zh, en, fr, jp, ru, pt |
| XStoryCloze | 84.7 | 90.7 | zh, en, ar, es, eu, hi, id, my, ru, sw, te |
| XCOPA | 73.3 | 80.1 | zh, et, ht, id, it, qu, sw, ta, th, tr, vi |

### Function Calling

We tested on the Berkeley Function Calling Leaderboard and obtained the following results:

| Model | Overall Acc. | AST Summary | Exec Summary | Relevance |
|---|---|---|---|---|
| Llama-3-8B-Instruct | 58.88 | 59.25 | 70.01 | 45.83 |
| gpt-4-turbo-2024-04-09 | 81.24 | 82.14 | 78.61 | 88.75 |
| ChatGLM3-6B | 57.88 | 62.18 | 69.78 | 5.42 |
| GLM-4-9B-Chat | 81.00 | 80.26 | 84.40 | 87.92 |
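
Tool calling with GLM-4-9B-Chat is driven by the checkpoint's chat template. As an illustration only, the following sketch passes a hypothetical get_weather schema through transformers' generic tools= argument of apply_chat_template; how the schema is serialized into the prompt is defined by the chat template itself:

```python
# Hypothetical sketch: passing a tool schema through transformers' generic
# `tools=` argument (available in recent transformers releases). The
# get_weather tool below is invented for illustration.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-4-9b-chat-hf")

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool
            "description": "Query the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string", "description": "City name"}},
                "required": ["city"],
            },
        },
    }
]
messages = [{"role": "user", "content": "What's the weather in Beijing?"}]
prompt = tokenizer.apply_chat_template(
    messages, tools=tools, add_generation_prompt=True, tokenize=False
)
print(prompt)  # inspect how the template serializes the tool schema
```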

### Multimodal Capability

GLM-4V-9B is a multimodal language model with visual understanding. Its evaluation results on classic multimodal tasks are as follows:

| Model | MMBench-EN-Test | MMBench-CN-Test | SEEDBench_IMG | MMStar | MMMU | MME | HallusionBench | AI2D | OCRBench |
|---|---|---|---|---|---|---|---|---|---|
| gpt-4o-2024-05-13 | 83.4 | 82.1 | 77.1 | 63.9 | 69.2 | 2310.3 | 55.0 | 84.6 | 736 |
| gpt-4-turbo-2024-04-09 | 81.0 | 80.2 | 73.0 | 56.0 | 61.7 | 2070.2 | 43.9 | 78.6 | 656 |
| gpt-4-1106-preview | 77.0 | 74.4 | 72.3 | 49.7 | 53.8 | 1771.5 | 46.5 | 75.9 | 516 |
| InternVL-Chat-V1.5 | 82.3 | 80.7 | 75.2 | 57.1 | 46.8 | 2189.6 | 47.4 | 80.6 | 720 |
| LLaVA-Next-Yi-34B | 81.1 | 79.0 | 75.7 | 51.6 | 48.8 | 2050.2 | 34.8 | 78.9 | 574 |
| Step-1V | 80.7 | 79.9 | 70.3 | 50.0 | 49.9 | 2206.4 | 48.4 | 79.2 | 625 |
| MiniCPM-Llama3-V2.5 | 77.6 | 73.8 | 72.3 | 51.8 | 45.8 | 2024.6 | 42.4 | 78.4 | 725 |
| Qwen-VL-Max | 77.6 | 75.7 | 72.7 | 49.5 | 52.0 | 2281.7 | 41.2 | 75.7 | 684 |
| Gemini 1.0 Pro | 73.6 | 74.3 | 70.7 | 38.6 | 49.0 | 2148.9 | 45.7 | 72.9 | 680 |
| Claude 3 Opus | 63.3 | 59.2 | 64.0 | 45.7 | 54.9 | 1586.8 | 37.8 | 70.6 | 694 |
| GLM-4V-9B | 81.1 | 79.4 | 76.8 | 58.7 | 47.2 | 2163.8 | 46.6 | 81.1 | 786 |

## Quick Start

For hardware requirements and system configuration, please check here.

Use the following methods to quickly call the GLM-4-9B-Chat language model.

Inference with the transformers backend:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
import os

os.environ['CUDA_VISIBLE_DEVICES'] = '0'  # set visible GPUs: one id for a single GPU, comma-separated ids for multiple GPUs
MODEL_PATH = "THUDM/glm-4-9b-chat-hf"

device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)

query = "Hello"

inputs = tokenizer.apply_chat_template([{"role": "user", "content": query}],
                                       add_generation_prompt=True,
                                       tokenize=True,
                                       return_tensors="pt",
                                       return_dict=True
                                       )

inputs = inputs.to(device)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
    device_map="auto"
).eval()

gen_kwargs = {"max_length": 2500, "do_sample": True, "top_k": 1}
with torch.no_grad():
    outputs = model.generate(**inputs, **gen_kwargs)
    outputs = outputs[:, inputs['input_ids'].shape[1]:]
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
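
To surface output token by token instead of all at once, transformers' TextStreamer can be attached to the same generate call; a small sketch reusing model, tokenizer, inputs, and gen_kwargs from the block above:

```python
# Optional: stream tokens to stdout as they are generated, reusing
# `model`, `tokenizer`, `inputs`, and `gen_kwargs` from the example above.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
with torch.no_grad():
    model.generate(**inputs, **gen_kwargs, streamer=streamer)
```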

Inference with the vLLM backend:

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

# GLM-4-9B-Chat-1M
# max_model_len, tp_size = 1048576, 4
# If you run into OOM, reduce max_model_len or increase tp_size
max_model_len, tp_size = 131072, 1
model_name = "THUDM/glm-4-9b-chat-hf"
prompt = [{"role": "user", "content": "Hello"}]

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
llm = LLM(
    model=model_name,
    tensor_parallel_size=tp_size,
    max_model_len=max_model_len,
    trust_remote_code=True,
    enforce_eager=True,
    # For GLM-4-9B-Chat-1M, if you run into OOM, consider enabling the parameters below
    # enable_chunked_prefill=True,
    # max_num_batched_tokens=8192
)
stop_token_ids = [151329, 151336, 151338]
sampling_params = SamplingParams(temperature=0.95, max_tokens=1024, stop_token_ids=stop_token_ids)

inputs = tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True)
outputs = llm.generate(prompts=inputs, sampling_params=sampling_params)

print(outputs[0].outputs[0].text)
```
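
vLLM schedules requests with continuous batching, so passing several prompts to a single generate call is usually more efficient than looping; a sketch reusing llm, tokenizer, and sampling_params from the block above:

```python
# Batch several conversations in one call, reusing `llm`, `tokenizer`,
# and `sampling_params` from the example above.
conversations = [
    [{"role": "user", "content": "Hello"}],
    [{"role": "user", "content": "Explain the difference between BF16 and FP16."}],
]
batched = [
    tokenizer.apply_chat_template(c, tokenize=False, add_generation_prompt=True)
    for c in conversations
]
for out in llm.generate(prompts=batched, sampling_params=sampling_params):
    print(out.outputs[0].text)
```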

Use the following methods to quickly call the GLM-4V-9B multimodal model.

Inference with the transformers backend:

```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer
import os

os.environ['CUDA_VISIBLE_DEVICES'] = '0'  # set visible GPUs: one id for a single GPU, comma-separated ids for multiple GPUs
MODEL_PATH = "THUDM/glm-4v-9b"

device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)

query = 'Describe this image'
image = Image.open("your image").convert('RGB')
inputs = tokenizer.apply_chat_template([{"role": "user", "image": image, "content": query}],
                                       add_generation_prompt=True, tokenize=True, return_tensors="pt",
                                       return_dict=True)  # chat mode

inputs = inputs.to(device)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
    device_map="auto"
).eval()

gen_kwargs = {"max_length": 2500, "do_sample": True, "top_k": 1}
with torch.no_grad():
    outputs = model.generate(**inputs, **gen_kwargs)
    outputs = outputs[:, inputs['input_ids'].shape[1]:]
    print(tokenizer.decode(outputs[0]))
```
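
For a follow-up question about the same image, the conversation history can be replayed through the chat template. A hedged sketch, assuming the checkpoint's template takes the image from the first user message (as in the single-turn example above); the assistant turn below is a placeholder for the model's previous answer:

```python
# Hypothetical multi-turn sketch, reusing `model`, `tokenizer`, `image`,
# `device`, and `gen_kwargs` from the example above. It assumes the
# checkpoint's chat template reads the image from the first user message.
history = [
    {"role": "user", "image": image, "content": "Describe this image"},
    {"role": "assistant", "content": "<previous model answer>"},  # placeholder
    {"role": "user", "content": "What colors dominate the picture?"},
]
inputs = tokenizer.apply_chat_template(
    history, add_generation_prompt=True, tokenize=True,
    return_tensors="pt", return_dict=True,
).to(device)
with torch.no_grad():
    outputs = model.generate(**inputs, **gen_kwargs)
    outputs = outputs[:, inputs["input_ids"].shape[1]:]
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```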

Inference with the vLLM backend:

```python
from PIL import Image
from vllm import LLM, SamplingParams

model_name = "THUDM/glm-4v-9b"

llm = LLM(model=model_name,
          tensor_parallel_size=1,
          max_model_len=8192,
          trust_remote_code=True,
          enforce_eager=True)
stop_token_ids = [151329, 151336, 151338]
sampling_params = SamplingParams(temperature=0.2,
                                 max_tokens=1024,
                                 stop_token_ids=stop_token_ids)

prompt = "What's the content of the image?"
image = Image.open("your image").convert('RGB')
inputs = {
    "prompt": prompt,
    "multi_modal_data": {
        "image": image
    },
}
outputs = llm.generate(inputs, sampling_params=sampling_params)

for o in outputs:
    generated_text = o.outputs[0].text
    print(generated_text)
```

## Full Project List

If you want to learn more about the GLM-4-9B series of open-source models, this open-source repository provides developers with basic GLM-4-9B usage and development code through the following content.

## Related Links

## License

Please strictly comply with the open-source license.

## Citation

If you find our work helpful, please consider citing the following papers.

```bibtex
@misc{glm2024chatglm,
      title={ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools},
      author={Team GLM and Aohan Zeng and Bin Xu and Bowen Wang and Chenhui Zhang and Da Yin and Diego Rojas and Guanyu Feng and Hanlin Zhao and Hanyu Lai and Hao Yu and Hongning Wang and Jiadai Sun and Jiajie Zhang and Jiale Cheng and Jiayi Gui and Jie Tang and Jing Zhang and Juanzi Li and Lei Zhao and Lindong Wu and Lucen Zhong and Mingdao Liu and Minlie Huang and Peng Zhang and Qinkai Zheng and Rui Lu and Shuaiqi Duan and Shudan Zhang and Shulin Cao and Shuxun Yang and Weng Lam Tam and Wenyi Zhao and Xiao Liu and Xiao Xia and Xiaohan Zhang and Xiaotao Gu and Xin Lv and Xinghan Liu and Xinyi Liu and Xinyue Yang and Xixuan Song and Xunkai Zhang and Yifan An and Yifan Xu and Yilin Niu and Yuantao Yang and Yueyan Li and Yushi Bai and Yuxiao Dong and Zehan Qi and Zhaoyu Wang and Zhen Yang and Zhengxiao Du and Zhenyu Hou and Zihan Wang},
      year={2024},
      eprint={2406.12793},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

@misc{wang2023cogvlm,
      title={CogVLM: Visual Expert for Pretrained Language Models},
      author={Weihan Wang and Qingsong Lv and Wenmeng Yu and Wenyi Hong and Ji Qi and Yan Wang and Junhui Ji and Zhuoyi Yang and Lei Zhao and Xixuan Song and Jiazheng Xu and Bin Xu and Juanzi Li and Yuxiao Dong and Ming Ding and Jie Tang},
      year={2023},
      eprint={2311.03079},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```