Chinese Llama 2 7B

Fully open source and fully commercially usable: a Chinese Llama 2 model plus Chinese/English SFT datasets. The input format strictly follows llama-2-chat, so the model is compatible with all optimizations that target the original llama-2-chat model.

Base Demo

Try It Online

Talk is cheap, Show you the Demo.

Latest Updates

Downloads

We trained on a combined Chinese and English SFT dataset of 10 million entries.

Quick Test

from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer

model_path = "LinkSoul/Chinese-Llama-2-7b"

# Load the tokenizer and the model (fp16 on the GPU); the streamer prints tokens as they are generated
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_path).half().cuda()
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

instruction = """[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe.  Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.

            If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n\n{} [/INST]"""

prompt = instruction.format("用中文回答,When is the best time to visit Beijing, and do you have any suggestions for me?")
# Tokenize the prompt and generate; tokens are streamed to stdout as they are produced
generate_ids = model.generate(tokenizer(prompt, return_tensors='pt').input_ids.cuda(), max_new_tokens=4096, streamer=streamer)
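
Since the input format strictly follows llama-2-chat, multi-turn conversations are encoded in the standard llama-2-chat way: each finished exchange is wrapped in [INST] ... [/INST] followed by the model's reply and a closing </s>, and the new question goes into a final [INST] block. A minimal sketch (build_prompt is an illustrative helper, not part of this repository):

def build_prompt(system, history, query):
    # history is a list of (user, assistant) pairs; the tokenizer adds the leading <s>
    prompt = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n"
    for i, (user, assistant) in enumerate(history):
        opener = "" if i == 0 else "<s>[INST] "
        prompt += f"{opener}{user} [/INST] {assistant} </s>"
    return prompt + (f"<s>[INST] {query} [/INST]" if history else f"{query} [/INST]")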

Docker

You can use the Dockerfile in this repository to quickly build an image on top of Nvidia's recent nvcr.io/nvidia/pytorch:23.06-py3 base image and run the Chinese Llama 2 model application in a container anywhere.
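
The Dockerfile shipped in the repository is authoritative; purely for orientation, a minimal image of this kind might look like the sketch below (the dependency list and the app.py entry point are assumptions; the docker run example further down publishes port 7860, which suggests a Gradio UI):

FROM nvcr.io/nvidia/pytorch:23.06-py3

WORKDIR /app
COPY . /app

# Assumed dependencies; consult the repository's Dockerfile for the real list
RUN pip install transformers gradio

EXPOSE 7860
CMD ["python", "app.py"]  # hypothetical entry point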

docker build -t linksoul/chinese-llama2-chat .

Once the image is built, run it with:

docker run --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --rm -it -v `pwd`/LinkSoul:/app/LinkSoul -p 7860:7860 linksoul/chinese-llama2-chat

GGML / Llama.cpp

Want to run the Llama 2 model in a CPU-only environment? Convert and quantize it with llama.cpp:

python3 convert.py /app/LinkSoul/Chinese-Llama-2-7b/ --outfile /app/LinkSoul/Chinese-Llama-2-7b-ggml.bin
./quantize /app/LinkSoul/Chinese-Llama-2-7b-ggml.bin /app/LinkSoul/Chinese-Llama-2-7b-ggml-q4.bin q4_0

For the definitions of the individual quantization configurations (q4_0, q5_1, and so on), see: https://www.reddit.com/r/LocalLLaMA/comments/139yt87/notable_differences_between_q4_2_and_q5_1/
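
The quantized file can be run with the llama.cpp binaries, or from Python via the third-party llama-cpp-python bindings. A minimal sketch with llama-cpp-python (an extra dependency; note that its recent releases expect the newer GGUF format, so a release from the GGML era may be required):

from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="/app/LinkSoul/Chinese-Llama-2-7b-ggml-q4.bin")
# Same llama-2-chat prompt format as the GPU example above
output = llm("[INST] 用中文介绍一下北京。 [/INST]", max_tokens=256)
print(output["choices"][0]["text"])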

API Deployment

First install the extra dependencies with pip install fastapi uvicorn, then run api.py from the repository:

python api.py

By default the service listens on local port 8000 and is called via POST:

curl -X POST "http://127.0.0.1:8000" \
     -H 'Content-Type: application/json' \
     -d '{"prompt": "你好", "history": []}'

The returned value is:

{
  "response":" 你好!我是一个人工智能语言模型,可以回答你的问题和进行对话。请问你有什么需要帮助的吗? ",
  "history":[["<<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe.  Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\n            If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n\n你好"," 你好!我是一个人工智能语言模型,可以回答你的问题和进行对话。请问你有什么需要帮助的吗? "]],
  "status":200,
  "time":"2023-08-01 09:22:16"
}
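
Because every response carries the accumulated history, a client keeps a multi-turn conversation going by feeding that field back into the next request. A minimal Python sketch, assuming the request/response shape shown above:

import requests

url = "http://127.0.0.1:8000"
history = []
for prompt in ["你好", "北京有什么好玩的地方?"]:
    resp = requests.post(url, json={"prompt": prompt, "history": history}).json()
    print(resp["response"])
    history = resp["history"]  # carry the conversation state into the next request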

How to Train
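
The supervised fine-tuning entry point is train.py, launched below with torchrun on a single node with 8 GPUs and sharded with FSDP; adjust the paths and the torchrun topology flags to match your setup.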

# Merged Chinese/English instruction-tuning dataset on the Hugging Face Hub
DATASET="LinkSoul/instruction_merge_set"

DATA_CACHE_PATH="hf_datasets_cache"
# Path to the original Llama 2 weights in Transformers format
MODEL_PATH="/PATH/TO/TRANSFORMERS/VERSION/LLAMA2"

output_dir="./checkpoints_llama2"

torchrun --nnodes=1 --node_rank=0 --nproc_per_node=8 \
    --master_port=25003 \
        train.py \
        --model_name_or_path ${MODEL_PATH} \
        --data_path ${DATASET} \
        --data_cache_path ${DATA_CACHE_PATH} \
        --bf16 True \
        --output_dir ${output_dir} \
        --num_train_epochs 1 \
        --per_device_train_batch_size 4 \
        --per_device_eval_batch_size 4 \
        --gradient_accumulation_steps 1 \
        --evaluation_strategy 'no' \
        --save_strategy 'steps' \
        --save_steps 1200 \
        --save_total_limit 5 \
        --learning_rate 2e-5 \
        --weight_decay 0. \
        --warmup_ratio 0.03 \
        --lr_scheduler_type cosine \
        --logging_steps 1 \
        --fsdp 'full_shard auto_wrap' \
        --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
        --tf32 True \
        --model_max_length 4096 \
        --gradient_checkpointing True

Related Projects

License

Apache-2.0 license

WeChat Group

<img src=".github/QRcode.jpg" alt="WeChat group" width="300"/>