<div align=center> <img src="https://github.com/RUC-GSAI/YuLan-Chat/blob/main/assets/YuLan-logo.jpg" width="400px"> <h1>YuLan: An Open-Source Large Language Model</h1> <a href="https://github.com/RUC-GSAI/YuLan-Chat/blob/main/LICENSE"><img src="https://img.shields.io/badge/License-MIT-blue" alt="license"></a> <a href="https://arxiv.org/abs/2406.19853" target="_blank"><img src=https://img.shields.io/badge/arXiv-b5212f.svg?logo=arxiv></a> <a href="https://huggingface.co/yulan-team"><img alt="Static Badge" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-blue?color=8A2BE2"></a> <a><img src="https://img.shields.io/github/stars/RUC-GSAI/YuLan-Chat"></a> </div>

YuLan-Chat models are chat-based large language models developed by researchers at GSAI, Renmin University of China (YuLan, the Yulan Magnolia, is the campus flower of Renmin University of China). The newest version was pre-trained from scratch and then supervised fine-tuned via curriculum learning on high-quality English and Chinese instructions and human preference data. The model has the following technical characteristics:


News

Model Zoo

Due to license restrictions, for models based on LLaMA we only provide the weight differences from the original checkpoints; models based on LLaMA-2 can be used directly. Please check the Usage section for more details.

Limitations: Despite our efforts to reduce potential security issues during the model's usage and encourage the generation of text that aligns with ethical and legal requirements, the language model is based on probabilistic generation, which means it may still produce unexpected outputs. For instance, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We do not assume any responsibility for any consequences resulting from the dissemination of harmful information.


| Model | Backbone | Extended Vocab | Extended Length | Continue PT | SFT | Released Date |
|---|---|---|---|---|---|---|
| YuLan-Base-12B | YuLan-Base-12B | ✅ 51,190 | ✅ 4,096 | | | 2024.7.1 |
| YuLan-Chat-3-12B | YuLan-Base-12B | ✅ 51,190 | ✅ 4,096 | | | 2024.7.1 |
| YuLan-Chat-2-13B | LLaMA2-13B | ✅ 51,190 | ✅ 8,192 | | | 2023.8.2 |
| YuLan-LLaMA-2-13B | LLaMA2-13B | ✅ 51,190 | ✅ 8,192 | | | 2023.8.2 |
| YuLan-Chat-1-65B-v2 | LLaMA-65B | ✅ 51,190 | ❌ 2,048 | | | 2023.8.2 |
| YuLan-Chat-1-13B-v1 | LLaMA-13B | ❌ 32,000 | ❌ 2,048 | | | 2023.6.8 |
| YuLan-Chat-1-65B-v1 | LLaMA-65B | ❌ 32,000 | ❌ 2,048 | | | 2023.6.8 |

Evaluation

We evaluate our YuLan-Chat model on several Chinese and English benchmarks. The evaluation results are shown as follows.


MMLU

MMLU (Massive Multitask Language Understanding) is a benchmark designed to measure knowledge acquired during pretraining by evaluating models exclusively in zero-shot and few-shot settings.

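For concreteness, the snippet below sketches how a few-shot multiple-choice query of this kind can be scored with one of our released models; the subject, the example questions, and the letter-scoring loop are illustrative assumptions rather than the exact evaluation harness we used.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Illustrative few-shot multiple-choice scoring (not the exact harness used for the results below).
tokenizer = AutoTokenizer.from_pretrained("yulan-team/YuLan-Chat-3-12b")
model = AutoModelForCausalLM.from_pretrained("yulan-team/YuLan-Chat-3-12b").cuda().eval()

# A hypothetical one-shot prompt: one solved example followed by the test question.
prompt = (
    "The following are multiple choice questions (with answers) about astronomy.\n\n"
    "Which planet is known as the Red Planet?\nA. Venus\nB. Mars\nC. Jupiter\nD. Saturn\nAnswer: B\n\n"
    "Which planet is closest to the Sun?\nA. Mercury\nB. Earth\nC. Neptune\nD. Mars\nAnswer:"
)

# Score each candidate letter by its log-probability as the continuation of the prompt.
scores = {}
for letter in ["A", "B", "C", "D"]:
    ids = tokenizer(prompt + " " + letter, return_tensors="pt").input_ids.cuda()
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = torch.log_softmax(logits[0, -2], dim=-1)  # distribution over the final (answer) token
    scores[letter] = log_probs[ids[0, -1]].item()

print(max(scores, key=scores.get))  # predicted choice
```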

| Model | STEM | Social Science | Humanities | Others | Avg. |
|---|---|---|---|---|---|
| YuLan-Chat-1-13B-v1 | 39.6 | 57.8 | 42.6 | 57.6 | 49.4 |
| YuLan-Chat-1-65B-v1 | 49.2 | 71.7 | 57.7 | 66.7 | 61.3 |
| YuLan-Chat-1-65B-v2 | 46.3 | 67.9 | 56.9 | 63.9 | 58.7 |
| LLaMA-2-13B | 44.6 | 64.2 | 53.9 | 62.2 | 56.2 |
| FlagAlpha/Llama2-Chinese-13b-Chat | 44.4 | 63.2 | 51.6 | 60.6 | 55.0 |
| Linly-AI/Chinese-LLaMA-2-13B-hf | 43.6 | 62.7 | 49.8 | 61.6 | 54.4 |
| YuLan-LLaMA-2-13B | 42.9 | 61.5 | 50.4 | 58.6 | 53.4 |
| YuLan-Chat-2-13B | 45.3 | 66.7 | 53.8 | 62.8 | 57.2 |
| YuLan-Base-12B | 42.3 | 60.2 | 46.4 | 56.1 | 51.3 |
| YuLan-Chat-3-12B | 45.5 | 64.3 | 51.8 | 61.3 | 55.7 |

C-Eval

C-Eval is a comprehensive Chinese evaluation suite for foundation models.


| Model | STEM | Social Science | Humanities | Others | Avg. | Avg. (Hard) |
|---|---|---|---|---|---|---|
| YuLan-Chat-1-13B-v1 | 30.2 | 37.4 | 31.9 | 30.7 | 32.0 | 25.7 |
| YuLan-Chat-1-65B-v1 | 37.7 | 46.1 | 36.8 | 38.0 | 39.2 | 31.1 |
| YuLan-Chat-1-65B-v2 | 39.9 | 55.9 | 47.7 | 43.7 | 45.4 | 31.4 |
| LLaMA-2-13B | 36.9 | 43.2 | 37.6 | 36.6 | 38.2 | 32.0 |
| FlagAlpha/Llama2-Chinese-13b-Chat | 36.8 | 44.5 | 36.3 | 36.5 | 38.1 | 30.9 |
| Linly-AI/Chinese-LLaMA-2-13B-hf | 33.7 | 44.8 | 36.6 | 36.5 | 37.0 | 27.7 |
| YuLan-LLaMA-2-13B | 35.3 | 46.4 | 41.9 | 37.6 | 39.3 | 28.6 |
| YuLan-Chat-2-13B | 38.9 | 49.7 | 45.0 | 40.8 | 42.6 | 32.2 |
| YuLan-Base-12B | 42.0 | 57.6 | 47.2 | 41.5 | 46.0 | 32.6 |
| YuLan-Chat-3-12B | 47.0 | 61.8 | 52.9 | 44.3 | 50.5 | 37.7 |

AGI-Eval-Gaokao

AGI-Eval is a human-centric benchmark specifically designed to evaluate the general abilities of foundation models in tasks pertinent to human cognition and problem-solving. We use the sub-branch Chinese-Gaokao for evaluation.


| Model | Avg. | Chinese | English | Geography | History | Biology | Chemistry | Physics | Math-QA | Math-Cloze |
|---|---|---|---|---|---|---|---|---|---|---|
| YuLan-Chat-1-13B-v1 | 29.2 | 32.1 | 63.1 | 34.7 | 25.1 | 26.2 | 29.0 | 25.5 | 26.5 | 0.9 |
| YuLan-Chat-1-65B-v1 | 34.6 | 24.8 | 82.0 | 44.2 | 44.3 | 31.4 | 30.9 | 26.0 | 27.1 | 0.9 |
| YuLan-Chat-1-65B-v2 | 37.9 | 31.4 | 80.4 | 50.8 | 56.6 | 33.3 | 29.0 | 32.0 | 24.4 | 0.8 |
| LLaMA-2-13B | 32.7 | 27.2 | 72.2 | 36.2 | 43.0 | 26.2 | 32.4 | 30.0 | 26.2 | 0.9 |
| FlagAlpha/Llama2-Chinese-13b-Chat | 31.6 | 26.4 | 70.6 | 35.2 | 38.7 | 28.1 | 28.0 | 29.5 | 25.6 | 2.5 |
| Linly-AI/Chinese-LLaMA-2-13B-hf | 31.1 | 22.8 | 74.8 | 42.2 | 37.9 | 24.3 | 28.0 | 23.0 | 26.5 | 0.0 |
| YuLan-LLaMA-2-13B | 34.2 | 25.2 | 70.3 | 43.2 | 48.5 | 30.0 | 29.5 | 31.0 | 28.5 | 1.7 |
| YuLan-Chat-2-13B | 39.5 | 37.0 | 85.3 | 46.7 | 51.9 | 43.8 | 38.2 | 29.0 | 23.1 | 0.9 |
| YuLan-Base-12B | 43.5 | 31.3 | 68.3 | 53.3 | 60.9 | 43.8 | 34.8 | 27.5 | 28.2 | 0.9 |
| YuLan-Chat-3-12B | 49.5 | 43.9 | 80.4 | 57.3 | 69.4 | 53.8 | 37.7 | 27.0 | 26.2 | 0.9 |

Usage

Environment Setting

```bash
conda create -n yulan python=3.10 -y
conda activate yulan
```

We suggest installing PyTorch and bitsandbytes according to their official guidance so that they are well adapted to your environment; the versions we used are provided below for reference:


```
torch==1.13
bitsandbytes==0.39.0
```

Then, you can install the other required packages with the following command:


```bash
pip install -r requirements.txt
```

Model Weights Recovering

  1. For YuLan-Chat-1-13B-v1, YuLan-Chat-1-65B-v1, and YuLan-Chat-1-65B-v2, as they are based on LLaMA, you should download LLaMA's original weights, and then add our released delta parameters into the original parameters to compose the final model parameters.


```bash
python3 apply_delta.py \
    --base-model-path ./llama-13b/ \
    --tuned-model-path ./yulan-13b/ \
    --delta-path ./yulan-13b-delta
```
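Conceptually, this step just adds our released difference tensors back onto the corresponding LLaMA weights. The sketch below illustrates the idea with placeholder paths; the actual apply_delta.py additionally handles the extended vocabulary, whose embedding shape differs from the original LLaMA.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Conceptual sketch of weight recovery: recovered = base + delta (paths are placeholders).
base = AutoModelForCausalLM.from_pretrained("./llama-13b/", torch_dtype=torch.float16)
tuned = AutoModelForCausalLM.from_pretrained("./yulan-13b-delta/", torch_dtype=torch.float16)

base_state = base.state_dict()
for name, param in tuned.state_dict().items():
    if name in base_state and param.shape == base_state[name].shape:
        param.data += base_state[name]  # add the original LLaMA weights back in

tuned.save_pretrained("./yulan-13b/")  # recovered checkpoint, ready for from_pretrained
AutoTokenizer.from_pretrained("./yulan-13b-delta/").save_pretrained("./yulan-13b/")
```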
  2. For YuLan-LLaMA-2-13B and YuLan-Chat-2-13B, you can just download our released checkpoints and load their parameters via Huggingface Transformers (see the download sketch below).

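For example, a released checkpoint can be fetched ahead of time with `huggingface_hub`; the repository id below is the one used in the loading example in the next subsection, so adjust it to the model you need.

```python
from huggingface_hub import snapshot_download

# Download a released checkpoint to the local cache and print its path.
local_dir = snapshot_download(repo_id="yulan-team/YuLan-Chat-3-12b")
print(local_dir)  # this path (or the repo id itself) can be passed to from_pretrained
```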

Import from Huggingface Transformers

Since our models use a LLaMA-like architecture, they can be loaded in the same way as the original LLaMA.


```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("yulan-team/YuLan-Chat-3-12b")
>>> model = AutoModelForCausalLM.from_pretrained("yulan-team/YuLan-Chat-3-12b").cuda()
>>> model = model.eval()
>>> input_text = "hello"
>>> prompt = "The following is a conversation between a human and an AI assistant namely YuLan, developed by GSAI, Renmin University of China. The AI assistant gives helpful, detailed, and polite answers to the user's questions.\n[|Human|]:{}\n[|AI|]:".format(input_text)
>>> inputs = tokenizer(prompt, return_tensors='pt', padding="longest", max_length=4096, truncation=True, return_attention_mask=True, add_special_tokens=True)
>>> kwargs = {'temperature': 0.8, 'top_p': 0.95, "top_k": 50, "repetition_penalty": 1.1, "no_repeat_ngram_size": 64, "max_length": 4096, "pad_token_id": tokenizer.bos_token_id, "eos_token_id": tokenizer.eos_token_id}
>>> outputs = model.generate(inputs['input_ids'].to(model.device), attention_mask=inputs['attention_mask'].to(model.device), do_sample=True, **kwargs)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0][len(prompt):])
```
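For multi-turn conversations, one straightforward way to reuse the same template is to append every previous exchange before the new `[|Human|]` turn. A minimal sketch (the conversation content is illustrative):

```python
# Sketch of multi-turn prompting with the same template; conversation content is illustrative.
system = ("The following is a conversation between a human and an AI assistant namely YuLan, "
          "developed by GSAI, Renmin University of China. The AI assistant gives helpful, "
          "detailed, and polite answers to the user's questions.")
history = [("hello", "Hello! How can I help you today?")]  # (user, assistant) turns so far
next_query = "Please introduce Renmin University of China."

prompt = system
for user_turn, ai_turn in history:
    prompt += "\n[|Human|]:{}\n[|AI|]:{}".format(user_turn, ai_turn)
prompt += "\n[|Human|]:{}\n[|AI|]:".format(next_query)
# `prompt` can then be tokenized and passed to model.generate exactly as above.
```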

Inference in Command Line

We provide code for running inference with YuLan-Chat from the command line.


```bash
python inference.py --model_path ~/pretrain-checkpoint/yulan-13b/
```

We also provide an 8-bit quantization option for deploying YuLan-Chat more efficiently. After quantization, YuLan-Chat can be loaded onto a single GPU.


| YuLan-Chat (INT-8) | GPU Consumption |
|---|---|
| 13B | RTX3090-24G |
| 65B | A100-80G |
```bash
python inference.py --model_path ~/pretrain-checkpoint/yulan-13b/ --load_in_8bit
```
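If you prefer to load the 8-bit model from Python instead of through inference.py, a minimal sketch with Huggingface Transformers and bitsandbytes (the `device_map="auto"` setting is an assumption about your setup) looks like this:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load YuLan-Chat with 8-bit weights via bitsandbytes; device_map="auto" places layers on available GPUs.
tokenizer = AutoTokenizer.from_pretrained("yulan-team/YuLan-Chat-3-12b")
model = AutoModelForCausalLM.from_pretrained(
    "yulan-team/YuLan-Chat-3-12b",
    load_in_8bit=True,
    device_map="auto",
).eval()
```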

License

YuLan-Chat is released under the MIT License. All data and code in this project may only be used for academic purposes.


Contributors

| Pre-training | Fine-tuning |
|---|---|
| Yutao Zhu (Lead), Kelong Mao, Wentong Chen, Yiding Sun, Yihan Wu, Qian Cao, Lei Zhang, Feng Wang, Qiangqiang Ren | Kun Zhou (Lead), Yushuo Chen, Zhipeng Chen, Lei Wang, Yupeng Hou, Xincheng Pang, Xinyu Tang, Junyi Li, Yuhan Chen, Shufang Xie |

Reference

Please kindly cite our work if it helps you.


```bibtex
@article{yulan,
  author       = {Yutao Zhu and 
                  Kun Zhou and 
                  Kelong Mao and 
                  Wentong Chen and 
                  Yiding Sun and 
                  Zhipeng Chen and 
                  Qian Cao and 
                  Yihan Wu and 
                  Yushuo Chen and 
                  Feng Wang and 
                  Lei Zhang and 
                  Junyi Li and 
                  Xiaolei Wang and 
                  Lei Wang and 
                  Beichen Zhang and 
                  Zican Dong and 
                  Xiaoxue Cheng and 
                  Yuhan Chen and 
                  Xinyu Tang and 
                  Yupeng Hou and 
                  Qiangqiang Ren and 
                  Xincheng Pang and 
                  Shufang Xie and 
                  Wayne Xin Zhao and 
                  Zhicheng Dou and 
                  Jiaxin Mao and 
                  Yankai Lin and 
                  Ruihua Song and 
                  Jun Xu and 
                  Xu Chen and 
                  Rui Yan and 
                  Zhewei Wei and 
                  Di Hu and 
                  Wenbing Huang and 
                  Ze-Feng Gao and 
                  Yueguo Chen and 
                  Weizheng Lu and 
                  Ji-Rong Wen},
  title        = {YuLan: An Open-source Large Language Model},
  journal      = {CoRR},
  volume       = {abs/2406.19853},
  year         = {2024},
  url          = {https://doi.org/10.48550/arXiv.2406.19853},
  doi          = {10.48550/ARXIV.2406.19853},
  eprinttype   = {arXiv},
  eprint       = {2406.19853}
}
```

YuLan-1

You can refer to our original branch for more details about YuLan-Chat-1 and the instruction collection.


Star History

<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=RUC-GSAI/YuLan-Chat&type=Date" />