OpenLLaMA-Chinese

<div align=center><img src="media/logo.webp" width = "200" height = "200" /></div> <div align=center> <img src="https://img.shields.io/badge/Code--License-Apache2-green"/> <img src="https://img.shields.io/badge/Data--License-CC%20By%20NC%204.0-orange"/> <img src="https://img.shields.io/badge/Model--License-Apache2-blue"/> </div>

OpenLLaMA-Chinese is a 100% free Chinese large language model that can be used for both non-commercial and commercial purposes.

OpenLLaMA-Chinese is built on OpenLLaMA, which is a permissively licensed open-source reproduction of Meta AI's LLaMA 7B and 13B models, trained on the RedPajama dataset. OpenLLaMA also includes a smaller 3B variant of the LLaMA model. We have conducted fine-tuning on Chinese and English instructions using the OpenLLaMA base models and have made our weights publicly available.
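
Since OpenLLaMA keeps the original LLaMA architecture, its checkpoints can generally be loaded with the Hugging Face transformers library. The snippet below is a minimal sketch, assuming transformers and accelerate are installed; the upstream openlm-research hub ID is a placeholder that you would replace with the path to the downloaded OpenLLaMA-Chinese fine-tuned weights.

```python
# Minimal loading sketch (not the project's own code).
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

# Placeholder: the upstream OpenLLaMA 7B base model; substitute a local
# directory containing the OpenLLaMA-Chinese fine-tuned weights.
model_path = "openlm-research/open_llama_7b"

# OpenLLaMA recommends the slow LlamaTokenizer, since the auto-converted
# fast tokenizer can produce incorrect tokenizations for these checkpoints.
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # half precision to reduce GPU memory
    device_map="auto",          # requires accelerate; places layers automatically
)
```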

News

[2023/06/29] We released the OpenLLaMA 13B model fine-tuned on Evol-Instruct data.

[2023/06/24] We fine-tuned OpenLLaMA 7B on Evol-Instruct data from WizardLM; the 13B model will be available next week!

Evol Instruction Examples

Evol Instructions Fine-tuning Weights:

Chinese Instructions Fine-tuning Weights:

English Instructions Fine-tuning Weights:

Chinese+English Instructions Fine-tuning Weights:

Data

For Chinese fine-tuning, we utilized the alpaca_data_zh_51k.json from the Chinese-LLaMA-Alpaca project.

For English fine-tuning, we employed the alpaca_data.json from the StanfordAlpaca project.

For fine-tuning with both English and Chinese instructions, we used data from both sources.
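
If both files follow the standard Alpaca layout (a JSON list of records with instruction, input, and output fields), combining them is straightforward. The following is a small sketch of that merge, not the project's actual preprocessing script; the output filename is made up for illustration.

```python
# Sketch: merge the Chinese and English Alpaca-format instruction files into
# one bilingual training set. Assumes both files are JSON lists of records
# with "instruction", "input", and "output" keys.
import json
import random

with open("alpaca_data_zh_51k.json", encoding="utf-8") as f:
    zh_records = json.load(f)
with open("alpaca_data.json", encoding="utf-8") as f:
    en_records = json.load(f)

combined = zh_records + en_records
random.shuffle(combined)  # interleave languages so training batches are mixed

# Hypothetical output filename.
with open("alpaca_data_zh_en.json", "w", encoding="utf-8") as f:
    json.dump(combined, f, ensure_ascii=False, indent=2)

print(f"{len(zh_records)} Chinese + {len(en_records)} English = {len(combined)} examples")
```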

Usage

We modified the generation code from LLaMA-X.

To use the PyTorch inference code, follow these steps:

  1. Download the weights and update the base_model path in inference/gen_torch.py.
  2. Run the following command:
python inference/gen_torch.py
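
For reference, the snippet below sketches the kind of generation step such a script performs: load the weights pointed to by base_model, wrap the question in an Alpaca-style prompt, and call generate(). It is an illustration assuming a standard Hugging Face transformers setup, not the contents of inference/gen_torch.py, and the example prompt and sampling parameters are placeholders.

```python
# Illustrative generation step (not the actual gen_torch.py).
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

base_model = "path/to/finetuned-weights"  # placeholder: point at the downloaded weights

tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

# Alpaca-style prompt template commonly used with instruction-tuned models.
instruction = "用中文介绍一下北京的三个景点。"  # example instruction (placeholder)
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(
        **inputs, max_new_tokens=256, do_sample=True, temperature=0.7
    )

# Strip the prompt tokens and print only the generated continuation.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```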

Pretraining and Finetuning

FittenTech offers LLMs pretraining and fine-tuning services. For more details, please visit https://llm.fittentech.com/.

Acknowledgments

We would like to express our gratitude to the developers of the following open-source projects, as our project builds upon their work:

License

We adopt the Apache 2.0 License, following OpenLLaMA.