<p align="center" width="100%"> <img src="./assets/logo_v2.png" alt="Vigogne" style="width: 40%; min-width: 300px; display: block; margin: auto;"> </p>Vigogne π¦: French Instruction-following and Chat Models
<p align="center"> <a href="https://github.com/bofenghuang/vigogne/releases"> <img alt="GitHub release" src="https://img.shields.io/github/release/bofenghuang/vigogne.svg"> </a> <!-- <a href="https://github.com/bofenghuang/vigogne/blob/main/LICENSE"> <img alt="License" src="https://img.shields.io/github/license/bofenghuang/vigogne.svg?color=green"> </a> --> <a href="https://github.com/bofenghuang/vigogne/blob/main/LICENSE"> <img alt="Code License" src="https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg"> </a> <a href="https://github.com/bofenghuang/vigogne/blob/main/DATA_LICENSE"> <img alt="Data License" src="https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg"> </a> <a href="https://huggingface.co/models?search=bofenghuang/vigogne"> <img alt="Models" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-yellow.svg"> </a> <a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/notebooks/infer_chat.ipynb"> <img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg"> </a> </p>The vigogne (French name for vicuΓ±a) is a South American camelid native to the Andes Mountains. It is closely related to the llama, alpaca, and guanaco.
Vigogne is a collection of powerful 🇫🇷 French large language models (LLMs) that are open-source and designed for instruction-following and chat purposes.
The main contributions of this project include:
- Open-sourced 🦙 Vigogne models for French instruction-following and chat
- Efficient training code for fine-tuning LLMs such as LLaMA, Llama-2, Falcon, and FLAN-T5
- Generated, translated, and collected French instruction-following and dialogue datasets, along with the scripts used to produce them
- Inference code, Gradio demo, and support for deployment with various libraries such as 🤗 Transformers, llama.cpp, FastChat, and vLLM
- Support for diverse application ecosystems, including LangChain
💡 The screencast below shows the current 🦙 Vigogne-7B-Chat model running on an Apple M1 Pro with 4 GB of weights (no speed-up applied).
## Table of Contents
- Updates
- Installation
- 🦙 Vigogne Models
- Inference and Deployment
- Application
- Data
- Training
- Bias, Risks, and Limitations
- Acknowledgements
- Citation
## Updates
- [2023/08/18]: Included LangChain integration.
- [2023/08/17]: Introduced Vigogne-Chat-V2.0 models (blog).
- [2023/08/16]: Added support for serving using FastChat and vLLM.
- [2023/08/02]: Implemented generation script for Orca-style data.
- [2023/07/31]: Integrated FlashAttention support and implemented training example packing.
- [2023/07/20]: Introduced the latest Vigogne models built upon Llama-2.
- [2023/07/05]: Released Vigogne models based on Falcon and MPT, with commercial-friendly licenses.
- [2023/06/05]: Integrated QLoRA support for improved training efficiency.
- [2023/05/15]: Introduced Vigogne-Chat models with enhanced conversational capabilities.
- [2023/05/11]: Implemented Self-Chat data generation script for conversational data.
- [2023/05/11]: Introduced improved Vigogne-Instruct-V2 models, trained on more diverse data.
- [2023/05/11]: Released annotated seed tasks in French and generation script for Self-Instruct.
- [2023/04/03]: Expanded training scripts to incorporate seq2seq models.
- [2023/03/29]: Included deployment instructions using llama.cpp.
- [2023/03/26]: Released initial Vigogne-Instruct models trained on translated Stanford Alpaca data.
- [2023/03/26]: Open-sourced Vigogne project with optimized training scripts (LoRA, LLM.int8()).
## Installation
- Clone this repository

```bash
git clone https://github.com/bofenghuang/vigogne.git
cd vigogne
```
- Install the package

```bash
# Install DeepSpeed if you want to accelerate training with it
pip install deepspeed

# Install FlashAttention to further speed up training and reduce memory usage (essential for long sequences)
pip install packaging ninja
# For FlashAttention 1
# pip install --no-build-isolation "flash-attn<2"
# For FlashAttention 2
# Might take 3-5 minutes on a 64-core machine
pip install --no-build-isolation flash-attn

pip install .
```
## 🦙 Vigogne Models
The fine-tuned 🦙 Vigogne models come in two types: instruction-following models and chat models. The instruction-following models are optimized to generate concise and helpful responses to user instructions, similar to `text-davinci-003`. Meanwhile, the chat models are designed for multi-turn dialogues, but they also perform well on instruction-following tasks, similar to `gpt-3.5-turbo`.
More information can be found in `vigogne/model`.
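For a quick test, the sketch below loads one of the published checkpoints with 🤗 Transformers and answers a French instruction. The checkpoint name and sampling settings are illustrative assumptions, and the raw prompt is simplified; see `vigogne/model` for the exact templates each model expects.

```python
# Minimal sketch: query a Vigogne instruction-following model with 🤗 Transformers.
# The checkpoint name and sampling settings are illustrative assumptions, and the
# raw prompt may differ from the template used at training time.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bofenghuang/vigogne-2-7b-instruct"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Expliquez la différence entre un lama et une vigogne."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```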
## Inference and Deployment
This repository offers multiple options for inference and deployment, including Google Colab notebooks, Gradio demos, FastChat, and vLLM. It also offers guidance on conducting experiments using llama.cpp on your personal computer.
More information can be found in `vigogne/inference`.
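As one example, offline batched generation with vLLM might look like the sketch below; the checkpoint name and sampling parameters are illustrative assumptions, while the serving-oriented setups (FastChat, the vLLM OpenAI-compatible server) are covered in `vigogne/inference`.

```python
# Minimal sketch: offline batched generation with vLLM.
# The checkpoint name and sampling parameters are illustrative assumptions.
from vllm import LLM, SamplingParams

llm = LLM(model="bofenghuang/vigogne-2-7b-chat")  # assumed checkpoint name
sampling_params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=256)

prompts = ["Donnez trois conseils pour rester en bonne santé."]
for output in llm.generate(prompts, sampling_params):
    print(output.outputs[0].text)
```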
## Application
This repository provides integration examples for incorporating Vigogne models into diverse application ecosystems, including LangChain.
More information can be found in `vigogne/application`.
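For instance, a Vigogne model can be exposed to LangChain by wrapping a local 🤗 Transformers pipeline. The sketch below assumes LangChain's `HuggingFacePipeline` wrapper and an illustrative checkpoint name; it is not necessarily the exact integration path used in the repository's examples.

```python
# Minimal sketch: wrap a Vigogne model as a LangChain LLM via a local
# 🤗 Transformers pipeline. Checkpoint name and settings are assumptions.
from langchain.llms import HuggingFacePipeline
from transformers import pipeline

generate = pipeline(
    "text-generation",
    model="bofenghuang/vigogne-2-7b-instruct",  # assumed checkpoint name
    max_new_tokens=256,
    device_map="auto",
)
llm = HuggingFacePipeline(pipeline=generate)

print(llm("Résumez en une phrase l'importance de la Tour Eiffel."))
```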
## Data
The Vigogne models were trained on a variety of datasets, including open-source datasets, ChatGPT-distilled datasets (self-instruct, self-chat, and Orca-style data), and translated datasets.
More information can be found in `vigogne/data`.
## Training
The Vigogne models were mostly instruction fine-tuned from existing foundation models.
More information can be found in `vigogne/training`.
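As a rough illustration of the parameter-efficient recipe mentioned in the updates above (LoRA via 🤗 PEFT), the sketch below attaches LoRA adapters to a base causal LM before fine-tuning; the base model, target modules, and hyperparameters are assumptions rather than the exact configuration used to train the released checkpoints.

```python
# Minimal sketch: attach LoRA adapters with 🤗 PEFT before instruction fine-tuning.
# Base model, target modules, and hyperparameters are illustrative assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in LLaMA-style blocks
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```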
## Bias, Risks, and Limitations

Vigogne is still under development, and many limitations remain to be addressed. Please note that the models may generate harmful or biased content, incorrect information, or generally unhelpful answers.
## Acknowledgements

Our project builds upon the following open-source projects. We extend our sincere gratitude to the individuals involved in their research and development.
- 🤗 Transformers and 🤗 PEFT
- LLaMA
- Stanford Alpaca
- Alpaca-LoRA by @tloen
- Baize
- llama.cpp by @ggerganov
- Axolotl by @OpenAccess-AI-Collective
## Citation
If you find the model, data, and code in our project useful, please consider citing our work as follows:
```bibtex
@misc{vigogne,
  author = {Bofeng Huang},
  title = {Vigogne: French Instruction-following and Chat Models},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/bofenghuang/vigogne}},
}
```