
<p align="center"> <img src=".github/stochastic_logo_light.svg#gh-light-mode-only" width="250" alt="Stochastic.ai"/> <img src=".github/stochastic_logo_dark.svg#gh-dark-mode-only" width="250" alt="Stochastic.ai"/> </p> <h3 align="center">Build, modify, and control your own personalized LLMs</h3> <p align="center"> <a href="https://pypi.org/project/xturing/"> <img src="https://img.shields.io/pypi/v/xturing?style=for-the-badge" /> </a> <a href="https://xturing.stochastic.ai/"> <img src="https://img.shields.io/badge/Documentation-blue?logo=GitBook&logoColor=white&style=for-the-badge" /> </a> <a href="https://discord.gg/TgHXuSJEk6"> <img src="https://img.shields.io/badge/Chat-FFFFFF?logo=discord&style=for-the-badge"/> </a> </p> <br>

xTuring provides fast, efficient, and simple fine-tuning of open-source LLMs such as Mistral, LLaMA, GPT-J, and more. By providing an easy-to-use interface for fine-tuning LLMs on your own data and for your own application, xTuring makes it simple to build, modify, and control LLMs. The entire process can run on your own computer or in your private cloud, ensuring data privacy and security.

With xTuring you can:

- Fine-tune LLMs on your own data and for your own application
- Leverage memory-efficient methods such as LoRA and INT8/INT4 precision to reduce hardware costs
- Evaluate fine-tuned models on metrics such as perplexity
- Run the entire workflow on your own computer or in your private cloud, keeping your data private

<br>

โš™๏ธ Installation

```bash
pip install xturing
```
<br>

## 🚀 Quickstart

```python
from xturing.datasets import InstructionDataset
from xturing.models import BaseModel

# Load the dataset
instruction_dataset = InstructionDataset("./examples/models/llama/alpaca_data")

# Initialize the model
model = BaseModel.create("llama_lora")

# Fine-tune the model
model.finetune(dataset=instruction_dataset)

# Perform inference
output = model.generate(texts=["Why LLM models are becoming so important?"])

print("Generated output by the model: {}".format(output))
```

You can find the data folder here.
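Once fine-tuning completes, you can persist the weights with the same `model.save()` call used in the UI playground section below; the output path here is just an example:

```python
# Save the fine-tuned weights to a local folder
model.save("./llama_lora_finetuned")
```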

<br>

## 🌟 What's new?

We are excited to announce the latest enhancements to our xTuring library:

1. `LLaMA 2` integration - You can use and fine-tune the LLaMA 2 model in different configurations: off-the-shelf, off-the-shelf with INT8 precision, LoRA fine-tuning, LoRA fine-tuning with INT8 precision, and LoRA fine-tuning with INT4 precision, using the `GenericModel` wrapper and/or the `Llama2` class from `xturing.models`:
```python
from xturing.models import Llama2
model = Llama2()

# or
from xturing.models import BaseModel
model = BaseModel.create('llama2')
```

2. Evaluation - You can now evaluate any causal language model on any dataset. The only metric currently supported is perplexity:
```python
# Make the necessary imports
from xturing.datasets import InstructionDataset
from xturing.models import BaseModel

# Load the desired dataset
dataset = InstructionDataset('../llama/alpaca_data')

# Load the desired model
model = BaseModel.create('gpt2')

# Run the evaluation of the model on the dataset
result = model.evaluate(dataset)

# Print the result
print(f"Perplexity of the evaluation: {result}")
```

3. INT4 precision - You can now use and fine-tune any LLM with INT4 precision using `GenericLoraKbitModel`:
```python
# Make the necessary imports
from xturing.datasets import InstructionDataset
from xturing.models import GenericLoraKbitModel

# Load the desired dataset
dataset = InstructionDataset('../llama/alpaca_data')

# Load the desired model for INT4 fine-tuning
model = GenericLoraKbitModel('tiiuae/falcon-7b')

# Run the fine-tuning
model.finetune(dataset)
```
4. CPU inference - CPUs, including laptop CPUs, are now fully equipped to handle LLM inference. We integrated Intel® Extension for Transformers to conserve memory by compressing the model with weight-only quantization algorithms and to accelerate inference using its highly optimized kernels on Intel platforms:
```python
# Make the necessary imports
from xturing.models import BaseModel

# Initialize the model: quantize it with weight-only algorithms
# and replace the linear layers with Itrex's qbits_linear kernel
model = BaseModel.create("llama2_int8")

# Once the model has been quantized, run inference directly
output = model.generate(texts=["Why LLM models are becoming so important?"])
print(output)
```
5. Batch integration - By tweaking the `batch_size` argument of the `.generate()` and `.evaluate()` functions, you can speed up processing. A `batch_size` greater than 1 typically improves throughput:
```python
# Make the necessary imports
from xturing.datasets import InstructionDataset
from xturing.models import GenericLoraKbitModel

# Load the desired dataset
dataset = InstructionDataset('../llama/alpaca_data')

# Load the desired model
model = GenericLoraKbitModel('tiiuae/falcon-7b')

# Generate outputs on the desired prompts, in batches of 10
outputs = model.generate(dataset=dataset, batch_size=10)
```
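The same `batch_size` argument applies to evaluation. A minimal sketch, reusing the dataset and model from the example above:

```python
# Evaluate in batches of 10; returns perplexity, as described above
result = model.evaluate(dataset, batch_size=10)
print(result)
```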

To understand how these pieces fit together, we recommend exploring the Llama LoRA INT4 working example.

For further insight, consider examining the GenericModel working example available in the repository.

<br>

## CLI playground

<img src=".github/cli-playground.gif" width="80%" style="margin: 0 1%;"/>
```sh
$ xturing chat -m "<path-to-model-folder>"
```

## UI playground

<img src=".github/ui-playground2.gif" width="80%" style="margin: 0 1%;"/>
```python
from xturing.datasets import InstructionDataset
from xturing.models import BaseModel
from xturing.ui import Playground

dataset = InstructionDataset("./alpaca_data")
model = BaseModel.create("<model_name>")

model.finetune(dataset=dataset)

model.save("llama_lora_finetuned")

Playground().launch()  # launches localhost UI
```

<br>

## 📚 Tutorials

<br>

## 📊 Performance

Here is a comparison of the performance of different fine-tuning techniques on the LLaMA 7B model. We use the Alpaca dataset, which contains 52K instructions, for fine-tuning.

Hardware:

4xA100 40GB GPU, 335GB CPU RAM

Fine-tuning parameters:

```
{
  'maximum sequence length': 512,
  'batch size': 1,
}
```
| LLaMA-7B | DeepSpeed + CPU Offloading | LoRA + DeepSpeed | LoRA + DeepSpeed + CPU Offloading |
| --- | --- | --- | --- |
| GPU | 33.5 GB | 23.7 GB | 21.9 GB |
| CPU | 190 GB | 10.2 GB | 14.9 GB |
| Time/epoch | 21 hours | 20 mins | 20 mins |
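If you want to reproduce or tweak these settings, here is a minimal sketch, assuming the `finetuning_config()` accessor on xTuring models (the attribute names below are illustrative; check the configuration object in your installed version):

```python
from xturing.datasets import InstructionDataset
from xturing.models import BaseModel

dataset = InstructionDataset("./alpaca_data")
model = BaseModel.create("llama_lora")

# Assumed accessor: inspect and adjust the fine-tuning configuration
finetuning_config = model.finetuning_config()
finetuning_config.max_length = 512  # maximum sequence length
finetuning_config.batch_size = 1

model.finetune(dataset=dataset)
```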

You can contribute by submitting your performance results on other GPUs: create an issue with your hardware specifications, memory consumption, and time per epoch.

<br>

## 📎 Fine-tuned model checkpoints

We have already fine-tuned some models that you can use as your base or start playing with. Here is how you would load them:

```python
from xturing.models import BaseModel
model = BaseModel.load("x/distilgpt2_lora_finetuned_alpaca")
```

| Model | Dataset | Path |
| --- | --- | --- |
| DistilGPT-2 LoRA | alpaca | x/distilgpt2_lora_finetuned_alpaca |
| LLaMA LoRA | alpaca | x/llama_lora_finetuned_alpaca |
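Once loaded, these checkpoints behave like any other `BaseModel` instance, so inference works exactly as in the quickstart:

```python
# Generate text with the fine-tuned checkpoint loaded above
output = model.generate(texts=["Why LLM models are becoming so important?"])
print(output)
```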
<br>

## Supported Models

Below is a list of all the models supported via the `BaseModel` class of xTuring and their corresponding keys to load them.

| Model | Key |
| --- | --- |
| Bloom | bloom |
| Cerebras | cerebras |
| DistilGPT-2 | distilgpt2 |
| Falcon-7B | falcon |
| Galactica | galactica |
| GPT-J | gptj |
| GPT-2 | gpt2 |
| LLaMA | llama |
| LLaMA 2 | llama2 |
| OPT-1.3B | opt |

The above are the base variants of the LLMs. Below are the templates to get their LoRA, INT8, INT8 + LoRA, and INT4 + LoRA versions.

| Version | Template |
| --- | --- |
| LoRA | `<model_key>_lora` |
| INT8 | `<model_key>_int8` |
| INT8 + LoRA | `<model_key>_lora_int8` |
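For example, combining the templates above with the `BaseModel.create()` calls shown earlier, the INT8 + LoRA version of LLaMA would be loaded as follows (a sketch; available keys depend on your installed version):

```python
from xturing.models import BaseModel

# "llama" is the base key; "_lora_int8" selects the INT8 + LoRA variant
model = BaseModel.create("llama_lora_int8")
```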

**Note**: To load any model's INT4 + LoRA version, you will need to use the `GenericLoraKbitModel` class from `xturing.models`. Below is how to use it:

```python
from xturing.models import GenericLoraKbitModel

model = GenericLoraKbitModel('<model_path>')
```

The `model_path` can be replaced with your local directory or any Hugging Face Hub model, such as facebook/opt-1.3b.
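For instance, to load OPT-1.3B directly from the Hugging Face Hub:

```python
from xturing.models import GenericLoraKbitModel

model = GenericLoraKbitModel('facebook/opt-1.3b')
```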

## 📈 Roadmap

<br>

๐Ÿค Help and Support

If you have any questions, you can create an issue on this repository.

You can also join our Discord server and start a discussion in the #xturing channel.

<br>

๐Ÿ“ License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

<br>

## 🌎 Contributing

As an open source project in a rapidly evolving field, we welcome contributions of all kinds, including new features and better documentation. Please read our contributing guide to learn how you can get involved.