
<div align="center">

<a href="https://unsloth.ai"><picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20logo%20white%20text.png"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20logo%20black%20text.png"> <img alt="unsloth logo" src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20logo%20black%20text.png" height="110" style="max-width: 100%;"> </picture></a>

<a href="https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing"><img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/start free finetune button.png" height="48"></a> <a href="https://discord.gg/u54VK8m8tk"><img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord button.png" height="48"></a> <a href="https://ko-fi.com/unsloth"><img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy me a coffee button.png" height="48"></a>

Finetune Llama 3, Mistral & Gemma 2-5x faster with 80% less memory!

</div>

## ✨ Finetune for Free

All notebooks are beginner friendly! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model that can be exported to GGUF or vLLM, or uploaded to Hugging Face.

| Unsloth supports | Free Notebooks | Performance | Memory use |
| ---------------- | -------------- | ----------- | ---------- |
| Llama 3 (8B)     | ▶️ Start for free | 2x faster   | 60% less |
| Mistral (7B)     | ▶️ Start for free | 2.2x faster | 73% less |
| Gemma (7B)       | ▶️ Start for free | 2.4x faster | 71% less |
| ORPO             | ▶️ Start for free | 1.9x faster | 43% less |
| DPO Zephyr       | ▶️ Start for free | 1.9x faster | 43% less |
| Phi-3 (3.8B)     | ▶️ Start for free | 2x faster   | 50% less |
| TinyLlama        | ▶️ Start for free | 3.9x faster | 74% less |

## 🦥 Unsloth.ai News

Our new `"unsloth"` gradient checkpointing uses around 30% less VRAM and fits roughly 2x larger batch sizes, which helps with very long contexts. Enable it when patching the model:

```python
model = FastLanguageModel.get_peft_model(
    model,
    use_gradient_checkpointing = "unsloth", # enables Unsloth's memory-saving gradient checkpointing
)
```

## 🔗 Links and Resources

| Type | Links |
| ---- | ----- |
| 📚 Wiki & FAQ | Read Our Wiki |
| <img height="14" src="https://upload.wikimedia.org/wikipedia/commons/6/6f/Logo_of_Twitter.svg" /> Twitter (aka X) | Follow us on X |
| 📜 Documentation | Read The Doc |
| 💾 Installation | unsloth/README.md |
| 🥇 Benchmarking | Performance Tables |
| 🌐 Released Models | Unsloth Releases |
| ✍️ Blog | Read our Blogs |

## ⭐ Key Features

## 🥇 Performance Benchmarking

| 1 A100 40GB | 🤗Hugging Face | Flash Attention | 🦥Unsloth Open Source | 🦥Unsloth Pro |
| ----------- | -------------- | --------------- | --------------------- | ------------- |
| Alpaca      | 1x | 1.04x | 1.98x | 15.64x |
| LAION Chip2 | 1x | 0.92x | 1.61x | 20.73x |
| OASST       | 1x | 1.19x | 2.17x | 14.83x |
| Slim Orca   | 1x | 1.18x | 2.22x | 14.82x |

| Free Colab T4   | Dataset    | 🤗Hugging Face | Pytorch 2.1.1 | 🦥Unsloth | 🦥 VRAM reduction |
| --------------- | ---------- | -------------- | ------------- | --------- | ----------------- |
| Llama-2 7b      | OASST      | 1x | 1.19x | 1.95x | -43.3% |
| Mistral 7b      | Alpaca     | 1x | 1.07x | 1.56x | -13.7% |
| Tiny Llama 1.1b | Alpaca     | 1x | 2.06x | 3.87x | -73.8% |
| DPO with Zephyr | Ultra Chat | 1x | 1.09x | 1.55x | -18.6% |

## 💾 Installation Instructions

### Conda Installation

Select either `pytorch-cuda=11.8` for CUDA 11.8 or `pytorch-cuda=12.1` for CUDA 12.1. If you have `mamba`, use `mamba` instead of `conda` for faster solving. See this GitHub issue for help on debugging Conda installs.

```bash
conda create --name unsloth_env python=3.10
conda activate unsloth_env

conda install pytorch-cuda=<12.1/11.8> pytorch cudatoolkit xformers -c pytorch -c nvidia -c xformers

pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"

pip install --no-deps trl peft accelerate bitsandbytes
```

### Pip Installation

Do NOT use this if you have Anaconda. You must use the Conda install method, or else stuff will BREAK.

1. Find your CUDA version via
```python
import torch; torch.version.cuda
```
2. For Pytorch 2.1.0: you can update Pytorch via pip (interchange `cu121` / `cu118`). Go to https://pytorch.org/ to learn more. Select either `cu118` for CUDA 11.8 or `cu121` for CUDA 12.1. If you have an RTX 3060 or higher (A100, H100 etc.), use the `"ampere"` path. For Pytorch 2.1.1, go to step 3; for Pytorch 2.2.0, go to step 4.
```bash
pip install --upgrade --force-reinstall --no-cache-dir torch==2.1.0 triton \
  --index-url https://download.pytorch.org/whl/cu121
```
```bash
pip install "unsloth[cu118] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu121] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu118-ampere] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu121-ampere] @ git+https://github.com/unslothai/unsloth.git"
```
3. For Pytorch 2.1.1: use the `"ampere"` path for newer RTX 30xx GPUs or higher.
```bash
pip install --upgrade --force-reinstall --no-cache-dir torch==2.1.1 triton \
  --index-url https://download.pytorch.org/whl/cu121
```
```bash
pip install "unsloth[cu118-torch211] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu121-torch211] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu118-ampere-torch211] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu121-ampere-torch211] @ git+https://github.com/unslothai/unsloth.git"
```
4. For Pytorch 2.2.0: use the `"ampere"` path for newer RTX 30xx GPUs or higher.
```bash
pip install --upgrade --force-reinstall --no-cache-dir torch==2.2.0 triton \
  --index-url https://download.pytorch.org/whl/cu121
```
```bash
pip install "unsloth[cu118-torch220] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu121-torch220] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu118-ampere-torch220] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu121-ampere-torch220] @ git+https://github.com/unslothai/unsloth.git"
```
5. If you get errors, try the below first, then go back to step 1:
```bash
pip install --upgrade pip
```
6. For Pytorch 2.2.1:
```bash
# RTX 3090, 4090 Ampere GPUs:
pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
pip install --no-deps packaging ninja einops flash-attn xformers trl peft accelerate bitsandbytes

# Pre-Ampere RTX 2080, T4, GTX 1080 GPUs:
pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
pip install --no-deps xformers trl peft accelerate bitsandbytes
```
7. To troubleshoot installs, try the commands below (all must succeed); xformers should be available for most setups.
```bash
nvcc
python -m xformers.info
python -m bitsandbytes
```
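
As an extra Python-side sanity check (an addition to the steps above, using only standard PyTorch calls), you can confirm that the installed wheel matches your CUDA version and GPU:

```python
import torch

print(torch.__version__)                   # should match the torch version you installed (e.g. 2.2.0)
print(torch.version.cuda)                  # should match the cu118 / cu121 wheel you chose
print(torch.cuda.is_available())           # must be True for Unsloth to run
print(torch.cuda.get_device_capability())  # (8, 0) or higher means Ampere or newer (the "ampere" install path)
```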

## 📜 Documentation

```python
from unsloth import FastLanguageModel
import torch
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset
max_seq_length = 2048 # Supports RoPE Scaling internally, so choose any!
# Get LAION dataset
url = "https://huggingface.co/datasets/laion/OIG/resolve/main/unified_chip2.jsonl"
dataset = load_dataset("json", data_files = {"train" : url}, split = "train")

# 4bit pre quantized models we support for 4x faster downloading + no OOMs.
fourbit_models = [
    "unsloth/mistral-7b-bnb-4bit",
    "unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
    "unsloth/llama-2-7b-bnb-4bit",
    "unsloth/gemma-7b-bnb-4bit",
    "unsloth/gemma-7b-it-bnb-4bit", # Instruct version of Gemma 7b
    "unsloth/gemma-2b-bnb-4bit",
    "unsloth/gemma-2b-it-bnb-4bit", # Instruct version of Gemma 2b
    "unsloth/llama-3-8b-bnb-4bit", # [NEW] 15 Trillion token Llama-3
    "unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
] # More models at https://huggingface.co/unsloth

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-3-8b-bnb-4bit",
    max_seq_length = max_seq_length,
    dtype = None,
    load_in_4bit = True,
)

# Do model patching and add fast LoRA weights
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",],
    lora_alpha = 16,
    lora_dropout = 0, # Supports any, but = 0 is optimized
    bias = "none",    # Supports any, but = "none" is optimized
    # [NEW] "unsloth" uses 30% less VRAM, fits 2x larger batch sizes!
    use_gradient_checkpointing = "unsloth", # True or "unsloth" for very long context
    random_state = 3407,
    max_seq_length = max_seq_length,
    use_rslora = False,  # We support rank stabilized LoRA
    loftq_config = None, # And LoftQ
)

trainer = SFTTrainer(
    model = model,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    tokenizer = tokenizer,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        warmup_steps = 10,
        max_steps = 60,
        fp16 = not torch.cuda.is_bf16_supported(),
        bf16 = torch.cuda.is_bf16_supported(),
        logging_steps = 1,
        output_dir = "outputs",
        optim = "adamw_8bit",
        seed = 3407,
    ),
)
trainer.train()

# Go to https://github.com/unslothai/unsloth/wiki for advanced tips like
# (1) Saving to GGUF / merging to 16bit for vLLM
# (2) Continued training from a saved LoRA adapter
# (3) Adding an evaluation loop / OOMs
# (4) Customized chat templates
```
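
Building on the wiki pointers in the comments above, exporting the finetuned model looks roughly like the sketch below. The `save_pretrained_merged` / `save_pretrained_gguf` helpers and their arguments are taken from the Unsloth wiki, so treat this as a sketch and double-check the wiki for the current signatures.

```python
# Sketch only — see https://github.com/unslothai/unsloth/wiki for the exact API.

# Merge the LoRA adapter into 16-bit weights (e.g. for vLLM serving):
model.save_pretrained_merged("outputs_merged_16bit", tokenizer, save_method = "merged_16bit")

# Export a quantized GGUF file (e.g. for llama.cpp):
model.save_pretrained_gguf("outputs_gguf", tokenizer, quantization_method = "q4_k_m")

# Or push directly to the Hugging Face Hub (repo name is a placeholder; needs a write token):
# model.push_to_hub_merged("your_name/llama-3-8b-finetuned", tokenizer, save_method = "merged_16bit", token = "hf_...")
```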

<a name="DPO"></a>

## DPO Support

DPO (Direct Preference Optimization), PPO, and reward modelling all seem to work, according to independent third-party testing by Llama-Factory. We have a preliminary Google Colab notebook for reproducing Zephyr on a Tesla T4 here: notebook.

We're in 🤗Hugging Face's official docs! We're on the SFT docs and the DPO docs!

```python
from unsloth import FastLanguageModel, PatchDPOTrainer
PatchDPOTrainer()
import torch
from transformers import TrainingArguments
from trl import DPOTrainer

max_seq_length = 2048 # defined here since this snippet is standalone; supports RoPE scaling, so choose any

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/zephyr-sft-bnb-4bit",
    max_seq_length = max_seq_length,
    dtype = None,
    load_in_4bit = True,
)

# Do model patching and add fast LoRA weights
model = FastLanguageModel.get_peft_model(
    model,
    r = 64,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",],
    lora_alpha = 64,
    lora_dropout = 0, # Supports any, but = 0 is optimized
    bias = "none",    # Supports any, but = "none" is optimized
    # [NEW] "unsloth" uses 30% less VRAM, fits 2x larger batch sizes!
    use_gradient_checkpointing = "unsloth", # True or "unsloth" for very long context
    random_state = 3407,
    max_seq_length = max_seq_length,
)

dpo_trainer = DPOTrainer(
    model = model,
    ref_model = None,
    args = TrainingArguments(
        per_device_train_batch_size = 4,
        gradient_accumulation_steps = 8,
        warmup_ratio = 0.1,
        num_train_epochs = 3,
        fp16 = not torch.cuda.is_bf16_supported(),
        bf16 = torch.cuda.is_bf16_supported(),
        logging_steps = 1,
        optim = "adamw_8bit",
        seed = 42,
        output_dir = "outputs",
    ),
    beta = 0.1,
    train_dataset = YOUR_DATASET_HERE,
    # eval_dataset = YOUR_DATASET_HERE,
    tokenizer = tokenizer,
    max_length = 1024,
    max_prompt_length = 512,
)
dpo_trainer.train()
```
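
`YOUR_DATASET_HERE` above is a placeholder: TRL's `DPOTrainer` expects a preference dataset with `prompt`, `chosen` and `rejected` text columns. A minimal illustrative example (toy data, purely to show the expected format):

```python
from datasets import Dataset

# Toy preference data, for illustration only — replace with a real preference
# dataset that provides "prompt", "chosen" and "rejected" columns.
dpo_dataset = Dataset.from_dict({
    "prompt":   ["What is the capital of France?"],
    "chosen":   ["The capital of France is Paris."],
    "rejected": ["I don't know."],
})

# Then pass train_dataset = dpo_dataset to DPOTrainer above.
```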

## 🥇 Detailed Benchmarking Tables

| 1 A100 40GB | 🤗Hugging Face | Flash Attention 2 | 🦥Unsloth Open | Unsloth Equal | Unsloth Pro | Unsloth Max |
| ----------- | -------------- | ----------------- | -------------- | ------------- | ----------- | ----------- |
| Alpaca      | 1x    | 1.04x | 1.98x | 2.48x | 5.32x | 15.64x |
| code        | Code  | Code  | Code  | Code  |       |        |
| seconds     | 1040  | 1001  | 525   | 419   | 196   | 67     |
| memory MB   | 18235 | 15365 | 9631  | 8525  |       |        |
| % saved     |       | 15.74 | 47.18 | 53.25 |       |        |

### Llama-Factory 3rd party benchmarking

| Method      | Bits | TGS  | GRAM | Speed |
| ----------- | ---- | ---- | ---- | ----- |
| HF          | 16   | 2392 | 18GB | 100%  |
| HF+FA2      | 16   | 2954 | 17GB | 123%  |
| Unsloth+FA2 | 16   | 4007 | 16GB | 168%  |
| HF          | 4    | 2415 | 9GB  | 101%  |
| Unsloth+FA2 | 4    | 3726 | 7GB  | 160%  |

### Performance comparisons between popular models

<details> <summary>Click for specific model benchmarking tables (Mistral 7b, CodeLlama 34b etc.)</summary>

### Mistral 7b

| 1 A100 40GB | Hugging Face | Flash Attention 2 | Unsloth Open | Unsloth Equal | Unsloth Pro | Unsloth Max |
| ----------- | ------------ | ----------------- | ------------ | ------------- | ----------- | ----------- |
| Mistral 7B Slim Orca | 1x | 1.15x | 2.15x | 2.53x | 4.61x | 13.69x |
| code      | Code  | Code  | Code  | Code  |     |     |
| seconds   | 1813  | 1571  | 842   | 718   | 393 | 132 |
| memory MB | 32853 | 19385 | 12465 | 10271 |     |     |
| % saved   |       | 40.99 | 62.06 | 68.74 |     |     |

### CodeLlama 34b

| 1 A100 40GB | Hugging Face | Flash Attention 2 | Unsloth Open | Unsloth Equal | Unsloth Pro | Unsloth Max |
| ----------- | ------------ | ----------------- | ------------ | ------------- | ----------- | ----------- |
| Code Llama 34B | OOM ❌ | 0.99x | 1.87x | 2.61x | 4.27x | 12.82x |
| code      | ▶️ Code | Code  | Code  | Code  |     |     |
| seconds   | 1953    | 1982  | 1043  | 748   | 458 | 152 |
| memory MB | 40000   | 33217 | 27413 | 22161 |     |     |
| % saved   |         | 16.96 | 31.47 | 44.60 |     |     |

### 1 Tesla T4

| 1 T4 16GB | Hugging Face | Flash Attention | Unsloth Open | Unsloth Pro Equal | Unsloth Pro | Unsloth Max |
| --------- | ------------ | --------------- | ------------ | ----------------- | ----------- | ----------- |
| Alpaca    | 1x      | 1.09x | 1.69x | 1.79x | 2.93x | 8.3x |
| code      | ▶️ Code | Code  | Code  | Code  |      |      |
| seconds   | 1599    | 1468  | 942   | 894   | 545  | 193  |
| memory MB | 7199    | 7059  | 6459  | 5443  |      |      |
| % saved   |         | 1.94  | 10.28 | 24.39 |      |      |

### 2 Tesla T4s via DDP

| 2 T4 DDP | Hugging Face | Flash Attention | Unsloth Open | Unsloth Equal | Unsloth Pro | Unsloth Max |
| -------- | ------------ | --------------- | ------------ | ------------- | ----------- | ----------- |
| Alpaca    | 1x      | 0.99x | 4.95x | 4.44x | 7.28x | 20.61x |
| code      | ▶️ Code | Code  | Code  |       |       |        |
| seconds   | 9882    | 9946  | 1996  | 2227  | 1357  | 480    |
| memory MB | 9176    | 9128  | 6904  | 6782  |       |        |
| % saved   |         | 0.52  | 24.76 | 26.09 |       |        |
</details>

Performance comparisons on 1 Tesla T4 GPU:

<details> <summary>Click for Time taken for 1 epoch</summary>

One Tesla T4 on Google Colab: `bsz = 2, ga = 4, max_grad_norm = 0.3, num_train_epochs = 1, seed = 3047, lr = 2e-4, wd = 0.01, optim = "adamw_8bit", schedule = "linear", schedule_steps = 10`

| System | GPU | Alpaca (52K) | LAION OIG (210K) | Open Assistant (10K) | SlimOrca (518K) |
| ------ | --- | ------------ | ---------------- | -------------------- | --------------- |
| Huggingface  | 1 T4 | 23h 15m | 56h 28m | 8h 38m | 391h 41m |
| Unsloth Open | 1 T4 | 13h 7m (1.8x) | 31h 47m (1.8x) | 4h 27m (1.9x) | 240h 4m (1.6x) |
| Unsloth Pro  | 1 T4 | 3h 6m (7.5x)  | 5h 17m (10.7x) | 1h 7m (7.7x)  | 59h 53m (6.5x) |
| Unsloth Max  | 1 T4 | 2h 39m (8.8x) | 4h 31m (12.5x) | 0h 58m (8.9x) | 51h 30m (7.6x) |

**Peak Memory Usage**

| System | GPU | Alpaca (52K) | LAION OIG (210K) | Open Assistant (10K) | SlimOrca (518K) |
| ------ | --- | ------------ | ---------------- | -------------------- | --------------- |
| Huggingface  | 1 T4 | 7.3GB  | 5.9GB  | 14.0GB | 13.3GB |
| Unsloth Open | 1 T4 | 6.8GB  | 5.7GB  | 7.8GB  | 7.7GB  |
| Unsloth Pro  | 1 T4 | 6.4GB  | 6.4GB  | 6.4GB  | 6.4GB  |
| Unsloth Max  | 1 T4 | 11.4GB | 12.4GB | 11.9GB | 14.4GB |
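
Mapping the shorthand hyperparameters above onto `transformers.TrainingArguments` gives roughly the configuration below. This is our reconstruction, not the exact benchmark script; in particular, reading `schedule_steps = 10` as warmup steps is an assumption.

```python
from transformers import TrainingArguments

# Approximate reconstruction of the benchmark settings listed above.
benchmark_args = TrainingArguments(
    per_device_train_batch_size = 2,   # bsz = 2
    gradient_accumulation_steps = 4,   # ga = 4
    max_grad_norm = 0.3,
    num_train_epochs = 1,
    seed = 3047,
    learning_rate = 2e-4,              # lr = 2e-4
    weight_decay = 0.01,               # wd = 0.01
    optim = "adamw_8bit",
    lr_scheduler_type = "linear",      # schedule = "linear"
    warmup_steps = 10,                 # assumption: schedule_steps = 10 means warmup steps
    output_dir = "outputs",
)
```
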
</details>

<details> <summary>Click for Performance Comparisons on 2 Tesla T4 GPUs via DDP:</summary>

**Time taken for 1 epoch**

Two Tesla T4s on Kaggle: `bsz = 2, ga = 4, max_grad_norm = 0.3, num_train_epochs = 1, seed = 3047, lr = 2e-4, wd = 0.01, optim = "adamw_8bit", schedule = "linear", schedule_steps = 10`

| System | GPU | Alpaca (52K) | LAION OIG (210K) | Open Assistant (10K) | SlimOrca (518K) * |
| ------ | --- | ------------ | ---------------- | -------------------- | ----------------- |
| Huggingface | 2 T4 | 84h 47m | 163h 48m | 30h 51m | 1301h 24m * |
| Unsloth Pro | 2 T4 | 3h 20m (25.4x) | 5h 43m (28.7x) | 1h 12m (25.7x) | 71h 40m (18.1x) * |
| Unsloth Max | 2 T4 | 3h 4m (27.6x)  | 5h 14m (31.3x) | 1h 6m (28.1x)  | 54h 20m (23.9x) * |

**Peak Memory Usage on a Multi GPU System (2 GPUs)**

| System | GPU | Alpaca (52K) | LAION OIG (210K) | Open Assistant (10K) | SlimOrca (518K) * |
| ------ | --- | ------------ | ---------------- | -------------------- | ----------------- |
| Huggingface | 2 T4 | 8.4GB \| 6GB   | 7.2GB \| 5.3GB | 14.3GB \| 6.6GB | 10.9GB \| 5.9GB * |
| Unsloth Pro | 2 T4 | 7.7GB \| 4.9GB | 7.5GB \| 4.9GB | 8.5GB \| 4.9GB  | 6.2GB \| 4.7GB *  |
| Unsloth Max | 2 T4 | 10.5GB \| 5GB  | 10.6GB \| 5GB  | 10.6GB \| 5GB   | 10.5GB \| 5GB *   |
</details>

<br>

## Thank You to