
<div align="center"> <picture> <source media="(prefers-color-scheme: dark)" srcset="docs/source/images/olive-white-text.png"> <source media="(prefers-color-scheme: light)" srcset="docs/source/images/olive-black-text.png"> <img alt="olive text" src="docs/source/images/olive-black-text.png" height="100" style="max-width: 100%;"> </picture>


AI Model Optimization Toolkit for the ONNX Runtime

</div>

Given a model and target hardware, Olive (an abbreviation of Onnx LIVE) composes the most suitable optimization techniques to output the most efficient ONNX model(s) for inference on cloud or edge, while taking constraints such as accuracy and latency into consideration.

✅ Benefits of using Olive

📰 News Highlights

Here are some recent videos, blog articles and labs that highlight Olive:

For a full list of news and blogs, read the news archive.

🚀 Getting Started

Notebooks available!

The following notebooks demonstrate key optimization workflows with Olive:

| Title | Description | Time Required | Notebook Links |
| --- | --- | --- | --- |
| Quickstart | In this notebook you will use Olive's automatic optimizer to optimize a model for ONNX Runtime on a CPU device, then run inference on it using the ONNX Runtime Generate API. | 5 mins | Download / Open in Colab |
| Quantize and Finetune | In this notebook you will (1) quantize Llama-3.2-1B-Instruct using the AWQ algorithm, (2) fine-tune the quantized model, (3) optimize the fine-tuned model for ONNX Runtime, and (4) run inference on the fine-tuned model using the ONNX Runtime Generate API. | 15 mins | Download / Open in Colab |

✨ Quickstart

If you prefer not to use Jupyter notebooks, you can work through the following steps instead.

1. Install Olive CLI

We recommend installing Olive in a virtual environment or a conda environment.

pip install "olive-ai[ort-genai,auto-opt]"
pip install transformers==4.44.2

> [!NOTE]
> Olive has optional dependencies that can be installed to enable additional features. Please refer to the Olive package config for the list of extras and their dependencies.

2. Automatic Optimizer

In this quickstart you'll be optimizing HuggingFaceTB/SmolLM2-135M-Instruct, whose Hugging Face repo contains model files in several precisions that Olive does not need. To minimize the download, cache only the original model files (safetensors and configuration) from the main folder of the Hugging Face repo using:

huggingface-cli download HuggingFaceTB/SmolLM2-135M-Instruct "*.json" "*.safetensors" "*.txt"
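The wildcard patterns restrict the download to configuration, weight, and tokenizer files. As a rough illustration of how such patterns filter a repo's file list (the file names below are hypothetical, and `huggingface_hub` applies fnmatch-style matching):

```python
from fnmatch import fnmatch

# Hypothetical repo listing; only files matching one of the requested
# patterns would be downloaded.
repo_files = [
    "config.json", "model.safetensors", "tokenizer.json",
    "merges.txt", "onnx/model_fp16.onnx", "model.gguf",
]
patterns = ["*.json", "*.safetensors", "*.txt"]

# Keep a file if it matches any of the patterns.
wanted = [f for f in repo_files if any(fnmatch(f, p) for p in patterns)]
print(wanted)
```

Here the ONNX and GGUF variants are skipped, which is exactly the saving the command above is after.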

Next, run the automatic optimization (tip: if you're using PowerShell rather than bash, replace the \ line continuations with `):

olive auto-opt \
    --model_name_or_path HuggingFaceTB/SmolLM2-135M-Instruct \
    --output_path models/smolm2 \
    --device cpu \
    --provider CPUExecutionProvider \
    --use_ort_genai \
    --precision int4 \
    --log_level 1

The automatic optimizer will:

  1. Acquire the model from the local cache (note: if you skipped the model download step then the entire contents of the Hugging Face model repo will be downloaded).
  2. Capture the ONNX Graph and store the weights in an ONNX data file.
  3. Optimize the ONNX Graph.
  4. Quantize the model to int4 using the RTN method.
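Step 4 uses round-to-nearest (RTN) quantization. As a rough sketch of the idea (not Olive's actual implementation, which operates on the ONNX graph and packs the int4 weights), symmetric per-group RTN picks one scale per group of weights so the largest magnitude in the group maps to the int4 extreme, then rounds:

```python
# Illustrative sketch of symmetric per-group RTN int4 quantization.
import numpy as np

def rtn_quantize(weights, bits=4, group_size=32):
    """Quantize a weight tensor group-by-group with round-to-nearest."""
    qmax = 2 ** (bits - 1) - 1                     # 7 for int4
    groups = weights.reshape(-1, group_size)
    scales = np.abs(groups).max(axis=1, keepdims=True) / qmax
    scales = np.where(scales == 0, 1.0, scales)    # guard all-zero groups
    q = np.clip(np.round(groups / scales), -qmax - 1, qmax).astype(np.int8)
    return q, scales

def rtn_dequantize(q, scales):
    """Recover approximate float weights from int values and scales."""
    return q.astype(np.float32) * scales

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 32)).astype(np.float32)
q, s = rtn_quantize(w)
w_hat = rtn_dequantize(q, s).reshape(w.shape)
max_err = np.abs(w - w_hat).max()  # bounded by half a quantization step
```

Grouping keeps the scales local, so one outlier weight only degrades the precision of its own group rather than the whole tensor.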

> [!TIP]
> Olive can automatically optimize popular model architectures like Llama, Phi, Qwen, Gemma, etc. out-of-the-box - see the detailed list here. You can also optimize other model architectures by providing details on the model's inputs/outputs (io_config).

3. Inference on the ONNX Runtime

The ONNX Runtime (ORT) is a fast and lightweight cross-platform inference engine with bindings for popular programming languages such as Python, C/C++, C#, Java, JavaScript, etc. ORT enables you to infuse AI models into your applications so that inference is handled on-device. The following code creates a simple console-based chat interface that runs inference on your optimized model - you can choose between Python or C#.

You'll be prompted to enter a message to the SLM - for example, you could ask `what is the golden ratio`, or `def print_hello_world():`. To exit, type `exit` in the chat interface.

Python Option

Create a Python file called app.py and copy and paste the following code:

# app.py
import onnxruntime_genai as og

model_folder = "models/smolm2/model"

# Load the base model and tokenizer
model = og.Model(model_folder)
tokenizer = og.Tokenizer(model)
tokenizer_stream = tokenizer.create_stream()

# Set the max length to something sensible by default,
# since otherwise it will be set to the entire context length
search_options = {}
search_options['max_length'] = 200
search_options['past_present_share_buffer'] = False

chat_template = "<|im_start|>user\n{input}<|im_end|>\n<|im_start|>assistant\n"

text = input("Input: ")

# Keep asking for input phrases
while text != "exit":
    if not text:
        print("Error, input cannot be empty")
        text = input("Input: ")
        continue

    # generate prompt (prompt template + input)
    prompt = f'{chat_template.format(input=text)}'

    # encode the prompt using the tokenizer
    input_tokens = tokenizer.encode(prompt)

    params = og.GeneratorParams(model)
    params.set_search_options(**search_options)
    params.input_ids = input_tokens
    generator = og.Generator(model, params)

    print("Output: ", end='', flush=True)
    # stream the output
    try:
        while not generator.is_done():
            generator.compute_logits()
            generator.generate_next_token()

            new_token = generator.get_next_tokens()[0]
            print(tokenizer_stream.decode(new_token), end='', flush=True)
    except KeyboardInterrupt:
        print("  --control+c pressed, aborting generation--")

    print()
    text = input("Input: ")

To run the code, execute python app.py.

C# Option

Create a new C# console app and install the Microsoft.ML.OnnxRuntimeGenAI NuGet package into your project:

mkdir ortapp
cd ortapp
dotnet new console
dotnet add package Microsoft.ML.OnnxRuntimeGenAI --version 0.5.2

Next, copy and paste the following code into your Program.cs file and update the modelPath variable to be the absolute path of where you stored your optimized model.

// Program.cs
using Microsoft.ML.OnnxRuntimeGenAI;

internal class Program
{
    private static void Main(string[] args)
    {
        string modelPath = @"models/smolm2/model";

        Console.Write("Loading model from " + modelPath + "...");
        using Model model = new(modelPath);
        Console.Write("Done\n");
        using Tokenizer tokenizer = new(model);
        using TokenizerStream tokenizerStream = tokenizer.CreateStream();


        while (true)
        {
            Console.Write("User:");

            string prompt = "<|im_start|>user\n" +
                            Console.ReadLine() +
                            "<|im_end|>\n<|im_start|>assistant\n";
            var sequences = tokenizer.Encode(prompt);

            using GeneratorParams gParams = new GeneratorParams(model);
            gParams.SetSearchOption("max_length", 200);
            gParams.SetInputSequences(sequences);
            gParams.SetSearchOption("past_present_share_buffer", false);
            Console.Out.Write("\nAI:");

            using Generator generator = new(model, gParams);
            while (!generator.IsDone())
            {
                generator.ComputeLogits();
                generator.GenerateNextToken();
                var token = generator.GetSequence(0)[^1];
                Console.Out.Write(tokenizerStream.Decode(token));
                Console.Out.Flush();
            }
            Console.WriteLine();
        }
    }
}

Run the application:

dotnet run

🎓 Learn more

🤝 Contributions and Feedback

⚖️ License

Copyright (c) Microsoft Corporation. All rights reserved.

Licensed under the MIT License.
