cortex.onnx

cortex.onnx is a high-efficiency C++ inference engine for edge computing, focused on the Windows platform and using DirectML for GPU acceleration.

It is a dynamic library that can be loaded by any server at runtime.

Repo Structure

.
├── base -> Engine interface
├── examples -> Server example to integrate engine
├── onnxruntime-genai -> Upstream onnxruntime-genai
├── src -> Engine implementation
└── third-party -> Dependencies of the cortex.onnx project

Build from source

This guide provides step-by-step instructions for building cortex.onnx from source on Windows systems.

Clone the Repository

First, you need to clone the cortex.onnx repository:

git clone --recurse-submodules https://github.com/janhq/cortex.onnx.git

If you don't have git, you can download the source code as an archive from the cortex.onnx GitHub page.
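If you cloned without --recurse-submodules, fetch the onnxruntime-genai submodule afterwards (note that a source archive does not include submodules, so git is still required for this step):

cd cortex.onnx
git submodule update --init --recursive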

Build library with server example
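The exact build targets are defined in the repository's build scripts; as a rough sketch, assuming a standard CMake layout at the repository root, an out-of-source Release build would look like:

cd cortex.onnx
cmake -S . -B build
cmake --build build --config Release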

Quickstart

Step 1: Download a model

Clone a model from https://huggingface.co/cortexhub and check out its dml branch.
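For example, assuming a hypothetical llama3 repository under cortexhub (pick any model available there; git-lfs is needed for the weight files), clone it into the path used in Step 3:

git lfs install
# "llama3" is an illustrative repository name; substitute the model you want
git clone --branch dml https://huggingface.co/cortexhub/llama3 ./model/llama3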

Step 2: Start the server
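Run the example server built earlier; the requests below assume it listens on localhost:3928. The binary location shown here is illustrative and depends on your build layout:

# illustrative path; adjust to where your build placed the example server
.\build\examples\server\Release\server.exe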

Step 3: Load the model

curl http://localhost:3928/loadmodel \
  -H 'Content-Type: application/json' \
  -d '{
    "model_path": "./model/llama3",
    "model_alias": "llama3",
    "system_prompt": "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n",
    "user_prompt": "<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n",
    "ai_prompt": "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
  }'

Step 4: Make an inference

curl http://localhost:3928/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Who won the world series in 2020?"
      }
    ],
    "model": "llama3"
  }'
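Since the endpoint follows the OpenAI chat completions format, a successful response should look roughly like the following (the exact fields returned depend on the engine):

{
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "message": {
        "role": "assistant",
        "content": "The Los Angeles Dodgers won the World Series in 2020."
      }
    }
  ]
}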

Table of parameters

Parameter      Type    Description
model_path     String  The file path to the ONNX model.
model_alias    String  The alias used to refer to the model in subsequent requests.
system_prompt  String  The prompt template for system messages.
user_prompt    String  The prompt template for user messages.
ai_prompt      String  The prompt template for assistant responses.