<a href="README_CN.md">简体中文</a>

# onnx-tool

A tool for ONNX models:

Supported Models:


## Build LLM model and profile

<a id="build-profile"></a> Profile 10 Hugging Face models within one second. Save the ONNX models as simply as llama.cpp does. code ref

model name(1k input) | MACs(G) | Parameters(G) | KV Cache(G)
---|---|---|---
gpt-j-6b | 6277 | 6.05049 | 0.234881
yi-1.5-34B | 35862 | 34.3889 | 0.125829
microsoft/phi-2 | 2948 | 2.77944 | 0.167772
Phi-3-mini-4k | 4083 | 3.82108 | 0.201327
Phi-3-small-8k-instruct | 7912 | 7.80167 | 0.0671089
Phi-3-medium-4k-instruct | 14665 | 13.9602 | 0.104858
Llama3-8B | 8029 | 8.03026 | 0.0671089
Llama-3.1-70B-Japanese-Instruct-2407 | 72888 | 70.5537 | 0.167772
QWen-7B | 7509 | 7.61562 | 0.0293601
Qwen2_72B_Instruct | 74895 | 72.7062 | 0.167772
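
The KV Cache column can be reproduced with simple arithmetic: per layer and per token, the cache stores one key and one value vector. A minimal sketch (the layer counts and head dimensions below are assumptions taken from the models' public Hugging Face configs, not from this tool):

```python
def kv_cache_elements(num_layers: int, kv_hidden: int, seq_len: int) -> int:
    """Cached elements: keys + values (factor 2), per layer, per token.

    kv_hidden is num_kv_heads * head_dim; for grouped-query attention
    models it is smaller than the model's hidden size.
    """
    return 2 * num_layers * kv_hidden * seq_len

# gpt-j-6b: 28 layers, full-attention hidden size 4096, 1024-token input
print(kv_cache_elements(28, 4096, 1024) / 1e9)  # 0.234881 G, as in the table

# Llama3-8B: 32 layers, 8 KV heads x head_dim 128 = 1024, 1024-token input
print(kv_cache_elements(32, 1024, 1024) / 1e9)  # 0.0671089 G, as in the table
```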

Get first-token latency and next-token latency from hardware specs.

model_type_4bit_kv16bit | memory_size(GB) | Ultra-155H_first_latency | Ultra-155H_next_latency | Arc-A770_first_latency | Arc-A770_next_latency | H100-PCIe_first_latency | H100-PCIe_next_latency
---|---|---|---|---|---|---|---
gpt-j-6b | 3.75678 | 1.0947 | 0.041742 | 0.0916882 | 0.00670853 | 0.0164015 | 0.00187839
yi-1.5-34B | 19.3369 | 5.77095 | 0.214854 | 0.45344 | 0.0345302 | 0.0747854 | 0.00966844
microsoft/phi-2 | 1.82485 | 0.58361 | 0.0202761 | 0.0529628 | 0.00325866 | 0.010338 | 0.000912425
Phi-3-mini-4k | 2.49649 | 0.811173 | 0.0277388 | 0.0745356 | 0.00445802 | 0.0147274 | 0.00124825
Phi-3-small-8k-instruct | 4.2913 | 1.38985 | 0.0476811 | 0.117512 | 0.00766303 | 0.0212535 | 0.00214565
Phi-3-medium-4k-instruct | 7.96977 | 2.4463 | 0.088553 | 0.198249 | 0.0142317 | 0.0340576 | 0.00398489
Llama3-8B | 4.35559 | 1.4354 | 0.0483954 | 0.123333 | 0.00777784 | 0.0227182 | 0.00217779
Llama-3.1-70B-Japanese-Instruct-2407 | 39.4303 | 11.3541 | 0.438114 | 0.868475 | 0.0704112 | 0.137901 | 0.0197151
QWen-7B | 4.03576 | 1.34983 | 0.0448417 | 0.11722 | 0.00720671 | 0.0218461 | 0.00201788
Qwen2_72B_Instruct | 40.5309 | 11.6534 | 0.450343 | 0.890816 | 0.0723766 | 0.14132 | 0.0202654
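
The next-token latencies in this table are consistent with a simple memory-bound model of decoding: each generated token streams the whole (4-bit) model once, so latency ≈ memory_size / peak_bandwidth. A sketch (the bandwidth figures are assumptions taken from public vendor specs, not from this tool):

```python
def next_token_latency(memory_size_gb: float, bandwidth_gbps: float) -> float:
    """Seconds to stream the whole model once at peak memory bandwidth."""
    return memory_size_gb / bandwidth_gbps

# Approximate peak memory bandwidth (GB/s), from public vendor specs.
BANDWIDTH = {'Ultra-155H': 90.0, 'Arc-A770': 560.0, 'H100-PCIe': 2000.0}

# gpt-j-6b with 4-bit weights + 16-bit KV cache: 3.75678 GB (first row above).
for device, bw in BANDWIDTH.items():
    print(device, next_token_latency(3.75678, bw))
# -> 0.041742, 0.00670853..., 0.00187839 -- the table's next-token columns
```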

## Basic Parse and Edit

<a id="basic-parse-edit"></a> You can load any ONNX file with onnx_tool.Model:
Change the graph structure with onnx_tool.Graph;
change op attributes and IO tensors with onnx_tool.Node;
change tensor data or type with onnx_tool.Tensor.
To apply your changes, just call the save_model method of onnx_tool.Model or onnx_tool.Graph.

Please refer to benchmark/examples.py.


## Shape Inference & Profile Model

<a id="shapeinfer-profile"></a> All profiling data is built on top of shape-inference results.
An ONNX graph with tensor shapes:

<p align="center"> <img src="data/shape_inference.jpg"> </p>

Regular model profiling table:

<p align="center"> <img src="data/macs_counting.png"> </p>

Sparse profiling table:

<p id="sparsity" align="center"> <img src="data/sparse_model.png"> </p>

Introduction: data/Profile.md.
PyTorch usage: data/PytorchUsage.md.
TensorFlow usage: data/TensorflowUsage.md.
Examples: benchmark/examples.py.


## Compute Graph with Shape Engine

<a id="compute_graph-header"></a> From a raw graph to a compute graph:

<p id="compute_graph" align="center"> <img src="data/compute_graph.png"> </p>

Remove the shape-calculation layers (created by ONNX export) to get a compute graph. Use the Shape Engine to update tensor shapes at runtime.
Examples: benchmark/shape_regress.py, benchmark/examples.py.
Integrate the compute graph and Shape Engine into a C++ inference engine: data/inference_engine.md


## Memory Compression

<a id="memory-compression"></a>

### Activation Compression

Activation memory, also called temporary memory, holds each op's output tensor. Only activations marked as model outputs must be kept, so you don't have to reserve a separate buffer for every activation tensor; activations can instead share an optimized, reused memory pool.

For large language models and high-resolution CV models, activation memory compression is key to saving memory.
On most models, the compression method shrinks activation memory to only a few percent of its native size (the compression ratio below).
For example:

model | Native Memory Size(MB) | Compressed Memory Size(MB) | Compression Ratio(%)
---|---|---|---
StableDiffusion(VAE_encoder) | 14,245 | 540 | 3.7
StableDiffusion(VAE_decoder) | 25,417 | 1,140 | 4.48
StableDiffusion(Text_encoder) | 215 | 5 | 2.5
StableDiffusion(UNet) | 36,135 | 2,232 | 6.2
GPT2 | 40 | 2 | 6.9
BERT | 2,170 | 27 | 1.25

Code example: benchmark/compression.py
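
The idea behind activation reuse can be illustrated with a liveness-based estimate: instead of giving every activation its own buffer, size a shared pool to the peak of concurrently-live tensors. A toy sketch, with sizes and lifetimes made up purely for illustration (not taken from any real model):

```python
# Each activation tensor: (size_mb, first_use_step, last_use_step).
acts = [(100, 0, 1), (50, 1, 2), (50, 2, 3), (25, 3, 4)]

# Naive plan: one private buffer per activation.
naive_mb = sum(size for size, _, _ in acts)

# Reuse plan lower bound: peak of concurrently-live activation sizes.
last_step = max(last for _, _, last in acts)
peak_mb = max(
    sum(size for size, first, last in acts if first <= t <= last)
    for t in range(last_step + 1)
)
print(naive_mb, peak_mb)  # 225 vs 150: reuse needs only the live peak
```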

### Weight Compression

An fp32 model with 7B parameters takes 28 GB of disk and memory space. You cannot even run the model if your device doesn't have that much memory, so weight compression is critical for running large language models. As a reference, a 7B model with int4 symmetric per-block(32) quantization (llama.cpp's q4_0 quantization method) is only ~0.156x the size of the fp32 model.
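
The ~0.156x figure follows directly from the block layout, assuming each 32-weight q4_0 block stores 32 4-bit quants plus one fp32 scale:

```python
BLOCK = 32                            # weights per quantization block
q4_0_block_bytes = BLOCK // 2 + 4     # 16 bytes of 4-bit quants + fp32 scale
fp32_block_bytes = BLOCK * 4          # the same 32 weights in fp32
ratio = q4_0_block_bytes / fp32_block_bytes
print(ratio)                          # 20/128 = 0.15625, i.e. ~0.156x

# For a 7B-parameter model:
print(7e9 * 4 / 1e9, 'GB fp32')           # 28.0 GB
print(7e9 * 4 * ratio / 1e9, 'GB q4_0')   # 4.375 GB
```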

Current support:

Code examples: benchmark/examples.py.


## How to install

pip install onnx-tool

OR

pip install --upgrade git+https://github.com/ThanatosShinji/onnx-tool.git

python>=3.6

If pip install onnx-tool fails because of onnx's installation, you may try installing a lower onnx version first, e.g. pip install onnx==1.8.1.
Then run pip install onnx-tool again.


## Known Issues


## Results of ONNX Model Zoo and SOTA models

<a id='models'></a> Some models have dynamic input shapes, so their MACs vary with the input shape. The input shapes used for these results are written in data/public/config.py. These ONNX models, with all tensors' shapes, can be downloaded: baidu drive (code: p91k) google drive

<p id="results" align="center"> <table> <tr> <td>
Model | Params(M) | MACs(M)
---|---|---
<a href="benchmark/transfomer_models.py">GPT-J 1 layer</a> | 464 | 173,398
<a href="benchmark/transfomer_models.py">MPT 1 layer</a> | 261 | 79,894
text_encoder | 123.13 | 6,782
UNet2DCondition | 859.52 | 888,870
VAE_encoder | 34.16 | 566,371
VAE_decoder | 49.49 | 1,271,959
SqueezeNet 1.0 | 1.23 | 351
AlexNet | 60.96 | 665
GoogleNet | 6.99 | 1,606
googlenet_age | 5.98 | 1,605
LResNet100E-IR | 65.22 | 12,102
BERT-Squad | 113.61 | 22,767
BiDAF | 18.08 | 9.87
EfficientNet-Lite4 | 12.96 | 1,361
Emotion | 12.95 | 877
Mask R-CNN | 46.77 | 92,077
</td> <td>
Model | Params(M) | MACs(M)
---|---|---
<a href="benchmark/transfomer_models.py">LLaMa 1 layer</a> | 618 | 211,801
BEVFormer Tiny | 33.7 | 210,838
rvm_mobilenetv3 | 3.73 | 4,289
yolov4 | 64.33 | 3,319
ConvNeXt-L | 229.79 | 34,872
edgenext_small | 5.58 | 1,357
SSD | 19.98 | 216,598
RealESRGAN | 16.69 | 73,551
ShuffleNet | 2.29 | 146
GPT-2 | 137.02 | 1,103
T5-encoder | 109.62 | 686
T5-decoder | 162.62 | 1,113
RoBERTa-BASE | 124.64 | 688
Faster R-CNN | 44.10 | 46,018
FCN ResNet-50 | 35.29 | 37,056
ResNet50 | 25 | 3,868
</td> </tr> </table> </p>