TinyEngine

This is the official implementation of TinyEngine, a memory-efficient and high-performance neural network library for microcontrollers. TinyEngine is part of MCUNet, which also includes TinyNAS. MCUNet is a system-algorithm co-design framework for tiny deep learning on microcontrollers; TinyEngine and TinyNAS are co-designed to fit tight memory budgets.

The MCUNet and TinyNAS repo is here.

TinyML Project Website | MCUNetV1 | MCUNetV2 | MCUNetV3

Demo (Inference)

[Demo GIF: inference]

Demo (Training)

[Demo GIF: training]

News

If you are interested in getting updates, please sign up here to get notified!

Overview

Microcontrollers are low-cost, low-power hardware. They are widely deployed across a broad range of applications, but their tight memory budget (50,000x smaller than GPUs) makes deep learning deployment difficult.

MCUNet is a system-algorithm co-design framework for tiny deep learning on microcontrollers. It consists of TinyNAS and TinyEngine. They are co-designed to fit the tight memory budgets. With system-algorithm co-design, we can significantly improve the deep learning performance on the same tiny memory budget.

[Figure: MCUNet overview]

Specifically, TinyEngine is a memory-efficient inference library. Rather than optimizing each layer in isolation, TinyEngine plans its memory scheduling according to the overall network topology, which reduces memory usage and accelerates inference. It outperforms existing inference libraries such as TF-Lite Micro from Google, CMSIS-NN from Arm, and X-CUBE-AI from STMicroelectronics.
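
To make the whole-network memory planning concrete, here is a minimal C sketch (not TinyEngine's actual scheduler; the layer sizes and the simple chain topology are made up for illustration) that contrasts giving every activation tensor its own buffer with sizing one shared arena from the peak liveness across the whole chain.

```c
#include <stdio.h>
#include <stddef.h>

typedef struct {
    size_t in_bytes;   /* activation input size of the layer  */
    size_t out_bytes;  /* activation output size of the layer */
} LayerMem;

/* Layer-wise view: every activation tensor gets its own buffer. */
static size_t naive_total(const LayerMem *layers, int n) {
    size_t total = layers[0].in_bytes;       /* network input */
    for (int i = 0; i < n; ++i)
        total += layers[i].out_bytes;        /* every layer output kept alive */
    return total;
}

/* Whole-network view for a straight chain: only the current layer's input and
 * output are live at the same time, so a single arena of max(in + out) bytes
 * can be ping-ponged across all layers. */
static size_t planned_peak(const LayerMem *layers, int n) {
    size_t peak = 0;
    for (int i = 0; i < n; ++i) {
        size_t live = layers[i].in_bytes + layers[i].out_bytes;
        if (live > peak) peak = live;
    }
    return peak;
}

int main(void) {
    /* made-up activation sizes (bytes) for a small 4-layer chain */
    const LayerMem net[] = {
        {48 * 1024, 96 * 1024},
        {96 * 1024, 64 * 1024},
        {64 * 1024, 32 * 1024},
        {32 * 1024, 16 * 1024},
    };
    const int n = (int)(sizeof net / sizeof net[0]);
    printf("layer-wise buffers  : %zu kB\n", naive_total(net, n) / 1024);
    printf("planned shared arena: %zu kB\n", planned_peak(net, n) / 1024);
    return 0;
}
```

For a straight chain only one producer/consumer pair is live at a time, so the arena only needs the largest such pair; a real planner like TinyEngine's would additionally have to account for branches, residual connections, and in-place operators.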

TinyEngine adopts a set of optimization techniques, such as the in-place depth-wise convolution illustrated below, to accelerate inference and minimize the memory footprint.

[Figure: in-place depth-wise convolution]
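
As a concrete illustration of the in-place depth-wise idea, below is a minimal, hypothetical C sketch (1-D, kernel size 3, crude clamping instead of real requantization; not TinyEngine's generated kernel). Because each output channel of a depth-wise layer depends only on the corresponding input channel, a single channel-sized scratch buffer lets the layer overwrite its own input instead of allocating a full second activation tensor.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define CH  4
#define LEN 8

static void dw_conv1d_inplace(int8_t act[CH][LEN], int8_t w[CH][3]) {
    int8_t scratch[LEN]; /* holds one channel's output before overwriting it */
    for (int c = 0; c < CH; ++c) {
        for (int i = 0; i < LEN; ++i) {
            int32_t acc = 0;
            for (int k = -1; k <= 1; ++k) {
                int j = i + k;
                if (j >= 0 && j < LEN)
                    acc += act[c][j] * w[c][k + 1];   /* zero padding at borders */
            }
            /* crude clamp to int8, standing in for real requantization */
            if (acc > 127)  acc = 127;
            if (acc < -128) acc = -128;
            scratch[i] = (int8_t)acc;
        }
        /* channel c is no longer needed as an input: overwrite it in place */
        memcpy(act[c], scratch, sizeof scratch);
    }
}

int main(void) {
    int8_t act[CH][LEN];
    int8_t w[CH][3];
    for (int c = 0; c < CH; ++c) {
        for (int i = 0; i < LEN; ++i) act[c][i] = (int8_t)(i - c);
        w[c][0] = 1; w[c][1] = 2; w[c][2] = 1;
    }
    dw_conv1d_inplace(act, w);
    printf("channel 0 after in-place conv: %d %d %d %d ...\n",
           act[0][0], act[0][1], act[0][2], act[0][3]);
    return 0;
}
```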

By adopting these optimization techniques, TinyEngine not only enhances inference speed but also reduces peak memory, as shown in the figures below.

MAC/s improvement breakdown: [figure]

Peak memory reduction: [figure]

To sum up, TinyEngine is a useful inference infrastructure for MCU-based AI applications. Compared to existing libraries such as TF-Lite Micro, CMSIS-NN, and X-CUBE-AI, it improves inference speed by 1.1-18.6x and reduces peak memory by 1.3-3.6x.

[Figure: measured latency and peak-memory results]

Save Memory with Patch-based Inference: We can dramatically reduce the inference peak memory by using patch-based inference for the memory-intensive stage of CNNs. [figure]

For MobileNetV2, using patch-based inference allows us to reduce the peak memory by 8x. [figure]

With patch-based inference, TinyEngine achieves higher accuracy at the same memory budget. [figure]
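
The following is a minimal, hypothetical C sketch of patch-based execution (single channel, a two-layer stage of 3x3 valid convolutions with all-ones weights; not code emitted by TinyEngine's code generator). Computing the first, memory-intensive stage one output patch at a time means the intermediate activation only ever needs a patch-sized buffer instead of a full-resolution tensor, at the cost of recomputing the overlapping halo regions between patches.

```c
#include <stdio.h>
#include <string.h>

#define N   16         /* input resolution                           */
#define T   4          /* output tile (patch) size                   */
#define OUT (N - 4)    /* output resolution after two valid 3x3 convs */

/* one 3x3 valid convolution with all-ones weights (a box sum) */
static void conv3x3_valid(const float *in, int in_w, int in_h,
                          float *out, int out_w, int out_h) {
    (void)in_h;
    for (int y = 0; y < out_h; ++y)
        for (int x = 0; x < out_w; ++x) {
            float acc = 0.0f;
            for (int ky = 0; ky < 3; ++ky)
                for (int kx = 0; kx < 3; ++kx)
                    acc += in[(y + ky) * in_w + (x + kx)];
            out[y * out_w + x] = acc;
        }
}

int main(void) {
    static float input[N * N];
    static float output[OUT * OUT];
    for (int i = 0; i < N * N; ++i) input[i] = (float)(i % 7);

    /* Patch-based execution of the 2-layer stage: per tile we only need a
     * (T+4)x(T+4) input crop and a (T+2)x(T+2) intermediate buffer, instead
     * of a full (N-2)x(N-2) intermediate tensor for the whole image. */
    for (int ty = 0; ty < OUT; ty += T)
        for (int tx = 0; tx < OUT; tx += T) {
            float crop[(T + 4) * (T + 4)];
            float mid[(T + 2) * (T + 2)];
            float tile[T * T];
            /* gather the (overlapping) input receptive field of this tile */
            for (int y = 0; y < T + 4; ++y)
                memcpy(&crop[y * (T + 4)], &input[(ty + y) * N + tx],
                       (T + 4) * sizeof(float));
            conv3x3_valid(crop, T + 4, T + 4, mid, T + 2, T + 2);   /* layer 1 */
            conv3x3_valid(mid, T + 2, T + 2, tile, T, T);           /* layer 2 */
            for (int y = 0; y < T; ++y)
                memcpy(&output[(ty + y) * OUT + tx], &tile[y * T],
                       T * sizeof(float));
        }

    printf("output[0] = %.1f, output[last] = %.1f\n",
           output[0], output[OUT * OUT - 1]);
    printf("intermediate buffer per patch: %d floats vs %d for the whole image\n",
           (T + 2) * (T + 2), (N - 2) * (N - 2));
    return 0;
}
```

In this toy setting (N = 16, 4x4 patches) the intermediate buffer shrinks from 196 to 36 floats; applying the same reasoning to the early high-resolution stage of a real CNN is what produces the large peak-memory savings shown above.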

Code Structure

code_generator contains a Python library that compiles neural networks into low-level source code (C/C++).

TinyEngine contains a C/C++ library that implements operators and performs inference on microcontrollers.

examples contains examples of transforming TFLite models into our TinyEngine models.

tutorial contains the demo tutorial (covering both inference and training) for deploying a visual wake words (VWW) model onto microcontrollers.

assets contains miscellaneous assets.

Requirements

Setup for Users

First, clone this repository:

git clone --recursive https://github.com/mit-han-lab/tinyengine.git

(Optional) Using a virtual environment with conda is recommended.

conda create -n tinyengine python=3.6 pip
conda activate tinyengine

Install dependencies:

pip install -r requirements.txt

Setup for Developers

Install pre-commit hooks to automatically format changes in your code.

pre-commit install

Deployment Example

Please see the tutorial to learn how to deploy a visual wake words (VWW) model onto microcontrollers using TinyEngine. The tutorial covers both the inference demo and the training demo, so please take a look!

Measured Results

The latency results:

| net_id | TF-Lite Micro<br>@ 713b6ed | CMSIS-NN<br>@ 011bf32 | X-CUBE-AI<br>v7.3.0 | TinyEngine<br>@ 0363956 |
| --- | --- | --- | --- | --- |
| # mcunet models (VWW) | | | | |
| mcunet-vww0 | 587ms | 53ms | 32ms | 27ms |
| mcunet-vww1 | 1120ms | 97ms | 57ms | 51ms |
| mcunet-vww2 | 5310ms | 478ms | 269ms | 234ms |
| # mcunet models (ImageNet) | | | | |
| mcunet-in0 | 586ms | 51ms | 35ms | 25ms |
| mcunet-in1 | 1227ms | 103ms | 63ms | 56ms |
| mcunet-in2 | 6463ms | 642ms | 351ms | 280ms |
| mcunet-in3 | 7821ms | 770ms | 414ms | 336ms |
| mcunet-in4 | OOM | OOM | 516ms | 463ms |
| # baseline models | | | | |
| proxyless-w0.3-r64 | 512ms | 54ms | 35ms | 23ms |
| proxyless-w0.3-r176 | 3801ms | 380ms | 205ms | 176ms |
| mbv2-w0.3-r64 | 467ms | 43ms | 29ms | 23ms |

The peak memory (SRAM) results:

| net_id | TF-Lite Micro<br>@ 713b6ed | CMSIS-NN<br>@ 011bf32 | X-CUBE-AI<br>v7.3.0 | TinyEngine<br>@ 0363956 |
| --- | --- | --- | --- | --- |
| # mcunet models (VWW) | | | | |
| mcunet-vww0 | 163kB | 163kB | 88kB | 59kB |
| mcunet-vww1 | 220kB | 220kB | 113kB | 92kB |
| mcunet-vww2 | 385kB | 390kB | 201kB | 174kB |
| # mcunet models (ImageNet) | | | | |
| mcunet-in0 | 161kB | 161kB | 69kB | 49kB |
| mcunet-in1 | 219kB | 219kB | 106kB | 96kB |
| mcunet-in2 | 460kB | 469kB | 238kB | 215kB |
| mcunet-in3 | 493kB | 493kB | 243kB | 260kB |
| mcunet-in4 | OOM | OOM | 342kB | 416kB |
| # baseline models | | | | |
| proxyless-w0.3-r64 | 128kB | 136kB | 97kB | 35kB |
| proxyless-w0.3-r176 | 453kB | 453kB | 221kB | 259kB |
| mbv2-w0.3-r64 | 173kB | 173kB | 88kB | 61kB |

The Flash memory usage results:

| net_id | TF-Lite Micro<br>@ 713b6ed | CMSIS-NN<br>@ 011bf32 | X-CUBE-AI<br>v7.3.0 | TinyEngine<br>@ 0363956 |
| --- | --- | --- | --- | --- |
| # mcunet models (VWW) | | | | |
| mcunet-vww0 | 627kB | 646kB | 463kB | 453kB |
| mcunet-vww1 | 718kB | 736kB | 534kB | 521kB |
| mcunet-vww2 | 1016kB | 1034kB | 774kB | 741kB |
| # mcunet models (ImageNet) | | | | |
| mcunet-in0 | 1072kB | 1090kB | 856kB | 842kB |
| mcunet-in1 | 937kB | 956kB | 737kB | 727kB |
| mcunet-in2 | 1084kB | 1102kB | 849kB | 830kB |
| mcunet-in3 | 1091kB | 1106kB | 867kB | 835kB |
| mcunet-in4 | OOM | OOM | 1843kB | 1825kB |
| # baseline models | | | | |
| proxyless-w0.3-r64 | 1065kB | 1084kB | 865kB | 777kB |
| proxyless-w0.3-r176 | 1065kB | 1084kB | 865kB | 779kB |
| mbv2-w0.3-r64 | 940kB | 959kB | 768kB | 690kB |

Citation

If you find the project helpful, please consider citing our papers:

@article{
  lin2020mcunet,
  title={Mcunet: Tiny deep learning on iot devices},
  author={Lin, Ji and Chen, Wei-Ming and Lin, Yujun and Gan, Chuang and Han, Song},
  journal={Advances in Neural Information Processing Systems},
  volume={33},
  year={2020}
}

@inproceedings{
  lin2021mcunetv2,
  title={MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning},
  author={Lin, Ji and Chen, Wei-Ming and Cai, Han and Gan, Chuang and Han, Song},
  booktitle={Annual Conference on Neural Information Processing Systems (NeurIPS)},
  year={2021}
}

@inproceedings{
  lin2022ondevice,
  title = {On-Device Training Under 256KB Memory},
  author = {Lin, Ji and Zhu, Ligeng and Chen, Wei-Ming and Wang, Wei-Chen and Gan, Chuang and Han, Song},
  booktitle={Annual Conference on Neural Information Processing Systems (NeurIPS)},
  year = {2022}
}

Related Projects

MCUNet: Tiny Deep Learning on IoT Devices (NeurIPS'20)

MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning (NeurIPS'21)

MCUNetV3: On-Device Training Under 256KB Memory (NeurIPS'22)