<br> <div align="center"> <img src="Arm_NN_horizontal_blue.png" alt="Arm NN Logo" width="300"/> </div>

Arm NN

Arm NN is the most performant machine learning (ML) inference engine for Android and Linux, accelerating ML on Arm Cortex-A CPUs and Arm Mali GPUs. This ML inference engine is an open source SDK which bridges the gap between existing neural network frameworks and power-efficient Arm IP.

Arm NN outperforms generic ML libraries thanks to Arm architecture-specific optimizations (e.g. SVE2) provided by the Arm Compute Library (ACL). To target Arm Ethos-N NPUs, Arm NN uses the Ethos-N NPU Driver. For Arm Cortex-M acceleration, please see CMSIS-NN.

Arm NN is written using portable C++17 and built using CMake - enabling builds for a wide variety of target platforms, from a wide variety of host environments. Python developers can interface with Arm NN through the use of our Arm NN TF Lite Delegate.

Quick Start Guides

The Arm NN TF Lite Delegate provides the widest ML operator support in Arm NN and is an easy way to accelerate your ML model. To start using the TF Lite Delegate, first download the Pre-Built Binaries for the latest release of Arm NN. Using a Python interpreter, you can load your TF Lite model into the Arm NN TF Lite Delegate and run accelerated inference. Please see this Quick Start Guide on GitHub or this more comprehensive Arm Developer Guide for information on how to accelerate your TF Lite model using the Arm NN TF Lite Delegate.
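The Python flow described above can be sketched as follows. This is a minimal, illustrative example: the delegate library name, option keys and model path are assumptions based on the Quick Start Guide, so check them against your downloaded release archive.

```python
# Sketch: running a TF Lite model through the Arm NN TF Lite Delegate.
# The library name and option keys below are assumptions; consult the
# Arm NN Quick Start Guide for the options supported by your release.
DELEGATE_PATH = "libarmnnDelegate.so"             # from the Arm NN release archive
DELEGATE_OPTIONS = {"backends": "CpuAcc,GpuAcc",  # prefer accelerated backends
                    "logging-severity": "info"}

def make_interpreter(model_path):
    # Imported lazily so this module loads even without tflite_runtime installed.
    import tflite_runtime.interpreter as tflite

    armnn_delegate = tflite.load_delegate(DELEGATE_PATH, DELEGATE_OPTIONS)
    interpreter = tflite.Interpreter(model_path=model_path,
                                     experimental_delegates=[armnn_delegate])
    interpreter.allocate_tensors()
    return interpreter
```

After this, calling `invoke()` on the interpreter runs supported operators on the Arm NN backends you listed, falling back to the reference TF Lite runtime for anything unsupported.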

We provide Debian packages for Arm NN, which are a quick way to start using Arm NN and the TF Lite Parser (albeit with less ML operator support than the TF Lite Delegate). There is an installation guide available here which provides instructions on how to install the Arm NN Core and the TF Lite Parser for Ubuntu 20.04.

To build Arm NN from scratch, we provide the Arm NN Build Tool. This tool consists of parameterized bash scripts, accompanied by a Dockerfile, for building Arm NN and its dependencies, including the Arm Compute Library (ACL). It supersedes the majority of the existing Arm NN build guides and offers a user-friendly way to build Arm NN. The main benefit of building Arm NN from scratch is the ability to choose exactly which components to build, targeted to your ML project.<br>

Pre-Built Binaries

| Operating System | Architecture-specific Release Archive (Download) |
|---|---|
| Android 11 "R/Red Velvet Cake" (API level 30) | |
| Android 12 "S/Snow Cone" (API level 31) | |
| Android 13 "T/Tiramisu" (API level 33) | |
| Android 14 "U/Upside Down Cake" (API level 34) | |

Arm NN also provides pre-built multi-ISA binaries. The v8a binary supports the base Armv8-A architecture and upwards, while the v8.2a binary supports Armv8.2-A and upwards, including SVE, SVE2, FP16 and some dot-product kernels. These kernels require hardware with the corresponding features.

| Multi ISA Architecture | Release Archive (Download) |
|---|---|
| Linux Arm v8a | |
| Linux Arm v8.2a | |
| Android 31 v8a | |
| Android 31 v8.2a | |

Software Overview

The Arm NN SDK supports ML models in TensorFlow Lite (TF Lite) and ONNX formats.

Arm NN's TF Lite Delegate accelerates TF Lite models through Python or C++ APIs. Supported TF Lite operators are accelerated by Arm NN, and any unsupported operators fall back to the reference TF Lite runtime, ensuring extensive ML operator support. The recommended way to use Arm NN is to convert your model to TF Lite format and use the TF Lite Delegate. Please refer to the Quick Start Guides for more information on how to use the TF Lite Delegate.

Arm NN also provides TF Lite and ONNX parsers, which are C++ libraries for integrating TF Lite or ONNX models into your ML application. Please note that these parsers provide less extensive ML operator coverage than the Arm NN TF Lite Delegate.

Android ML application developers have a number of options for using Arm NN:

Arm also provides an Android-NN-Driver which implements a hardware abstraction layer (HAL) for the Android NNAPI. When the Android NN Driver is integrated on an Android device, ML models used in Android applications will automatically be accelerated by Arm NN.

For more information about the Arm NN components, please refer to our documentation.

Arm NN is a key component of the machine learning platform, which is part of the Linaro Machine Intelligence Initiative.

For FAQs and troubleshooting advice, see the FAQ or take a look at previous GitHub Issues.

Get Involved

The best way to get involved is by using our software. If you need help or encounter an issue, please raise it as a GitHub Issue. Feel free to have a look at any of our open issues too. We also welcome feedback on our documentation.

Feature requests without a volunteer to implement them are closed and given the 'Help wanted' label; these can be found here. Once you find a suitable issue, feel free to re-open it and add a comment so that Arm NN engineers know you are working on it and can help.

When the feature is implemented, the 'Help wanted' label will be removed.

Contributions

The Arm NN project welcomes contributions. For more details on contributing to Arm NN please see the Contributing page on the MLPlatform.org website, or see the Contributor Guide.

In particular, if you'd like to implement your own backend alongside our CPU, GPU and NPU backends, there are guides for backend development: the Backend development guide and the Dynamic backend development guide.

Disclaimer

The armnn/tests directory contains tests used during Arm NN development. Many of them depend on third-party IP, model protobufs and image files not distributed with Arm NN. The dependencies for some tests are available freely on the Internet, for those who wish to experiment, but they won't run out of the box.

License

Arm NN is provided under the MIT license. See LICENSE for more information. Contributions to this project are accepted under the same license.

Individual files contain the following tag instead of the full license text.

SPDX-License-Identifier: MIT

This enables machine processing of license information based on the SPDX License Identifiers that are available here: http://spdx.org/licenses/
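For instance, the per-file tags can be machine-processed with a few lines of script. This is a minimal sketch: the `licenses_in`/`scan_tree` helper names and the `*.cpp` glob pattern are illustrative, not part of Arm NN.

```python
import re
from pathlib import Path

# Matches tags of the form "SPDX-License-Identifier: MIT".
SPDX_RE = re.compile(r"SPDX-License-Identifier:\s*(\S+)")

def licenses_in(text):
    """Return every SPDX identifier found in the given file contents."""
    return SPDX_RE.findall(text)

def scan_tree(root):
    # Map each source file under `root` to the SPDX identifiers it declares.
    # The glob pattern is illustrative; widen it for other file types.
    return {str(p): licenses_in(p.read_text(errors="ignore"))
            for p in Path(root).rglob("*.cpp")}
```

Tools such as license scanners apply the same idea at scale, which is exactly what the short per-file tag is designed to enable.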

Inclusive language commitment

Arm NN conforms to Arm's inclusive language policy and, to the best of our knowledge, does not contain any non-inclusive language.

If you find something that concerns you, please email terms@arm.com.

Third-party

Third party tools used by Arm NN:

| Tool | License (SPDX ID) | Description | Version | Provenance |
|---|---|---|---|---|
| cxxopts | MIT | A lightweight C++ option parser library | 3.1.1 | https://github.com/jarro2783/cxxopts |
| doctest | MIT | Header-only C++ testing framework | 2.4.6 | https://github.com/onqtam/doctest |
| fmt | MIT | {fmt} is an open-source formatting library providing a fast and safe alternative to C stdio and C++ iostreams. | 8.30 | https://github.com/fmtlib/fmt |
| ghc | MIT | A header-only single-file std::filesystem compatible helper library | 1.3.2 | https://github.com/gulrak/filesystem |
| half | MIT | IEEE 754 conformant 16-bit half-precision floating point library | 1.12.0 | http://half.sourceforge.net |
| mapbox/variant | BSD | A header-only alternative to 'boost::variant' | 1.1.3 | https://github.com/mapbox/variant |
| stb | MIT | Image loader, resize and writer | 2.16 | https://github.com/nothings/stb |

Build Flags

Arm NN uses the following security-related build flags in its code:

| Build flags |
|---|
| -Wall |
| -Wextra |
| -Wold-style-cast |
| -Wno-missing-braces |
| -Wconversion |
| -Wsign-conversion |
| -Werror |