
Build status (CI jobs by backend and OS):

| Backend | Windows | Linux |
| --- | --- | --- |
| null (for unit tests) | null backend | |
| DirectMLX | DirectMLX backend, Node Binding, Memory leak check | |
| OpenVINO | OpenVINO backend, Node Binding | OpenVINO backend, Node Binding |
| XNNPACK | XNNPACK backend | XNNPACK backend |
| oneDNN | oneDNN backend | oneDNN backend |
| MLAS | MLAS backend | |

A separate clang format check also runs in CI.

WebNN-native

WebNN-native is a native implementation of the Web Neural Network API.

It provides several building blocks:

WebNN-native uses the code of other open source projects:

Build and Run

Install depot_tools

WebNN-native uses the Chromium build system and dependency management, so you need to install depot_tools and add it to the `PATH`.
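For example, on Linux or macOS the tools can be fetched with git as follows (the clone location and POSIX shell syntax here are illustrative; see the depot_tools documentation for Windows instructions):

```shell
# Clone depot_tools into the current directory
> git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git

# Add it to the PATH for the current shell session
> export PATH="$PWD/depot_tools:$PATH"
```

To make the change permanent, add the `export` line to your shell profile.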

Notes:

Get the code

Get the source code as follows:

# Clone the repo as "webnn-native"
> git clone https://github.com/webmachinelearning/webnn-native.git webnn-native && cd webnn-native

# Bootstrap the gclient configuration
> cp scripts/standalone.gclient .gclient

# Fetch external dependencies and toolchains with gclient
> gclient sync

Setting up the build

Generate build files using `gn args out/Debug` or `gn args out/Release`.

A text editor will open asking for build options; the most common option is `is_debug=true/false`. Run `gn args out/Release --list` to show all available options.

To build with a backend, set the corresponding option from the following table.

| Backend | Option |
| --- | --- |
| DirectML | `webnn_enable_dml=true` |
| DirectMLX | `webnn_enable_dmlx=true` |
| OpenVINO | `webnn_enable_openvino=true` |
| XNNPACK | `webnn_enable_xnnpack=true` |
| oneDNN | `webnn_enable_onednn=true` |
| MLAS | `webnn_enable_mlas=true` |
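For instance, a Release build with the DirectML backend enabled could be configured like this (a sketch; which backend options apply depends on your platform):

```shell
# Open the build configuration in an editor
> gn args out/Release

# Then add these lines in the editor that appears:
#   is_debug = false
#   webnn_enable_dml = true
```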

Build

Then use ninja -C out/Release or ninja -C out/Debug to build WebNN-native.

Notes

Run tests

Run unit tests:

> ./out/Release/webnn_unittests

Run end2end tests on a default device:

> ./out/Release/webnn_end2end_tests

You can also specify a device for the end2end tests using the `-d` option, for example:

> ./out/Release/webnn_end2end_tests -d gpu

Currently "cpu", "gpu", and "default" are supported; more devices will be supported in the future.
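Likewise, to run the end2end tests explicitly on the CPU device (one of the supported device names listed above):

```shell
> ./out/Release/webnn_end2end_tests -d cpu
```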

Notes:

Run examples

License

Apache 2.0 Public License, please see LICENSE.