
nGraph has moved to OpenVINO: https://github.com/openvinotoolkit/openvino

nGraph Compiler stack

<div align="left"> <h4> <a href="./ABOUT.md">Architecture &amp; features</a><span> | </span> <a href="./ecosystem-overview.md" >Ecosystem</a><span> | </span> <a href="https://www.ngraph.ai/documentation/project/release-notes">Release notes</a><span> | </span> <a href="https://www.ngraph.ai/documentation">Documentation</a><span> | </span> <a href="#how-to-contribute" >Contribution guide</a><span> | </span> <a href="https://github.com/NervanaSystems/ngraph/blob/master/LICENSE">License: Apache 2.0</a> </h4> </div>

Quick start

To begin using nGraph with popular frameworks, please refer to the links below.

| Framework (Version) | Installation guide | Notes |
| --- | --- | --- |
| TensorFlow* | Pip install or Build from source | 20 validated workloads |
| ONNX 1.5 | Pip install | 17 validated workloads |
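
As a rough sketch of the TensorFlow path: with the `ngraph-tensorflow-bridge` wheel installed next to TensorFlow 1.x, importing `ngraph_bridge` is all that is needed to route supported operations through nGraph, while the TensorFlow code itself stays unchanged. The package and module names here reflect the bridge as published at the time and may differ between releases, so treat this as illustrative rather than definitive.

```python
# Minimal sketch: running an ordinary TensorFlow 1.x graph through the nGraph
# bridge. Assumes `pip install ngraph-tensorflow-bridge` succeeded; importing
# ngraph_bridge (for its side effect) registers nGraph with TensorFlow.
import tensorflow as tf
import ngraph_bridge  # noqa: F401 -- import side effect enables nGraph

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])
product = tf.matmul(a, b)

with tf.Session() as sess:  # TF 1.x session API, which the bridge targets
    print(sess.run(product))
```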

Python wheels for nGraph

The Python wheels for nGraph have been tested and are supported on the following 64-bit systems:

To install via pip, run:

```
pip install --upgrade pip==19.3.1
pip install ngraph-core
```
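
With `ngraph-core` installed, a small computation can be built and run directly through the nGraph Python API. The sketch below uses `ng.parameter`, `ng.runtime`, and `runtime.computation` as exposed by the wheel; exact names and signatures may vary slightly between releases, so treat it as a minimal example rather than a definitive recipe.

```python
# Minimal sketch: build a two-input graph and execute it on the CPU backend
# using the nGraph Python API shipped with the ngraph-core wheel.
import numpy as np
import ngraph as ng

a = ng.parameter(shape=[2, 2], dtype=np.float32, name="A")
b = ng.parameter(shape=[2, 2], dtype=np.float32, name="B")
model = (a + b) * a  # element-wise add, then multiply

runtime = ng.runtime(backend_name="CPU")
computation = runtime.computation(model, a, b)

result = computation(np.ones((2, 2), dtype=np.float32),
                     np.full((2, 2), 2.0, dtype=np.float32))
print(result)  # expected: [[3. 3.], [3. 3.]]
```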

Frameworks using the nGraph Compiler stack to execute workloads have shown up to a 45X performance boost compared to native framework implementations. We've also seen performance boosts on workloads that are not included in the list of validated workloads, thanks to nGraph's powerful subgraph pattern matching.

Additionally, we have integrated nGraph with PlaidML to provide deep learning performance acceleration on Intel, NVIDIA, and AMD GPUs. More details on the current architecture of the nGraph Compiler stack can be found in Architecture and features, and recent changes to the stack are explained in the Release Notes.
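
If the local nGraph build includes the PlaidML backend, the same Python computation can in principle be retargeted by changing only the backend name. The backend string "PlaidML" below is an assumption based on the backend's registered name and requires a build with PlaidML support, so treat this as a hedged sketch.

```python
# Hedged sketch: selecting a GPU-capable backend by name. Assumes nGraph was
# built with the PlaidML backend; the exact backend string may differ by build.
import ngraph as ng

runtime = ng.runtime(backend_name="PlaidML")  # assumption: registered backend name
# computations created from this runtime would then execute via PlaidML
```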

What is nGraph Compiler?

nGraph Compiler aims to accelerate the development of AI workloads using any deep learning framework and their deployment to a variety of hardware targets. We strongly believe in providing freedom, performance, and ease of use to AI developers.

The diagram below shows the deep learning frameworks and hardware targets supported by nGraph. NNP-T and NNP-I in the diagram refer to Intel's next-generation deep learning accelerators: the Intel® Nervana™ Neural Network Processors for Training and for Inference, respectively. Future plans for supporting additional deep learning frameworks and backends are outlined in the ecosystem section.

Our documentation has extensive information about how to use the nGraph Compiler stack to create an nGraph computational graph, integrate custom frameworks, and interact with supported backends. If you wish to contribute to the project, please don't hesitate to ask questions in GitHub issues after reviewing the contribution guide below.

How to contribute

We welcome community contributions to nGraph. If you have an idea for how to improve it, share your proposal via GitHub issues and open a pull request.

We will review your contribution and, if any additional fixes or modifications are necessary, may provide feedback to guide you. Once accepted, your pull request will be merged into the repository.