nGraph is an open-source C++ library, compiler, and runtime for Deep Learning frameworks.

nGraph Library

Welcome to the open-source repository for the Intel® nGraph™ Library. Our code base provides a compiler and runtime suite of tools (APIs) designed to give developers maximum flexibility in their software design: you can create or customize a scalable solution using any framework while avoiding the device-level hardware lock-in that is common among AI vendors. A neural network model compiled with nGraph can run on any of our currently supported backends, and it will run on any backends we support in the future with minimal disruption to your model. With nGraph, you can co-evolve your software and hardware capabilities to stay at the forefront of your industry.

The nGraph Compiler is Intel's graph compiler for Artificial Neural Networks. Documentation in this repo describes how you can program any framework to run training and inference computations on a variety of Backends including Intel® Architecture Processors (CPUs), Intel® Nervana™ Neural Network Processors (NNPs), cuDNN-compatible graphics cards (GPUs), custom VPUs like Movidius, and many others. The default CPU Backend also provides an interactive Interpreter mode that can be used to zero in on a DL model and create custom nGraph optimizations that can be used to further accelerate training or inference, in whatever scenario you need.
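To make the workflow above concrete, here is a minimal sketch of building and executing a small computation through the Python API on the default CPU backend. This is an illustrative example, not taken from this README: the `ng.parameter` and `ng.runtime` names and the `'CPU'` backend string are assumptions based on the nGraph Python package, and may differ across releases.

```python
import numpy as np
import ngraph as ng

# Declare two 2x2 float32 inputs to the computation graph.
a = ng.parameter(shape=[2, 2], dtype=np.float32, name='A')
b = ng.parameter(shape=[2, 2], dtype=np.float32, name='B')

# Compose an element-wise expression; nGraph records this as a graph,
# which the compiler can then optimize for the selected backend.
model = (a + b) * a

# Select the CPU backend and compile the graph into a callable.
runtime = ng.runtime(backend_name='CPU')
computation = runtime.computation(model, a, b)

# Execute with concrete NumPy inputs.
result = computation(np.ones((2, 2), np.float32),
                     np.ones((2, 2), np.float32))
```

Swapping `backend_name` is the only change needed to target a different supported backend, which is the portability the paragraph above describes.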

nGraph provides both a C++ API for framework developers and a Python API which can run inference on models imported from ONNX.
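As a hedged sketch of the ONNX path: assuming the companion ngraph-onnx package and a trained `model.onnx` file that you supply, importing a model and running inference might look like the following. The `import_onnx_file` helper, the returned model dictionary keys, and the input shape are assumptions for illustration, not guarantees from this README.

```python
import numpy as np
import ngraph as ng
from ngraph_onnx.onnx_importer.importer import import_onnx_file

# Parse the ONNX file into nGraph model descriptions
# (assumed to return a list of dicts with 'output' and 'inputs' entries).
models = import_onnx_file('model.onnx')
model = models[0]

# Compile the imported graph for the CPU backend.
runtime = ng.runtime(backend_name='CPU')
computation = runtime.computation(model['output'], *model['inputs'])

# Run inference on a sample input (shape depends on your model).
result = computation(np.ones((1, 3, 224, 224), dtype=np.float32))
```

See the Python API documentation in this repository for the exact importer entry points in your release.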

nGraph ecosystem

Framework     Bridge available?   ONNX support?
neon          yes                 yes
MXNet*        yes                 yes
TensorFlow*   yes                 yes
PyTorch*      not yet             yes
Chainer*      not yet             yes
CNTK*         not yet             yes
Caffe2*       not yet             yes


See our install docs for how to get started.

For this early release, we provide framework integration guides for compiling MXNet- and TensorFlow-based projects. If you already have a trained model, we've put together a getting-started guide that shows how to import a deep learning model and start working with the nGraph APIs.


Please submit your questions, feature requests and bug reports via GitHub issues.

How to Contribute

We welcome community contributions to nGraph. If you have an idea for how to improve the Library:

  • See the contrib guide for code formatting and style guidelines.
  • Share your proposal via GitHub issues.
  • Ensure you can build the product and run all the examples with your patch.
  • In the case of a larger feature, create a test.
  • Submit a pull request.
  • Make sure your PR passes all CI tests. Note: our Travis-CI service runs only on a CPU backend on Linux. We will run additional tests in other environments.
  • We will review your contribution and, if any additional fixes or modifications are necessary, may provide feedback to guide you. When accepted, your pull request will be merged to the repository.