nGraph is an open source C++ library, compiler, and runtime for deep learning frameworks.
# nGraph Compiler stack

## Quick start

To begin using nGraph with popular frameworks to accelerate deep learning workloads on CPU for inference, please refer to the links below.

| Framework (Version) | Installation guide | Notes |
| ------------------- | ------------------ | ----- |
| TensorFlow* 1.12 | Pip install or Build from source | 20 validated workloads |
| MXNet* 1.3 | Pip install or Build from source | 18 validated workloads |
| ONNX 1.3 | Pip install | 14 validated workloads |

## Python wheels for nGraph

The Python wheels for nGraph have been tested and are supported on the following 64-bit systems:

- Ubuntu 16.04 or later
- CentOS 7.6
- Debian 10
- macOS 10.14.3 (Mojave)
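On a supported system, installing the wheel is a single pip command. As a sketch (the package name `ngraph-core` is taken from the Python build instructions and may differ between releases; check PyPI for your version):

```shell
# Install the standalone nGraph Python wheel (64-bit Linux/macOS, Python 3).
# Package name is illustrative -- consult the release notes for the exact name.
pip install ngraph-core
```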

Frameworks using the nGraph Compiler stack to execute workloads have shown up to a 45X performance boost compared to native framework implementations. We've also seen performance boosts on workloads that are not included in the list of validated workloads, thanks to nGraph's powerful subgraph pattern matching.

Additionally, we have integrated nGraph with PlaidML to provide deep learning performance acceleration on Intel, NVIDIA, and AMD GPUs. More details on the current architecture of the nGraph Compiler stack can be found in Architecture and features, and recent changes to the stack are explained in the Release Notes.

## What is nGraph Compiler?

nGraph Compiler aims to accelerate the development of AI workloads using any deep learning framework and their deployment to a variety of hardware targets. We strongly believe in providing freedom, performance, and ease of use to AI developers.

The diagram below shows the deep learning frameworks and hardware targets supported by nGraph. NNP-L and NNP-I in the diagram refer to Intel's next-generation deep learning accelerators: the Intel® Nervana™ Neural Network Processors for Learning and for Inference, respectively. Future plans for supporting additional deep learning frameworks and backends are outlined in the ecosystem section.

While the ecosystem shown above is fully functional, for the Beta release of nGraph we have validated performance for deep learning inference on CPUs such as Intel® Xeon®. The Gold release, targeted for June 2019, will feature broader workload coverage, including quantized graphs (int8), and will add support for dynamic shapes.

Our documentation has extensive information about how to use the nGraph Compiler stack to create an nGraph computational graph, integrate custom frameworks, and interact with supported backends. If you wish to contribute to the project, please don't hesitate to ask questions in GitHub issues after reviewing our contribution guide below.
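As a minimal sketch of what building and running a computational graph with the C++ API looks like (loosely based on the documentation's ABC example; it assumes nGraph is installed and on your include/link paths, and exact method signatures may vary between releases):

```cpp
// Sketch: build and run (a + b) * c over 2x2 float tensors.
// Requires an installed nGraph library; API details may differ by version.
#include <memory>
#include <vector>
#include <ngraph/ngraph.hpp>

using namespace ngraph;

int main()
{
    Shape shape{2, 2};
    auto a = std::make_shared<op::Parameter>(element::f32, shape);
    auto b = std::make_shared<op::Parameter>(element::f32, shape);
    auto c = std::make_shared<op::Parameter>(element::f32, shape);

    // Overloaded operators build the graph: (a + b) * c
    auto f = std::make_shared<Function>((a + b) * c, ParameterVector{a, b, c});

    // Compile and execute on a backend, e.g. the CPU backend
    auto backend = runtime::Backend::create("CPU");
    auto t_a = backend->create_tensor(element::f32, shape);
    auto t_b = backend->create_tensor(element::f32, shape);
    auto t_c = backend->create_tensor(element::f32, shape);
    auto t_r = backend->create_tensor(element::f32, shape);

    std::vector<float> v{1, 2, 3, 4};
    t_a->write(v.data(), 0, v.size() * sizeof(float));
    t_b->write(v.data(), 0, v.size() * sizeof(float));
    t_c->write(v.data(), 0, v.size() * sizeof(float));

    auto exec = backend->compile(f);
    exec->call({t_r}, {t_a, t_b, t_c});
    return 0;
}
```

The same graph, once constructed, can be compiled for any supported backend by changing the string passed to `runtime::Backend::create`.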

## How to contribute

We welcome community contributions to nGraph. If you have an idea for how to improve it:

- See the contrib guide for code formatting and style guidelines.
- Share your proposal via GitHub issues.
- Ensure you can build the product and run all the examples with your patch.
- For a larger feature, create a test.
- Submit a pull request.
- Make sure your PR passes all CI tests. Note: our Travis-CI service runs only on a CPU backend on Linux. We will run additional tests in other environments.
- We will review your contribution and, if any additional fixes or modifications are necessary, may provide feedback to guide you. When accepted, your pull request will be merged to the repository.
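For reference, a typical Linux build-and-test cycle looks roughly like the following (directory names and flags are illustrative; see the build documentation for the authoritative steps):

```shell
# Out-of-source CMake build of nGraph (illustrative; options vary by release)
git clone https://github.com/NervanaSystems/ngraph.git
cd ngraph && mkdir build && cd build
cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/ngraph_dist
make -j"$(nproc)"
make install
# Run the unit tests before submitting your PR
./test/unit-test
```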