Bug bash (#43)
* Update README.md

* Update Versioning.md

* Update rename_manylinux.sh

Remove duplicate word

* Update README.md

Remove a 'the' as ONNX Runtime is a proper noun.

* Update CUDA version to 9.1 and cuDNN version to 7.1

* Update ReleaseManagement.md

* Put TensorFlow copyright headers

Around 10 lines of code are borrowed from TFLite.

* Update README.md

Mention C++ API

* Update README.md

Fix link

* Update C_API.md

Fix broken link to onnxruntime_c_api.h

* Update ABI.md

Delete mention of COM and fix 'ONNX Runtime' to be two words

* Update README.md

* Update README.md

* Update C_API.md
RyanUnderhill authored and pranavsharma committed Nov 28, 2018
1 parent a9b52f3 commit 3b11409
Showing 8 changed files with 46 additions and 50 deletions.
4 changes: 2 additions & 2 deletions BUILD.md
@@ -49,15 +49,15 @@ The complete list of build options can be found by running `./build.sh (or ./bui
| Linux_CI_Dev | Ubuntu 16.04 | python=3.5 | Unit tests; ONNXModelZoo | [script](tools/ci_build/github/linux/run_build.sh) |
| Linux_CI_GPU_Dev | Ubuntu 16.04 | python=3.5; nvidia-docker | Unit tests; ONNXModelZoo | [script](tools/ci_build/github/linux/run_build.sh) |
| Windows_CI_Dev | Windows Server 2016 | python=3.5 | Unit tests; ONNXModelZoo | [script](build.bat) |
| Windows_CI_GPU_Dev | Windows Server 2016 | cuda=9.0; cudnn=7.0; python=3.5 | Unit tests; ONNXModelZoo | [script](build.bat) |
| Windows_CI_GPU_Dev | Windows Server 2016 | cuda=9.1; cudnn=7.1; python=3.5 | Unit tests; ONNXModelZoo | [script](build.bat) |

## Additional Build Flavors
The complete list of build flavors can be seen by running `./build.sh --help` or `./build.bat --help`. Here are some common flavors.

### Windows CUDA Build
ONNX Runtime supports CUDA builds. You will need to download and install [CUDA](https://developer.nvidia.com/cuda-toolkit) and [CUDNN](https://developer.nvidia.com/cudnn).

ONNX Runtime is built and tested with CUDA 9.0 and CUDNN 7.0 using the Visual Studio 2017 14.11 toolset (i.e. Visual Studio 2017 v15.3).
ONNX Runtime is built and tested with CUDA 9.1 and CUDNN 7.1 using the Visual Studio 2017 14.11 toolset (i.e. Visual Studio 2017 v15.3).
CUDA versions up to 9.2 and CUDNN version 7.1 should also work with versions of Visual Studio 2017 up to and including v15.7; however, you may need to explicitly install and use the 14.11 toolset, because CUDA and CUDNN are only compatible with earlier versions of Visual Studio 2017.

To install the Visual Studio 2017 14.11 toolset, see <https://blogs.msdn.microsoft.com/vcblog/2017/11/15/side-by-side-minor-version-msvc-toolsets-in-visual-studio-2017/>
23 changes: 13 additions & 10 deletions README.md
@@ -15,30 +15,33 @@ In order to support popular and leading AI models, the runtime stays up-to-date

## Cross Platform
ONNX Runtime offers:
* APIs for Python, C#, and C
* APIs for Python, C#, and C (experimental)
* Available for Linux, Windows, and Mac

See API documentation and package installation instructions [below](#Installation).

## High Performance
You can use the ONNX Runtime with both CPU and GPU hardware. You can also plug in additional execution providers to ONNX Runtime. With many graph optimizations and various accelerators, ONNX Runtime can often provide lower latency and higher efficiency compared to other runtimes. This provides smoother end-to-end customer experiences and lower costs from improved machine utilization.
You can use ONNX Runtime with both CPU and GPU hardware. You can also plug in additional execution providers to ONNX Runtime. With many graph optimizations and various accelerators, ONNX Runtime can often provide lower latency and higher efficiency compared to other runtimes. This provides smoother end-to-end customer experiences and lower costs from improved machine utilization.

Currently ONNX Runtime supports CUDA, MKL, and MKL-DNN for computation acceleration, with more coming soon. To add an execution provider, please refer to [this page](docs/AddingExecutionProvider.md).
Currently ONNX Runtime supports CUDA and MKL-DNN (with the option to build with MKL) for computation acceleration, with more coming soon. To add an execution provider, please refer to [this page](docs/AddingExecutionProvider.md).

# Getting Started
If you need a model:
* Check out the [ONNX Model Zoo](https://github.com/onnx/models) for ready-to-use pre-trained models.
* To get an ONNX model by exporting from various frameworks, see [ONNX Tutorials](https://github.com/onnx/tutorials).

If you already have an ONNX model, just [install the runtime](#Installation) for your machine to try it out. One easy way to operationalize the model on the cloud is by using [Azure Machine Learning](https://azure.microsoft.com/en-us/services/machine-learning-service). See a how-to guide [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-build-deploy-onnx).
If you already have an ONNX model, just [install the runtime](#Installation) for your machine to try it out. One easy way to deploy the model on the cloud is by using [Azure Machine Learning](https://azure.microsoft.com/en-us/services/machine-learning-service). See detailed instructions [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-build-deploy-onnx).

# Installation
## APIs and Official Builds
| API Documentation | CPU package | GPU package |
|-----|-------------|-------------|
| [Python](https://docs.microsoft.com/en-us/python/api/overview/azure/onnx/intro?view=azure-onnx-py) | [Windows](TODO)<br>[Linux](https://pypi.org/project/onnxruntime/)<br>[Mac](TODO)| [Windows](TODO)<br>[Linux](https://pypi.org/project/onnxruntime-gpu/) |
| [Python](https://docs.microsoft.com/en-us/python/api/overview/azure/onnx/intro?view=azure-onnx-py) | [Windows](https://pypi.org/project/onnxruntime/)<br>[Linux](https://pypi.org/project/onnxruntime/)<br>[Mac](https://pypi.org/project/onnxruntime/)| [Windows](https://pypi.org/project/onnxruntime-gpu)<br>[Linux](https://pypi.org/project/onnxruntime-gpu/) |
| [C#](docs/CSharp_API.md) | [Windows](TODO)<br>Linux - Coming Soon<br>Mac - Coming Soon| Coming Soon |
| [C](docs/C_API.md) | [Windows](TODO)<br>[Linux](TODO) | Coming Soon |
| [C (experimental)](docs/C_API.md) | Coming Soon | Coming Soon |

<br><br>
ONNX Runtime also provides a non-ABI [C++ API](onnxruntime/core/session/inference_session.h).

## Build Details
For details on the build configurations and information on how to create a build, see [Build ONNX Runtime](BUILD.md).
@@ -51,11 +54,11 @@ For an overview of the high level architecture and key decisions in the technica

ONNX Runtime has an extensible design that makes it versatile enough to support a wide array of models with high performance.

* [Add a custom operator/kernel](AddingCustomOp.md)
* [Add an execution provider](AddingExecutionProvider.md)
* [Add a custom operator/kernel](docs/AddingCustomOp.md)
* [Add an execution provider](docs/AddingExecutionProvider.md)
* [Add a new graph
transform](../include/onnxruntime/core/graph/graph_transformer.h)
* [Add a new rewrite rule](../include/onnxruntime/core/graph/rewrite_rule.h)
transform](include/onnxruntime/core/graph/graph_transformer.h)
* [Add a new rewrite rule](include/onnxruntime/core/graph/rewrite_rule.h)

# Contribute
We welcome your contributions! Please see the [contribution guidelines](CONTRIBUTING.md).
25 changes: 11 additions & 14 deletions docs/ABI.md
@@ -1,6 +1,6 @@
# ONNXRuntime ABI
# ONNX Runtime ABI

We release ONNXRuntime as both static library and shared library on Windows, Linux and Mac OS X. [ABI (Application Binary Interface)](https://en.wikipedia.org/wiki/Application_binary_interface) is only for the shared library. It allows you upgrade ONNXRuntime to a newer version without recompiling.
We release ONNX Runtime as both a static library and a shared library on Windows, Linux, and Mac OS X. [ABI (Application Binary Interface)](https://en.wikipedia.org/wiki/Application_binary_interface) applies only to the shared library. It allows you to upgrade ONNX Runtime to a newer version without recompiling.

The ABI contains:

@@ -11,36 +11,33 @@ The ABI contains:
[C API](C_API.md)

# Integration
Q: Should I statically link to ONNXRuntime or dynamically?
A: On Windows, Any custom op DLL must dynamically link to ONNXRuntime.
Dynamical linking also helps on solving diamond dependency problem. For example, if part of your program depends on ONNX 1.2 but ONNXRuntime depends on ONNX 1.3, then dynamically linking to them would be better.
Q: Should I statically link to ONNX Runtime or dynamically?
A: On Windows, any custom op DLL must dynamically link to ONNX Runtime.
Dynamic linking also helps solve the diamond dependency problem. For example, if part of your program depends on ONNX 1.2 but ONNX Runtime depends on ONNX 1.3, then dynamically linking to them would be better.

Q: Any requirement on CUDA version? My program depends on CUDA 9.0, but the ONNXRuntime binary was built with CUDA 9.1. Is it ok to put them together?
A: Yes. Because ONNXRuntime statically linked to CUDA.
Q: Are there any requirements on the CUDA version? My program depends on CUDA 9.0, but the ONNX Runtime binary was built with CUDA 9.1. Is it OK to put them together?
A: Yes, because ONNX Runtime is statically linked to CUDA.

# Dev Notes

## Global Variables
Global variables may get constructed or destructed inside "DllMain". There are significant limits on what you can safely do in a DLL entry point. See ['DLL General Best Practices'](https://docs.microsoft.com/en-us/windows/desktop/dlls/dynamic-link-library-best-practices). For example, you can't put a ONNXRuntime InferenceSession into a global variable.

## Component Object Model (COM)
ONNXRuntime doesn't contain a COM interface, whether it's on Windows or Linux. Because .Net Core doesn't support COM on Linux and we need to make ONNXRuntime available to .Net Core.
Global variables may get constructed or destructed inside "DllMain". There are significant limits on what you can safely do in a DLL entry point. See ['DLL General Best Practices'](https://docs.microsoft.com/en-us/windows/desktop/dlls/dynamic-link-library-best-practices). For example, you can't put an ONNX Runtime InferenceSession into a global variable.
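A minimal sketch of the safer pattern (the `session_handle` type and `create_session` function below are hypothetical stand-ins, not real ONNX Runtime names): create expensive objects lazily on first use rather than at global scope, so no constructor runs inside DllMain.

```c
/* Sketch only: session_handle and create_session are hypothetical
 * stand-ins for an expensive object such as an InferenceSession. */
#include <stdlib.h>

typedef struct session_handle { int placeholder; } session_handle;

static session_handle* create_session(const char* model_path) {
  (void)model_path;  /* stand-in for real, expensive initialization */
  return calloc(1, sizeof(session_handle));
}

/* A global object with a nontrivial constructor would be initialized
 * while the DLL loads (inside DllMain); creating it lazily on first
 * use avoids that. Note: not thread-safe as written. */
static session_handle* get_session(void) {
  static session_handle* session = NULL;
  if (session == NULL) {
    session = create_session("model.onnx");
  }
  return session;
}
```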

## Undefined symbols in a shared library
On Windows, you can't build a DLL with undefined symbols. Every symbol must be resolved at link time. On Linux, you can.

In this project, we set up a rule: when building a shared library, every symbol must be resolved at link time, unless it's a custom op.

For custom op, on Linux, don't pass any libraries(except libc, pthreads) to linker. So that, even the application is statically linked to ONNXRuntime, they can still use the same custom op binary.
For custom ops on Linux, don't pass any libraries (except libc and pthreads) to the linker. That way, even if the application is statically linked to ONNX Runtime, it can still use the same custom op binary.


## Default visibility
On POSIX systems, please always specify "-fvisibility=hidden" and "-fPIC" when compiling any code in ONNXRuntime shared library.
On POSIX systems, please always specify "-fvisibility=hidden" and "-fPIC" when compiling any code in the ONNX Runtime shared library.

See [pybind11 FAQ](https://github.com/pybind/pybind11/blob/master/docs/faq.rst#someclass-declared-with-greater-visibility-than-the-type-of-its-field-someclassmember--wattributes)
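For illustration, here is a minimal sketch (hypothetical file and symbol names) of how hidden default visibility interacts with explicitly exported symbols:

```c
/* Build sketch, assuming a file named example.c:
 *   gcc -fvisibility=hidden -fPIC -shared -o libexample.so example.c
 * With -fvisibility=hidden, a symbol is exported from the shared
 * library only if it is explicitly marked "default". */

/* Exported: part of the library's public interface. */
__attribute__((visibility("default")))
int example_add(int a, int b) {
  return a + b;
}

/* Hidden: an internal helper that stays out of the library's ABI. */
int internal_double(int x) {
  return x * 2;
}
```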


## RTLD_LOCAL vs RTLD_GLOBAL
RTLD_LOCAL and RTLD_GLOBAL are two flags of the [dlopen(3)](http://pubs.opengroup.org/onlinepubs/9699919799/functions/dlopen.html) function on Linux. The default is RTLD_LOCAL, and roughly speaking there is no equivalent of RTLD_GLOBAL on Windows.

If your application is a shared library, which statically linked to ONNXRuntime, and your application needs to dynamically load a custom op, then your application must be loaded with RTLD_GLOBAL. In all other cases, you should use RTLD_LOCAL. ONNXRuntime python binding is a good example of why sometimes RTLD_GLOBAL is needed.
If your application is a shared library that is statically linked to ONNX Runtime, and your application needs to dynamically load a custom op, then your application must be loaded with RTLD_GLOBAL. In all other cases, you should use RTLD_LOCAL. The ONNX Runtime Python binding is a good example of why RTLD_GLOBAL is sometimes needed.
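A minimal sketch of the difference, assuming hypothetical libraries libmyapp.so (statically linked to ONNX Runtime) and libmy_custom_op.so (a custom op that references the runtime's symbols); link the sketch itself with -ldl:

```c
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
  /* RTLD_GLOBAL publishes libmyapp.so's symbols, including the ONNX
   * Runtime symbols statically linked into it, so they can resolve
   * references in shared objects loaded later. */
  void* app = dlopen("./libmyapp.so", RTLD_NOW | RTLD_GLOBAL);
  if (app == NULL) {
    fprintf(stderr, "dlopen: %s\n", dlerror());
    return 1;
  }

  /* Had libmyapp.so been opened with RTLD_LOCAL (the default), this
   * load would fail to resolve the runtime symbols the custom op needs. */
  void* custom_op = dlopen("./libmy_custom_op.so", RTLD_NOW);
  if (custom_op == NULL) {
    fprintf(stderr, "dlopen: %s\n", dlerror());
    dlclose(app);
    return 1;
  }

  dlclose(custom_op);
  dlclose(app);
  return 0;
}
```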
17 changes: 3 additions & 14 deletions docs/C_API.md
@@ -1,20 +1,9 @@
# C API

# Q: Why have a C API?
Q: Why not just live in a C++ world? Why C?
A: We want to distribute the onnxruntime as a DLL, which can be used in .Net languages through [P/Invoke](https://docs.microsoft.com/en-us/cpp/dotnet/how-to-call-native-dlls-from-managed-code-using-pinvoke).
This is the only option we have.
**NOTE: The C API is PRE-RELEASE and subject to change. Please do not rely on this file not changing.**

Q: Is it only for .Net?
A: No. It is designed for:
1. Creating language bindings for the onnxruntime. e.g. C#, python, java, ...
2. Dynamic linking has some benefits. For example, solving diamond dependency problems.
## Features

Q: Can I export C++ types and functions across DLL or "Shared Object" Library(.so) boundaries?
A: Well, you can, but it's not a good practice. We won't do it in this project.


## What's inside
* Creating an InferenceSession from an on-disk model file and a set of SessionOptions.
* Registering customized loggers.
* Registering customized allocators.
@@ -26,7 +15,7 @@ A: Well, you can, but it's not a good practice. We won't do it in this project.

## How to use it

1. Include [onnxruntime_c_api.h](include/onnxruntime/core/session/onnxruntime_c_api.h).
1. Include [onnxruntime_c_api.h](/include/onnxruntime/core/session/onnxruntime_c_api.h).
2. Call ONNXRuntimeInitialize
3. Create Session: ONNXRuntimeCreateInferenceSession(env, model_uri, nullptr,...)
4. Create Tensor
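A rough, hedged illustration of these steps; the exact signatures live in onnxruntime_c_api.h and this pre-release API is subject to change, so the handle types and argument lists below are assumptions, not the confirmed interface:

```c
/* Sketch only: the ONNXEnv/ONNXSession handle types and all argument
 * lists are assumptions for illustration; check onnxruntime_c_api.h
 * for the real pre-release signatures before relying on any of this. */
#include "onnxruntime_c_api.h"

int main(void) {
  /* Step 2: initialize the runtime environment (arguments assumed). */
  ONNXEnv* env = NULL;
  ONNXRuntimeInitialize(/* logging level */ 0, /* log id */ "demo", &env);

  /* Step 3: create a session from an on-disk model, as in the docs:
   * ONNXRuntimeCreateInferenceSession(env, model_uri, nullptr, ...). */
  ONNXSession* session = NULL;
  ONNXRuntimeCreateInferenceSession(env, "model.onnx", NULL, &session);

  /* Step 4 and beyond: create input tensors, run the session, and
   * read the outputs using the calls declared in the header. */
  return 0;
}
```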
9 changes: 1 addition & 8 deletions docs/ReleaseManagement.md
@@ -1,11 +1,4 @@
# Release Management

This describes the process by which versions of ONNX Runtime are officially
released to the public.

## Releases
Releases are versioned according to
[docs/Versioning.md](Versioning/md). We plan to release ONNX Runtime packages
every 6 months.

(TBD: Add more here later)
[Versioning](Versioning.md). Official releases of ONNX Runtime are managed by the core ONNX Runtime team, and packages will be published at least every 6 months.
2 changes: 1 addition & 1 deletion docs/Versioning.md
@@ -13,7 +13,7 @@ The version number of the current stable release can be found
[here](../VERSION_NUMBER)

## Release cadence
See [docs/ReleaseManagement.md](ReleaseManagement.md)
See [Release Management](ReleaseManagement.md)

## Compatibility with ONNX opsets
ONNX Runtime supports both backwards and forward compatibility.
14 changes: 14 additions & 0 deletions onnxruntime/core/framework/mem_pattern_planner.h
@@ -1,3 +1,17 @@
// Part of the algorithm is derived from TensorFlow.

/* Copyright 2017 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/

// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.

2 changes: 1 addition & 1 deletion rename_manylinux.sh
@@ -3,7 +3,7 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.

# hack script to modify modify whl as manylinux whl
# hack script to modify whl as manylinux whl
whl=(*whl)
renamed_whl=`echo $whl | sed --expression='s/linux/manylinux1/g'`
basename=`echo $whl | awk -F'-cp3' '{print $1}'`
