Cherry-pick updated TensorRT instructions. #19276

Merged 1 commit on May 17, 2018
60 changes: 15 additions & 45 deletions tensorflow/contrib/tensorrt/README.md
@@ -1,59 +1,29 @@
# Using TensorRT in TensorFlow


This module provides the necessary bindings and introduces the TRT_engine_op
operator that wraps a subgraph in TensorRT. This is still a work in progress
but should be usable with most common graphs.

## Compilation


In order to compile the module, you need a local TensorRT installation
(libnvinfer.so and the respective include files). During the configuration
step, TensorRT should be enabled and its installation path set. If TensorRT
was installed through a package manager (deb, rpm), the configure script
should find the necessary components on the system automatically. If it was
installed from a tar package, you have to point the configuration at the
directory where the library is installed.
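For example, a non-interactive configuration might look like the following.
This is only a sketch: `TF_NEED_TENSORRT` and `TENSORRT_INSTALL_PATH` follow
the configure script's environment-variable conventions, and the path shown
is an assumed tar-file install location.

```shell
# Sketch: enable TensorRT and point configure at an assumed install path;
# the remaining configure questions are answered interactively as usual.
export TF_NEED_TENSORRT=1
export TENSORRT_INSTALL_PATH=/usr/local/TensorRT-3.0.4
./configure
```

With TensorRT enabled, build and install the pip package: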

```shell
# Build the pip-package builder with CUDA support and optimizations enabled,
# then write the wheel to /tmp/.
bazel build --config=cuda --config=opt //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/
```

After the tensorflow package is installed, the TensorRT transformation will be
available. An example use can be found in the test/test_tftrt.py script.
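For instance, assuming the build above placed the wheel in /tmp/ (the exact
wheel filename varies by version and platform, hence the glob) and that the
repository layout places the example at
tensorflow/contrib/tensorrt/test/test_tftrt.py:

```shell
# Install the freshly built wheel, then run the bundled example script.
pip install /tmp/tensorflow-*.whl
python tensorflow/contrib/tensorrt/test/test_tftrt.py
```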

## Installing TensorRT 3.0.4

In order to make use of TensorRT integration, you will need a local
installation of TensorRT 3.0.4 from the
[NVIDIA Developer website](https://developer.nvidia.com/tensorrt).
Installation instructions for compatibility with TensorFlow are provided on the
[TensorFlow Installation page](https://www.tensorflow.org/install/install_linux#nvidia_requirements_to_run_tensorflow_with_gpu_support).
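After following those instructions, a quick sanity check (a suggested step,
not part of the official instructions) is to confirm that the dynamic loader
can find TensorRT:

```shell
# Should list libnvinfer.so entries if TensorRT is visible to the loader.
ldconfig -p | grep libnvinfer
```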
36 changes: 29 additions & 7 deletions tensorflow/docs_src/install/install_linux.md
@@ -65,16 +65,38 @@ must be installed on your system:
<pre>
$ <b>sudo apt-get install libcupti-dev</b>
</pre>

* **[OPTIONAL]** For optimized inference performance, you can also install
  **NVIDIA TensorRT 3.0**. The minimal set of TensorRT runtime components needed
  for use with the pre-built `tensorflow-gpu` package can be installed as follows:

<pre>
$ <b>wget https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1404/x86_64/nvinfer-runtime-trt-repo-ubuntu1404-3.0.4-ga-cuda9.0_1.0-1_amd64.deb</b>
$ <b>sudo dpkg -i nvinfer-runtime-trt-repo-ubuntu1404-3.0.4-ga-cuda9.0_1.0-1_amd64.deb</b>
$ <b>sudo apt-get update</b>
$ <b>sudo apt-get install -y --allow-downgrades libnvinfer-dev libcudnn7-dev=7.0.5.15-1+cuda9.0 libcudnn7=7.0.5.15-1+cuda9.0</b>
</pre>
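As a quick check (a suggested step, not part of NVIDIA's instructions), you
can confirm that the runtime packages were installed at the pinned versions:

<pre>
$ <b>dpkg -l | grep -E 'libnvinfer|libcudnn7'</b>
</pre>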

**IMPORTANT:** For compatibility with the pre-built `tensorflow-gpu`
package, please use the Ubuntu **14.04** package of TensorRT as shown above,
even when installing onto an Ubuntu 16.04 system.<br/>
<br/>
To build the TensorFlow-TensorRT integration module from source rather than
using pre-built binaries, see the [module documentation](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/tensorrt#using-tensorrt-in-tensorflow).
For detailed TensorRT installation instructions, see [NVIDIA's TensorRT documentation](http://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html).<br/>
<br/>
To avoid cuDNN version conflicts during later system upgrades, you can hold
the cuDNN version at 7.0.5:

<pre>
$ <b>sudo apt-mark hold libcudnn7 libcudnn7-dev</b>
</pre>

To later allow upgrades, you can remove the hold:

<pre>
$ <b>sudo apt-mark unhold libcudnn7 libcudnn7-dev</b>
</pre>
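To verify which packages are currently held (another optional check), use:

<pre>
$ <b>apt-mark showhold</b>
</pre>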

If you have an earlier version of the preceding packages, please upgrade to
the specified versions. If upgrading is not possible, then you may still run