
tensorrt not working with cuda 11.2 #46269

Closed
alanpurple opened this issue Jan 8, 2021 · 7 comments
Assignees
Labels
comp:gpu:tensorrt Issues specific to TensorRT stale This label marks the issue/pr stale - to be closed automatically if no activity stat:awaiting response Status - Awaiting response from author TF 2.4 for issues related to TF 2.4 type:bug Bug

Comments

@alanpurple
Contributor

Please make sure that this is a bug. As per our
GitHub Policy,
we only address code/doc bugs, performance issues, feature requests and
build/installation issues on GitHub. tag:bug_template

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): ubuntu 18.04
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A
  • TensorFlow installed from (source or binary): source
  • TensorFlow version (use command below): 2.4.0
  • Python version: 3.9.1
  • Bazel version (if compiling from source): 3.7.2
  • GCC/Compiler version (if compiling from source): 7.5.0
  • CUDA/cuDNN version: 11.2 / 8.0.5
  • GPU model and memory: GTX1080Ti 11GB

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:

  1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
  2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior
TF works fine, but TF-TRT does not:

2021-01-08 10:32:10.702646: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvrtc.so.11.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib/x86_64-linux-gnu:/usr/local/cuda/lib64:/usr/local/cuda/lib64:

After adding a soft link:

2021-01-08 10:41:23.887753: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libnvinfer.so.7'; dlerror: /usr/local/cuda/lib64/libnvrtc.so.11.1: version `libnvrtc.so.11.1' not found (required by /usr/lib/x86_64-linux-gnu/libnvinfer.so.7); LD_LIBRARY_PATH: /usr/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/local/cuda/lib64:/usr/local/cuda/lib64:

Describe the expected behavior
TF-TRT should load libnvinfer.so.7 and convert models without library-loading errors.
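The dlerror above can be reproduced outside TensorFlow. A minimal sketch, assuming a Linux host (the library names are taken from the log; `try_load` is a hypothetical helper, not part of TensorFlow): probing each library with ctypes shows which dependency fails to resolve and with what loader error.

```python
import ctypes

def try_load(libname):
    """Attempt to dlopen a shared library; return None on success,
    or the dlerror text on failure."""
    try:
        ctypes.CDLL(libname)
        return None
    except OSError as exc:
        return str(exc)

# Library names taken from the log above.
for lib in ("libnvinfer.so.7", "libnvrtc.so.11.1"):
    err = try_load(lib)
    print(f"{lib}: {'loaded' if err is None else err}")
```

This probes the same search path (including LD_LIBRARY_PATH) that TensorFlow's dso_loader uses, so it can distinguish "file not found" from the "version not found" symbol error seen after adding the soft link.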

Standalone code to reproduce the issue
Any TensorRT example code reproduces the issue.

Other info / logs Include any logs or source code that would be helpful to
diagnose the problem. If including tracebacks, please include the full
traceback. Large logs and files should be attached.

@amahendrakar
Contributor

@alanpurple,
In order to expedite the troubleshooting process, could you please provide a minimal code snippet that reproduces the issue, along with the complete error log?

Also, could you check whether you face the same issue with CUDA 11.0 and cuDNN 8? Thanks!

@amahendrakar amahendrakar added comp:gpu:tensorrt Issues specific to TensorRT stat:awaiting response Status - Awaiting response from author TF 2.4 for issues related to TF 2.4 labels Jan 8, 2021
@alanpurple
Contributor Author

No issue with CUDA 11.1, cuDNN 8.0.5, and TensorRT 7.2.2.3.

Minimal code:

import tensorflow.experimental.tensorrt as trt

# Configure the TF-TRT conversion.
conversion_params = trt.ConversionParams(
    max_workspace_size_bytes=(1 << 32),
    precision_mode="FP16",
    maximum_cached_engines=100)

converter = trt.Converter(
    input_saved_model_dir='./testmodel/1',
    conversion_params=conversion_params)
converter.convert()

# train_ds is a tf.data.Dataset defined elsewhere.
def my_input_fn():
    for x, y in train_ds.take(1000):
        yield (x,)

converter.build(input_fn=my_input_fn)
converter.save('./trt3')

Something like this.

@tensorflowbutler tensorflowbutler removed the stat:awaiting response Status - Awaiting response from author label Jan 10, 2021
@amahendrakar
Contributor

No issue with CUDA 11.1, cuDNN 8.0.5, and TensorRT 7.2.2.3.

@alanpurple,
Thank you for the update. Since TensorFlow v2.4 is built and tested against CUDA 11.0 and cuDNN 8, I'd suggest using CUDA 11.0 for now.

Support for CUDA 11.2 is already being tracked in issue #46093. Please feel free to close the issue if resolved. Thanks!

@amahendrakar amahendrakar added the stat:awaiting response Status - Awaiting response from author label Jan 12, 2021
@google-ml-butler

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.

@google-ml-butler google-ml-butler bot added the stale This label marks the issue/pr stale - to be closed automatically if no activity label Jan 19, 2021
@google-ml-butler

Closing as stale. Please reopen if you'd like to work on this further.


@chsigg
Contributor

chsigg commented Jul 8, 2021

Drive-by comment: you need to install libnvrtc.so.11.1; see NVIDIA/TensorRT#1064.

4 participants