Error loading a TensorRT optimised graph #28854
Comments
Just to verify, did you get a chance to have a look at #22360? Which TensorFlow version are you using?
Thank you for the response. I am using TF r1.13. On a side note, I posted this question on the NVIDIA devtalk forum and they answered: "A generated TensorRT PLAN is valid for a specific GPU — more precisely, a specific CUDA Compute Capability. For example, if you generate a PLAN for an NVIDIA P4 (compute capability 6.1) you can't use that PLAN on an NVIDIA Tesla V100 (compute capability 7.0)." This is quite confusing because there are articles online on optimising on one GPU and running on another.
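(For reference, one way to compare the compute capability TensorFlow sees on each machine is the sketch below; it uses the TF 1.x `device_lib` helper and simply inspects the device description string.)

```python
# Minimal sketch: print the compute capability TensorFlow reports for each GPU.
# Uses the TF 1.x device_lib helper; the compute capability appears in the
# free-form physical_device_desc string.
from tensorflow.python.client import device_lib

for dev in device_lib.list_local_devices():
    if dev.device_type == "GPU":
        # e.g. "device: 0, name: Tesla P100-PCIE-16GB, pci bus id: ...,
        #       compute capability: 6.0"
        print(dev.name, "->", dev.physical_device_desc)
```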
Just noticed that there is a TF version mismatch between the one on my system (1.13) and the one on the GCP VM (1.12). Does this affect the result?
Tried again with a new model. Same error.
Which CUDA/cuDNN versions are you using?
On my local system it is CUDA 10.1 and cuDNN 7.4.2.
Please help us with some more info: are you getting this error on the TensorFlow installed on your GCP VM or on your local system? Which operating system are you using, and did you install TensorFlow from source or from a binary? If you are unclear about the template, you can refer to this link. Also, kindly verify whether you have followed the instructions from the TensorFlow website based on the information provided in the template. Thanks!
Hi. I run the
I try loading the graph for inference on the VM and it works fine. When I try loading the graph on my local system, I get the error.
I did not build TF from source. I installed it using pip3 in the terminal.
According to a moderator on the NVIDIA devtalk forum: "A generated TensorRT PLAN is valid for a specific GPU — more precisely, a specific CUDA Compute Capability. For example, if you generate a PLAN for an NVIDIA P4 (compute capability 6.1) you can't use that PLAN on an NVIDIA Tesla V100 (compute capability 7.0)."
Hi @fuzzyBatman, could you try adding:
to your loading script to see if it works?
@aaroey Same error. Does the GPU choice not affect this?
@fuzzyBatman, could you share your full script? I'll try it and let you know.
I have a TF frozen graph (.pb extension). I load it and run the
The above script provides the file
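(The shared script itself is not reproduced above. For context, a typical TF 1.x TF-TRT conversion of a frozen graph looked roughly like the sketch below; the file names, output node name, batch size, workspace size, and precision mode are placeholders, not the values from the actual script.)

```python
# Rough sketch of a TF 1.x TF-TRT conversion of a frozen graph (not the
# original script); all names and parameters below are placeholders.
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

# Read the frozen GraphDef from disk.
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    frozen_graph = tf.GraphDef()
    frozen_graph.ParseFromString(f.read())

# Build the TensorRT-optimized GraphDef.
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=["output_node"],              # placeholder output node name
    max_batch_size=1,
    max_workspace_size_bytes=1 << 30,
    precision_mode="FP16")

# Serialize the converted graph; this is the file the script produces.
with tf.gfile.GFile("trt_model.pb", "wb") as f:
    f.write(trt_graph.SerializeToString())
```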
@fuzzyBatman sorry I was not able to get to this. Thanks for the scripts, they look legit to me. Also, 1.15.0rc1 is out and 1.15.0 will be out soon; you may want to try with that. Also feel free to provide the
@fuzzyBatman We are checking to see if you still need help on this issue, as you are using an older version of TensorFlow (1.x) which is officially considered end of life. We recommend that you upgrade to 2.6, which is the latest stable version of TF, and let us know if the issue still persists in the newer versions. We will get you the right help. Thanks!
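(For anyone landing here on newer versions: the TF-TRT entry point in TF 2.x differs from the 1.x contrib API. A minimal sketch, with placeholder directory names, looks roughly like this:)

```python
# Minimal sketch of TF-TRT conversion in TF 2.x; directory names are placeholders.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(input_saved_model_dir="saved_model_dir")
converter.convert()                    # build the TF-TRT optimized graph
converter.save("trt_saved_model_dir")  # write the converted SavedModel
```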
Hi! I stopped working on that project a year ago.
I was able to convert a frozen model using the TensorRT API on an NVIDIA Tesla P100 on Debian 9, using the command
I am able to load the graph on the same system. However, when I try to load the graph on my local system, which has an NVIDIA GeForce GTX 1050M, I get the following error.
Is it because my GPU lacks support for TensorRT?
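(The error itself is not reproduced above. For context, loading the converted graph follows the standard TF 1.x frozen-graph import, roughly as in the sketch below; the file path, tensor names, and input shape are placeholders.)

```python
# Minimal TF 1.x sketch of loading and running a frozen (or TF-TRT converted)
# .pb graph; file path, tensor names, and input shape are placeholders.
import numpy as np
import tensorflow as tf

# Parse the serialized GraphDef from disk.
with tf.gfile.GFile("trt_model.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Import it into a fresh graph.
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

# Run one inference with dummy data.
with tf.Session(graph=graph) as sess:
    input_t = graph.get_tensor_by_name("input:0")    # placeholder tensor name
    output_t = graph.get_tensor_by_name("output:0")  # placeholder tensor name
    dummy = np.zeros((1, 224, 224, 3), dtype=np.float32)  # placeholder shape
    print(sess.run(output_t, feed_dict={input_t: dummy}))
```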