TensorRT INT8 calibration doesn't work with TF r1.12 and TRT 5RC #22854
@dhingratul, are you passing a Graph object or a GraphDef object? The method expects a GraphDef.
@samikama This is all I have tried.
To run the calibration data through the graph, I need to import it as a Graph. I don't know of a right way to get back to a GraphDef apart from step 3.
@dhingratul, are you running calib_graph_to_infer_graph in the same process? Are you exiting the process between calibration and baking the calibration table? Also, you need to pass the GraphDef returned to you by trt.create_inference_graph() in the first step to calib_graph_to_infer_graph() in the third step, not the graph from tf.train.write_graph(). You need to import it as a Graph to be able to run it.
@samikama I get this workflow, but in order to run calibration on the GraphDef generated in step 1, I need to import it as a Graph and then use sess.run() to complete calibration. Now I have a Graph, not a GraphDef, so the only way to pass a GraphDef to calib_graph_to_infer_graph() is to export it as a .pb and re-import the GraphDef. I do not know how to convert a Graph to a GraphDef to go from step 2 to step 3.
@dhingratul, you are trying to pass the Graph from step 2 into step 3. Please pass the GraphDef object that you created in step 1 and used to import into the graph, not the serialized GraphDef of the graph from the second step.
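For reference, here is a minimal sketch of the three-step workflow being described, assuming TF 1.12 built with TensorRT 5 support. The names `frozen_graph_def`, `output_names`, `input:0`, and `calibration_batches` are placeholders for your own model and data, not part of the API:

```python
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

# Step 1: build the calibration graph. Keep a reference to this GraphDef --
# it is the object that step 3 expects.
calib_graph_def = trt.create_inference_graph(
    input_graph_def=frozen_graph_def,  # placeholder: your frozen GraphDef
    outputs=output_names,              # placeholder: list of output node names
    max_batch_size=8,
    precision_mode='INT8')

# Step 2: import the GraphDef as a Graph and run representative data
# through it to collect INT8 calibration statistics.
graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(calib_graph_def, name='')
    with tf.Session(graph=graph) as sess:
        for batch in calibration_batches:  # placeholder: calibration data
            sess.run(output_names[0] + ':0', feed_dict={'input:0': batch})

# Step 3: pass the SAME GraphDef object from step 1, not the tf.Graph
# from step 2, and do so in the same process that ran the calibration.
trt_graph_def = trt.calib_graph_to_infer_graph(calib_graph_def)
```

The point of contention in this thread is step 3: it consumes the GraphDef kept from step 1, so no Graph-to-GraphDef round trip through a .pb file is needed.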
@samikama Based on your recommendation, this is the workflow I have, but I am facing a weird cuBLAS issue now.
@dhingratul You may want to check whether there are other processes running on the GPU.
@wt-huang No other processes running on the GPU |
@dhingratul Could you try to install TensorFlow from a binary instead? Also use cuDNN 7.3 and Python 3.6, and make sure that the cuBLAS library is correctly installed. You can also provide your environment by running the script in the issue template.
@wt-huang I have the TensorRT tarballs, not the .deb, hence I don't know how to install TF with pip and provide the correct path to my TRT. Can you expand on this?
@dhingratul One thing I found very useful is to first check whether any TRTEngineOp nodes were actually created in the converted graph:

```python
trt_engine_opts = len([1 for n in trt_graph.node if str(n.op) == 'TRTEngineOp'])
print('TRT Engine Ops: {}'.format(trt_engine_opts))
assert trt_engine_opts > 0, 'No TRT Engine Ops!'
```
@benjamintanweihao Had that been the case, I would have gotten an error like this: #21850 (comment)
Hi, any resolution to this? I am getting the same issue with nvidia-docker 19.01-py2.
@dhingratul Is this still an issue? |
I will have to reproduce the issue with TRT 5 GA; it was still an issue as of TRT 5 RC.
Still having this problem.
I have the same problem with TF 1.15 and TensorRT 6.
Hi @dhingratul ! |
This issue has been automatically marked as stale because it has no recent activity. It will be closed if no further activity occurs. Thank you. |
Closing as stale. Please reopen if you'd like to work on this further. |
System information
Followed the workflow from https://devblogs.nvidia.com/tensorrt-integration-speeds-tensorflow-inference/
Error at the following piece of code:

```python
trt_graph = trt.calib_graph_to_infer_graph(calibGraph)
```

Log:

```
File "/home/dhingratul/.virtualenvs/tf_trt_source_trt5rc_tf1_12/local/lib/python3.5/site-packages/tensorflow/contrib/tensorrt/python/trt_convert.py", line 349, in calib_graph_to_infer_graph
    for n in calibration_graph_def.node:
AttributeError: 'Graph' object has no attribute 'node'
```
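The traceback shows calib_graph_to_infer_graph iterating over `.node`, an attribute that exists on a GraphDef protobuf but not on a tf.Graph, so `calibGraph` here is a tf.Graph where a GraphDef is expected. A hedged sketch of a workaround, assuming TF 1.12 with TensorRT 5 and that the calibrated graph is currently held as a tf.Graph named `calibGraph` (per the maintainer's advice above, passing the GraphDef kept from step 1 is the preferred fix):

```python
import tensorflow.contrib.tensorrt as trt

# 'Graph' object has no attribute 'node' => a tf.Graph was passed where a
# GraphDef protobuf is expected. If only the tf.Graph is at hand, convert
# it to its GraphDef representation first:
calib_graph_def = calibGraph.as_graph_def()
trt_graph = trt.calib_graph_to_infer_graph(calib_graph_def)
```

Note that this must run in the same process that performed the calibration runs, since the calibration data lives in that process.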