TrtGraphConverterV2 does not preserve output names in the signature_def #28346

Closed
olesalscheider opened this issue May 2, 2019 · 7 comments
Labels: comp:gpu GPU related issues · type:bug Bug

olesalscheider (Contributor) commented:
System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
  • TensorFlow installed from (source or binary): source
  • TensorFlow version (use command below): master from April 22nd
  • Python version: 3.6.7
  • Bazel version (if compiling from source): 0.24
  • GCC/Compiler version (if compiling from source): 7.4
  • CUDA/cuDNN version: 10.0 / 7.5.0
  • GPU model and memory: GTX 1080 Ti

Describe the current behavior
If you use TrtGraphConverterV2 to convert a function in a saved_model to TensorRT, the output names in the signature_def of the saved model are not preserved.

If the saved function (decorated with tf.function) returns a dict {'output_a': a, 'output_b': b}, the names 'output_a' and 'output_b' appear in the saved_model's signature. After conversion with TrtGraphConverterV2 they are replaced by the default names 'output_0' and 'output_1'.

Describe the expected behavior
The output names should not change. Renaming them breaks any code that loads the model and relies on the original names.

Code to reproduce the issue
Take any saved_model that contains a function returning a dict.
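For example, here is a minimal sketch of such a model (the module, tensor shapes, and output computations are illustrative, not from the original report):

import tensorflow as tf

class Model(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([None, 4], tf.float32)])
    def infer(self, x):
        # Returning a dict makes the keys 'output_a' and 'output_b'
        # the output names in the SavedModel signature_def.
        return {'output_a': tf.reduce_sum(x, axis=1),
                'output_b': tf.reduce_mean(x, axis=1)}

model = Model()
tf.saved_model.save(model, 'your_saved_model',
                    signatures={'your_key': model.infer})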
Then run this:

from tensorflow.python.compiler.tensorrt import trt_convert

conversion_params = trt_convert.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt_convert.TrtPrecisionMode.FP16,
    max_batch_size=1, max_workspace_size_bytes=8000000000)
trt_converter = trt_convert.TrtGraphConverterV2(
    input_saved_model_dir='your_saved_model',
    input_saved_model_signature_key='your_key',
    conversion_params=conversion_params)
trt_converter.convert()
trt_converter.save('your_saved_model')

Use saved_model_cli to inspect the saved_model.
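For instance (assuming the model was exported with the default 'serve' tag-set):

saved_model_cli show --dir your_saved_model --tag_set serve --signature_def your_key

Before conversion the listed outputs are 'output_a' and 'output_b'; after conversion they show up as 'output_0' and 'output_1'.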

@muddham muddham self-assigned this May 3, 2019
@muddham muddham added the comp:model Model related issues label May 3, 2019
muddham commented May 3, 2019

@olesalscheider To expedite the troubleshooting process, please provide a code snippet that reproduces the issue reported here. Thanks!

@muddham muddham added the stat:awaiting response Status - Awaiting response from author label May 3, 2019
olesalscheider (Contributor, Author) commented:

You can use this code to reproduce the issue:

https://gist.githubusercontent.com/olesalscheider/366f33115016ac9d5f2976ec17124496/raw/f5b68bf571f325742c1bc24658f0de04b3d3b33c/wrong_outputs.py

The output names should be the same before and after conversion, but they are not.

@tensorflowbutler tensorflowbutler removed the stat:awaiting response Status - Awaiting response from author label May 4, 2019
@muddham muddham added the type:bug Bug label May 6, 2019
muddham commented May 6, 2019

@olesalscheider I was able to reproduce the issue.

Our saved model has the following structured outputs:
{'output_a': TensorSpec(shape=(), dtype=tf.float32, name='output_a'), 'output_b': TensorSpec(shape=(), dtype=tf.float32, name='output_b')}
Running TF-TRT conversion...
Our converted model has the following structured outputs:
{'output_0': TensorSpec(shape=(), dtype=tf.float32, name='output_0'), 'output_1': TensorSpec(shape=(), dtype=tf.float32, name='output_1')}
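The same check can be done programmatically; a minimal sketch, using the directory and signature key from the repro above:

import tensorflow as tf

loaded = tf.saved_model.load('your_saved_model')
func = loaded.signatures['your_key']
# Before conversion this prints the user-given keys
# ('output_a', 'output_b'); after TF-TRT conversion it prints
# the default names ('output_0', 'output_1').
print(func.structured_outputs)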

@muddham muddham assigned jvishnuvardhan and unassigned muddham May 6, 2019
@jvishnuvardhan jvishnuvardhan added comp:gpu GPU related issues stat:awaiting tensorflower Status - Awaiting response from tensorflower and removed comp:model Model related issues labels Jul 16, 2019
aaroey (Member) commented Jul 25, 2019

Thanks for reporting this. I can reproduce the problem and will make a fix soon.

@tensorflowbutler tensorflowbutler removed the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Jul 26, 2019
tensorflow-bot commented (automated message after the issue was closed): Are you satisfied with the resolution of your issue?

bitterengsci commented:

Is it actually fixed?

aaroey (Member) commented Nov 9, 2020

@bixia1 do you know if this is still a problem?
