This repository was archived by the owner on Feb 3, 2025. It is now read-only.

Failed to build TensorRT engine #148

@MachineJeff

Description


Hi, I used the following code to convert my SavedModel with TensorRT:

from tensorflow.python.compiler.tensorrt import trt_convert as trt
converter = trt.TrtGraphConverter(input_saved_model_dir=input_saved_model_dir)
converter.convert()
converter.save(output_saved_model_dir)
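If the engine build keeps failing, it may be worth passing explicit conversion parameters instead of relying on the defaults. This is only a sketch assuming the TF 1.x `TrtGraphConverter` API; the parameter values below are illustrative, not settings from this issue:

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Sketch: conversion with explicit parameters (TF 1.x-style API).
# Exact defaults and supported arguments vary across TF versions.
converter = trt.TrtGraphConverter(
    input_saved_model_dir=input_saved_model_dir,
    max_workspace_size_bytes=1 << 30,  # give TensorRT more scratch memory (1 GiB)
    precision_mode="FP16",             # or "FP32" / "INT8"
    is_dynamic_op=True,                # build engines at runtime when shapes are unknown
    minimum_segment_size=3)            # skip tiny segments not worth converting
converter.convert()
converter.save(output_saved_model_dir)
```

A too-small workspace is a common cause of "Failed to build TensorRT engine", so raising `max_workspace_size_bytes` is usually the first thing to try.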

After it finished, I used the following code to run inference:

with tf.Session() as sess:
    # First load the SavedModel into the session
    tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], output_saved_model_dir)
    output = sess.run([output_tensor], feed_dict={input_tensor: input_data})
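When comparing latency between the original and converted models, note that the first few `sess.run` calls on a TF-TRT model trigger engine building, so warm-up runs should be excluded from the measurement. A minimal timing helper, a sketch not tied to this particular model:

```python
import time

def time_inference(run_fn, warmup=5, iters=50):
    """Return the mean wall-clock latency of run_fn in seconds.

    The warmup calls are discarded because the first runs of a
    TF-TRT model include TensorRT engine construction.
    """
    for _ in range(warmup):
        run_fn()
    start = time.time()
    for _ in range(iters):
        run_fn()
    return (time.time() - start) / iters

# Usage sketch (hypothetical), wrapping the sess.run call above:
# latency = time_inference(
#     lambda: sess.run([output_tensor], feed_dict={input_tensor: input_data}))
```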

Although I got the correct output, I received the following warning messages:

2019-11-01 19:11:24.404448: W tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:647] Engine creation for TRTEngineOp_22 failed. The native segment will be used instead. Reason: Internal: Failed to build TensorRT engine
2019-11-01 19:11:24.406408: I tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:632] Building a new TensorRT engine for TRTEngineOp_21 input shapes: [[1,128], [1,128]]
2019-11-01 19:11:24.406463: I tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:632] Building a new TensorRT engine for TRTEngineOp_23 input shapes: [[1,128], [1,128]]
2019-11-01 19:11:24.408900: W tensorflow/compiler/tf2tensorrt/convert/convert_nodes.cc:1467] Quantization range was not found for (Unnamed Layer* 2) [Fully Connected]_output. This is okay if TensorRT does not need the range (e.g. due to node fusion).
2019-11-01 19:11:24.408921: W tensorflow/compiler/tf2tensorrt/convert/convert_nodes.cc:1467] Quantization range was not found for (Unnamed Layer* 5) [Scale]_output. This is okay if TensorRT does not need the range (e.g. due to node fusion).
2019-11-01 19:11:24.408930: W tensorflow/compiler/tf2tensorrt/convert/convert_nodes.cc:1467] Quantization range was not found for model/inference/post_cbhg/bidirectional_rnn/bw/bw/while/gru_cell/BiasAdd_1. This is okay if TensorRT does not need the range (e.g. due to node fusion).
2019-11-01 19:11:24.408938: W tensorflow/compiler/tf2tensorrt/convert/convert_nodes.cc:1467] Quantization range was not found for (Unnamed Layer* 1) [Shuffle]_output. This is okay if TensorRT does not need the range (e.g. due to node fusion).
2019-11-01 19:11:24.408946: W tensorflow/compiler/tf2tensorrt/convert/convert_nodes.cc:1467] Quantization range was not found for (Unnamed Layer* 4) [Shuffle]_output. This is okay if TensorRT does not need the range (e.g. due to node fusion).

And the inference time is the same as for the original model. I wonder whether TensorRT really worked?

Besides, I also tried FP32 and FP16 models; the inference time did not change.
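One way to tell whether TensorRT actually replaced any subgraphs is to count `TRTEngineOp` nodes in the converted graph. If the count is zero, or every engine falls back to the native segment as in the warning above, no speedup should be expected. A sketch; the helper name and the TF 1.x loading code in the comment are illustrative, not from this issue:

```python
def count_trt_ops(graph_def):
    """Count TRTEngineOp nodes in a GraphDef-like object.

    Each TRTEngineOp node represents one subgraph that TF-TRT fused
    into a TensorRT engine; zero means nothing was converted.
    """
    return sum(1 for node in graph_def.node if node.op == "TRTEngineOp")

# Usage sketch (TF 1.x): inspect the graph after loading the SavedModel.
# with tf.Session() as sess:
#     tf.saved_model.loader.load(
#         sess, [tf.saved_model.tag_constants.SERVING], output_saved_model_dir)
#     print("TRTEngineOp count:", count_trt_ops(sess.graph.as_graph_def()))
```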
