Thanks for your demo.
When I run the raw code as-is, it works well. To measure the effect of TensorRT more precisely, I added warm-up iterations before timing the inference, e.g.:
import time

tf_engine = TfEngine(frozen_graph)
for i in range(warm_up):           # warm-up: discard the first runs
    y_tf = tf_engine.infer(x_test)
t0 = time.time()
y_tf = tf_engine.infer(x_test)     # timed run
t1 = time.time()
print('Tensorflow time', t1 - t0)
verify(y_tf, y_keras)
tftrt_engine = TftrtEngine(frozen_graph, batch_size, 'FP32')
for i in range(warm_up):           # warm up the TF-TRT engine the same way
    y_tftrt = tftrt_engine.infer(x_test)
t0 = time.time()
y_tftrt = tftrt_engine.infer(x_test)  # timed run
t1 = time.time()
print('TFTRT time', t1 - t0)
verify(y_tftrt, y_keras)
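The warm-up-then-time pattern above can be factored into one helper so both engines are measured identically; averaging over several timed runs also reduces noise from a single call. This is just a sketch: `infer_fn` and `x` stand in for any engine's `infer` method and its input batch, and the names are hypothetical, not from the repo.

```python
import time

def benchmark(infer_fn, x, warm_up=10, runs=10):
    """Return mean latency (seconds) of infer_fn(x) after warm-up.

    warm_up iterations are discarded so one-time costs (graph
    compilation, engine build, caching) do not skew the result.
    """
    for _ in range(warm_up):       # discarded warm-up calls
        infer_fn(x)
    t0 = time.perf_counter()       # monotonic, higher resolution than time.time()
    for _ in range(runs):
        infer_fn(x)
    return (time.perf_counter() - t0) / runs

# Usage (assuming the engines from the snippet above):
# print('Tensorflow time', benchmark(tf_engine.infer, x_test))
# print('TFTRT time', benchmark(tftrt_engine.infer, x_test))
```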
The test results show that TensorRT does not accelerate inference compared with the plain TensorFlow path, and I don't know why.