TensorFlow C++ API is significantly slower than Python API in inference #22852
Comments
Is this after ignoring the first few inferences? They are usually very time-consuming.
Yes, performance was measured after warming up.
Could be model/hparam specific; for my model I see equivalent inference times with the C++ API and Python.
Does your model run on an ARM processor? As I understand it, the Python API ultimately calls the same machine code as the C++ API, so it makes sense that the inference times are similar.
Ah! Missed that part, my model runs on GPU.
If you are referring to the TensorFlow Object Detection API, can you file an issue in that repo instead?
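For reference, a minimal sketch of how a warm-up-aware measurement can be structured. The `run_inference` stub below is a hypothetical stand-in for an actual `session.run(...)` call, not code from the original report; `time.time` is used so the sketch works on both Python 2.7 (as in this report) and Python 3:

```python
import time

def benchmark(fn, warmup=5, runs=50):
    """Time fn, discarding the first `warmup` calls, since graph and
    kernel initialization typically dominate the earliest inferences."""
    for _ in range(warmup):
        fn()
    start = time.time()
    for _ in range(runs):
        fn()
    return (time.time() - start) / runs  # mean seconds per call

# Hypothetical stand-in workload; replace with a real session.run(...) call.
def run_inference():
    sum(i * i for i in range(10000))

mean_s = benchmark(run_inference)
print("mean inference time: %.3f ms" % (mean_s * 1e3))
```

Running the same harness against both the Python and C++ entry points (with identical warm-up and run counts) makes the comparison apples-to-apples.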
@RothLuo Can you please answer?
It has been 14 days with no activity and the
Closing due to lack of recent activity. Please update the issue when new information becomes available, and we will reopen the issue. Thanks!
System information
- **Have I written custom code (as opposed to using a stock example script provided in TensorFlow)**: No
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Lubuntu 16.04, Linux kernel 4.4.103
- **TensorFlow installed from (source or binary)**: Python API is the official binary release; the C++ API was compiled from source via Makefile with all available optimization flags and linked as a static library
- **TensorFlow version (use command below)**: 1.10.1
- **Python version**: 2.7
- **CUDA/cuDNN version**: None (CPU only, no GPU)
- **Bazel version**: N/A (did not use Bazel to build)
- **GPU model and memory**: N/A (did not use a GPU for inference)
- **Mobile device**: platform with an RK3399 ARM processor
Steps to reproduce:
Any thoughts would be appreciated!