Very slow performance on extracting gRPC request results using .float_val #1725
Comments
@rmothukuru Sorry for the delay in responding. The code I used (with fast results): I don't know how to proceed, as my custom training follows almost the same pipeline.config as the original, so there's nothing different in the training process.
@rmothukuru @yimingz-a Any updates on this?
The performance issue with .float_val appears to be specific to your custom-trained model. Since the issue doesn't persist with pre-trained weights, I would suggest you compare the PredictResponse object from your trained model with the one from the model with pre-trained weights. If the issue persists, please create a new question on StackOverflow with the tags "tensorflow" and "object-detection". Thank you!
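One way to do the comparison suggested above is to inspect how each TensorProto in the response is populated. The field names below are real TensorProto fields; the output name 'detection_boxes' is carried over from this issue, and `response` is assumed to be a PredictResponse you already obtained. A minimal sketch:

```python
def describe_output(response, name='detection_boxes'):
    """Print how a PredictResponse output tensor is populated."""
    proto = response.outputs[name]  # a tensorflow.TensorProto
    print('dtype:', proto.dtype)
    print('shape:', [d.size for d in proto.tensor_shape.dim])
    # Values may arrive packed as raw bytes ...
    print('tensor_content bytes:', len(proto.tensor_content))
    # ... or in the repeated scalar field, which is slow to read
    # element by element from Python.
    print('float_val entries:', len(proto.float_val))
```

If the two responses differ in which field carries the values (tensor_content vs. float_val), that alone could explain a large difference in extraction time.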
This issue has been marked stale because it has had no recent activity for 7 days. It will be closed if no further activity occurs. Thank you.
This issue was closed due to lack of activity after being marked stale for the past 7 days.
For some reason, the time taken to extract results using .float_val is extremely high.
Scenario example along with its output:
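The original snippet and its output were not preserved in this copy of the issue. A minimal sketch of the kind of timing comparison being described, assuming a local server on port 8500, a TF1-style serving signature with input name 'inputs', and a dummy image (all assumptions, not from the original), might look like:

```python
import time

import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

# Assumed server address and model name; adjust for your deployment.
channel = grpc.insecure_channel('localhost:8500')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'faster_rcnn_resnet101'
request.model_spec.signature_name = 'serving_default'
# Dummy uint8 image batch; the input name depends on how the model
# was exported ('inputs' is assumed here).
image = np.zeros((1, 600, 600, 3), dtype=np.uint8)
request.inputs['inputs'].CopyFrom(tf.make_tensor_proto(image))

t0 = time.time()
response = stub.Predict(request, timeout=30.0)
print('prediction:', time.time() - t0, 's')

t0 = time.time()
# The slow extraction path reported in this issue.
boxes = np.array(response.outputs['detection_boxes'].float_val)
boxes = boxes.reshape(-1, 100, 4)
print('extraction:', time.time() - t0, 's')
```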
TensorFlow Serving is running an object detection model from TensorFlow's Object Detection API (faster_rcnn_resnet101). As the reported timings show, extracting the detected boxes takes longer than the prediction itself.
The detected boxes have shape [batch_size, 100, 4], where 100 is the maximum number of detections.
As a workaround I can lower the maximum number of detections, which significantly decreases the extraction time, but it remains unnecessarily high (in my view).
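Another commonly suggested approach (an assumption here, not something confirmed in this thread) is to convert the whole TensorProto in one bulk operation rather than reading .float_val element by element. A sketch, assuming the `response` object from the earlier example:

```python
import numpy as np
import tensorflow as tf

proto = response.outputs['detection_boxes']

# tf.make_ndarray decodes tensor_content with a single buffer read when
# that field is populated, avoiding per-element protobuf accessor cost.
boxes = tf.make_ndarray(proto)

# If the values were sent in float_val instead, one bulk copy is still
# cheaper than repeated Python-level indexing into the repeated field.
if not proto.tensor_content:
    shape = [d.size for d in proto.tensor_shape.dim]
    boxes = np.asarray(proto.float_val, dtype=np.float32).reshape(shape)
```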
I'm running tensorflow/serving:2.3.0-gpu as a Docker container along with tensorflow-serving-api==2.3.0.