Add non-batching model support to grpc_image_client.py #126
Conversation
[Screenshot: grpc_image_client.py output for densenet_onnx vs. image_client.py as reference]
[Screenshot: grpc_image_client.py output for inception_graphdef vs. image_client.py as reference]
        metadata_response, config_response.config)

    supports_batching = max_batch_size > 0
    if not supports_batching and FLAGS.batch_size != 1:
        print("ERROR: This model doesn't support batching.")
raise Exception instead of print + exit
Updated
        metadata_response, config_response.config)

    supports_batching = max_batch_size > 0
    if not supports_batching and FLAGS.batch_size != 1:
        raise Exception("ERROR: This model doesn't support batching.")
Don't need "ERROR:"
Updated
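(The final revision presumably reads `raise Exception("This model doesn't support batching.")`, with the "ERROR:" prefix dropped per the review comment above.)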
Fix grpc_image_client.py to support models both with and without batching, and make sure it works with the densenet_onnx example, which does not have batching (vs. inception_graphdef, which does). The PR extends the fixes from https://github.com/triton-inference-server/client/pull/103/files
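For context, here is a minimal sketch of the pattern the PR applies, assuming the public `tritonclient.grpc` Python API. The URL, model name, batch size, and zero-filled image are illustrative stand-ins, not code taken from the PR:

```python
import numpy as np
import tritonclient.grpc as grpcclient

# Illustrative placeholders -- the real client reads these from FLAGS
# and from the model metadata/config returned by the server.
URL = "localhost:8001"
MODEL_NAME = "densenet_onnx"
BATCH_SIZE = 1

client = grpcclient.InferenceServerClient(url=URL)
metadata = client.get_model_metadata(MODEL_NAME)
config = client.get_model_config(MODEL_NAME).config

# In the Triton model config, max_batch_size == 0 means the model
# has no batch dimension at all.
supports_batching = config.max_batch_size > 0
if not supports_batching and BATCH_SIZE != 1:
    raise Exception("This model doesn't support batching.")

input_metadata = metadata.inputs[0]

# Stand-in for a preprocessed image; densenet_onnx expects FP32 CHW data.
image = np.zeros((3, 224, 224), dtype=np.float32)

# Only prepend the batch dimension when the model expects one.
if supports_batching:
    batched = np.stack([image] * BATCH_SIZE, axis=0)
else:
    batched = image

infer_input = grpcclient.InferInput(
    input_metadata.name, list(batched.shape), input_metadata.datatype)
infer_input.set_data_from_numpy(batched)
response = client.infer(MODEL_NAME, inputs=[infer_input])
print(response.as_numpy(metadata.outputs[0].name))
```

Note the `raise` in place of a `print` followed by `sys.exit`, matching the review feedback above.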