With reference to this comment explaining the difference between `TensorflowModel` and `Model`, I cannot find the implementation of the server-side code for `TensorflowModel` (the GitHub links given are broken).
I see that `tf_container/serve.py` was removed in the following change: aws/sagemaker-tensorflow-training-toolkit@12fd7ef
I also cannot find where the latest container-side implementation lives on GitHub.
Similarly, it would be useful to be able to check out the container-side implementation corresponding to the older `container/sagemaker/serve.py`.
The reason I'm asking is:
I want to export a model with `build_parsing_serving_input_receiver_fn` (which expects a serialized `tf.Example` at serving time), and this doesn't work well with the TF Serving REST API, since raw bytes can't be put into a JSON payload. So it works with `TensorflowModel` but not with `Model` (example). What I don't understand is why we can't use Python 3 with `TensorflowModel`, given that the TensorFlow Serving API library supports both Python 2 and Python 3.
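For concreteness, this is roughly the export path I mean; a minimal TF 1.x sketch, where the feature spec and toy model are placeholders for illustration, not anything taken from the SageMaker containers:

```python
import tensorflow as tf

# Hypothetical feature spec for illustration; replace with your model's features.
feature_spec = {"x": tf.FixedLenFeature(shape=[2], dtype=tf.float32)}

def model_fn(features, labels, mode):
    # Toy linear model, just so there is something to export.
    logits = tf.layers.dense(features["x"], units=1)
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions={"logits": logits})
    loss = tf.losses.mean_squared_error(labels, logits)
    train_op = tf.train.GradientDescentOptimizer(0.1).minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

estimator = tf.estimator.Estimator(model_fn)
estimator.train(
    input_fn=lambda: ({"x": tf.constant([[1.0, 2.0]])}, tf.constant([[1.0]])),
    steps=1)

# The receiver expects serialized tf.Example protos at serving time and
# parses them against feature_spec.
serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
    feature_spec)

# Writes a SavedModel under export/Servo/<timestamp>.
estimator.export_savedmodel("export/Servo", serving_input_fn)
```

A model exported this way has a string-typed input (serialized `tf.Example` bytes), which is exactly what doesn't fit into a JSON payload for the REST API.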
Ideally I would like to do something like what this example does, sending a serialized gRPC request (e.g. a `PredictRequest` proto) directly from the client, but I can't figure out how to do that cleanly (and hopefully without involving Python 2).
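A sketch of the client side I have in mind, assuming the tensorflow-serving-api package; the endpoint name, the model name, and whether the container actually accepts a raw serialized proto with this content type are all assumptions on my part:

```python
import boto3
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2

# Build a tf.Example matching the parsing serving input receiver above.
example = tf.train.Example(features=tf.train.Features(feature={
    "x": tf.train.Feature(float_list=tf.train.FloatList(value=[1.0, 2.0])),
}))

# Wrap the serialized example in a TF Serving PredictRequest proto.
# "examples" is the default receiver-tensor key that
# build_parsing_serving_input_receiver_fn registers in the signature.
request = predict_pb2.PredictRequest()
request.model_spec.name = "generic_model"  # assumed model name in the container
request.model_spec.signature_name = "serving_default"
request.inputs["examples"].CopyFrom(
    tf.make_tensor_proto([example.SerializeToString()], dtype=tf.string))

# Send the serialized proto straight to the SageMaker endpoint.
runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName="my-tf-endpoint",           # hypothetical endpoint name
    ContentType="application/octet-stream",  # assumption; depends on the container
    Body=request.SerializeToString(),
)
print(response["Body"].read())
```

This avoids JSON entirely, which is why it works for string-typed inputs, but I don't know which container (if any) deserializes this on the server side, hence the question.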