Documentation for how estimator/graph_transformations/tf-serving work together #1078
Comments
@gautamvasudevan - Hi, any update on this documentation?
No. @lamberta any thoughts on this?
I've seen an internal doc that addresses some of this (b/116674557), but nothing that is ready to publish.
Looking for this as well.
@lamberta Anything on this you could share? The ideal scenario would be documentation on how to:
I am able to accomplish many of these steps independently:
but pruning / transforming the output of an estimator, and generating a valid servable from it, would be great.
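In TF 1.x, the freeze-and-transform steps mentioned above were typically chained on the command line: `freeze_graph` converts the Estimator's exported SavedModel into a frozen GraphDef, and the Graph Transform Tool (`transform_graph`, built from `tensorflow/tools/graph_transforms`) prunes and optimizes it. A minimal sketch of those invocations, built as argument lists; every path, node name, and transform choice below is an illustrative assumption, not a value from this thread:

```python
# Hedged sketch of the TF 1.x freeze -> transform chain as command-line
# invocations. All paths and node names are placeholder assumptions.

def freeze_cmd(saved_model_dir, frozen_pb, output_nodes):
    """Build the freeze_graph call: SavedModel -> frozen GraphDef.

    freeze_graph ships with TF 1.x and is deprecated in TF 2.x.
    """
    return [
        "python", "-m", "tensorflow.python.tools.freeze_graph",
        "--input_saved_model_dir", saved_model_dir,
        "--output_graph", frozen_pb,
        "--output_node_names", ",".join(output_nodes),
    ]

def transform_cmd(in_pb, out_pb, inputs, outputs, transforms):
    """Build the Graph Transform Tool call: prune/fold the frozen graph."""
    return [
        "transform_graph",
        "--in_graph=" + in_pb,
        "--out_graph=" + out_pb,
        "--inputs=" + ",".join(inputs),
        "--outputs=" + ",".join(outputs),
        "--transforms=" + " ".join(transforms),
    ]

if __name__ == "__main__":
    # Hypothetical export directory and tensor names for illustration only.
    freeze = freeze_cmd("export/1539284965", "frozen.pb", ["scores"])
    transform = transform_cmd(
        "frozen.pb", "optimized.pb",
        inputs=["input_example_tensor"], outputs=["scores"],
        transforms=["strip_unused_nodes",
                    "fold_constants(ignore_errors=true)",
                    "fold_batch_norms"],
    )
    print(" ".join(freeze))
    print(" ".join(transform))
```

Note that a frozen, transformed GraphDef is not directly servable; it has to be re-wrapped as a SavedModel (with a signature) before TF Serving will load it, which is exactly the glue step this issue asks to see documented.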
The doc I saw became this blog post: Optimizing TensorFlow Models for Serving. We're building out the TFX section, which will have more pipelines for serving: https://www.tensorflow.org/tfx/
Thank you @lamberta. |
@lamberta thanks for posting |
Please note that the above guide is for TF 1.x; most of the APIs it relies on, such as sessions and freeze_graph, are deprecated now. See here
System information
Describe the Problem
The documentation describing how to create and serve a custom Estimator, how to serve a TensorFlow model in general, and how to perform graph transforms is very helpful, each on its own, but it is unclear how all these components fit together in the same ecosystem. This Google Cloud documentation on deploying models seems to suggest this (create -> transform -> serve) is the intent; I just cannot find any documentation on how to:
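For the final "serve" stage of the create -> transform -> serve pipeline, TensorFlow Serving loads the exported SavedModel from a versioned directory, optionally driven by a model config file. A minimal sketch of such a config, where the model name and base path are illustrative assumptions rather than values from this issue:

```
model_config_list {
  config {
    # Hypothetical name and path for illustration only.
    name: "my_estimator_model"
    base_path: "/models/my_estimator_model"
    model_platform: "tensorflow"
  }
}
```

The server watches `base_path` for numbered subdirectories (e.g. the timestamped directories that an Estimator export produces) and serves the highest version it finds, which is why the transformed graph must end up back inside a valid SavedModel layout.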