TensorFlow Serving as the Model microservice #226

Closed
dtrawins opened this issue Sep 20, 2018 · 1 comment · Fixed by #234

Comments

@dtrawins (Contributor)

Are there any plans to enable prediction execution in Seldon using the TensorFlow Serving Docker image?
I couldn't find it mentioned in the examples, the documentation, or the current GitHub issues.
Is it on the roadmap, or do you see it as feasible to use the TF Serving API as the Seldon Model component?

@ukclivecox (Contributor)

Yes, we have work in progress to allow proxy models that call out to TensorFlow Serving and the NVIDIA TensorRT Inference Server. This will let users construct inference graphs, including multi-armed bandits and other complex components, where the models may be wrapped as Seldon containers in any language or may call out to other model-serving technologies.
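
For illustration, a minimal sketch of what such a proxy model might look like, assuming the Seldon Python wrapper convention (a class exposing `predict`) and TensorFlow Serving's REST predict API; the endpoint host, port, and model name below are placeholders, not part of the planned implementation:

```python
# Sketch of a "proxy model": a Seldon-style Python wrapper whose predict()
# forwards requests to an external TensorFlow Serving container over REST.
import numpy as np
import requests


class TfServingProxy:
    def __init__(self, endpoint="http://tfserving:8501", model_name="mymodel"):
        # Placeholder location of the TF Serving container and the served model.
        self.url = f"{endpoint}/v1/models/{model_name}:predict"

    def predict(self, X, features_names=None):
        # Seldon Python wrapper convention: predict(X, features_names).
        # Forward the request to TF Serving and return its predictions.
        payload = {"instances": np.asarray(X).tolist()}
        response = requests.post(self.url, json=payload)
        response.raise_for_status()
        return np.array(response.json()["predictions"])
```

Wrapped as a Seldon container, a class like this could then be placed anywhere a normal model node would go in an inference graph, alongside routers such as multi-armed bandits.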
