Investigate using tensorflow serving's built-in http server #896

Closed
lluunn opened this issue May 31, 2018 · 1 comment

@lluunn (Contributor) commented May 31, 2018

tensorflow/serving#902

https://github.com/tensorflow/serving/blob/master/tensorflow_serving/model_servers/http_server.h
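For context, a minimal sketch of what calling the built-in HTTP server directly looks like, assuming a model named half_plus_two is served with --rest_api_port=8501; the host, port, model name, and inputs below are placeholders, not anything from this issue:

```python
import json
import requests  # third-party HTTP client

# TensorFlow Serving's built-in REST API exposes prediction at
# /v1/models/<model_name>:predict on the port passed via --rest_api_port.
SERVER = "http://localhost:8501"   # assumed host/port
MODEL = "half_plus_two"            # assumed model name

def predict(instances):
    """Send a predict request to the built-in HTTP server and return the outputs."""
    resp = requests.post(
        f"{SERVER}/v1/models/{MODEL}:predict",
        data=json.dumps({"instances": instances}),
    )
    resp.raise_for_status()
    return resp.json()["predictions"]

if __name__ == "__main__":
    print(predict([1.0, 2.0, 5.0]))
```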

@yupbank (Member) commented Jun 26, 2018

I tried it a bit. I think the C++ server already provides most of what we offer, except that it is stricter about data types and less friendly in its error messages. I also ran some benchmarks with the same model: the Python version adds roughly 10 ms of overhead.
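For reference, a rough sketch of the kind of side-by-side latency comparison described above, assuming the existing Python proxy and TF Serving's built-in server expose predict endpoints on ports 8000 and 8501 respectively; the URLs, payload, and request count are illustrative, not taken from the actual benchmark:

```python
import json
import time
import requests

PAYLOAD = json.dumps({"instances": [[1.0, 2.0, 3.0]]})  # illustrative request body

def mean_latency_ms(url, n=200):
    """Average wall-clock latency of n sequential predict requests against url."""
    start = time.perf_counter()
    for _ in range(n):
        requests.post(url, data=PAYLOAD).raise_for_status()
    return (time.perf_counter() - start) / n * 1000.0

# Assumed endpoints: the Python proxy and TF Serving's built-in REST server.
python_ms = mean_latency_ms("http://localhost:8000/model/mymodel:predict")
cpp_ms = mean_latency_ms("http://localhost:8501/v1/models/mymodel:predict")
print(f"python proxy: {python_ms:.1f} ms, built-in server: {cpp_ms:.1f} ms")
```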

lluunn self-assigned this Jun 28, 2018
surajkota pushed a commit to surajkota/kubeflow that referenced this issue Jun 13, 2022
…beflow#896)

* Istio RBAC roles need to include the API group networking.istio.io.
* Otherwise we won't be able to create VirtualServices inside notebooks.
  We want to do this to deploy things like the mnist frontend and TensorBoard
  from notebooks.

* Add an option to the regenerate tests script to use an environment
  variable to explicitly set the name of the origin repository.
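The first bullet of that commit message is about Kubernetes RBAC: the roles granted to notebooks must list the networking.istio.io API group before VirtualServices can be created from a notebook. As an illustration only (not the manifest from that commit), the missing rule would look roughly like this, expressed as a Python dict following the Kubernetes RBAC schema:

```python
# Illustrative only: the shape of the RBAC rule the commit message refers to.
# Field names (apiGroups, resources, verbs) follow the Kubernetes RBAC schema;
# the verb list here is an assumption, not copied from the commit.
istio_virtualservice_rule = {
    "apiGroups": ["networking.istio.io"],  # the API group the roles were missing
    "resources": ["virtualservices"],
    "verbs": ["get", "list", "watch", "create", "update", "patch", "delete"],
}
```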