
NVIDIA inference server docs need to be updated to not use ksonnet #959

Open
jlewi opened this issue Jul 22, 2019 · 2 comments


@jlewi (Contributor) commented Jul 22, 2019

https://www.kubeflow.org/docs/components/serving/trtinferenceserver/

Docs still use ksonnet.

For serving, I think it's simpler if, rather than providing kustomize manifests, our docs just contain example YAML specs that people can copy and paste to create a deployment; a sketch of what that could look like is below.
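For illustration, a minimal sketch of one such copy-and-paste spec follows. It assumes the TensorRT Inference Server image published on NGC; the image tag, model repository path, and resource names here are placeholders, not values taken from the Kubeflow docs:

```yaml
# Illustrative sketch only: substitute a real image tag, model store
# path, and names for your cluster. Assumes a GPU node is available.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: trt-inference-server
  labels:
    app: trt-inference-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: trt-inference-server
  template:
    metadata:
      labels:
        app: trt-inference-server
    spec:
      containers:
      - name: trt-inference-server
        image: nvcr.io/nvidia/tensorrtserver:19.05-py3   # placeholder tag
        command: ["trtserver"]
        args: ["--model-store=gs://your-bucket/model-repository"]  # placeholder path
        ports:
        - containerPort: 8000   # HTTP inference endpoint
        - containerPort: 8001   # gRPC inference endpoint
        - containerPort: 8002   # metrics endpoint
        resources:
          limits:
            nvidia.com/gpu: 1   # schedule onto a GPU node
```

A matching Service exposing ports 8000 and 8001 would go alongside it. The point is that a user can paste and edit a spec like this directly instead of learning a templating tool.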

@pdmack Any interest in picking this up?


@jlewi jlewi added this to New in 0.6.0 via automation Jul 22, 2019

@jlewi jlewi added this to To do in ksonnet-turndown via automation Jul 22, 2019

@sarahmaddox (Collaborator) commented Aug 15, 2019

We've added a note to the doc saying that it still uses ksonnet and needs updating:
https://www.kubeflow.org/docs/components/serving/trtinferenceserver/#kubernetes-generation-and-deploy

It'd be good to get this updated before we archive the v0.6 branch of the docs.

@pdmack is this something you could take on?
