Support adding local cluster to make k8s deployment work out of the box #1203
After a discussion with the internal team, we are not going to support this at present. The reasoning is as follows:
Thanks for taking this seriously. My point is that the user will get distracted very easily when deploying a new k8s cluster. The public cloud is not the first choice in many cases (mostly because of the cost). And as you said, people can deploy their own cluster by themselves, but they will also lose focus while figuring out how to do it. In the quick start docs, you could add a k8s cluster install link to guide users through it.
The current version is not yet robust enough to allow users to arbitrarily modify Walrus's internal k8s resources. And it is not certain that future versions will still use k8s as the basis for our orchestration. @orangedeng
Yeah, I get that. I also don't think the embedded k3s is a good idea for running Terraform jobs. So adding a k3d/k3s quick start link to the docs should be enough.
We've revisited the issue and plan to support it in the next release.
Environment
Test Result: Docker
Is your enhancement related to a problem? Please describe.
Describe the solution you'd like
Add a flag to support adding local k3s to the default project, which makes it easier to try out.
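Until such a flag lands, the quick-start path discussed above boils down to the user creating a local cluster themselves and pointing Walrus at it. A minimal sketch of that manual workaround, assuming the user installs k3d (the cluster name `walrus-demo` is illustrative, not anything Walrus defines):

```shell
#!/bin/sh
# Sketch: create a local k3d cluster to try Walrus's k8s deployment.
# Assumes Docker is running; k3d and kubectl must be installed by the user.
set -e

if ! command -v k3d >/dev/null 2>&1; then
  echo "k3d not found; install it first (see https://k3d.io)" >&2
  exit 0
fi

# Create a small single-server cluster; k3d writes the kubeconfig
# entry and switches the current context to it by default.
k3d cluster create walrus-demo --servers 1 --agents 1

# Verify the cluster is reachable before adding it to Walrus.
kubectl cluster-info
kubectl get nodes
```

With the requested flag, Walrus would effectively perform the equivalent of these steps itself and register the resulting cluster in the default project, so a new user never has to leave the quick start.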