
Make the namespace the kubeconfig secret is written to configurable #82

Closed

GrahamDumpleton opened this issue Jul 11, 2021 · 4 comments

Comments

@GrahamDumpleton

GrahamDumpleton commented Jul 11, 2021

At present, it looks like when using --out-kube-config-secret, the secret lands in the current namespace if --target-namespace is not used, and in the target namespace when it is.

For the use case I was trying to implement, the latter caused problems: a deployment needed to mount the secret, but the deployment was being created back in the same namespace where the vcluster control plane was running. So having the secret created in the target namespace meant that couldn't work.

Things would be more flexible if the namespace the kubeconfig secret is created in could be specified, overriding the defaults. This would allow any namespace to be used (with the expectation that the user deploying vcluster sets up RBAC to allow creation in a distinct namespace), giving easy access, through mounting of the secret, to any in-cluster application no matter where it lives.
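For illustration, a minimal sketch of the RBAC a user might set up for this, assuming the secret should land in a namespace called app-namespace and the vcluster runs under a ServiceAccount named vc-my-vcluster in the my-vcluster namespace (all names here are hypothetical):

# Hypothetical sketch: namespaces and the ServiceAccount name are assumptions.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vcluster-kubeconfig-writer
  namespace: app-namespace          # where the kubeconfig secret should land
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create", "get", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: vcluster-kubeconfig-writer
  namespace: app-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: vcluster-kubeconfig-writer
subjects:
- kind: ServiceAccount
  name: vc-my-vcluster              # the vcluster's ServiceAccount (assumption)
  namespace: my-vcluster            # namespace the vcluster control plane runs in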

@FabianKramm
Member

FabianKramm commented Jul 12, 2021

@GrahamDumpleton thanks for creating this issue! Yeah, the main problem with this, as you mentioned, is the RBAC the user would need to set up themselves. To be honest, currently the preferred way is to just use:

kubectl exec my-vcluster-0 -n my-vcluster -c syncer -- cat /root/.kube/config

This lets you read the kubeconfig from any remote location that has access to the vcluster pod, which in my opinion is a better pattern than giving vcluster access to other namespaces.

Is that an option for you?

@GrahamDumpleton
Author

Relying on kubectl exec or vcluster connect means this action needs to be embedded in an init container or the main application container. The secret (when combined with #83 and #84) instead gives a totally declarative way of setting up a deployment which needs access to the virtual cluster: one set of configuration resources deploys the virtual cluster, and then creates a deployment for a separate application (e.g. an interactive terminal/IDE environment) which mounts the secret. The latter deployment will stall until the secret is created by the virtual cluster, and will then proceed. The deployment declares where the secret is mounted, and an environment variable on the deployment can set KUBECONFIG to that location.
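For illustration, a minimal sketch of such a deployment, assuming vcluster has written the kubeconfig into a secret named vcluster-kubeconfig in the same namespace (the names, image, and mount path are hypothetical):

# Hypothetical sketch: secret name, image, and mount path are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-terminal
  namespace: app-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dev-terminal
  template:
    metadata:
      labels:
        app: dev-terminal
    spec:
      containers:
      - name: terminal
        image: my-terminal-image        # placeholder application image
        env:
        - name: KUBECONFIG
          value: /opt/vcluster/config   # points at the mounted secret key
        volumeMounts:
        - name: kubeconfig
          mountPath: /opt/vcluster
          readOnly: true
      volumes:
      - name: kubeconfig
        secret:
          secretName: vcluster-kubeconfig   # secret written by vcluster (assumption)

Pods for this deployment stay in ContainerCreating until the secret exists, which gives the stalling-then-proceeding behaviour described above.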

This can be done now so long as everything is in the same namespace. But when you want to separate the namespace where the application needing the secret runs from the target namespace for the virtual cluster (where the secret would currently be created), you can't, without being able to say where the secret should be created.

In the bigger picture, setting up RBAC is a minor thing to do, and in our case the system as a whole already involves much more complicated RBAC, since each user is given their own vcluster and/or namespace to work in.

What complicates things is needing to create special init containers just to connect and wait on the vcluster to get the config, whereas relying on the fact that a deployment will wait until the secret exists is so much easier.

@FabianKramm
Member

@GrahamDumpleton thanks for the detailed explanation. I'm convinced; I think we can add a flag for this.

@matskiv
Contributor

matskiv commented Nov 2, 2022

This was implemented in #85 but we forgot to close the issue, so closing it now :)

matskiv closed this as completed Nov 2, 2022