Proposal for deleting contexts: kubectx -d NAME #23
Comments
@pswenson @thomaswo @nealf @laszlocph @cyakimov @gtseres Sorry to ping y'all; since you upvoted this feature request, I have a question for you. Would you like to have
Let me know which one is preferable to you, and please explain why. There's also an option to provide both.
I would love to see this. I too have many (>20) entries in my kubeconfig, and I wish I could delete some of them without editing my .kube/config.
How about the following workflow:
@jfchevrette I think we're both on the same page because we're both using GKE, and GKE has a 1:1:1 mapping. I don't intend this tool to "manage" contexts; it's meant for convenience tasks, primarily "switching" contexts (renaming and deleting happen to be the other frequent tasks in my GKE usage). Therefore, I think there should be only one delete command. (I probably won't add a confirmation prompt; I assume users know what they're doing, and if their kubeconfig entries are valuable, they shouldn't be using this to delete them.)
How about this: we can have only one delete mode, which deletes the corresponding "user"/"cluster" entry only if it appears exactly once in the kubeconfig. So if a "user"/"cluster" is shared by multiple contexts, it can remain, and can be deleted when the last context entry using it is deleted. This would still accommodate GKE, while not messing with users' stuff if they are reusing the "user"/"cluster". But the more I think about it, if people don't have a 1:1:1 mapping in their kubeconfig, they shouldn't use this feature.
@ahmetb +1 I like your proposal a lot. I actually manage OpenShift clusters rather than GKE clusters, but it shouldn't matter much. I also cannot think of a reason a kubeconfig wouldn't have a 1:1:1 mapping unless it was modified through other means. I'm still unsure if there is a clean way to verify the 1:1:1 mapping with pure bash, or if kubectx would have to depend on an external tool.
That check can be done easily. Furthermore, kubectl lets us delete "cluster" entries, but I currently don't see a way to delete "user" entries, so I filed kubernetes/kubectl#396. In the meantime, I'll try a workaround.
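The reference-counting check discussed above can be sketched in pure bash. This is a hedged illustration, not kubectx's actual implementation: in the real tool the list of user names would come from something like `kubectl config view -o jsonpath='{.contexts[*].context.user}'`; here that output is simulated with a fixed string, and `count_refs` is a hypothetical helper name.

```shell
#!/usr/bin/env bash
# Simulated output of `kubectl config view -o jsonpath='{.contexts[*].context.user}'`.
# In a real kubeconfig this is the space-separated list of user names referenced
# by each context entry.
users_in_contexts="gke-user shared-user shared-user"

# Count how many context entries reference the given "user" name.
# If the count is 1, the user entry is safe to cascade-delete along
# with its context; if >1, other contexts still depend on it.
count_refs() {
  tr ' ' '\n' <<< "$users_in_contexts" | grep -Fxc -- "$1"
}

count_refs shared-user   # prints 2: still referenced by another context
```

The same approach works for "cluster" entries by swapping the jsonpath to `{.contexts[*].context.cluster}`.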
So I prototyped some stuff in #38; feel free to give it a try.
While I hate unused/leaked stuff in kubeconfig files, since I use GKE, I know that I can delete the entire file and regenerate it. I think this will work fine, and I intend to merge the change.
LGTM! After reviewing the changes in #38, I updated my local copy and deleted a bunch of contexts that I wasn't using any more, some with odd naming/characters, and everything went as expected, no errors. There was only one minor issue.
@jfchevrette I think we can improve that later. For v0.6.0 we can consider:
I'm really sad about all the context entries that accrued in my global KUBECONFIG file after using GKE for a while.

I think we can have a shorthand `kubectx -d foo` that removes that `context`, and the associated `cluster` and `user` entries in the KUBECONFIG file.

Currently `kubectl config delete-context` doesn't offer a cascading deletion of the associated `user` and `cluster` entries, but I think this command should assume people don't edit the kubeconfig file manually, or if they do, they wouldn't use `kubectx -d`. This also assumes there's a 1:1:1 mapping between "contexts", "users", "clusters" entries.