
🐛 remove --all flag on command k3d get kubeconfig #268

Conversation

@RouxAntoine (Contributor) commented Jun 4, 2020

Indeed, to be uniform with `k3d get cluster`, the `k3d get kubeconfig` command should return the kubeconfig for all clusters if no arguments are specified. The --all flag is no longer required.

@iwilltry42 (Member)

Hi @RouxAntoine, I saw your comment on the other PR too late...
I'm not sure this is a good idea, to be honest, as `get kubeconfig` now updates the existing kubeconfig by default instead of creating a new file (as v1.x did).
So, if you forget to type the cluster name, the command would (with this change) fill up your default kubeconfig with the details of all the k3d clusters you have running, which you may not want.
I think we should keep it as it is: the least intrusive. What do you think?

@RouxAntoine (Contributor, Author) commented Jun 4, 2020

Yes, indeed, I didn't think of this use case. Not a problem for me, I'll close this. ☺

@RouxAntoine closed this Jun 4, 2020
@RouxAntoine (Contributor, Author) commented Jun 4, 2020

@iwilltry42 I've kept thinking about this. Are there strong reasons to update the kubeconfig file by default?
I mean, maybe the default action could be to write to stdout, and only with some option, for example `-o kubeconfig` (like the current custom path), would the file be edited.
Indeed, the action of writing into the kubeconfig file should be intentionally requested by the user.

@iwilltry42 (Member)

Let's just discuss this 👍
What are your arguments for writing to stdout by default?
Updating the default kubeconfig (i.e. the one the user is currently using) was a highly requested feature since 1.x, and I think it's super convenient.
So I made writing the kubeconfig to a different path optional, which always requires that path to exist and requires the user to set the KUBECONFIG environment variable to the new path afterwards, thus "messing" with the environment.

@RouxAntoine (Contributor, Author) commented Jun 5, 2020

Thanks for your attention to my remark 😊
My arguments for stdout are:

  • no side effects on user files: like the `k3d completion` command
  • one command, one action: `k3d get kubeconfig` just returns the kubeconfig; to write it, maybe another command could take the kubeconfig on stdin and merge it with the current KUBECONFIG file (I think this feature would be appropriate in https://github.com/ahmetb/kubectx).
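The "merge from stdin" idea above could look something like the following sketch. This is not k3d's actual implementation; the `merge_kubeconfig` helper and the plain-dict kubeconfig structures are hypothetical stand-ins (real kubeconfigs are YAML documents), used only to illustrate appending new clusters/contexts/users without clobbering existing entries.

```python
# Hypothetical sketch: merge an incoming kubeconfig document into an
# existing one by appending entries whose names are not already present.
# Plain dicts stand in for parsed YAML to keep the example self-contained.

def merge_kubeconfig(existing, incoming):
    """Return a new kubeconfig dict with incoming entries appended,
    skipping entries whose name already exists in the target."""
    merged = {k: list(v) if isinstance(v, list) else v
              for k, v in existing.items()}
    for section in ("clusters", "contexts", "users"):
        have = {entry["name"] for entry in merged.get(section, [])}
        for entry in incoming.get(section, []):
            if entry["name"] not in have:
                merged.setdefault(section, []).append(entry)
    return merged

base = {"clusters": [{"name": "k3d-a", "cluster": {}}],
        "contexts": [], "users": []}
new = {"clusters": [{"name": "k3d-b", "cluster": {}}],
       "contexts": [], "users": []}
print([c["name"] for c in merge_kubeconfig(base, new)["clusters"]])
# → ['k3d-a', 'k3d-b']
```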

But maybe I am wrong.
P.S.: In my use case my KUBECONFIG is of the form .kube/domain1.kubeconfig:.kube/domain2.kubeconfig, and this case is correctly handled (good point 👌).

But if a lot of feedback asks for it, why not 👍
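For reference, the colon-separated KUBECONFIG form mentioned in the P.S. can be sketched as follows. The path values are illustrative; this mirrors how kubectl splits the variable into a list of files on Linux/macOS (Windows uses `;` as the separator).

```python
# Illustration only (not k3d code): split a colon-separated KUBECONFIG
# value into individual kubeconfig file paths, ignoring empty segments.
kubeconfig = ".kube/domain1.kubeconfig:.kube/domain2.kubeconfig"
paths = [p for p in kubeconfig.split(":") if p]
print(paths)  # → ['.kube/domain1.kubeconfig', '.kube/domain2.kubeconfig']
```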

@iwilltry42 (Member)

Your arguments make total sense 👍

I still think that most users (at least according to the feedback I've got so far) want k3d to manage the default kubeconfig without having to provide any additional flags (myself, I honestly don't mind too much).

For all the other more "advanced" users, there are flags to redirect the output (either to a new/existing kubeconfig file) or to stdout (k3d get kubeconfig mycluster -o -).

I don't think that you're wrong and am sure that your points make sense for several users, but unfortunately, we probably won't get much more feedback on this to shift the current decision (which is based on old v1.x feedback).

P.S.: Thanks for catching that bug in kubeconfig.go. I just fixed it in commit bdb5a1f 😁

@RouxAntoine deleted the fix/uniformize-getkubeconfig-flags branch June 10, 2020 17:32