
RFC: CLI UX Improvements #850

Closed
BenTheElder opened this issue Sep 11, 2019 · 18 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/active Indicates that an issue or PR is actively being worked on by a contributor. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

@BenTheElder
Member

Recently we landed some changes to revamp the logging / debugging experience and to ensure that status messages can be suppressed (-q / --quiet). However, I think kind still has plenty to improve on this front.

In particular I propose the following changes to improve the usability of kind, some of which may be slightly breaking:

Drop export KUBECONFIG=$(kind get kubeconfig-path)

The kind create cluster command should accept a --kubeconfig flag and the KUBECONFIG environment variable to determine which file to export the cluster kubeconfig to. We should merge the generated cluster config into this file and set the default context to point to the newly created cluster.
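The file-selection rule this implies can be sketched roughly as follows (this is an illustrative sketch, not kind's actual code; `kubeconfig_target` is a hypothetical name): an explicit --kubeconfig wins, then the first entry of KUBECONFIG (which kubectl treats as a colon-separated list), then the default path.

```shell
# Sketch (assumed behavior) of kubeconfig file selection for kind create cluster:
# --kubeconfig flag > first entry of $KUBECONFIG > ~/.kube/config
kubeconfig_target() {
  local flag="${1:-}"
  if [ -n "$flag" ]; then
    echo "$flag"
  elif [ -n "${KUBECONFIG:-}" ]; then
    echo "${KUBECONFIG%%:*}"   # first entry of the colon-separated list
  else
    echo "$HOME/.kube/config"
  fi
}
```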

Existing Feature Requests

PROS:

  • Users are already used to this with kops, minikube, gcloud (GKE), ...
  • Less effort to use (kind create cluster is now the entire creation flow)
  • Easier to switch shells.
  • Works out of the box on every OS.

CONS:

  • Need to be more explicit for multicluster testing. HOWEVER, if you set KUBECONFIG to a distinct file before each kind create cluster you can still keep clusters in separate files.
  • Slight complexity increase in kind
  • client-go direct dependency.
    • We already have an indirect dependency via kustomize
    • I think this can be mitigated by documenting the use of replace directives in go.mod for library consumers, and avoiding depending on newer client-go features. Worst case we re-implement or fork the bits we need
  • This is a breaking change
    • This will be simple to update though:
      • Just export KUBECONFIG with your preferred path before calling kind create cluster instead of after. If you mimic the old path behavior this will work with old and new kind versions. Some users already hardcoded these paths in various places.
      • We can leave kind get kubeconfig-path for a migration period, but have it accept the same inputs and point to the path we'd choose in kind create cluster
    • v0.6 will already slightly break the CLI to implement the revamped logging

Support KIND_NAME environment variable

We should allow re-using entire kind scripts without passing the --name flag to each command by also accepting a KIND_NAME environment variable, which specifies the cluster name when the --name flag is not given.
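The proposed precedence can be sketched like this (a sketch of the assumed behavior, not kind's actual code; `cluster_name` is a hypothetical helper name): an explicit --name wins, then KIND_NAME, then the default cluster name "kind".

```shell
# Sketch (assumed behavior) of cluster-name resolution:
# --name flag > KIND_NAME env var > default "kind"
cluster_name() {
  local flag_name="${1:-}"
  if [ -n "$flag_name" ]; then
    echo "$flag_name"
  elif [ -n "${KIND_NAME:-}" ]; then
    echo "$KIND_NAME"
  else
    echo "kind"
  fi
}
```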

Existing Feature Requests

PROS:

  • Very simple to implement and maintain
  • Establishes a pattern of KIND_* environment variables for environment options as opposed to cluster configuration.
    • In the future we can do things like KIND_EXPERIMENTAL_NODE_BACKEND=podman

CONS:

  • Technically more complexity, if not much
  • Possible namespace collision? Seems unlikely
@BenTheElder BenTheElder added the kind/feature Categorizes issue or PR as related to a new feature. label Sep 11, 2019
@BenTheElder BenTheElder added this to the v0.6.0 milestone Sep 11, 2019
@BenTheElder BenTheElder self-assigned this Sep 11, 2019
@BenTheElder BenTheElder added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Sep 11, 2019
@BenTheElder
Member Author

Most of these changes are actually pretty simple to implement. Pending feedback, I hope we can land these for v0.6.0 and get all of the CLI usage tweaks into one release rather than repeatedly breaking the CLI.

@mauilion

I am very much +1 on the KIND_NAME idea. I think maybe KIND_CLUSTER_NAME keeps us from conflicting with an image name in the future.

A word of caution on the Drop Export idea: we are going to constantly have conflicts on naming as folks delete old kind clusters and create new ones. The conflicts are across all three structs in the kubeconfig: user, cluster, and context. Given that, we may want to determine some "correct" behaviour for overwriting (like backing up the original to .kube/config.orig)

What's the behavior if the cluster is deleted?
Do we randomly select the next context to default?
If I create multiple kind clusters can I just switch between the contexts for them?

Overall I am in favor!

@BenTheElder
Member Author

I am very much +1 on the KIND_NAME idea. I think maybe KIND_CLUSTER_NAME keeps us from conflicting with an image name in the future.

I thought about this, but I think it should match the flag for both, so it would be KIND_IMAGE for --image if we choose to support this in the future. It's also shorter :^)

A word of caution on the Drop Export idea: we are going to constantly have conflicts on naming as folks delete old kind clusters and create new ones. The conflicts are across all three structs in the kubeconfig: user, cluster, and context. Given that, we may want to determine some "correct" behaviour for overwriting (like backing up the original to .kube/config.orig)

We should ensure that kind clusters only conflict with kind clusters with the same --name; we can patch the entries in kubeconfig to be keyed with strings that are specific to kind. E.g. my ~/.kube/config with some GKE clusters (for prow.k8s.io) has contents like:

contexts:
- context:
    cluster: gke_gob-prow_us-west1-a_prow

<snipped details>

users:
- name: gke_gob-prow_us-west1-a_prow
  user:

<snipped details>

s/gke/kind/ and use the same idea, and I think we're good there. See also #112
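Concretely, a kind-scoped key mirroring the gke_* convention could look like this (the exact naming scheme here is illustrative, not a final decision; `kind_key` is a hypothetical helper):

```shell
# Hypothetical kind-scoped kubeconfig key, mirroring the gke_* pattern:
# the context, cluster, and user entries for one cluster all share this key,
# so deleting that cluster can drop exactly those entries and nothing else.
kind_key() {
  echo "kind-$1"
}

# Illustrative cleanup in kubectl terms (not executed here):
# kubectl config delete-context "$(kind_key test)"
# kubectl config delete-cluster "$(kind_key test)"
```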

What's the behavior if the cluster is deleted?

We should record the file we wrote to in the node labels maybe? Alternatively we just check the current KUBECONFIG per client-go following the same rules as create.

For the file we locate, we then drop the keys that match that kind cluster.

If there are no keys left we may delete the file. It is probably safest to just drop those keys though, and I would lean that way initially unless persuaded otherwise (possibly prior art from other tools?)

Do we randomly select the next context to default?

This is a good question, we should inspect what the other common cluster provisioners do here!
They must have some solution? I admit this one I hadn't thought about much yet 🙃

If I create multiple kind clusters can I just switch between the contexts for them?

I don't see why not, as long as we do something about those keys (see above) ?

@neolit123
Member

i have similar concerns to @mauilion about the export change.

if KUBECONFIG points to a non-kind cluster, kind create cluster probably shouldn't merge its new kubeconfig into that?

how will the merge play with the fact that KUBECONFIG supports a list of paths? should the merge only happen in the first path?

if KUBECONFIG does not point to kind config or is empty, should a message about export KUBECONFIG=$(kind get kubeconfig-path) be printed? in future versions this message can be removed; can this be the graceful transition for the deprecation of kind get kubeconfig-path?

@mauilion

mauilion commented Sep 11, 2019

Good point on env var naming KIND_IMAGE Is better :)

I'm okay with merging the two. There is another possibility that we could document for users worried about merge:

The KUBECONFIG env var can take multiple paths and they will be merged (with the first kubeconfig "winning" and holding the active context selection). This first kubeconfig is also the only writeable one; the rest are read-only. If the user sets the var to

export KUBECONFIG=/path/to/kind/kubeconfig:~/.kube/config

Then we'd populate the kind kubeconfig, but they'd still be able to switch context to the original config. If KUBECONFIG gets unset, then the original, unmodified file is the active config.

I think we can document this behavior effectively.

@BenTheElder
Member Author

BenTheElder commented Sep 11, 2019

EDIT: ninjaed by @mauilion! 😅

if KUBECONFIG points to a non-kind cluster, kind create cluster probably shouldn't merge its new kubeconfig into that?

Why not? The other tools do. KUBECONFIG is designed to support multiple clusters.

how will the merge play with the fact that KUBECONFIG supports a list of paths. should the merge only happen in the first path?

Yes it should write to the first entry in the list.
Related: kubernetes/kops#1000 kubernetes/kops#1050

if KUBECONFIG does not point to kind config or if empty, should a message about export KUBECONFIG=$(kind get kubeconfig-path) be printed?

We need to print a message about the config being exported but the emptiness doesn't seem relevant?

We may create the file if it doesn't exist, and client-go specifies the default location if not set. Kind already looks up and writes to the default directory.

In future versions this message can removed; can this be the graceful transition for the deprecation of kind get kubeconfig-path?

We should tell the user that we wrote the KUBECONFIG and to use kubectl or link to some usage docs still, but instead of with the command we say something roughly like: Exported KUBECONFIG to /home/neolit123/.kube/config and set context to kind_cluster_kind and You can now use your cluster with 'kubectl cluster-info'.

We could keep the config path command accurate if we record the exported path in cluster state. We probably don't need to long-term though.

@neolit123
Member

Why not? The other tools do. KUBECONFIG is designed to support multiple clusters.

during testing various things, my use case (which is certainly not that important), is that i create kubeadm node/cluster on the host and point KUBECONFIG to that. then from time to time i also create a kind cluster on the same host.

i'd consider the merge of the kind and kubeadm configs in such a case as undesired.
it will force me to first unset the KUBECONFIG.

a similar problem can happen if the user has some KUBECONFIG on their host that controls a production cluster and then they decide to test something with kind. this will inject kind cluster data in the same production KUBECONFIG.

We need to print a message about the config being exported but the emptiness doesn't seem relevant?
We may create the file if it doesn't exist, and client-go specifies the default location if not set. Kind already looks up and writes to the default directory.

what if /home/user/.kube/config is a valuable file, but KUBECONFIG is temporarily unset and not pointing to it? would kind stomp that file?

We should tell the user that we wrote the KUBECONFIG and to use kubectl or link to some usage docs still, but instead of with the command we say something roughly like: Exported KUBECONFIG to /home/neolit123/.kube/config and set context to kind_cluster_kind and You can now use your cluster with 'kubectl cluster-info'.

or perhaps Wrote kubeconfig to /home/user/.kube/config, because the kind process cannot technically export parent level env. vars?

@BenTheElder
Member Author

i'd consider the merge of the kind and kubeadm configs in such a case as undesired.
it will force me to first unset the KUBECONFIG.

You can tell kind which file to use both by KUBECONFIG or --kubeconfig when creating the cluster.

That does not necessitate unsetting. If you want separate files you either pass --kubeconfig or you modify KUBECONFIG.

You can do as @mauilion pointed out above to continue to have access to the prod cluster in the same shell but write kind to another file by specifying it as kind-config:existing-config.

As noted above, many other tools do this including minikube.

a similar problem can happen if the user has some KUBECONFIG on their host that controls a production cluster and then they decide to test something with kind. this will inject kind cluster data in the same production KUBECONFIG.

A) The production config needs to be recoverable anyhow. (Not that we would break it, but still!)
B) The production cluster should still work, you may need to switch contexts. This isn't new to users.

Again, this is not really kind specific. Users already expect this from other tools. Multi-file config management seems less common than context management.

what if /home/user/.kube/config is a valuable file, but KUBECONFIG is temporarily unset and not pointing to it? would kind stomp that file?

Nothing is being "stomped". Whatever file is selected we'll merge with. If the file doesn't exist we'll create it.

or perhaps Wrote kubeconfig to /home/user/.kube/config, because the kind process cannot technically export parent level env. vars?

Right. We should try to follow prior art in this message as well. I'm not sure what people call these files other than the environment variable name. Will also cross reference the Kubernetes docs.

@neolit123
Member

Nothing is being "stomped". Whatever file is selected we'll merge with. If the file doesn't exist we'll create it.

bad wording.

Again, this is not really kind specific. Users already expect this from other tools. Multi-file config management seems less common than context management.

no strong objections on my side, but would be interesting to get more feedback from kind users.

@Ilyes512

Ilyes512 commented Sep 15, 2019

Looks good... as long as setting KUBECONFIG or --kubeconfig creates a new file if it doesn't exist (like you mentioned). 👍

edit:

How are you going to implement the --internal flag? I really need that, i.e. kind get kubeconfig --internal, so I can control the cluster from inside a container.

@BenTheElder
Member Author

/assign
/lifecycle active

@BenTheElder
Member Author

This is basically done without a client-go dependency. It's a little bit more code to get all the nice things, but not too substantial, and the behavior and binary size are much nicer than before 🙃
Along the way, we're also modifying the kubeconfig without regex.

Will PR tomorrow, need to do some minor cleanup.

@BenTheElder
Member Author

dug deep into expected behavior and replicated it #1029 (comment)

client-go is not necessary for this; however, we have closely mimicked how it does "locking" and the documented expectations from kubectl regarding selecting kubeconfig, both of which are relatively simple.
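For reference, client-go's kubeconfig "locking" amounts to atomically creating a sidecar lock file next to the config (created exclusively, so a second writer fails) and removing it when done. A rough shell equivalent of that idea (illustrative only, not kind's actual code):

```shell
# Illustrative sketch of client-go-style kubeconfig locking.
# lock_kubeconfig atomically creates "<file>.lock" and fails if it exists.
lock_kubeconfig() {
  # noclobber (set -C) makes '>' fail if the lock file already exists,
  # giving an atomic create-if-absent, like O_CREATE|O_EXCL.
  ( set -C; : > "$1.lock" ) 2>/dev/null
}

unlock_kubeconfig() {
  rm -f "$1.lock"
}
```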

the context set / unset behavior seems to be pretty universally the same, so we've mimicked that as well.

will look into the env var in a future PR. it looks like the pattern from kops in this regard is KOPS_CLUSTER_NAME, so KIND_CLUSTER_NAME would be reasonable; there doesn't seem to be a real "standard" for this one, understandably.

@BenTheElder
Member Author

#1061 brings back the kubeconfig-path command with a detailed (stderr) warning

@Ilyes512

Ilyes512 commented Nov 6, 2019

@BenTheElder Will it be still possible to get a config with the internal (docker) ip's? (kind get kubeconfig --internal)

@BenTheElder
Member Author

@Ilyes512 yes, in fact kind get kubeconfig is more robust now: it only depends on a control plane node existing; it does not depend on the kubeconfig file on the host.

@BenTheElder
Member Author

You can also now kind export kubeconfig, which is a pretty natural extension of this: it supports re-exporting the kubeconfig, AKA re-doing the export that kind create cluster did, if you lost your credentials or want to export to a second path.

@BenTheElder
Member Author

will track KIND_CLUSTER_NAME in another issue for v0.7.0 #1094
