Support multiple CLI configuration profiles #1755
Comments
@derekwaynecarr, who also added some of this w.r.t. namespaces. We argued that the namespace choice should infer a master and auth as well. Since namespace is the smallest possible top-level grouping (server is broader than namespace), switching namespaces felt right. Also cc @jwforres
I think https://github.com/spf13/viper can help us with storing and retrieving this config. It also seems like the right subcommand is
And this stores everything in a YAML file in ~/.kubectl:
And you use the cluster config as such:
This would replace our current ~/.kubernetes_auth and ~/.kubernetes_vagrant_auth files.
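A purely hypothetical sketch of what such a viper-backed ~/.kubectl file might contain (every key name here is an assumption; only the auth file paths appear elsewhere in this comment):

```yaml
# Hypothetical ~/.kubectl contents; key names are illustrative only.
current: gce
clusters:
  gce:
    server: https://1.2.3.4
    auth: /home/me/.kubernetes_auth
  vagrant:
    server: https://5.6.7.8
    auth: /home/me/.kubernetes_vagrant_auth
```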
This sounds awesome.
Moving the discussion here from #1941. I agree that specifying lots of command-line flags and/or environment variables is tedious and error-prone, though environment-variable initializations could be organized in scripts, for which we could provide templates:
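A minimal, hypothetical sketch of such a template (only KUBERNETES_PROVIDER and KUBERNETES_MASTER are grounded elsewhere in this thread; everything else is an assumption):

```sh
#!/bin/sh
# Hypothetical per-cluster environment template; values are placeholders.
export KUBERNETES_PROVIDER=gce            # which provider's cluster scripts to use
export KUBERNETES_MASTER=https://1.2.3.4  # apiserver address for this cluster
# Usage: source one template per cluster before invoking kubectl, e.g.
#   . ./env-prod.sh && kubectl get pods
```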
Requirements:
Nice to haves:
They can be organized in scripts, and that use case should always work (I can easily script kubectl in a way natural to the shell). However, I feel that pushing the problem to end users to script creates a gap in the client experience. I believe a significant proportion of users (20-40%) will deal with multiple namespaces and Kubernetes servers over the lifetime of interfacing with kubectl, and transitioning between those namespaces or servers will form a large part of their kubectl interactions. Changing or scripting environment variables and switches rapidly wears on an end user, much like reauthenticating via prompt or keyword every time.

On the other hand, having disjoint settings (change namespace but not server, or server but not namespace) brings a significant risk of administrative users unintentionally performing destructive actions on the server. So the two extremes indicate, to me, a need to clearly and unambiguously manage the transition in a predictable fashion, and to couple namespace and server so users are not transitioning one without the other.
@smarterclayton I wasn't actually suggesting that we should rely on scripts, but was using the example to motivate the requirements.
Sorry, didn't mean to imply that you were. Was trying to better articulate our thought process around the pattern.
@derekwaynecarr, @jlowdermilk and I (#kubernetesunconference2014) sketched out a design for this. kubectl will look for a .kubeconfig file, first in the current directory, and then in ~/.kube/.kubeconfig (it lives in a directory so there is an easy default place for certs and other config-related files). You can also point to a file with an env var (KUBECONFIG=/path/to/kubeconfig) or a command-line param (--kubeconfig=/path/to/kubeconfig), which takes highest precedence. The format looks like this:
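A minimal sketch of the kind of format described here (named clusters, users, and contexts plus a current context); the field names are assumed and may differ from the original proposal:

```yaml
# Hypothetical .kubeconfig sketch; field names are illustrative assumptions.
current-context: dev
clusters:
- name: prod
  cluster:
    server: https://1.2.3.4
    certificate-authority: /home/me/.kube/prod-ca.crt
- name: dev
  cluster:
    server: https://5.6.7.8
users:
- name: admin
  user:
    client-certificate: /home/me/.kube/admin.crt
    client-key: /home/me/.kube/admin.key
contexts:
- name: dev
  context:
    cluster: dev
    user: admin
    namespace: default
```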
It then unmarshals into the following Go struct:
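A rough Go sketch (all type and field names assumed), consistent with the description just below in keeping named entries in slices so multiple files can be merged:

```go
// Hypothetical shape only; the real struct in the proposal may name things differently.
package kubeconfig

type Config struct {
	CurrentContext string         `yaml:"current-context"`
	Clusters       []NamedCluster `yaml:"clusters"`
	Users          []NamedUser    `yaml:"users"`
	Contexts       []NamedContext `yaml:"contexts"`
}

type NamedCluster struct {
	Name    string  `yaml:"name"`
	Cluster Cluster `yaml:"cluster"`
}

type Cluster struct {
	Server               string `yaml:"server"`
	CertificateAuthority string `yaml:"certificate-authority"`
}

type NamedUser struct {
	Name string   `yaml:"name"`
	User AuthInfo `yaml:"user"`
}

type AuthInfo struct {
	ClientCertificate string `yaml:"client-certificate"`
	ClientKey         string `yaml:"client-key"`
}

type NamedContext struct {
	Name    string  `yaml:"name"`
	Context Context `yaml:"context"`
}

type Context struct {
	Cluster   string `yaml:"cluster"`
	User      string `yaml:"user"`
	Namespace string `yaml:"namespace"`
}
```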
It's designed in a way that the slices can be merged across many files so things like Clusters, Users and Contexts are all composable and additive across .kubeconfig files. You can override anything at the command line.
You can create and manage this file by hand, using tooling, or with the kubectl interface like so:
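A hypothetical sketch of that interface; the subcommand and flag names below are reconstructed for illustration from the surrounding discussion, not quoted from the proposal:

```sh
# Illustrative only; exact subcommand and flag names are assumptions.
kubectl config create-cluster prod --server=https://1.2.3.4 \
    --certificate-authority=$HOME/.kube/prod-ca.crt
kubectl config create-user admin --client-certificate=$HOME/.kube/admin.crt \
    --client-key=$HOME/.kube/admin.key
kubectl config create-context dev --cluster=prod --user=admin
kubectl config use-context dev   # switch the active cluster/user pair
kubectl get pods                 # now runs against the "dev" context
```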
Your create-context args do not include namespace. Should there be a way to simply change aspects of a context, or is a … Otherwise looks awesome.
@deads2k will tackle this
Cool. https://github.com/imdario/mergo might be a good solution for merging and overriding config structs from different sources.
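A minimal sketch of the idea, assuming mergo's Merge helper and a stand-in Config type (not kubectl's actual struct):

```go
package main

import (
	"fmt"

	"github.com/imdario/mergo"
)

// Stand-in config type for illustration only.
type Config struct {
	Server    string
	Namespace string
}

func main() {
	// Higher-precedence values (e.g. command-line flags) go in dst;
	// mergo.Merge fills in only the fields that are still zero-valued.
	flags := Config{Namespace: "staging"}
	file := Config{Server: "https://1.2.3.4", Namespace: "default"}

	if err := mergo.Merge(&flags, file); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", flags) // {Server:https://1.2.3.4 Namespace:staging}
}
```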
It's not necessarily something that should block progress on this issue, but a nice-to-have along the way might be the ability to use more than one cluster in the e2e tests. (This issue may be necessary-but-not-sufficient to get there.) It would be nice if (a) we could shard tests across multiple clusters, and/or (b) run tests in the background and still work on other clusters. (a) is particularly useful if you have the resources and are trying to track down a flaky test.
@jlowdermilk might be interested in this
I think this is a good starting point. We probably also want a command like
To remove config entries, but that's obvious, apart from choice of name. I'd also add that I think we should limit the number of places we read a .kubeconfig file from to no more than 2: the current directory and a known location (the proposed ~/.kube/.kubeconfig sgtm). An alternative is to search for a .kubeconfig file in each parent directory of the current one as well, but I can't think of use cases that would need that approach. Another thing to consider: the current proposal implies storing cert/auth files in arbitrary locations (config just needs a path). We could also store them in a defined directory hierarchy and have kubectl look for the top-level directory. That is, kubectl would look for config first in
(Note the above assumes …)
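For illustration only, a hypothetical removal command and a hypothetical fixed layout for config plus certs (none of these names are taken from the thread):

```sh
# Hypothetical removal commands; names are illustrative only.
kubectl config remove-cluster prod
kubectl config remove-context dev

# Hypothetical fixed hierarchy, if certs/auth lived under one known directory:
#   ~/.kube/
#     .kubeconfig
#     prod/ca.crt
#     prod/admin.crt
#     prod/admin.key
```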
Done
I should be able to have multiple clusters that I am auth'ed into and flip between them with a symbolic name. Having a single auth for GCE is really painful when flipping between real and e2e clusters. Having to set KUBERNETES_PROVIDER sometimes and KUBERNETES_MASTER other times is something I easily forget.
Something like (obviously half-baked):
```sh
kubectl kube add timsgce https://1.2.3.4
kubectl kube add timse2e https://5.6.7.8
kubectl kube add vagrant
kubectl --kube=timsgce get pods
kubectl --kube=vagrant get pods
kubectl kube set timse2e
kubectl get pods
```
@ghodss for possible kubectl thinking.