
Add kubectl config merge command #46381

Closed
henriquetruta opened this issue May 24, 2017 · 37 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. sig/cli Categorizes an issue or PR as relevant to SIG CLI.

Comments

@henriquetruta
Contributor

gcloud/GKE has an efficient way of getting credentials: its get-credentials command automatically merges the credentials of a remote cluster into a local kubeconfig file. We want something similar in k8s. A first approach would be having two kubeconfig files locally and being able to merge the content of one into the other without needing to do manual copy/paste.
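For reference, the GKE behavior being described looks like this (the cluster name and zone here are illustrative):

$ gcloud container clusters get-credentials my-cluster --zone us-central1-a
# Fetches credentials for "my-cluster" and merges its cluster, user, and
# context entries into ~/.kube/config (or $KUBECONFIG) in one step.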

We could have a command that looks like:
kubectl config merge <source> <target>

Where the target file defaults to ~/.kube/config, and source would be a kubeconfig file whose entries (users, contexts, and clusters) the user wants copied into the target file.

If there is no name conflict, i.e., the entries in source don’t exist in target, the output is quite straightforward. However, if there is a conflict, we need to decide how to handle it. Do nothing and inform the user? Rename the conflicting entries in target? Copy only those with no conflict? Have these options as parameters (--ignore, --rename, --abort-if-conflict, etc.)?
For now, for the sake of simplicity, I suggest doing nothing and informing the user. The others can be added in follow-up issues.
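A minimal sketch of that abort-on-conflict behavior, emulated with existing kubectl commands; the script name and the choice to compare only context names are illustrative assumptions, not part of the proposal:

#!/usr/bin/env bash
# merge-kubeconfig.sh <source> [target] -- abort-on-conflict sketch
set -euo pipefail
source_cfg="$1"
target_cfg="${2:-$HOME/.kube/config}"

# "Do nothing and inform the user": abort on any context-name conflict
# (a full implementation would also compare user and cluster names).
for ctx in $(kubectl --kubeconfig "$source_cfg" config get-contexts -o name); do
  if kubectl --kubeconfig "$target_cfg" config get-contexts -o name | grep -qx "$ctx"; then
    echo "conflict: context '$ctx' already exists in $target_cfg" >&2
    exit 1
  fi
done

# No conflicts: merge via KUBECONFIG precedence, flatten, and swap in
# through a temporary file so the target is never written while being read.
KUBECONFIG="$target_cfg:$source_cfg" kubectl config view --flatten > "$target_cfg.tmp"
mv "$target_cfg.tmp" "$target_cfg"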

@henriquetruta
Contributor Author

@nikhiljindal @madhusudancs can you take a look?

@superbrothers
Member

Can kubectl config view meet your requirements?

# use multiple kubeconfig files at the same time and view merged config
$ KUBECONFIG=~/.kube/config:~/.kube/kubconfig2 kubectl config view

https://kubernetes.io/docs/user-guide/kubectl-cheatsheet/

@madhusudancs
Contributor

kubectl config view might not work as is, since it redacts certificate data. kubectl config view --flatten might, but I have never tried that with multiple kubeconfigs.

cc @kubernetes/sig-cli-feature-requests
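For context, a quick way to see the difference (the second file name is illustrative):

# Without --flatten, embedded certificate/key data is redacted in the
# output, so redirecting it to a file yields an unusable kubeconfig:
$ kubectl config view
# With --flatten, certificate data and file references are inlined,
# producing a self-contained, mergeable file:
$ KUBECONFIG=~/.kube/config:~/other.yaml kubectl config view --flatten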

@k8s-ci-robot k8s-ci-robot added the sig/cli Categorizes an issue or PR as relevant to SIG CLI. label May 25, 2017
@shiywang
Contributor

shiywang commented May 25, 2017

@madhusudancs @superbrothers @henriquetruta
KUBECONFIG=~/.kube/config:~/config kubectl config view --flatten works, but I think adding this kubectl config merge is also OK. I can implement it, since I'm working on another kubectl config subcommand.
cc @kubernetes/sig-cli-feature-requests

@madhusudancs
Contributor

@shiywang it is up to SIG-CLI to decide.

@shiywang
Contributor

@madhusudancs sure, we'll wait for @kubernetes/sig-cli-maintainers to decide.

@henriquetruta
Contributor Author

@shiywang thanks! I've already started to implement it here

@bgrant0607 bgrant0607 added the sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. label Jun 21, 2017
@bgrant0607
Member

See also #9298, #10693, #20605, #30395

@caesarxuchao
Member

/sub

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 2, 2018
@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 24, 2018
@PaulCharlton

/remove-lifecycle stale

@epcim

epcim commented Mar 16, 2018

@superbrothers commented on May 25, 2017, 7:27 AM GMT+2:

Can kubectl config view meet your requirements?

# use multiple kubeconfig files at the same time and view merged config
$ KUBECONFIG=~/.kube/config:~/.kube/kubconfig2 kubectl config view

https://kubernetes.io/docs/user-guide/kubectl-cheatsheet/

works, except for the fact that the resulting YAML config starts with --- 👎

@HighwayofLife

@epcim

KUBECONFIG=~/.kube/config:~/.kube/kubconfig2 kubectl config view --flatten > ~/.kube/config

The output does not start with ---; in any case, the config still works even when the YAML starts with that separator.

@anthonydahanne

KUBECONFIG=~/.kube/config:~/.kube/kubconfig2 kubectl config view --flatten > ~/.kube/config

well, that almost worked for me, except that only partial content was copied! (The shell truncates the target file before kubectl reads it, so writing while reading messed up the final output.)
I had better luck with a temporary file:

KUBECONFIG=~/.kube/config:~/.kube/kubconfig2 kubectl config view --flatten > mergedkub && mv mergedkub ~/.kube/config

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 8, 2018
@anthonydahanne

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 9, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 7, 2018
@remyleone

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 7, 2018
@omerfsen

There is a problem: KUBECONFIG does not understand special characters like ~ (the home directory path), so use the full path. For example:

KUBECONFIG=/home/serra/.kube/config:/home/serra/.kube/config.baremetal kubectl config view --flatten > /home/serra/.kube/config

@liggitt
Member

liggitt commented Dec 18, 2018

There is a problem: KUBECONFIG does not understand special characters like ~ (the home directory path), so use the full path. For example:

KUBECONFIG=/home/serra/.kube/config:/home/serra/.kube/config.baremetal kubectl config view --flatten > /home/serra/.kube/config

Most shells expand that for you:

$ more ~/k1
apiVersion: v1
clusters:
- cluster:
    server: https://1.1.1.1
  name: cluster1
contexts:
- context:
    cluster: cluster1
    user: user1
  name: context1
current-context: context1
kind: Config
preferences: {}
users:
- name: user1

$ more ~/k2
apiVersion: v1
clusters:
- cluster:
    server: https://2.2.2.2
  name: cluster2
contexts:
- context:
    cluster: cluster2
    user: user2
  name: context2
current-context: context2
kind: Config
preferences: {}
users:
- name: user2

$ KUBECONFIG=~/k1:~/k2 kubectl config view
apiVersion: v1
clusters:
- cluster:
    server: https://1.1.1.1
  name: cluster1
- cluster:
    server: https://2.2.2.2
  name: cluster2
contexts:
- context:
    cluster: cluster1
    user: user1
  name: context1
- context:
    cluster: cluster2
    user: user2
  name: context2
current-context: context1
kind: Config
preferences: {}
users:
- name: user1
  user: {}
- name: user2
  user: {}

@MichaelSp

My little bash script based on #46381 (comment)

# ~/.bashrc

function kmerge() {
  # Merge the given kubeconfig into ~/.kube/config, via a temp file so
  # the target is never overwritten while kubectl is still reading it.
  KUBECONFIG=~/.kube/config:"$1" kubectl config view --flatten > ~/.kube/mergedkub \
    && mv ~/.kube/mergedkub ~/.kube/config
}
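Usage, with an illustrative file name:

$ kmerge ~/Downloads/new-cluster.yaml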

@drewwells

kubectl config merge would be easier to remember. First-class support would be nice, even though the workaround is clever.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 4, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 3, 2019
@eroji

eroji commented Aug 31, 2019

This does not seem to work if you have the same user name but a different token for each cluster. kubectl config view --flatten will only keep the user and token for the first one it encounters, which defeats the purpose of trying to merge in the first place.

@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@phillycheeze

Can this be re-opened? The workaround is nice, but a built-in supported command would be way better.

@0xErnie

0xErnie commented Nov 4, 2019

/reopen

@k8s-ci-robot
Contributor

@0xErnie: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@richstokes

richstokes commented Nov 12, 2019

Running into the same issue as above: kubectl config view --flatten will only keep the user and token for the first one it encounters, which defeats the purpose of trying to merge in the first place.

This is really annoying, since it's best practice to use unique tokens/accounts on different clusters, but this command drops them when trying to merge multiple accounts/contexts into one file.

@liggitt
Member

liggitt commented Nov 12, 2019

kubectl config view --flatten will only keep the user and token for the first one it encounters, which defeats the purpose of trying to merge in the first place.

It keeps all uniquely named users/clusters. If you have user stanzas with the same name, keeping all of them would make the resulting file ambiguous.

$ cat k2
apiVersion: v1
clusters:
- cluster:
    server: https://localhost:6443/
  name: cluster2
contexts:
- context:
    cluster: cluster2
    user: user2
  name: context2
kind: Config
preferences: {}
users:
- name: user2
  user:
    bearer-token: user2-token

$ cat k3
apiVersion: v1
clusters:
- cluster:
    server: https://localhost:6443/
  name: cluster3
contexts:
- context:
    cluster: cluster3
    user: user3
  name: context3
kind: Config
preferences: {}
users:
- name: user3
  user:
    bearer-token: user3-token

$ KUBECONFIG=k2:k3 kubectl config view --flatten
apiVersion: v1
clusters:
- cluster:
    server: https://localhost:6443/
  name: cluster2
- cluster:
    server: https://localhost:6443/
  name: cluster3
contexts:
- context:
    cluster: cluster2
    user: user2
  name: context2
- context:
    cluster: cluster3
    user: user3
  name: context3
current-context: ""
kind: Config
preferences: {}
users:
- name: user2
  user: {}
- name: user3
  user: {}

@richstokes

So how can you merge multiple files that have the same username but different tokens?

@liggitt
Member

liggitt commented Nov 12, 2019

So how can you merge multiple files that have the same username but different tokens?

You cannot; those are effectively the same user stanza, and it can only hold a single bearer token.

@richstokes

So if you run multiple clusters, you should use a different/unique username on each? Very unintuitive.

@liggitt
Member

liggitt commented Nov 12, 2019

If you want to merge them into a single kubeconfig, yes
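A sketch of that approach using standard kubectl config subcommands; the file name ~/clusterB.yaml, the context/user names, and $CLUSTER_B_TOKEN are illustrative assumptions, not from this thread:

# Re-create the credential under a unique user name in the second file:
$ kubectl --kubeconfig ~/clusterB.yaml config set-credentials user2-clusterB --token="$CLUSTER_B_TOKEN"
# Point that file's context at the renamed user and drop the old stanza:
$ kubectl --kubeconfig ~/clusterB.yaml config set-context context2 --user=user2-clusterB
$ kubectl --kubeconfig ~/clusterB.yaml config unset users.user2
# Both tokens now survive the merge:
$ KUBECONFIG=~/.kube/config:~/clusterB.yaml kubectl config view --flatten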

flavio added a commit to flavio/kuberlr that referenced this issue Jun 12, 2020
It's possible to specify multiple kubeconfig files via `KUBECONFIG`, for
example: `KUBECONFIG=~/cluster1.yaml:~/cluster2.yaml`
See kubernetes/kubernetes#46381 (comment)

This commit ensures kuberlr can handle this special case in the same way
as kubectl does.
@didil

didil commented Oct 29, 2020

As a note for others looking for a way to do this: there is now a tool that achieves it, also available as a krew plugin: https://github.com/corneliusweig/konfig
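For example, per the plugin's README (the file name is illustrative, and flags may have changed since):

$ kubectl krew install konfig
$ kubectl konfig import --save new-cluster.yaml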
