
Add kubectl config merge command #46381

Closed
henriquetruta opened this issue May 24, 2017 · 37 comments

@henriquetruta (Contributor) commented May 24, 2017

gcloud/GKE has an efficient way of getting credentials by using the get-credentials command, which automatically merges the content of a remote cluster into a local kubeconfig file. We want to have something similar in k8s. A first approach would be having two kubeconfig files locally and being able to merge the content of one into the other without needing to do manual copy/paste.

We could have a command that looks like:
kubectl config merge <source> <target>

Where the target file defaults to ~/.kube/config and source is some kubeconfig file whose entries (users, contexts, and clusters) the user wants copied into the target file.

If there is no name conflict, i.e., the entries in source don't exist in target, the output is quite straightforward. However, if there is a conflict, we need to decide how to handle it. Do nothing and inform the user? Rename the conflicting entries in target? Copy only the entries with no conflict? Expose these options as parameters (--ignore, --rename, --abort-if-conflict, etc.)?
For now, for the sake of simplicity, I suggest doing nothing and informing the user. The other behaviors can be added in follow-up issues.
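The conflict check described above could be sketched roughly as follows. This is only an illustration of the proposed semantics, not kubectl code: the file names and paths are made up, kubectl is not invoked, and real kubeconfig parsing would use the YAML structure rather than grep.

```shell
# Create two tiny kubeconfig-style files to stand in for source and target.
cat > /tmp/source.yaml <<'EOF'
contexts:
- context: {cluster: cluster2, user: user2}
  name: context2
EOF
cat > /tmp/target.yaml <<'EOF'
contexts:
- context: {cluster: cluster1, user: user1}
  name: context1
EOF
# Collect the entry names from each file and report any overlap.
grep -E '^  name:' /tmp/source.yaml | sort > /tmp/source-names
grep -E '^  name:' /tmp/target.yaml | sort > /tmp/target-names
conflicts=$(comm -12 /tmp/source-names /tmp/target-names)
if [ -n "$conflicts" ]; then
  echo "conflict, aborting merge:"
  echo "$conflicts"
else
  echo "no conflicts; entries can be copied into the target"
fi
```

With distinct names, as here, the check passes; renaming context2 to context1 in the source file would trip the conflict branch.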

@henriquetruta (Contributor, Author) commented May 24, 2017

@nikhiljindal @madhusudancs can you take a look?

@superbrothers (Member) commented May 25, 2017

Can kubectl config view meet your requirements?

# use multiple kubeconfig files at the same time and view merged config
$ KUBECONFIG=~/.kube/config:~/.kube/kubconfig2 kubectl config view

https://kubernetes.io/docs/user-guide/kubectl-cheatsheet/

@madhusudancs (Contributor) commented May 25, 2017

kubectl config view might not work as is as it redacts certificate data. kubectl config view --flatten might, but I have never tried that with multiple kubeconfigs.

cc @kubernetes/sig-cli-feature-requests

@shiywang (Contributor) commented May 25, 2017

@madhusudancs @superbrothers @henriquetruta
KUBECONFIG=~/.kube/config:~/config kubectl config view --flatten works, but I think adding kubectl config merge is also worthwhile. I can implement it, since I'm already working on another kubectl config subcommand.
cc @kubernetes/sig-cli-feature-requests

@madhusudancs (Contributor) commented May 25, 2017

@shiywang it is up to SIG-CLI to decide.

@shiywang (Contributor) commented May 25, 2017

@madhusudancs sure, we'll wait for @kubernetes/sig-cli-maintainers to decide.

@henriquetruta (Contributor, Author) commented May 25, 2017

@shiywang thanks! I've already started to implement it here

@bgrant0607 (Member) commented Jun 21, 2017

See also #9298, #10693, #20605, #30395

@caesarxuchao (Member) commented Aug 9, 2017

/sub

@fejta-bot commented Jan 2, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@PaulCharlton commented Mar 7, 2018

/remove-lifecycle stale

@epcim commented Mar 16, 2018

@superbrothers commented on May 25, 2017, 7:27 AM GMT+2:

Can kubectl config view meet your requirements?

# use multiple kubeconfig files at the same time and view merged config
$ KUBECONFIG=~/.kube/config:~/.kube/kubconfig2 kubectl config view

https://kubernetes.io/docs/user-guide/kubectl-cheatsheet/

works, except for the fact that the resulting YAML config starts with --- 👎

@HighwayofLife commented Mar 16, 2018

@epcim

KUBECONFIG=~/.kube/config:~/.kube/kubconfig2 kubectl config view --flatten > ~/.kube/config

does not produce a YAML config starting with ---; in any case, it still works even when the YAML starts with that separator.

@anthonydahanne commented Apr 9, 2018

KUBECONFIG=~/.kube/config:~/.kube/kubconfig2 kubectl config view --flatten > ~/.kube/config

well, that almost worked for me, except that only partial content was copied! (Writing the file while reading it seems to have mangled the final output.)
I had better luck with a temporary file:

KUBECONFIG=~/.kube/config:~/.kube/kubconfig2 kubectl config view --flatten > mergedkub && mv mergedkub ~/.kube/config
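The partial-copy behavior above is the classic shell redirection pitfall, and the temporary file is the right fix. A minimal demonstration without kubectl (the file path here is illustrative): the shell truncates the redirection target before the command ever reads it.

```shell
# Write some content, then read and write the same file in one command.
printf 'clusters: [a, b]\n' > /tmp/demo-config
cat /tmp/demo-config > /tmp/demo-config   # file is truncated before cat reads it
wc -c < /tmp/demo-config                  # prints 0: the content is gone
```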

@fejta-bot commented Jul 8, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@anthonydahanne commented Jul 9, 2018

/remove-lifecycle stale

@fejta-bot commented Oct 7, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@remyleone commented Oct 7, 2018

/remove-lifecycle stale

@omerfsen commented Dec 18, 2018

There is a problem: KUBECONFIG does not understand special characters like ~ (the home dir path), so use the full path. For example:

KUBECONFIG=/home/serra/.kube/config:/home/serra/.kube/config.baremetal kubectl config view --flatten > /home/serra/.kube/config

@liggitt (Member) commented Dec 18, 2018

There is a problem: KUBECONFIG does not understand special characters like ~ (the home dir path), so use the full path. For example:

KUBECONFIG=/home/serra/.kube/config:/home/serra/.kube/config.baremetal kubectl config view --flatten > /home/serra/.kube/config

Most shells expand that for you:

$ more ~/k1
apiVersion: v1
clusters:
- cluster:
    server: https://1.1.1.1
  name: cluster1
contexts:
- context:
    cluster: cluster1
    user: user1
  name: context1
current-context: context1
kind: Config
preferences: {}
users:
- name: user1

$ more ~/k2
apiVersion: v1
clusters:
- cluster:
    server: https://2.2.2.2
  name: cluster2
contexts:
- context:
    cluster: cluster2
    user: user2
  name: context2
current-context: context2
kind: Config
preferences: {}
users:
- name: user2

$ KUBECONFIG=~/k1:~/k2 kubectl config view
apiVersion: v1
clusters:
- cluster:
    server: https://1.1.1.1
  name: cluster1
- cluster:
    server: https://2.2.2.2
  name: cluster2
contexts:
- context:
    cluster: cluster1
    user: user1
  name: context1
- context:
    cluster: cluster2
    user: user2
  name: context2
current-context: context1
kind: Config
preferences: {}
users:
- name: user1
  user: {}
- name: user2
  user: {}
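The trap that usually explains the "~ does not work" reports above is quoting (the k1/k2 paths are illustrative, matching the example): the shell expands ~ in an unquoted assignment, including after each colon, but a quoted value is passed through literally, and kubectl then looks for a directory literally named "~".

```shell
# Unquoted: the shell expands each ~ (after = and after each :) to $HOME.
unquoted=~/k1:~/k2
# Quoted: no tilde expansion; the literal string reaches the program.
quoted="~/k1:~/k2"
echo "$unquoted"   # both ~ expanded to the home directory
echo "$quoted"     # literal ~/k1:~/k2
```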

@MichaelSp commented Feb 7, 2019

My little bash script based on #46381 (comment)

# ~/.bashrc

function kmerge() {
  KUBECONFIG=~/.kube/config:$1 kubectl config view --flatten > ~/.kube/mergedkub && mv ~/.kube/mergedkub ~/.kube/config
}
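A slightly more defensive variant of the function above, sketched under the same ~/.kube/config assumption: by writing to a mktemp file and only moving it into place on success, a failed kubectl run never clobbers the existing config.

```shell
# Merge $1 into ~/.kube/config, leaving the config untouched if kubectl fails.
kmerge() {
  tmp=$(mktemp) || return 1
  if KUBECONFIG="$HOME/.kube/config:$1" kubectl config view --flatten > "$tmp"; then
    mv "$tmp" "$HOME/.kube/config"
  else
    rm -f "$tmp"
    return 1
  fi
}
```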
@drewwells commented Apr 5, 2019

kubectl config merge would be easier to remember. First-class support would be nice, even though the workaround is clever.

@fejta-bot commented Jul 4, 2019

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@fejta-bot commented Aug 3, 2019

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@eroji commented Aug 31, 2019

This does not seem to work if you have the same user name with different tokens on different clusters. kubectl config view --flatten will only keep the user and token from the first file it encounters, which defeats the purpose of trying to merge in the first place.

@fejta-bot commented Sep 30, 2019

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor) commented Sep 30, 2019

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@phillycheeze commented Oct 27, 2019

Can this be re-opened? The workaround is nice, but a built-in supported command would be way better.

@0xErnie commented Nov 4, 2019

/reopen

@k8s-ci-robot (Contributor) commented Nov 4, 2019

@0xErnie: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@richstokes commented Nov 12, 2019

Running into the same issue as above: kubectl config view --flatten will only keep the user and token from the first file it encounters, which defeats the purpose of trying to merge in the first place.

This is really annoying, since it's best practice to use unique tokens/accounts on different clusters, but this command truncates them when trying to merge multiple accounts/contexts into one file.

@liggitt (Member) commented Nov 12, 2019

The kubectl config view --flatten will only keep the user and token for the first one it encounters, which defeats the purpose of trying to merge it in the first place.

It keeps all uniquely named users/clusters. If you have user stanzas with the same name, keeping all of them would make the resulting file ambiguous.

$ cat k2
apiVersion: v1
clusters:
- cluster:
    server: https://localhost:6443/
  name: cluster2
contexts:
- context:
    cluster: cluster2
    user: user2
  name: context2
kind: Config
preferences: {}
users:
- name: user2
  user:
    bearer-token: user2-token

$ cat k3
apiVersion: v1
clusters:
- cluster:
    server: https://localhost:6443/
  name: cluster3
contexts:
- context:
    cluster: cluster3
    user: user3
  name: context3
kind: Config
preferences: {}
users:
- name: user3
  user:
    bearer-token: user3-token

$ KUBECONFIG=k2:k3 kubectl config view --flatten
apiVersion: v1
clusters:
- cluster:
    server: https://localhost:6443/
  name: cluster2
- cluster:
    server: https://localhost:6443/
  name: cluster3
contexts:
- context:
    cluster: cluster2
    user: user2
  name: context2
- context:
    cluster: cluster3
    user: user3
  name: context3
current-context: ""
kind: Config
preferences: {}
users:
- name: user2
  user: {}
- name: user3
  user: {}

@richstokes commented Nov 12, 2019

So how can you merge multiple files that have the same username but different tokens?

@liggitt (Member) commented Nov 12, 2019

So how can you merge multiple files that have the same username but different tokens?

you cannot; those are effectively the same user stanza, and can only have a single bearer token

@richstokes commented Nov 12, 2019

So if you run multiple clusters you should use a different/unique username on each? Very unintuitive.

@liggitt (Member) commented Nov 12, 2019

If you want to merge them into a single kubeconfig, yes
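One way to apply that advice before merging is to rename the duplicate user stanza in one file, updating both the user entry and the contexts that reference it. A sed-based sketch (the file, user, and context names are illustrative, and a YAML-aware tool would be more robust than line matching):

```shell
# A stand-in kubeconfig whose user name collides with one in ~/.kube/config.
cat > /tmp/clusterB.yaml <<'EOF'
contexts:
- context:
    cluster: clusterB
    user: admin
  name: contextB
users:
- name: admin
  user:
    token: clusterB-token
EOF
# Rename the user stanza and the context reference to a unique name.
sed -e 's/^- name: admin$/- name: admin-clusterB/' \
    -e 's/^    user: admin$/    user: admin-clusterB/' \
    /tmp/clusterB.yaml > /tmp/clusterB-renamed.yaml
# Then merge as usual, e.g.:
#   KUBECONFIG=~/.kube/config:/tmp/clusterB-renamed.yaml kubectl config view --flatten
grep -c 'admin-clusterB' /tmp/clusterB-renamed.yaml   # 2 lines renamed
```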

flavio added a commit to flavio/kuberlr that referenced this issue Jun 12, 2020
It's possible to specify multiple kubeconfig files via `KUBECONFIG`, for
example: `KUBECONFIG=~/cluster1.yaml:~/cluster2.yaml`
See kubernetes/kubernetes#46381 (comment)

This commit ensures kuberlr can handle this special case in the same way
as kubectl does.

@didil commented Oct 29, 2020

As a note for others looking for a way to do this: there is now a tool that achieves it, also available as a krew plugin: https://github.com/corneliusweig/konfig
