
RFE: preconfigure masters with KUBECONFIG #929

Closed
mfojtik opened this issue Dec 17, 2018 · 12 comments

Comments
@mfojtik (Member) commented Dec 17, 2018

It would be nice if all masters automatically set the KUBECONFIG env var to point to the admin kubeconfig, so that when an admin SSHes into a master node (for debugging, etc.) they don't need to search for the admin.kubeconfig file and kubectl/oc just works.
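For illustration, a minimal sketch of what such preconfiguration could look like, assuming the admin kubeconfig were laid down at a path like /etc/kubernetes/admin.kubeconfig — that path is an assumption, not something the installer ships today:

```sh
# Hypothetical drop-in on each master; the kubeconfig path is an assumption,
# not a file the installer currently places on the node.
cat <<'EOF' | sudo tee /etc/profile.d/kubeconfig.sh
export KUBECONFIG=/etc/kubernetes/admin.kubeconfig
EOF
```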

@abhinavdahiya (Contributor)

The installer gives the cluster admin the admin kubeconfig at installation, and IMO it is not too much to ask the admin to scp that kubeconfig to the master machine to keep access.
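For example, something along these lines from the install host — a sketch assuming the kubeconfig sits at `<install-dir>/auth/kubeconfig` as written by the installer; the IP is a placeholder and "core" is the default RHCOS user:

```sh
# Sketch: copy the admin kubeconfig from the install host to a master node.
# <install-dir> and <master-ip> are placeholders.
scp <install-dir>/auth/kubeconfig core@<master-ip>:/home/core/kubeconfig

# then, on the master:
#   export KUBECONFIG=/home/core/kubeconfig
#   oc get nodes
```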

@wking (Member) commented Dec 18, 2018

I agree with @abhinavdahiya. I'd rather not be leaving high-power kubeconfigs around (although on the masters you probably have the power to look into etcd anyway), where they're one more thing to keep track of. And I'd rather not spend a lot of effort on the "ssh into machines and do things" use case. Is there a reason you can't just run your kubectl and oc from your install host?

@akostadinov

If you have an automated (non-local) system, it is actually rather hard to do things properly if the admin kubeconfig is not retrievable from a master. In an automated system you'd have to archive that kubeconfig somehow, then know the relationship between the cluster and the automated build, and also make sure unauthorized people cannot download it (e.g. people with only read access to builds).

It is much preferable if all cluster information can be discovered from a master, also for the purpose of removing old clusters without searching for artifacts related to that particular cluster.

@wking (Member) commented Dec 18, 2018

If you have an automated (non-local) system, it is actually rather hard to do things properly if the admin kubeconfig is not retrievable from a master.

So have your automated, non-local system scp whatever assets you need up to the cluster?

It is much preferable if all cluster information can be discovered from a master, also for the purpose of removing old clusters without searching for artifacts related to that particular cluster.

We don't load-balance port 22. So this is "I'll be able to look up a master IP for this cluster to SSH in, but will not be able to look up anything else about the cluster"? Why are IPs especially available?

Also, teardown using resources stored in the cluster will usually be fine, but it doesn't cover cases where the cluster died before the place where you were hoping to store those resources got created. But we should probably continue this part of the discussion in #746.

@akostadinov

You can discover IPs in test-run logs, chats, by looking at your cloud account, etc. Then everything is in one place.

SCP is surely possible if we know where to SCP to. Is information about all created VMs stored in the install dir after install? It would greatly help if we could have the external/internal IPs of the provisioned machines somewhere in the install dir, as I don't know how to discover these without looking at the cloud account.
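If the install dir still contains the Terraform state (terraform.tfstate) after install, a rough sketch of pulling instance IPs out of it with jq — the attribute names are platform-dependent and assumed here:

```sh
# Rough sketch, assuming terraform.tfstate is present in the install dir and
# the platform's instance resources expose public_ip/private_ip attributes.
jq -r '.. | objects | select(has("public_ip") or has("private_ip"))
       | [.public_ip?, .private_ip?] | @tsv' <install-dir>/terraform.tfstate
```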

@mfojtik (Member, Author) commented Dec 18, 2018

I don't think that leaving the cluster-admin kubeconfig file on master nodes poses any security risk. In fact, we already leave it there in the static pod manifests for the kube-apiserver and controller manager. However, that path is pretty hard to discover and it changes with each revision (and the path will also be subject to pruning of old revisions...).

I thought this would mostly be a UX/convenience thing for debugging problems on masters. When I need to SSH in to investigate problems on master nodes, the first thing I do is export KUBECONFIG so oc works with the current cluster. If you have a slow internet connection, SSHing into a master might speed things up when interacting via oc. Also, for QE it might make test execution faster than relying on local<>remote connections. Copying the kubeconfig via scp in automation is an option, but I think that automating this for 'advanced' users will make the experience much better.

Although I don't know how complicated it would be to modify the host environment (/etc/profile/...), so if this would be a significant effort, I can live without it (as there are more important issues to solve right now).
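As a rough way to locate the kubeconfigs that are already on the master — the static-pod-resources layout mentioned above is revision-dependent, so treat this as a guess rather than a stable interface:

```sh
# Look for kubeconfig files already laid down on the master; the paths under
# /etc/kubernetes change with revisions and are not a supported interface.
sudo find /etc/kubernetes -name '*.kubeconfig' -o -name 'kubeconfig' 2>/dev/null
```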

@xingxingxia commented Dec 18, 2018

Thanks for your discussion. As the issue raiser, I agree that "automating this for 'advanced' users will make the experience much better". I also agree that "if this will be significant effort, I can live without it".

@cgwalters (Member)

One thing I'd say is that we could at least link to some sort of docs in /etc/motd or something when SSHing into the node. Honestly, I didn't even think about scp'ing the kubeconfig over; it's obvious once one thinks about it, but not everyone will.
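A minimal sketch of that idea — the wording and the pointer to the docs are placeholders:

```sh
# Hypothetical MOTD hint for people SSHing into a master; text is a placeholder.
echo 'No admin kubeconfig is preinstalled on this node; scp it from your install host (see the installer docs).' \
  | sudo tee -a /etc/motd
```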

@crawford (Contributor)

I agree with @abhinavdahiya and @wking above. While it may not be a security concern to leave root credentials laying around on the master, rotating those credentials is a problem. We (as an organization) have no mechanism for managing those files, nor do we have a plan to do so. Additionally, I don't buy the argument that archiving this information is too much of a hurdle for customers. At the end of the day, they will always need to archive something. Even if the kubeconfigs are on the masters, customers still need to archive their credentials to that cluster. If they can keep track of those credentials, I believe they can also keep track of this.

I agree that it would be a nice debugging UX, but given my concern around the unmanaged nature of these credentials, I'm going to close this out. We can revisit this if there is a good solution for management. Thanks for the request.

/close

@openshift-ci-robot (Contributor)

@crawford: Closing this issue.

In response to this:

I agree with @abhinavdahiya and @wking above. While it may not be a security concern to leave root credentials laying around on the master, rotating those credentials is a problem. We (as an organization) have no mechanism for managing those files, nor do we have a plan to do so. Additionally, I don't buy the argument that archiving this information is too much of a hurdle for customers. At the end of the day, they will always need to archive something. Even if the kubeconfigs are on the masters, customers still need to archive their credentials to that cluster. If they can keep track of those credentials, I believe they can also keep track of this.

I agree that it would be a nice debugging UX, but given my concern around the unmanaged nature of these credentials, I'm going to close this out. We can revisit this if there is a good solution for management. Thanks for the request.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@akostadinov

Can we have a simple command that one can execute on a master to have a kubeconfig regenerated to /root/.kube/config on demand? That way automatic rotation would not be needed, and things would still be easy for administrators and QE.

@wking (Member) commented Dec 20, 2018

Can we have a simple command that one can execute on a master...

This sounds reasonable to me, although I don't know how you'd implement it (hitting etcd directly?). Patches welcome :)
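A sketch of what such a helper could look like if a usable admin client certificate and the cluster CA were available on the master — every path below is an assumption, and nothing like this ships in the installer today:

```sh
#!/bin/sh
# Hypothetical helper to assemble /root/.kube/config on a master.
# All certificate/key paths are placeholders and must match whatever
# admin-capable client cert actually exists on the node.
CA=/etc/kubernetes/ca.crt
CRT=/path/to/admin-client.crt
KEY=/path/to/admin-client.key
export KUBECONFIG=/root/.kube/config

kubectl config set-cluster local --server=https://localhost:6443 \
  --certificate-authority="$CA" --embed-certs=true
kubectl config set-credentials admin \
  --client-certificate="$CRT" --client-key="$KEY" --embed-certs=true
kubectl config set-context admin@local --cluster=local --user=admin
kubectl config use-context admin@local
```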
