
Fetch Kubeconfig From Master via SSH #95

Closed · jessicaochen opened this issue May 11, 2018 · 13 comments

Assignees: jessicaochen
Labels: lifecycle/rotten (denotes an issue or PR that has aged beyond stale and will be auto-closed), priority/important-longterm (important over the long term, but may not be staffed and/or may need multiple releases to complete)
Milestone: Next

Comments

@jessicaochen (Contributor)

Currently, fetching the kubeconfig for the master is a per-provider implementation in the deployer. However, we could have a generic default implementation if we do the following:

  1. Have the deployer generate or get an SSH key from the user
  2. Have the deployer provide said SSH key to the machine controller (perhaps as a mounted secret?)
  3. Have the machine actuator apply the SSH key to provisioned machines
  4. The deployer can then use the SSH key to access the master and get the kubeconfig (note that it can get the IP from the cluster object, #158); see the sketch below

Having a provider-specific way to get the kubeconfig should be an option and not a requirement.
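
A rough sketch of step 4, assuming a kubeadm-provisioned master where the admin kubeconfig lives at /etc/kubernetes/admin.conf; the user name, key path, and IP below are placeholders, not anything the deployer defines today:

    # Step 1 output: an SSH private key. Step 4 input: the master IP read from
    # the cluster object. Both values here are placeholders.
    SSH_KEY=./cluster-ssh-key
    MASTER_IP=203.0.113.10

    # Pull the kubeconfig that kubeadm wrote on the master and verify it works.
    ssh -i "${SSH_KEY}" "ubuntu@${MASTER_IP}" sudo cat /etc/kubernetes/admin.conf > kubeconfig
    kubectl --kubeconfig=./kubeconfig get nodes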

@jessicaochen jessicaochen self-assigned this May 11, 2018
@dims (Member) commented Jun 11, 2018

@kfox1111

I don't know of a reliable way of dealing with man-in-the-middle attacks on the SSH host key fingerprint without being vendor-specific. Even then, it's rather sketchy.
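
To make the trade-off concrete, compare two ways the deployer could connect (addresses and key paths below are illustrative only): skipping host-key checking sidesteps the bootstrapping problem but is exactly what enables the MITM, while pinning the key only helps if the fingerprint is learned over some trusted, typically vendor-specific channel.

    # Insecure: accept whatever host key the master presents (MITM-able).
    ssh -o StrictHostKeyChecking=no -i ./cluster-ssh-key \
        ubuntu@203.0.113.10 sudo cat /etc/kubernetes/admin.conf > kubeconfig

    # Pinned: record the host key first, then require it to match. ssh-keyscan is
    # still trust-on-first-use unless the fingerprint is verified out of band
    # (e.g. via a provider console or metadata API, which is the vendor-specific part).
    ssh-keyscan -t ed25519 203.0.113.10 > known_hosts
    ssh -o StrictHostKeyChecking=yes -o UserKnownHostsFile=./known_hosts \
        -i ./cluster-ssh-key ubuntu@203.0.113.10 sudo cat /etc/kubernetes/admin.conf > kubeconfig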

@roberthbailey (Contributor)

@dims - #122 suggests getting rid of SSH. This issue proposes having a provider-independent way to do the SSH. They aren't duplicates; they are mutually exclusive: we need to implement one or the other (but not both).

Maybe we can chat during the meeting tomorrow which approach we'd like to pursue.

/cc @karan

@ashish-amarnath commented Jul 19, 2018

Using something similar to:

  • kubectl config set-cluster <CLUSTER_NAME> --server <API_SERVER_ADDR> --certificate-authority <CA_CERT>
  • kubectl config set-credentials <USER_NAME> <CREDS>

It should be possible to make this provider-independent.
What do other folks think?
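
For completeness, the full set of kubectl config commands needed to assemble a kubeconfig on the client; all names, addresses, and paths are placeholders, and it assumes the CA certificate and client credentials are already on the client, which is exactly the distribution question discussed below.

    kubectl config set-cluster my-cluster \
        --server=https://203.0.113.10:6443 \
        --certificate-authority=./ca.crt --embed-certs=true
    kubectl config set-credentials my-admin \
        --client-certificate=./admin.crt --client-key=./admin.key --embed-certs=true
    kubectl config set-context my-context --cluster=my-cluster --user=my-admin
    kubectl config use-context my-context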

@roberthbailey (Contributor)

@ashish-amarnath - the question is about how to get the credentials to be consistent between the client and the server. The current implementation allows the server to generate the credentials (deferring to kubeadm init) and then uses ssh to copy those credentials to the client. An alternative that we discussed during the working group meeting last week was to generate the credentials on the client and then pass them down to the server. In either case, we need a transport mechanism to distribute the secrets to the two parties during control plane initialization.
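
A rough sketch of the two transport directions described above; hosts, users, and paths are placeholders, and the second direction relies on kubeadm reusing a CA it finds in its certificate directory rather than generating a new one.

    # Server-generated (current): kubeadm init creates the credentials on the
    # master and the client copies them down over SSH.
    ssh -i ./cluster-ssh-key ubuntu@203.0.113.10 \
        sudo cat /etc/kubernetes/admin.conf > kubeconfig

    # Client-generated (alternative discussed in the meeting): the client creates
    # the CA and pushes it up before kubeadm init runs, so kubeadm reuses it.
    openssl req -x509 -new -nodes -newkey rsa:2048 -days 365 \
        -subj "/CN=kubernetes-ca" -keyout ca.key -out ca.crt
    scp -i ./cluster-ssh-key ca.crt ca.key ubuntu@203.0.113.10:~/
    ssh -i ./cluster-ssh-key ubuntu@203.0.113.10 \
        "sudo mkdir -p /etc/kubernetes/pki && sudo mv ~/ca.crt ~/ca.key /etc/kubernetes/pki/"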

@ashish-amarnath

@roberthbailey thanks for that explanation :)
If the client is just bootstrapping the cluster, then, IMO, it would be more intuitive for the certs to be generated on the client and transported to the servers.
However, if the client is also responsible for managing the cluster (think upgrading, auditing, etc.), then the client would be better off pulling certs from the server, since that would allow a different client to perform each of the management actions on the cluster.

Disclaimer: I may be changing the scope with which this issue was created, and I may not, at the moment, fully understand the other scenarios we are designing for.

@roberthbailey (Contributor)

Right now the client is bootstrapping the cluster. We have some basic upgrade support for the control plane, but it needs to be better fleshed out (for nodes we can do updates via MachineDeployments).

One thing that you may have run across in the meeting notes is how we will transition to a highly available control plane. At that point, we will need to have multiple instances of the control plane using a consistent set of credentials. So our solution, while not necessarily needing to solve that problem today, should move us towards making that easier to solve rather than harder.

@scruplelesswizard

Has there been any consideration of environments that disable SSH on the machines? There are a few environments I have worked in that do this to avoid complexities for regulatory reasons, which would prevent us from leveraging this retrieval method.

@roberthbailey (Contributor)

@chaosaffe - see https://github.com/kubernetes-sigs/cluster-api/issues/122. This issue and that one are mutually exclusive, as I mentioned above. If you have ideas about how to get rid of ssh, please add them to #122 and I'd be more than happy to close this issue in favor of fixing that one.

@roberthbailey roberthbailey transferred this issue from kubernetes-sigs/cluster-api Jan 10, 2019
@roberthbailey roberthbailey added this to the Next milestone Jan 11, 2019
@roberthbailey roberthbailey added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label Jan 11, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 28, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 28, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor)

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
