Improve documentation on how authentication is handled in Kubernetes #11000
Comments
@antoineco explained (#10265 (comment)) the certificate part a little bit more and wrote about what to do when creating a cluster:
By using the script mentioned in the previous comment, you can easily generate everything in one shot. Our unit looks like this:
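The unit file itself is not preserved in this extract; a minimal sketch of what such a one-shot cert-generation unit could look like (unit name, script path and output directory are assumptions, not the poster's actual setup):

```ini
# generate-kube-certs.service (hypothetical name and paths)
[Unit]
Description=Generate the Kubernetes CA and server/client certificates
# Skip if the certs were already generated
ConditionPathExists=!/srv/kubernetes/ca.crt

[Service]
Type=oneshot
RemainAfterExit=yes
# e.g. the make-ca-cert.sh script mentioned earlier in the thread
ExecStart=/opt/bin/make-ca-cert.sh
```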
Then make your apiserver and controller manager units depend on it. Example:
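Again only a sketch rather than the original example: the dependency can be declared in the apiserver and controller-manager units (unit name assumed as above):

```ini
# Drop-in for kube-apiserver.service (and likewise kube-controller-manager.service)
[Unit]
Requires=generate-kube-certs.service
After=generate-kube-certs.service
```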
Yes
Use the kubercfg.crt/key pair for that purpose, or generate one pair per client to make it easier to revoke. In your PKI folder:
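The commands are not preserved here either; with easy-rsa 3 the per-client approach could look roughly like this (component names are just examples):

```sh
# Inside the PKI folder: one client cert/key per component, all signed by the same CA.
./easyrsa build-client-full kubelet-node1 nopass
./easyrsa build-client-full kube-proxy nopass

# A single client can later be revoked without touching the others.
./easyrsa revoke kube-proxy
./easyrsa gen-crl
```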
No, you have to find a way to distribute them. Since you're running on CoreOS, why don't you use etcd? Your certs are nothing but plain text files after all. You can take a look at what's inside the cluster/ folder: on GCE the CA cert and server cert/key pair are distributed via Salt, while other providers like AWS still use good old tokens, but this is being worked on by the Kubernetes team if I'm not mistaken.
I think you have to figure this out for yourself; it is outside the Kubernetes topic (don't distribute certs to all your servers, keep control over your server fleet, etc.). Personally we distribute certs to our CoreOS boxes using central storage (an S3 bucket on AWS) and leverage IAM roles (again an AWS mechanism) to decide which instance can access what. Just a suggestion.
Already answered. Use a CA to generate and sign certificates. Systemd might not be appropriate for that, unless you know in advance which certs you want to generate, or use a single client cert that you will use on every client.
Something else you have to figure out for yourself, only you can decide what you do with your CA.
Yes, certificate or token (hint: use certs if possible).
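For reference, the two options correspond to real kube-apiserver flags; the file paths below are placeholders:

```sh
# Certificate authentication: trust client certs signed by this CA.
kube-apiserver --client-ca-file=/srv/kubernetes/ca.crt  # other flags omitted

# Token authentication: a static CSV of token,user,uid entries.
kube-apiserver --token-auth-file=/srv/kubernetes/known_tokens.csv  # other flags omitted
```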
good stuff
OK, let's assume I have set up the CA and client certificates on a separate machine and now want to place them onto the appropriate nodes in my CoreOS cluster.
I thought about this, too. But etcd has no concept of authorisation (according to their website "Access control lists (ACLs) will be added to etcd in the near future."), so for security reasons I cannot place the certificate keys into etcd and then make each node fetch its own. It would work to distribute the CA cert, though.
I thought about writing global Fleet services to download each node's certs and keys using rsync (mainly because that is easily available on the CoreOS host), which I would secure using the
From the kubeconfig docs it seems that kubeconfig is for client-side configuration only. It does not replace
A kubeconfig file should be used with every "client" as long as authentication is involved; this includes kubelet and kube-proxy. So you'll somehow have to fetch/generate these config files on your nodes, unless you disabled SSL within your cluster.
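A sketch of how such a kubeconfig could be generated with kubectl; the server address, file locations and user name are assumptions:

```sh
MASTER_IP=10.0.0.1                            # placeholder address
KUBECONFIG_FILE=/var/lib/kubelet/kubeconfig   # hypothetical location

kubectl config set-cluster local \
  --server="https://${MASTER_IP}:6443" \
  --certificate-authority=/srv/kubernetes/ca.crt \
  --embed-certs=true --kubeconfig="${KUBECONFIG_FILE}"
kubectl config set-credentials kubelet \
  --client-certificate=/srv/kubernetes/kubelet.crt \
  --client-key=/srv/kubernetes/kubelet.key \
  --embed-certs=true --kubeconfig="${KUBECONFIG_FILE}"
kubectl config set-context local --cluster=local --user=kubelet --kubeconfig="${KUBECONFIG_FILE}"
kubectl config use-context local --kubeconfig="${KUBECONFIG_FILE}"
```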
@devurandom I thought this might interest you: https://coreos.com/blog/introducing-etcd-2.1/
@devurandom I ran the ./make-ca-cert.sh script but nothing happened. /srv/kubernetes/ is an empty directory. Do you know why?
@fzu-huang better run without
@dalanlan
Not have been there.
Ubuntu 14.04 cannot generate the cert:
@Icedroid You need to use the current master of OpenVPN/easy-rsa. v3.0.0-rc2 contains a bug which makes it impossible to specify subject alternative names.
You need to write the IP prefix in capital letters.
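Concretely this refers to the type prefixes in the subject-alternative-name string passed to easy-rsa; a hedged example with placeholder addresses:

```sh
# The "IP:" and "DNS:" type prefixes must be upper case.
./easyrsa --subject-alt-name="IP:10.0.0.1,IP:10.240.0.10,DNS:kubernetes,DNS:kubernetes.default" \
  build-server-full kube-apiserver nopass
```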
+cc @roberthbailey. @devurandom, would you be willing to send a PR to help update our docs for this?
@antoineco I already used a capital IP. After I ran sudo apt-get update && sudo apt-get -y upgrade, the Easy-RSA error was gone.
@mbforbes Very happily, but first I need to get it fully running myself. At the moment my installation still seems to be a bit fragile.
/cc @mikedanese (for your first question). For the second question, it should work with different credentials per kubelet, but we don't do this today using any of the automated cluster creation scripts. The part that isn't ready yet is the authorization framework in the apiserver that can differentiate clients from each other. Right now having any valid credentials gives you complete access to the cluster, so there isn't a lot of value in having different credentials per kubelet.
In answer to one: by default, kube2sky will use the inClusterConfig system to configure the secure connection; this requires a sufficiently recent Kubernetes version. You can also run kube2sky over HTTP if you have an insecure cluster setup.
@mikedanese An addition to your instructions: if one had ever used kube-controller-manager without the
I think one thing that should be documented better is that without the
This is obviously an old issue, but adding some comments about our own experience. I'm building out a highly available Kubernetes cluster in both AWS and on bare metal servers, thus multiple API servers. We're using Hashicorp Vault and their PKI backend to generate client and server certs for the different components. The way we had it configured, each API host had its own server cert and private key, but they were all signed by the same CA. Then we created client certs signed by the same CA, and distributed those to the Node hosts and other clients that wanted to connect to the hosts via kubectl. So far so good.

However, when running something like SkyDNS, it was failing to authenticate with the API servers using a ServiceAccount. After some debugging, I think the issue is that since the private keys are different on each API host, one of them is generating the ServiceAccount tokens that are put into Secrets, but if a request goes to the other API host, it fails to validate the secret. So the solution seems to be that the public/private keypair you use for generating and validating the ServiceAccount tokens needs to be the same across the API hosts. (Please correct me if I'm wrong about this.)

This makes sense now that we've figured this out, but it took a lot of head-desking to figure out what was going on here.
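In flag terms, the fix amounts to pointing every master at the same key material; the paths below are placeholders:

```sh
# On every API server: the key used to verify ServiceAccount tokens.
kube-apiserver --service-account-key-file=/srv/kubernetes/serviceaccount.key  # other flags omitted

# On every controller manager: the matching key used to sign the tokens,
# plus the CA that ends up as ca.crt in each ServiceAccount secret.
kube-controller-manager \
  --service-account-private-key-file=/srv/kubernetes/serviceaccount.key \
  --root-ca-file=/srv/kubernetes/ca.crt  # other flags omitted
```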
What you said makes sense. I hope your head feels better!
I have 3 masters running. How do I generate certs for the 3 masters? I tried on one of the masters and copied the files to the other 2 servers, but kube-controller-manager and kube-apiserver failed to start up.
@gvenka008c that's a question for Stack Overflow. Make sure you reference the certificate and not only the key. Add logs showing the error if this doesn't fix it (on Stack Overflow please).
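For illustration, "reference the certificate and not only the key" means passing both halves to the API server; the paths are placeholders:

```sh
kube-apiserver \
  --client-ca-file=/srv/kubernetes/ca.crt \
  --tls-cert-file=/srv/kubernetes/server.crt \
  --tls-private-key-file=/srv/kubernetes/server.key  # other flags omitted
```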
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with an /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This is a follow up from #10265 (comment)
There I was asking about documentation on serviceaccounts and certificates. I have already read http://docs.k8s.io/design/service_accounts.md (and a few others), but still there are lots of questions:
- Is the --root-ca-file= authenticated by the key in --service-account-key-file=?
- I cannot create the --service-account-key-file= on the host automatically via a systemd.service unit, unless I also automatically create the --root-ca-file= via a systemd.service, correct?
- Is the ca.crt secret in my default serviceaccount the same as the one I would hand to the kube-controller-manager via --root-ca-file=?
- How should I distribute the --root-ca-file=? The write_files directive for cloud-config appears inflexible and cumbersome on its own.

The following question was already answered by @satnam6502 in #10265 (comment), but I mention it here for completeness (in case someone wants to write down docs or a FAQ):

- What is the token of the serviceaccount being used for?