Keys/certificates generation for services #12732
It would be awesome to add a key generation service. We should probably make it pluggable so that users can supply their own implementations as well.
So could we add this as another type of secret, à la service tokens, seeing as it's so commonly required, & add pluggable generators? Right now generation of anything is in-process, but this seems to be something that should be requested externally rather than trying to embed a full CA inside the kube master. This could in time plug in with letsencrypt.org when it's available, but could also have a private implementation using cfssl or boulder. Can perhaps make the key/CSR generation static & just have a pluggable piece to send the request off?
cc @kubernetes/goog-cluster
Can we get this to reach out to the letsencrypt CA (https://letsencrypt.org/howitworks/technology/) and produce real certs for domains that we get through cloudprovider DNS? Haven't thought this through, just putting it out there.
+1 to the idea - need it for docker registry SSL.
@bprashanth Absolutely, although that will only work for services hosted on an owned domain obviously. I'm thinking we need this for internal services which would require an internal CA. Thinking of cfssl for this, even better that it's the base of letsencrypt. I would like to try to implement this, but would really like some pointers on how to implement.

My first thought was adding a controller, à la the old service account token generator, but that would leave empty secrets until they are generated. As there is no lifecycle for secrets this would lead to pod failures when they can't find the secrets they're expecting.

My next thought is to provide a service to do this generation using the third party types API: watch specified resources, generate & create corresponding secrets. I can start hacking on that right away, unless someone thinks this would be better as a feature of kubernetes itself? The only way I can think of adding this in (remembering I don't know kubernetes) would be to add a lifecycle to secrets. This would have the added benefit of feeding into secret rotation, but would be more invasive into kubernetes code & probably less pluggable & extensible.

So far I have had to generate secrets for server keys/certificates, client keys/certificates, ssh key pairs (& being able to reference public keys from generated pairs as a separate secret), gpg keys & secure passphrases. All comments appreciated!
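To make the third-party-types idea concrete, a request object along those lines might look roughly like this (a purely hypothetical sketch: the kind, group, and field names are illustrative, not an agreed API):

```yaml
# Hypothetical third party resource instance. A generator service would
# watch objects of this kind, drive key/CSR generation through cfssl,
# and create the referenced secret once the cert is signed.
apiVersion: example.com/v1
kind: CertificateRequest
metadata:
  name: myservice-tls
  namespace: default
spec:
  hosts:
    - myservice.default.svc.cluster.local
  secretName: myservice-tls   # secret for the generator to create
```

Pods would then mount `myservice-tls` as a normal secret volume, crash looping until the generator has populated it.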
@jimmidyson - While designing the system to generate signed certs for services we should also consider that at some point the master components will need to handle CSRs from nodes wanting to join the cluster (in dynamic clustering). It would be useful to work towards a solution that will support both requirements.
@roberthbailey Thanks for pointing that out. I can see cfssl helping hugely with that, handling multiple CAs to separate the cluster CA trust chain from the service CA trust chain, potentially even multiple service CAs in action. And it's handily written in go... Is this dynamic clustering going to be part of the core binary or an add-on?
The plan is for the core system to support both static and dynamic clustering. Dynamic clustering would be the default for the "kick the tires" use case because it would require minimal initial work to get the cluster set up. Static clustering might still be preferred for deployments where you wanted to have tighter control over the certificates used in the cluster.
What I really want is to just create a single Ingress that sets everything up for a website. This involves:
Setting up the backends is up to the developer (i.e. the gmail or maps team is responsible for their backends, but they don't worry about how traffic reaches the backend).
@jimmidyson took a look at your WIP. It seems useful but why put it in the master? That makes anything other than exec-ing a local binary hard to achieve (eg: waiting for a CA to sign a CSR). I think it would be more useful to define a claim-like resource that individual controllers can watch and fulfill. That way I can write a CA controller that generates 3 types of CA requests (eg: clusterlocal - signed by cluster CA, self, external). The secret created shows up in the status of the claim. Rough sketch:
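A claim of that shape could look roughly like the following (all kind and field names are hypothetical, shown only to illustrate the claim-and-fulfil pattern described above):

```yaml
# Hypothetical claim-style resource. An external CA controller watches
# these, fulfils the request, and records the created secret in status.
kind: CertificateSigningRequestClaim
metadata:
  name: myservice-cert
spec:
  signer: clusterlocal   # or: self, external
  hosts:
    - myservice.default.svc.cluster.local
status:
  secretName: ""   # set by the fulfilling controller
```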
That controller would generate the private key and CSR, and wait till signed, then create the secret. Similarly, SSHRequest:
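An SSHRequest along the same lines might look roughly like this (again, hypothetical names and fields):

```yaml
# Hypothetical SSH key claim, fulfilled by a controller that generates
# the key pair and publishes it as a secret referenced from status.
kind: SSHRequest
metadata:
  name: myservice-ssh
spec:
  type: rsa
  bits: 2048
status:
  secretName: ""   # set by the fulfilling controller
```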
And so on.
Thinking about it a little more, you can still do this through a secret/request subresource endpoint on the master, it's just not satisfied by the master but by an external controller.
cc @gtank
This is about to merge to origin. Will update once we get more practical use (I plan on trying it with pet sets for automatic peer signing).
Is OpenShift just doing this for servers, or for clients as well? The latter seems very useful too, but a little trickier. Servers are discovered by their DNS name, so that is sufficient. But clients may have multiple attributes that servers want to know about when authorizing them (pod name/namespace, controller it is part of, service account, labels, etc). Related: @jbeda's spiffe.io.
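For reference, a SPIFFE-style identity encodes those client attributes as a URI SAN in the certificate rather than a DNS name; a commonly used path layout for Kubernetes workloads looks like this (the trust domain and path shown are illustrative, not prescribed in this thread):

```yaml
# Illustrative SPIFFE-style URI SAN naming a client by namespace and
# service account instead of a DNS name (spiffe:// scheme from spiffe.io).
uriSAN: spiffe://cluster.local/ns/default/sa/fluentd
```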
Just generating serving certs and propagating a CA into service account ...
We are considering extending this to allow users to request secret generation of two secrets - one a CA.crt with private key, and one a client cert against that key. That would allow some common two party client auth setups to happen automatically. This turned out to be a requirement for things like elastic search and fluentd, where we want fluentd to have a client cert that is distinct from the server certs used by ES.
@smarterclayton Could you explain your considerations in more detail? Here's my interpretation of what you've mentioned and I'm getting stuck on "automatically" :)
Given a Kubernetes Service and a separate "client" Pod:
Basically, we'd just be taking the manual "create signing key/cert pair, create client key/cert pair, and sign client key/cert pair" out of the equation. To rotate, you'd simply delete the original signer and things ought to cascade. The secret creation, annotation, and mounting order problems remain. If we assume that pods can crash loop on missing secret content, I think it ought to work.
Thanks for that information. Do you mean that the "server" pod crash loops, automatically adding ...? From a UX perspective:
Repeat 2-4 for each client? I think that if all clients use the same CA and only repeat step 2 then that allows rogue clients.
Off the cuff, I think something like this works. It lets the server come up and separately bring up clients. The server doesn't have to restart on new clients, no modification is needed after creation, and the system eventually settles.

```yaml
kind: Secret
metadata:
  name: client-ca
  annotations:
    create-client-ca-signer: "true"
---
kind: Secret
metadata:
  name: client-cert
  annotations:
    create-client-cert-from-signer: client-ca
---
kind: Pod
metadata:
  name: server
volume:
  # pod crash loops if the content is empty
  secretMount: client-ca
---
kind: Pod
metadata:
  name: client
volume:
  # pod crash loops if the content is empty
  secretMount: client-cert
```
I've been looking at this problem from the Service side. It seems useful for kube to optionally generate a CA and serving cert for a Service, so clients can trust that the Service they are talking to is the service they expect.
Yeah, had planned to fold this into the container / service identity working group discussion, that email will go out in the next day or two.
This is fully implemented in OpenShift and is going to move into GA as soon as we figure out what the Istio / SPIFFE alignment will be (SPIFFE has an analogue, as does Istio).
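For context, the OpenShift implementation referred to here is driven by a service annotation; roughly (annotation name as documented by OpenShift's service serving certificate feature, the service details are illustrative):

```yaml
# Annotating a Service asks OpenShift's service CA controller to generate
# a signed serving cert/key pair into the named secret.
apiVersion: v1
kind: Service
metadata:
  name: myservice
  annotations:
    service.alpha.openshift.io/serving-cert-secret-name: myservice-tls
spec:
  selector:
    app: myservice
  ports:
    - port: 443
      targetPort: 8443
```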
@smarterclayton @deads2k Any predictions about merging this in Kubernetes?
@thockin are you influencing an alignment of this request with https://github.com/PalmStoneGames/kube-cert-manager or https://github.com/jetstack-experimental/cert-manager? Each effort sees the value of certs for services, with various approaches. @munnerz are you accommodating the current state of the OpenShift approach with your incubator proposal? cert-manager/cert-manager#50 It's a genuine challenge at the moment to choose a stable and maintained implementation.
It is normal for services within a cluster to require TLS & hence keys/certificates are required. Managing this is normally manual & time-consuming. I'd like to be able to generate keys/certificates on (authenticated) request & provide these to the relevant pods via secrets. This is a common problem in a microservice architecture - great blog post on it here.
I have a working prototype building on top of cfssl. The basic idea is to watch secrets as they are added &, if annotated correctly, to update the secret with generated keys & signed certificates via a cfssl pod (cfssl is bound to localhost so no external connectivity). These are then mounted inside pods that require them in the normal way.
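As a sketch of what such an annotated secret might look like before the generator fills it in (the annotation names here are illustrative, not necessarily the prototype's actual ones):

```yaml
# Hypothetical annotated secret. The watcher sees the annotations and
# populates the key/cert data via the local cfssl pod once signed.
apiVersion: v1
kind: Secret
metadata:
  name: myservice-tls
  annotations:
    cfssl-generate: "server"
    cfssl-hosts: myservice.default.svc.cluster.local
type: Opaque
data: {}   # populated with the generated key & signed certificate
```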
Obviously this is very insecure at the moment as there is no idea of authorization, or of revocation, or of ... well anything other than simply usability. I am not advocating the approach I currently have, just explaining what I've done so far to start discussion.
This is related to some other issues around secret/token generation such as #11070 & #8866, but separate IMO - perhaps just a special case, but one that justifies its own discussion.
/cc @liggitt @erictune