
Keys/certificates generation for services #12732

Closed
jimmidyson opened this issue Aug 14, 2015 · 40 comments
Labels
area/api Indicates an issue on api area. area/security area/usability kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. priority/backlog Higher priority than priority/awaiting-more-evidence. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. sig/auth Categorizes an issue or PR as relevant to SIG Auth. sig/service-catalog Categorizes an issue or PR as relevant to SIG Service Catalog.

Comments

@jimmidyson
Member

It is normal for services within a cluster to require TLS & hence keys/certificates are required. Managing this is normally manual & time-consuming. I'd like to be able to generate keys/certificates on (authenticated) request & provide these to the relevant pods via secrets. This is a common problem in a microservice architecture - there's a great blog post on it here: https://blog.cloudflare.com/how-to-build-your-own-public-key-infrastructure/

I have a working prototype building on top of cfssl (https://github.com/cloudflare/cfssl). The basic idea is to watch secrets as they are added &, if they are annotated correctly, to update the secret with generated keys & signed certificates via a cfssl pod (cfssl is bound to localhost so there is no external connectivity). These are then mounted inside the pods that require them in the normal way.
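
For concreteness, a minimal sketch of that watch loop, written against today's client-go for illustration (the prototype predates it). The annotation name (example.io/generate-cert-for), the cfssl address/port, and the exact request/response shapes are assumptions for illustration, not anything shipped:

package main

import (
	"bytes"
	"context"
	"encoding/json"
	"log"
	"net/http"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Watch secrets in all namespaces and react only to annotated ones.
	w, err := client.CoreV1().Secrets(metav1.NamespaceAll).Watch(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for ev := range w.ResultChan() {
		s, ok := ev.Object.(*corev1.Secret)
		if !ok {
			continue
		}
		host := s.Annotations["example.io/generate-cert-for"] // hypothetical annotation
		if host == "" || len(s.Data["tls.crt"]) > 0 {
			continue // not requested, or already filled in
		}
		// Ask the localhost-bound cfssl pod for a fresh key and signed cert.
		body, _ := json.Marshal(map[string]interface{}{
			"request": map[string]interface{}{
				"CN":    host,
				"hosts": []string{host},
				"key":   map[string]interface{}{"algo": "rsa", "size": 2048},
			},
		})
		resp, err := http.Post("http://127.0.0.1:8888/api/v1/cfssl/newcert",
			"application/json", bytes.NewReader(body))
		if err != nil {
			log.Printf("cfssl request failed: %v", err)
			continue
		}
		var out struct {
			Result struct {
				Certificate string `json:"certificate"`
				PrivateKey  string `json:"private_key"`
			} `json:"result"`
		}
		err = json.NewDecoder(resp.Body).Decode(&out)
		resp.Body.Close()
		if err != nil || out.Result.Certificate == "" {
			log.Printf("bad cfssl response: %v", err)
			continue
		}
		// Write the generated material back so pods can mount it as usual.
		if s.Data == nil {
			s.Data = map[string][]byte{}
		}
		s.Data["tls.crt"] = []byte(out.Result.Certificate)
		s.Data["tls.key"] = []byte(out.Result.PrivateKey)
		if _, err := client.CoreV1().Secrets(s.Namespace).Update(context.Background(), s, metav1.UpdateOptions{}); err != nil {
			log.Printf("secret update failed: %v", err)
		}
	}
}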

Obviously this is very insecure at the moment as there is no idea of authorization, or of revocation, or of ... well anything other than simply usability. I am not advocating the approach I currently have, just explaining what I've done so far to start discussion.

This is related to some other issues around secret/token generation such as #11070 & #8866, but separate IMO - perhaps just a special case, but one that justifies its own discussion.

/cc @liggitt @erictune

@brendandburns brendandburns added area/security team/master priority/backlog Higher priority than priority/awaiting-more-evidence. labels Aug 14, 2015
@brendandburns
Contributor

It would be awesome to add a key generation service; we should probably make it pluggable so that users can supply their own implementations as well.

@jimmidyson
Member Author

So could we add this as another type of secret, a la the service account token, seeing as it's so commonly required, & add pluggable generators? Right now generation of anything is in-process, but this seems to be something that should be requested externally rather than trying to embed a full CA inside the kube master. This could in time plug in to letsencrypt.org when it's available, while also having a private implementation using cfssl or boulder. Can we perhaps make the key/CSR generation static & just have a pluggable piece to send the signing request off?
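
A rough sketch of that pluggable split, with illustrative names only: key/CSR generation stays static and in-process, and just the signing step hides behind an interface that a cfssl, boulder, or (eventually) letsencrypt backend could implement.

package certgen

// Signer is the pluggable piece: it sends a PEM-encoded CSR to some
// external CA and returns the signed, PEM-encoded certificate.
// Possible implementations: a local cfssl pod, boulder, an ACME client.
type Signer interface {
	Sign(csrPEM []byte) (certPEM []byte, err error)
}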

@bgrant0607
Member

cc @kubernetes/goog-cluster

@bprashanth
Contributor

Can we get this to reach out to the letsencrypt CA (https://letsencrypt.org/howitworks/technology/) and produce real certs for domains that we get through cloudprovider DNS? Haven't thought this through, just putting it out there.

@thockin
Member

thockin commented Sep 17, 2015

+1 to the idea - need it for docker registry SSL.


@jimmidyson
Member Author

@bprashanth Absolutely, although that will obviously only work for services hosted on an owned domain. I'm thinking we need this for internal services, which would require an internal CA. Thinking of cfssl for this - even better that it's the base of letsencrypt.

I would like to try to implement this, but would really like some pointers on how. My first thought was adding a controller, a la the old service account token generator, but that would leave empty secrets until they are generated. As there is no lifecycle for secrets, this would lead to pod failures when pods can't find the secrets they're expecting.

My next thought is to provide a service to do this generation using the third party types API: watch the specified resources, then generate & create corresponding secrets. I can start hacking on that right away, unless someone thinks this would be better as a feature of kubernetes itself? The only way I can think of adding this in (remembering I don't know kubernetes) would be to add a lifecycle to secrets. This would have the added benefit of feeding into secret rotation, but would be more invasive to kubernetes code & probably less pluggable & extensible.
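
For the third-party-types route, a sketch of the registration using the era's ThirdPartyResource API (all names illustrative; the external service would watch resources of the resulting kind and create the corresponding secrets):

apiVersion: extensions/v1beta1
kind: ThirdPartyResource
metadata:
  # yields kind CertificateRequest in API group example.io
  name: certificate-request.example.io
description: "A request for a generated key/certificate pair, fulfilled by an external controller"
versions:
- name: v1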

So far I have had to generate secrets for server keys/certificates, client keys/certificates, ssh key pairs (& being able to reference public keys from generated pairs as a separate secret), gpg keys & secure passphrases.

All comments appreciated!

@roberthbailey
Contributor

@jimmidyson - While designing the system to generate signed certs for services we should also consider that at some point the master components will need to handle CSRs from nodes wanting to join the cluster (in dynamic clustering). It would be useful to work towards a solution that will support both requirements.

@jimmidyson
Member Author

@roberthbailey Thanks for pointing that out. I can see cfssl helping hugely with that, handling multiple CAs to separate the cluster CA trust chain from the service CA trust chain, potentially even with multiple service CAs in action. And it's handily written in go... Is this dynamic clustering going to be part of the core binary or an add-on?

@roberthbailey
Contributor

The plan is for the core system to support both static and dynamic clustering. Dynamic clustering would be the default for the "kick the tires" use case because it requires minimal initial work to get the cluster set up. Static clustering might still be preferred for deployments where you want tighter control over the certificates used in the cluster.

@bprashanth
Contributor

Absolutely, although that will only work for services hosted on an owned domain obviously.

What I really want is to just create a single Ingress that sets everything up for a website. This involves:

  1. public dns
  2. certs
  3. loadbalancing to backend services

Setting up the backends is up to the developer (i.e. the gmail or maps team is responsible for their backends, but they don't worry about how traffic reaches them).
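
A sketch of that single-resource experience with the extensions/v1beta1 Ingress API of the time; the tls stanza is standard, and the assumption here is that the proposed cert machinery would populate the named secret:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: website
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: foo-bar-tls  # would be filled in by the proposed cert generator
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: website-backend  # developer-owned backend Service
          servicePort: 80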

@bprashanth
Contributor

@jimmidyson took a look at your WIP. It seems useful, but why put it in the master? That makes anything other than exec-ing a local binary hard to achieve (eg: waiting for a CA to sign a CSR).

I think it would be more useful to define a claim-like resource that individual controllers can watch and fulfill. That way I can write a CA controller that handles 3 types of CA requests (eg: cluster-local - signed by the cluster CA - self-signed, or external). The secret created shows up in the status of the claim. Rough sketch:

# need a better name
request:
  crt:
    # some domain id (eg: google),
    # kubernetes.io:nodename for kubelet?
    domain: foo.bar.com
status:
  secret: fooSecret

That controller would generate the private key and CSR, wait till signed, then create the secret. Similarly, an SSHRequest:

request:
  ssh:
    type: rsa
    keyLength: 2048
status:
  secret: fooSecret

And so on.

@bprashanth
Contributor

Thinking about it a little more, you could still do this through a secret/request subresource endpoint on the master; it's just not satisfied by the master but by an external controller.

@mikedanese
Member

cc @gtank

@smarterclayton
Contributor

This is about to merge to origin. Will update once we get more practical use (I plan on trying it with pet sets for automatic peer signing)

@erictune
Member

Is OpenShift just doing this for servers, or for clients as well? The latter seems very useful too, but a little trickier. Servers are discovered by their DNS name, so that is sufficient. But clients may have multiple attributes that servers want to know about when authorizing them (pod name/namespace, controller it is part of, service account, labels, etc).

Related: @jbeda's spiffe.io.

@smarterclayton
Contributor

Just generating serving certs and propagating a CA into service account secrets right now.


@smarterclayton
Contributor

We are considering extending this to allow users to request generation of two secrets - one a ca.crt with its private key, and one a client cert signed by that CA. That would allow some common two-party client auth setups to happen automatically. This turned out to be a requirement for things like elasticsearch and fluentd, where we want fluentd to have a client cert that is distinct from the server certs used by ES.

@hobti01

hobti01 commented Mar 23, 2017

@smarterclayton Could you explain your considerations in more detail? Here's my interpretation of what you've mentioned and I'm getting stuck on "automatically" :)

  • Existing service cert functionality via annotations remains as-is
    • /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
    • /my/secret/tls.crt and /my/secret/tls.key
  • "user" generates new-ca.crt and new-ca.key, client.crt and client.key (signed by new-ca)

Given a Kubernetes Service and a separate "client" Pod:

  • The client can "trust" the service via the typical process of comparing the signing cert to a known signer, which means comparing to /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
  • How does the Service trust the client?
    • Is new-ca.crt auto-added to /var/run/secrets/kubernetes.io/serviceaccount/new-ca.crt?
    • How is new-ca.crt integrated into the distro-specific ca-certificates, or server specific configurations/directories/keystores?
  • Are all client CAs trusted by all Services or is the list of trusted CAs annotated on or linked to the Service?

@deads2k
Contributor

deads2k commented Mar 27, 2017

How does the Service trust the client?
Is new-ca.crt auto-added to /var/run/secrets/kubernetes.io/serviceaccount/new-ca.crt?
How is new-ca.crt integrated into the distro-specific ca-certificates, or server specific configurations/directories/keystores?

The new-ca.crt wouldn't be auto-added to the container. Instead the pod would have to mount the secret (or configmap, depending on how smart we make this) which contains the ca.crt that verifies the client cert the platform signed with the CA it made.

Basically, we'd just be taking the manual "create signing key/cert pair, create client key/cert pair, and sign client key/cert pair" steps out of the equation. To rotate, you'd simply delete the original signer and things ought to cascade.

The secret creation, annotation, and mounting order problems remain. If we assume that pods can crash loop on missing secret content, I think it ought to work.

@hobti01

hobti01 commented Mar 30, 2017

Thanks for that information.

Do you mean that the "server" pod crash loops, automatically adding new-ca.crt via a Secret?

From a UX perspective:

  1. Deploy a server with annotations to get a cert "automatically"
  2. Deploy a client with annotations to get a cert "automatically"
     • Get the name of the Secret containing the new-ca?
  3. Modify the server Pod to mount the new-ca Secret
     • Manually, because automatically would be a security risk?
  4. Deploy the server again

Repeat 2-4 for each client? I think that if all clients use the same CA and we only repeat step 2, that allows rogue clients.

@deads2k
Contributor

deads2k commented Apr 11, 2017

Off the cuff, I think something like this works. It lets the server come up and clients be brought up separately. The server doesn't have to restart on new clients, no modification is needed after creation, and the system eventually settles.

kind: Secret
metadata:
  name: client-ca
  annotations:
    create-client-ca-signer: "true"
---
kind: Secret
metadata:
  name: client-cert
  annotations:
    create-client-cert-from-signer: client-ca
---
kind: Pod
metadata:
  name: server
volume:
  # pod crash loops if the content is empty
  secretMount: client-ca
---
kind: Pod
metadata:
  name: client
volume:
  # pod crash loops if the content is empty
  secretMount: client-cert


@0xmichalis 0xmichalis added sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. and removed team/cluster (deprecated - do not use) labels Apr 11, 2017
@thockin
Member

thockin commented Jun 5, 2017

I've been looking at this problem from the Service side. It seems useful for kube to optionally generate a CA and serving cert for a Service, so clients can trust that the Service they are talking to is the service they expect.

@smarterclayton
Contributor

Yeah, had planned to fold this into the container / service identity working group discussion, that email will go out in the next day or two.

@smarterclayton
Contributor

This is fully implemented in openshift and is going to move into GA as soon as we figure out what the istio / SPIFFE alignment will be (SPIFFE has an analogue, as does istio)
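
For reference, the OpenShift implementation is annotation-driven: annotating a Service as below causes a serving cert/key pair to be generated into the named secret (annotation name as documented for OpenShift at the time; shown here as prior art, not an upstream API):

apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # OpenShift serving-cert annotation; the controller fills my-service-tls
    service.alpha.openshift.io/serving-cert-secret-name: my-service-tls
spec:
  selector:
    app: my-service
  ports:
  - port: 443
    targetPort: 8443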

@dnascimento

@smarterclayton @deads2k Any predictions about merging this in Kubernetes?
openshift/origin#7728

@hobti01

hobti01 commented Oct 8, 2017

@thockin are you influencing an alignment of this request with https://github.com/PalmStoneGames/kube-cert-manager or https://github.com/jetstack-experimental/cert-manager?

Each effort sees the value of certs for services with various approaches.

@munnerz are you accommodating the current state of the OpenShift approach with your incubator proposal? cert-manager/cert-manager#50

It's a genuine challenge at the moment to choose a stable and maintained implementation.

@mikedanese mikedanese added the sig/auth Categorizes an issue or PR as relevant to SIG Auth. label Oct 9, 2017
@liggitt liggitt added the kind/feature Categorizes issue or PR as related to a new feature. label Jan 6, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 14, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 14, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
