
encrypt secrets when in etcd. #12742

Closed
erictune opened this issue Aug 14, 2015 · 43 comments
Assignees
Labels
area/secret-api area/security priority/backlog Higher priority than priority/awaiting-more-evidence. sig/auth Categorizes an issue or PR as relevant to SIG Auth. sig/service-catalog Categorizes an issue or PR as relevant to SIG Service Catalog.

Comments

@erictune
Member

We should limit access to secrets when in etcd.

Initially discussed in #11937.

Although we discourage it, people seem to want to build their own clusters where etcd is used both by the apiserver and by other components to store configuration. Secrets need Kubernetes access controls on them and should not be widely readable; other types of configuration should be widely readable.

We could do this in a couple of ways:

  1. use etcd ACLs to limit access to the etcd keys that hold secrets.
  2. encrypt some or all of the secret data when stored.

@liggitt @pmorie
thoughts?
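A rough sketch of option 1 using etcd2's auth/ACL model (all user, role, and path names here are hypothetical, and flag spellings vary across etcd versions):

```
# Create a root user and turn on authentication.
etcdctl user add root
etcdctl auth enable

# A role that can touch only the secrets subtree.
etcdctl role add apiserver-secrets
etcdctl role grant apiserver-secrets --readwrite --path '/registry/secrets/*'

# A user for the apiserver; other components get roles without this path.
etcdctl user add apiserver
etcdctl user grant apiserver --roles apiserver-secrets
```

Components sharing the etcd instance would then authenticate as users whose roles exclude the secrets path.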

@liggitt
Member

liggitt commented Aug 14, 2015

my first instinct would be to use etcd ACLs to subdivide the etcd instance, since indiscriminate write access to etcd would be just as much of a compromise (write yourself a privileged pod scheduled to every node that reads node credentials and asks the master API for all secrets... done)

@erictune
Member Author

related issue is using https://github.com/hashicorp/vault to store secrets #10439

At first glance, changing our storage seems easier than integrating with something that has a non-Kubernetes API.

@liggitt
Member

liggitt commented Aug 14, 2015

that said, I also like the idea of encrypting secrets at rest in etcd (actually, I'd like the ability to do that generically on any object, since we also store things like OAuth access tokens in etcd)

@erictune
Member Author

We might actually want to limit read and write on all apiserver storage, not just secrets. That would discourage people from depending on our storage implementation.

@liggitt
Member

liggitt commented Aug 14, 2015

+1 for preventing end-runs around API conversion/validation. think of the support calls :)

@brendandburns brendandburns added team/master sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. area/security priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Aug 14, 2015
@ghost ghost added team/control-plane and removed team/master sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. labels Aug 19, 2015
@jefferai

jefferai commented Sep 3, 2015

Whether it makes sense to use something like Vault vs. simply encrypting data at rest in any Kubernetes storage solution probably depends on whether the extra things Vault does are useful (fine-grained access control, tokens with TTLs and limits on the number of uses, generation of short-lived access to external resources, etc.). It's really designed for a wide set of use cases that may simply not be needed by Kubernetes.

There are other tickets for ZooKeeper/Consul support instead of etcd for backing K/V storage, so for most secrets it probably makes sense to simply encrypt what's being stored in the K/V store.

That said, you then still have the problem of the master keys -- where they should be stored and who has access -- which is both a hard problem and a place where Vault could be useful, especially using the transit backend (https://vaultproject.io/docs/secrets/transit/index.html). It's going to gain support for key generation (so that the private key is never exposed but kept locked in Vault) and key rotation/rollover pretty soon, which will make it work quite well as a KEK service. Those who don't want to use Vault or a similar solution could simply make do with ACLs on a master key stored in the K/V store, or some similar approach.

@davidopp
Member

This is marked P1 but hasn't been touched in months. @erictune can you find an owner or bump it to P2?

@erictune erictune added priority/backlog Higher priority than priority/awaiting-more-evidence. and removed priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Dec 18, 2015
@erictune
Member Author

erictune commented Mar 1, 2016

I may have someone to work on this.

I'm told @smarterclayton is working on code that (1) wraps all objects stored in etcd in a "wrapper" object, and (2) allows different "encodings" to be applied to the "inner" object stored in etcd, such as a protocol buffer binary wire format encoding, and (3) the outer or wrapper object can record what encoding(s) were applied to the inner object.

@smarterclayton is the above correct, and can you point at an issue or PR and/or give an ETA for said feature? 1.3?

Assuming that feature is available, the idea would be to have encryption be one form of encoding that can be applied to "inner" objects, and to apply it to all secret objects.

I don't want to get into any specifics about keys and ciphers yet. Just confirming that the encoding support is going to line up.

@erictune
Member Author

erictune commented Mar 1, 2016

@lavalamp

@erictune
Member Author

erictune commented Mar 1, 2016

@pmoire if you know about serializers status

@liggitt
Member

liggitt commented Mar 1, 2016

cc @pmorie

@lavalamp
Member

lavalamp commented Mar 1, 2016

@wojtek-t wants the protobuf work to happen, too. 1.3 is a safe bet for this.

@wojtek-t
Member

wojtek-t commented Mar 2, 2016

@erictune - yes - you can assume that support for protobufs will be done for 1.3

@erictune
Member Author

erictune commented Mar 3, 2016

One issue with the serializer approach. It would not allow the secrets to be stored in an HSM, which some customers have asked for. That would require a different storage backend, rather than a different serializer. I'm going to try to meet with some HSM users to understand the use case better. Let me know if you have experience and are interested.

@erictune
Member Author

Here is an alternative that can be implemented by end-users now, rather than waiting for this feature to be implemented.

Assume you have a special secret that you never want to hit the disk, including via etcd.
You can do this:

  1. have an init script on each node that makes a tmpfs, say at /mnt/foo.
  2. Have some process by which, when a node boots, a secret is copied into /mnt/foo, say /mnt/foo/special_secret.txt.
  3. Pods that need the special secret have a volume spec with hostPath: /mnt/foo.

This is a little bit clunky, but it does allow people with strict rules around secrets to start using Kubernetes. Two use cases I am thinking of are when special_secret.txt is a credential used to talk to a HSM, to get other secrets, or a secret used to auth to Vault, to get secrets from Vault.

This pattern also might work in conjunction with https://github.com/UKHomeOffice/vault-sidekick
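The steps above could be sketched as a hostPath pod spec (all names are hypothetical; assumes a node init script has already run something like `mkdir -p /mnt/foo && mount -t tmpfs -o size=1m,mode=0700 tmpfs /mnt/foo` and copied the secret in):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: needs-special-secret
spec:
  containers:
  - name: app
    image: example/app
    volumeMounts:
    - name: special-secret
      mountPath: /etc/special   # container sees /etc/special/special_secret.txt
      readOnly: true
  volumes:
  - name: special-secret
    hostPath:
      path: /mnt/foo            # tmpfs on the node; never hits disk
```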

@erictune erictune added sig/auth Categorizes an issue or PR as relevant to SIG Auth. and removed area/security labels Apr 12, 2016
@evie404
Contributor

evie404 commented Jul 22, 2016

@erictune: in the workaround, wouldn't this mean the nodes need to store on disk the superset of secrets needed by all pods that may be scheduled? Also, preventing pods from mounting specific host paths, which may expose secrets they're not supposed to have access to, would require additional logic.

@erictune
Member Author

@rickypai

In this scenario, the node only stores one or a few secrets, which are used to get additional secrets from Vault. Those secrets would be stored not on disk but on tmpfs. On node reboot, an admin needs to ssh to each node and place a secret onto the tmpfs again.

Whether you need additional logic to prevent pods from mounting specific host paths depends on your threat model. If you don't trust the users that create pods very much, then I agree you need the additional logic. If you mostly trust your insiders, and just want to isolate containers from each other, and have audit requirements around your secrets, then I think what I said is useful.

@smarterclayton smarterclayton self-assigned this Jan 21, 2017
@smarterclayton
Contributor

Assigning myself to shepherd this through.

k8s-github-robot pushed a commit that referenced this issue Feb 8, 2017
Automatic merge from submit-queue (batch tested with PRs 41061, 40888, 40664, 41020, 41085)

Allow values to be wrapped prior to serialization in etcd

This adds a new value transformer to the etcd2 store that can transform
the value from etcd on read and write. This will allow the store to
implement encryption at rest or otherwise transform the value prior to
persistence.

* [x] etcd3 store
* [x] example of transformation
* [x] partial error handling

This is in support of #12742
@soltysh
Contributor

soltysh commented Mar 16, 2017

/cc

@pnovotnak

pnovotnak commented Apr 14, 2017

What would the damage be if someone got into a non-privileged container in a 1.6 GKE cluster? Could they get into etcd or into the secrets API right off the bat? Or would they need more, such as compromising a privileged container, etc.

@pawelprazak

pawelprazak commented Apr 15, 2017 via email

@smarterclayton
Contributor

smarterclayton commented Apr 15, 2017 via email

@pnovotnak

pnovotnak commented Apr 15, 2017 via email

@smarterclayton
Contributor

smarterclayton commented Apr 15, 2017 via email

@destijl
Member

destijl commented May 12, 2017

Anyone here who has a use case for encryption of secrets at the database layer: please check the latest proposal:
kubernetes/community#607

Particularly if you have hard requirements that aren't satisfied by option 1, we'd like to know now.

@stevesloka
Contributor

Hey @destijl, my only requirement, which is how I got started with this feature, was encryption of data at rest, to better meet my HIPAA requirements.

I think option one is good for getting this rolling. My initial idea was to treat it like a cloud provider, where you could implement your own, but that might be too difficult to implement safely.

We use vault for some secret management so the vault transit API seemed interesting, but I'm cool with this right now.

Need any help coding pieces? I'm happy to help out if appropriate!

k8s-github-robot pushed a commit that referenced this issue May 17, 2017
Automatic merge from submit-queue (batch tested with PRs 45709, 41939)

Add an AEAD encrypting transformer for storing secrets encrypted at rest

Tweak the ValueTransformer interface slightly to support additional
context information (to allow authenticated data to be generated by the
store and passed to the transformer). Add a prefix transformer that
looks for known matching prefixes and uses them. Add an AES GCM
transformer that performs AEAD on the values coming in and out of the
store.

Implementation of https://docs.google.com/document/d/1lFhPLlvkCo3XFC2xFDPSn0jAGpqKcCCZaNsBAv8zFdE/edit# and #12742
@djschny

djschny commented Jun 16, 2017

Excuse my ignorance here, but for encryption at rest, why not set up dm-crypt for the filesystem on the host supporting etcd?

@destijl
Member

destijl commented Jul 5, 2017

@djschny you can, and having an encrypted disk is expected. The advantages of doing this on top are discussed in the first section of the proposal.
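For reference, the dm-crypt route looks roughly like this (the device name is hypothetical, and `luksFormat` destroys existing data). It protects only against offline access to the raw disk; anyone who can talk to etcd or read the mounted filesystem still sees plaintext, which is the gap the proposal's approach addresses on top:

```
cryptsetup luksFormat /dev/sdb
cryptsetup open /dev/sdb etcd-data        # prompts for the passphrase
mkfs.ext4 /dev/mapper/etcd-data
mount /dev/mapper/etcd-data /var/lib/etcd
```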

@kksriram

kksriram commented Jul 10, 2017

There have been several comments on using Vault to manage the secrets in a cluster. Rather than do that, we've been looking at using Vault to simply manage the encryption key for a cluster, like in this comment.

@destijl , we have a proposal that builds on @smarterclayton implementations in 1.7.

I'm new to this, so do I create a new issue to track managing the encryption key for a K8S cluster, via Vault? I looked at the feature tracking guidance and this seems to qualify.

@destijl
Member

destijl commented Jul 13, 2017

@kksriram @sakshamsharma has an implementation for a KEK/DEK scheme that uses Google KMS to store the key. You should talk and align with that implementation:

#48574

@kksriram

We've got a proposal to use envelope encryption and have Vault encrypt the DEKs.

I will push this out to sig-auth as well.

This builds on the work in #41939, #46460 and will likely end up being a new KMS provider based on #48574 that uses Vault.
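With the Vault CLI, the transit flow for wrapping DEKs looks roughly like this (the key name is hypothetical, and older Vault versions spell `secrets enable` as `mount`):

```
# Enable the transit engine and create a named key; the key never leaves Vault.
vault secrets enable transit
vault write -f transit/keys/k8s-secrets

# Encrypt (wrap) a freshly generated DEK; only the returned ciphertext is persisted.
DEK_B64=$(head -c 32 /dev/urandom | base64)
vault write transit/encrypt/k8s-secrets plaintext="$DEK_B64"

# Decrypt (unwrap) on the read path, using the ciphertext returned above.
vault write transit/decrypt/k8s-secrets ciphertext="vault:v1:..."
```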

@sakshamsharma
Contributor

sakshamsharma commented Jul 21, 2017 via email

@kksriram

@sakshamsharma, the Vault provider would be another implementation of the same interfaces that you've got in that PR. The implementation of the proposal I posted would go in as a Vault provider analogous to the one you're implementing for GCE KMS.

@liggitt
Member

liggitt commented Sep 23, 2017

closing in favor of kubernetes/enhancements#92

@liggitt liggitt closed this as completed Sep 23, 2017