encrypt secrets when in etcd. #12742
my first instinct would be to use etcd ACLs to subdivide the etcd instance, since indiscriminate write access to etcd would be just as much of a compromise (write yourself a privileged pod scheduled to every node that reads node credentials and asks the master API for all secrets... done)
related issue is using https://github.com/hashicorp/vault to store secrets: #10439. At first glance, changing our storage seems easier than integrating with something that has a non-Kubernetes API.
that said, I also like the idea of encrypting secrets at rest in etcd (actually, I'd like the ability to do that generically on any object, since we also store things like OAuth access tokens in etcd)
We might actually want to limit read and write on all apiserver storage, not just secrets. That would discourage people from depending on our storage implementation.
+1 for preventing end-runs around API conversion/validation. think of the support calls :)
Probably whether it makes sense to use something like Vault vs. simply encrypting data at rest in any Kubernetes storage solution depends on whether the extra things Vault does are useful (fine-grained access control, tokens with TTLs and limits on the number of uses, generation of short-lived access to external resources, etc.). It's really designed for a wide set of use cases that may simply not be needed by Kubernetes. There are other tickets for ZooKeeper/Consul support instead of etcd for backing K/V storage, so it probably makes sense for most secrets to simply encrypt what's being stored into the K/V store.

That said, you then still have the problem of the master keys -- where they should be stored and who has access -- and that is both a hard problem and a place where Vault could be useful, especially using the transit backend (https://vaultproject.io/docs/secrets/transit/index.html). It's going to gain support for key generation (so that the private key is never exposed but kept locked in Vault) and key rotation/rollover pretty soon, which will make it work quite well as a KEK service. Those that don't want to use Vault or a similar solution could simply make do with ACLs on a master key stored in the K/V store, or some similar approach.
This is marked P1 but hasn't been touched in months. @erictune can you find an owner or bump it to P2?
I may have someone to work on this. I'm told @smarterclayton is working on code that (1) wraps all objects stored in etcd in a "wrapper" object, (2) allows different "encodings" to be applied to the "inner" object stored in etcd, such as a protocol buffer binary wire format encoding, and (3) lets the outer or wrapper object record what encoding(s) were applied to the inner object.

@smarterclayton, is the above correct, and can you point at an issue or PR and/or give an ETA for said feature? 1.3?

Assuming that feature is available, the idea would be to have encryption be one form of encoding that can be applied to "inner" objects, and to apply it to all secret objects. I don't want to get into any specifics about keys and ciphers yet. Just confirming that the encoding support is going to line up.
@pmoire if you know about serializers status
cc @pmorie
@wojtek-t wants the protobuf work to happen, too. 1.3 is a safe bet for this.
@erictune - yes - you can assume that support for protobufs will be done for 1.3
One issue with the serializer approach: it would not allow the secrets to be stored in an HSM, which some customers have asked for. That would require a different storage backend, rather than a different serializer. I'm going to try to meet with some HSM users to understand the use case better. Let me know if you have experience and are interested.
Here is an alternative that can be implemented by end-users now, rather than waiting for this feature to be implemented. Assume you have a special secret that you never want to hit the disk, including via etcd.
This is a little bit clunky, but it does allow people with strict rules around secrets to start using Kubernetes. This pattern also might work in conjunction with https://github.com/UKHomeOffice/vault-sidekick
@erictune: in the workaround, wouldn't this mean the nodes need to store on disk the superset of secrets needed by all pods that may be scheduled? Also, preventing pods from mounting specific host paths, which may expose secrets they're not supposed to have access to, would require additional logic.
@rickypai In this scenario, the node only stores one, or a few, secrets. Those secrets are used to get additional secrets from Vault, and they would not be stored on disk but on tmpfs. On node reboot, an admin needs to ssh to each node and place the secret onto the tmpfs again. Whether you need additional logic to prevent pods from mounting specific host paths depends on your threat model. If you don't trust the users that create pods very much, then I agree you need the additional logic. If you mostly trust your insiders, just want to isolate containers from each other, and have audit requirements around your secrets, then I think what I said is useful.
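As a rough illustration of the workaround above, a pod could consume the node-local bootstrap secret through a hostPath volume pointing at a tmpfs mount. The names, image, and paths below (including the `/mnt/secrets` tmpfs location) are hypothetical, not a prescribed layout:

```yaml
# Hypothetical pod spec for the tmpfs workaround: an admin places a bootstrap
# secret on a tmpfs mount on each node (so it never hits disk or etcd), and
# the pod mounts it via hostPath to fetch its real secrets from Vault at
# startup. All names and paths are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-vault-bootstrap
spec:
  containers:
  - name: app
    image: example.com/app:latest
    volumeMounts:
    - name: bootstrap-secret
      mountPath: /etc/bootstrap
      readOnly: true
  volumes:
  - name: bootstrap-secret
    hostPath:
      path: /mnt/secrets   # tmpfs on the node; repopulated by an admin after reboot
```

As noted above, without extra admission logic any pod author who can use hostPath volumes can mount this path, so this only fits threat models where pod creators are trusted.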
Assigning myself to shepherd this through.
Automatic merge from submit-queue (batch tested with PRs 41061, 40888, 40664, 41020, 41085)

Allow values to be wrapped prior to serialization in etcd

This adds a new value transformer to the etcd2 store that can transform the value from etcd on read and write. This will allow the store to implement encryption at rest or otherwise transform the value prior to persistence.

* [x] etcd3 store
* [x] example of transformation
* [x] partial error handling

This is in support of #12742
/cc
What would the damage be if someone got into a non-privileged container in a 1.6 GKE cluster? Could they get into etcd or into the secrets API right off the bat? Or would they need more, such as compromising a privileged container, etc. |
AFAIK, by default, it's game over, because you get access to /var/run/secrets/kubernetes.io/serviceaccount and you can just create a privileged container; take a look at this article: https://medium.com/@CornflakeSavage/capturing-all-the-flags-in-bsidessf-ctf-by-pwning-our-infrastructure-3570b99b4dd0

There is also the problem of restricting access to AWS or GCE instance metadata, and effectively to the IaaS API; we use the kube2iam project with AWS.
The plan for GKE is to disable the ABAC mode for service accounts, but that is a breaking change for existing users. @cjcullen can point you there if you need more info; not related to this issue.
So, if RBAC is enabled, the situation is clearly better, unless someone gets into a pod with too much access. So the game is keeping an attacker out of pods with what roles? Instance metadata is a good point... can an attacker get everything they need from that? Any other ways into etcd?
Would be better to move this discussion to the threat model PR in kubernetes/community.
Anyone here who has a use case for encryption of secrets at the database layer: please check the latest proposal. In particular, if you have hard requirements that aren't satisfied by option 1, we'd like to know now.
Hey @destijl, my only requirement, which is how I got started with this feature, was to encrypt data at rest to better meet my HIPAA encryption-at-rest requirements. I think option one is good to get this rolling. My initial idea was to treat it like a cloud provider, where you could implement your own, but that might be too difficult to implement safely. We use Vault for some secret management, so the Vault transit API seemed interesting, but I'm cool with this right now. Need any help coding pieces? I'm happy to help out if appropriate!
Automatic merge from submit-queue (batch tested with PRs 45709, 41939)

Add an AEAD encrypting transformer for storing secrets encrypted at rest

Tweak the ValueTransformer interface slightly to support additional context information (to allow authenticated data to be generated by the store and passed to the transformer). Add a prefix transformer that looks for known matching prefixes and uses them. Add an AES GCM transformer that performs AEAD on the values coming in and out of the store.

Implementation of https://docs.google.com/document/d/1lFhPLlvkCo3XFC2xFDPSn0jAGpqKcCCZaNsBAv8zFdE/edit# and #12742
Excuse my ignorance here, but for encryption at rest, why not set up encrypted disks underneath etcd?
@djschny you can, and having an encrypted disk is expected. The advantages of doing this on top are discussed in the first section of the proposal. |
There have been several comments on using Vault to manage the secrets in a cluster. Rather than do that, we've been looking at using Vault to simply manage the encryption key for a cluster, as in this comment. @destijl, we have a proposal that builds on @smarterclayton's implementations in 1.7. I'm new to this, so do I create a new issue to track managing the encryption key for a K8s cluster via Vault? I looked at the feature tracking guidance and this seems to qualify.
@kksriram @sakshamsharma has an implementation of a KEK/DEK scheme that uses Google KMS to store the key. You should talk and align with that implementation:
Of note, #49350, which implements this, is already up for review.

On 21 Jul 2017, KK Sriramadhesikan wrote:

> We've got a proposal (https://docs.google.com/document/d/15-baW4i7qws1yxxIYjHXqKpk259ebauQbECQCpPD308/edit#heading=h.67bgmqyjswzf) to use envelope encryption and have Vault encrypt the DEKs. I will push this out to sig-auth as well. This builds on the work in #41939 and #46460, and will likely end up being a new KMS provider based on #48574 that uses Vault.
@sakshamsharma, the Vault provider would be another implementation of the same interfaces that you've got in that PR. The implementation of the proposal I posted would go in as a Vault provider, analogous to the one you're implementing for GCEKMS.
closing in favor of kubernetes/enhancements#92 |
We should limit access to secrets when in etcd.
Initially discussed in #11937.
Although we discourage it, people seem to want to build their own clusters where etcd is used both by the apiserver and by other components to store configuration. Secrets need to have Kubernetes access controls on them and should not be widely readable; other types of configuration want to be widely readable.
We could do this in a couple of ways:
@liggitt @pmorie
thoughts?