
Suggestion: Is it possible to get Kubernetes to keep its Secrets in HashiCorp Vault? #10439

Closed
akamalov opened this issue Jun 27, 2015 · 100 comments
Labels
area/secret-api kind/feature Categorizes issue or PR as related to a new feature. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. sig/auth Categorizes an issue or PR as relevant to SIG Auth.

Comments

@akamalov

Is it possible to get Kubernetes to keep its Secrets in HashiCorp Vault?

hashicorp/vault#377

Alex

@thockin
Member

thockin commented Jun 27, 2015

This was discussed very briefly as something to explore after 1.0

@akamalov
Author

Thanks Tim!

@roberthbailey roberthbailey added this to the v1.0-post milestone Jun 27, 2015
@roberthbailey roberthbailey added priority/backlog Higher priority than priority/awaiting-more-evidence. team/cluster labels Jun 27, 2015
@benmccann

+1 I think that would be a great potential fit to explore further. Vault already has support for etcd as a backing store, and the encryption, audit logging, secret rotation, etc. would be great enhancements to k8s' existing capabilities.

@F21

F21 commented Jul 23, 2015

+1 for this. Vault has the ability to dynamically generate secrets against AWS, MySQL and other third parties which is a feature we are very interested in.

@pmorie
Member

pmorie commented Jul 23, 2015

@akamalov Just want to clarify that what you want is for Vault to be the backing storage for the Secret api resource. If so, that is definitely something that should be possible to do from a pluggability standpoint whether or not it is the default. As for vault specifically, it looks very interesting and like there would be some synergy with things we want to do with secrets.

@erictune
Member

Vault API docs: https://github.com/hashicorp/vault/blob/master/api/SPEC.md

One possible integration would be this:

  • Vault is expected to be started and unsealed before the apiserver is started. Vault presumably runs on its own machine, not in the k8s cluster.
  • apiserver is given credentials to authenticate to Vault. ACLs give it broad access.
  • Replace the REST implementation of the k8s secrets API to use Vault as its storage instead of etcd. So, GET /api/v1/secrets/foo turns into Read or Write calls against the Vault Secret API group.
  • SecretType or annotations and name are used to map from k8s secret to a Vault API path.
  • Admins do audit, unsealing, setting policy, revoking, and mounting backends via direct access to the Vault API.

Does this seem reasonable?
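For concreteness, a minimal sketch of what the storage-swap bullet could look like, using the Vault Go client. The vaultSecretStore type and the secret/k8s/<namespace>/<name> path layout are illustrative assumptions, not real Kubernetes code:

package vaultstore

import (
    "fmt"

    vaultapi "github.com/hashicorp/vault/api"
)

// vaultSecretStore would back the secrets REST handlers instead of etcd.
type vaultSecretStore struct {
    client *vaultapi.Client // authenticated with the apiserver's broad-access credentials
}

// pathFor maps a namespaced k8s secret to a Vault path, so that
// GET /api/v1/namespaces/default/secrets/foo reads secret/k8s/default/foo.
func pathFor(namespace, name string) string {
    return fmt.Sprintf("secret/k8s/%s/%s", namespace, name)
}

func (s *vaultSecretStore) Get(namespace, name string) (map[string]interface{}, error) {
    sec, err := s.client.Logical().Read(pathFor(namespace, name))
    if err != nil {
        return nil, err
    }
    if sec == nil {
        return nil, fmt.Errorf("secret %s/%s not found", namespace, name)
    }
    return sec.Data, nil
}

func (s *vaultSecretStore) Put(namespace, name string, data map[string]interface{}) error {
    _, err := s.client.Logical().Write(pathFor(namespace, name), data)
    return err
}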

@derekwaynecarr
Member

Seems like a variation on the classic problem of:

"how can I store content managed by the api-server in "?

I think the spectre of this type of problem was first raised here:
#1957

The unique spin in this case is that it is scoped to a single resource and not across the board.

Do we really want to go down the path of making it a one-off replacement per resource type, or should we instead find a way to support pluggable API types and treat our core objects as swappable first? I think that is the preferred path to meeting the need, but it's slightly more complicated than what @erictune denotes.

In general, I imagine each of the alternative pet stores, whether for all resources or just one of them, would have its own unique configuration flags that would need to be made available to the api-server, and things can easily get out of control with flag proliferation.

@benmccann

I think there are two questions here. One is whether we can make this piece pluggable. The other is "do we need to build our own version of infrastructure piece x?" E.g. we're not building our own load balancers. And perhaps it could be helpful not to have to invent yet one more secrets store and create our own encryption, audit logging, secret rotation, etc. when a solution already exists that fits well with k8s (i.e. is written in Go, distributed, runs on etcd, etc.)

@derekwaynecarr
Copy link
Member

@benmccann - well, a service is a load balancer for pods, but I understand your general point ;-)

The point is closer to our need for DNS and usage of SkyDNS. At this time, I am +1 for requiring DNS in Kubernetes and integrating SkyDNS (versus making it an add-on).

At this time, you cannot really use Kubernetes without Secrets, and I am not sure I want to require Vault to use Kubernetes yet. Others may disagree, and maybe the answer will become clearer when the requirements on Secrets grow over time.

That said, I am +1 on someone trying to swap an implementation of one of our API objects with another implementation as a learning exercise on the patterns required. My quick review of the Vault API appeared to show there was no equivalent of WATCH, but maybe I am missing something.

@erictune
Member

A second approach is to change each component that reads or writes secrets so that it knows how to use either a "builtin secret" or a "Vault secret". The changes that I can think of are:

  • kubelet knows how to pull either a "builtin secret" or a "Vault secret"
    • implies build dep on Vault API library
    • implies kubelet has a separate flag to tell it where the Vault API is and a flag to tell it where its credentials for Vault are.
  • same as above, for serviceAccount admission controller code, and flags for apiserver.
  • probably same thing for serviceAccount and Token controller, and flags for controller-manager.
  • PodSpec.imagePullSecrets can refer to a "Vault secret".
    • implies ObjectReference becomes something more general, able to reference external resources.
  • ServiceAccount.secrets and ServiceAccount.imagePullSecrets can refer to Vault secret
    • same thing about ObjectReference

Certainly possible. Need to compare effort to do above to effort to add the most wanted features to builtin secrets.
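A rough sketch of how the first bullet might look from the kubelet's side, assuming the hypothetical extra flags (say, --vault-addr and --vault-token-file) mentioned above; the interface, path layout, and value handling are illustrative, not actual kubelet code:

package secretsource

import (
    "context"
    "fmt"

    vaultapi "github.com/hashicorp/vault/api"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// secretSource abstracts where the kubelet pulls secret bytes from.
type secretSource interface {
    GetSecret(namespace, name string) (map[string][]byte, error)
}

// builtinSource reads Secret objects from the apiserver, as today.
type builtinSource struct{ cs kubernetes.Interface }

func (b *builtinSource) GetSecret(ns, name string) (map[string][]byte, error) {
    s, err := b.cs.CoreV1().Secrets(ns).Get(context.TODO(), name, metav1.GetOptions{})
    if err != nil {
        return nil, err
    }
    return s.Data, nil
}

// vaultSource reads from the Vault address/credentials the extra flags would supply.
type vaultSource struct{ client *vaultapi.Client }

func (v *vaultSource) GetSecret(ns, name string) (map[string][]byte, error) {
    sec, err := v.client.Logical().Read("secret/k8s/" + ns + "/" + name)
    if err != nil {
        return nil, err
    }
    if sec == nil {
        return nil, fmt.Errorf("vault secret %s/%s not found", ns, name)
    }
    out := map[string][]byte{}
    for k, val := range sec.Data {
        if str, ok := val.(string); ok { // generic-backend values come back as strings
            out[k] = []byte(str)
        }
    }
    return out, nil
}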

@akamalov
Author

@pmorie Yes, indeed, the ask (if possible) was to have Vault be the backing storage for the Secret API resource.

@bgrant0607 bgrant0607 removed this from the v1.0-post milestone Jul 24, 2015
@akamalov
Author

  • implies kubelet has a separate flag to tell it where the Vault API is
    and a flag to tell it where its credentials for Vault are.

Based on how the k8s API server is configured, couldn't it proxy requests to the Vault API server on behalf of the kubelet?

@jefferai

@erictune FYI, I'm not sure that those API docs are up-to-date, but there is exhaustive documentation at https://vaultproject.io/docs/http/index.html (plus info on the various backends). I'm not a Hashi guy but I have done some contributing to Vault and may be able to answer some questions (or point you to the right Hashi people).

I saw this ticket because someone mentioned it in #1957 (Support for Consul K/V storage). From that ticket it seems like work has gone into abstracting the storage backend, such that Consul (or ZooKeeper, or maybe eventually CockroachDB) could be swapped in for etcd.

To that end, regarding the mentions of SkyDNS above, please take care to keep things abstract. Consul already does service discovery (both DNS-based and with an HTTP resource). It would be a shame to have someone put together an alternate backing K/V storage implementation but then still be required to run etcd because SkyDNS is now a requirement.

@erictune
Member

Thanks for the docs pointer. I'll take a look at that.

Regarding Consul and SkyDNS: in the project-supported Kubernetes distros, SkyDNS uses a separate etcd instance which runs in its pod. That etcd instance does not talk to the one used by the apiserver. So, you could replace the apiserver's use of etcd with Consul without breaking DNS.

@jefferai

@erictune Ah. This also suggests, however, that etcd is running in a single-instance mode, which means no HA or reliability. In that case, you'd maybe get more mileage from spending the effort on making SkyDNS use e.g. BoltDB...something designed more around a single-user paradigm.

Edit: I realize you said "in its pod", so perhaps you are talking about a separate, fully multi-server HA etcd setup. Either way, I'd still prefer to re-use existing Consul rather than populate SkyDNS with services that Consul already knows about and is able to serve information for; not to mention the fact that Consul's secondary HTTP API provides a much richer set of information than the DNS API on either SkyDNS or Consul can provide.

@jonmoter

+1 for this.

Given that secrets are currently stored unencrypted in etcd, I can't use Kubernetes secrets for storing sensitive app information. I can roll my own integration with Vault as a sidecar container, but it would be nice to leverage the built-in features that k8s offers and have Vault be a swappable implementation detail.

@JeanMertz

/subscribed. Really interested in this capability 👍

@jefferai

@derekwaynecarr There's currently no notion of "watching" a value in Vault. However, there are two ways to work around it:

  1. Set a watch in the backing store. Vault's secrets in the generic backend are laid out in a predictable fashion in the backing KV store; they're just encrypted. You could simply set a watch in Consul/etcd/ZK and have the same functionality that you're used to from your KV store.

I'm not sure that watches that run arbitrary commands or perform arbitrary functions are worth replicating in Vault's API; I'm not closed to the idea, but if it's too simple it might not be useful, so the right balance would have to be figured out.

In the meantime, I think this is actually a decent usage pattern, as you can then take advantage of any native watch functionality in your backing store, and simply convert the path to do the lookup against Vault.

  2. Use the configured lease TTL support. In the generic backend, TTLs do not actually remove data; they are a hint to clients as to how often to check for updated data. This won't be useful for all workflows, but it will be useful for some.
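To make workaround 1 concrete, a sketch with etcd as the backing store; the key prefix under which Vault keeps its (encrypted) generic-backend data is an assumption to verify against your deployment, as is the secret/myapp path:

package watchvault

import (
    "context"
    "log"

    vaultapi "github.com/hashicorp/vault/api"
    clientv3 "go.etcd.io/etcd/client/v3"
)

// watchVaultSecret watches Vault's encrypted keys in etcd and, on any change,
// re-reads the plaintext through Vault's own API.
func watchVaultSecret(etcd *clientv3.Client, vault *vaultapi.Client) {
    // Vault lays out generic-backend secrets under a predictable (encrypted)
    // prefix in its backing store; only the values are ciphertext.
    const backingPrefix = "vault/logical/"

    for resp := range etcd.Watch(context.Background(), backingPrefix, clientv3.WithPrefix()) {
        for range resp.Events {
            // The etcd value is ciphertext, so go through Vault for plaintext.
            sec, err := vault.Logical().Read("secret/myapp")
            if err != nil || sec == nil {
                log.Printf("re-read after change failed: %v", err)
                continue
            }
            log.Printf("secret/myapp changed: %d keys", len(sec.Data))
        }
    }
}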

@erictune
Member

@gurvindersingh you asked me about this issue at KubeCon

@erictune
Member

The use case mentioned by @gurvindersingh was that companies have secrets that are stored in Vault, and they have some machines with Kubernetes and some that are not part of a Kubernetes cluster. They do not want to store all secrets in Kubernetes (reasonable), and they do not want to store secrets in two places (maybe reasonable).

@odigity

odigity commented Nov 10, 2015

(Disclaimer: Just discovered Vault last week, but have now read all the docs and am thoroughly in favor of it.)

Now that Vault exists, it seems reasonable to expect all open source systems that need to store secrets to move in a direction that would allow Vault to serve that function. One purpose per tool and best tool for the job, right? Just like:

  • auth (LDAP, OAuth2)
  • storage (etcd, Consul, and a million others -- etcd already supported in Kube)
  • DNS (SkyDNS -- already supported in Kube)
  • container engine (Docker, Rkt)

And so on.

(I have no opinion re priority of this feature relative to others. Just saying it's a logical goal.)

@gtaylor
Contributor

gtaylor commented Dec 28, 2016

Looks like that's just a design doc. I'm interpreting that as not being implemented yet.

@bbzg

bbzg commented Jan 20, 2017

Google has released Google Cloud KMS on GCP, that would be an ideal place (for us) to store secrets, making them available for e.g. Ingress SSL certificates, secret volumes, environment variables, and so on.

I assume such integration with kubernetes is not already available?

@Bregor

Bregor commented Jan 20, 2017

@bbzg good for you :)
But what about bare-metal users, or AWS?

@bbzg

bbzg commented Jan 20, 2017

@Bregor I've got love for everyone 🐶!

But of course, I would be most happy if this feature was brought to GCP/Google Cloud KMS as soon as possible :)

@pbarker
Contributor

pbarker commented Jan 29, 2017

While I don't think k8s should be dependent on Vault, I do think it should create the types Key and KeyProvider. These types would provide a pluggable interface into a variety of backends: Vault (transit), KMS, etc.

apiVersion: key.k8s.io/v1alpha1
kind: KeyProvider
metadata:
  name: vault-key-provider
type: vault       # kms, webhook, etc
url: https://myvault
root: /transit

Keys could then be referenced from a provider:

apiVersion: key.k8s.io/v1alpha1
kind: Key
metadata:
  name: vault-key
  namespace: development
type: dynamic
providerRef: vault-key-provider

Keys can then be linked to secrets:

apiVersion: key.k8s.io/v1alpha1
kind: Secret
metadata:
  name: mysecret
type: Opaque
key:
  ref: vault-key
  use: always    # on-read, on-write, etc
data:
  username: YWRtaW4= 
  password: MWYyZDFlMmU2N2Rm

A user would need permissions both to the secret and to the key used to decrypt it. The use parameter would designate which operations it applies to. Thoughts?

@krisnova
Contributor

krisnova commented Jan 29, 2017

@grillz I think this is a good proposal. For an initial design, I think your use parameter does a good job of explaining the use case, but I imagine that will need to mature a little bit over time; k8s is littered with caveats.

Regardless of the implementation of your suggestion, having this abstracted out into a manifest seems like the best option here.

@liggitt
Member

liggitt commented Feb 2, 2017

I do think it should create the types Key and KeyProvider.

I don't think exposing keys via the API is a good place to start. Some of the biggest API-related issues with secrets today are:

  • difficult to target secrets to a particular purpose or audience (e.g. "kubelet can use my image pull secret, but no one else can")
    • Some purposes ("deliver this secret to a pod as an envvar") require other actors (like the kubelet) to have access to the secret's contents.
  • uniform access to a non-uniform generic resource ("list secrets" gives cross-cutting access to whatever people were using secrets for in a namespace)

A key resource would have the same issues. I think we should work on solving those issues for secrets first, then consider whether an API key resource makes sense as an additional layer.

@pbarker
Contributor

pbarker commented Feb 2, 2017

Our biggest problem with them right now is that they aren't encrypted.

uniform access to a non-uniform generic resource ("list secrets" gives cross-cutting access to whatever people were using secrets for in a namespace)

is more of a namespace issue in general, and I don't think it's specifically secret-related. Currently, we are segmenting "Apps" into namespaces to restrict access to the secrets, but even with that, the secret that is stored simply isn't sufficiently encrypted and hence isn't compliant. I see we have #12742; why not make that a pluggable key interface?

I was just trying to solve the MVP of making secrets encrypted, but I agree that the issues you mentioned are also pressing.

@liggitt
Member

liggitt commented Feb 2, 2017

there are multiple levels:

@pbarker
Contributor

pbarker commented Feb 2, 2017

my point in #10439 (comment) was that delivering the decrypting key via the API in a partitioned, secure way is essentially the same problem as delivering secrets via the API in a partitioned, secure way, so we should solve that first.

Not if the key is referenced externally via a plugin. This would be similar to the Authorization plugin implementation, where a webhook can simply be used to link out. We have clients that would prefer to use their own keys, similar to how they prefer an external user management system. This would eliminate the need for low-level encryption altogether, as the client is responsible for their keys and the data encrypted by them.

@EricMountain-1A
Contributor

Not if the key is referenced externally via a plugin. This would be similar to the Authorization plugin implementation where a webhook can simply be used to link out.

The webhook client will need a certificate and key to identify itself to the webhook server so it can legitimately request a "decrypting key". If the intent is for the pods to decrypt their secrets themselves, and avoid handling by kubelet/APIServer/... then the webhook client's (cert, key) pair needs to be delivered to the pod. Does that not bring us back to @liggitt 's point about partitioned and secure delivery?

@wstrange
Contributor

wstrange commented Feb 9, 2017

A good survey of other secret solutions:

https://medium.com/on-docker/secrets-and-lie-abilities-the-state-of-modern-secret-management-2017-c82ec9136a3d#.k3yxv32o9

@emaildanwilson
Contributor

This is an old thread but...

Now that secrets are encrypted by default on k8s, would it be best to just integrate with the Vault open service broker so that it's easy to provide Vault auth secrets to services running on k8s?

See kubernetes-retired/service-catalog#1042 for basic info of how this works.

@nilebox

nilebox commented Jul 18, 2017

@emaildanwilson The secrets roadmap has a plan to implement Vault support in 1.8 (the next release). I'm not sure if it is actually planned to be released in 1.8, but Kubernetes core certainly won't add a dependency on Service Catalog.

@emaildanwilson
Contributor

@nilebox thank you for pointing that out! It seems like it will be a nice option.

I agree that core shouldn't consider using the service catalog approach.

This option is more targeted to cluster admins that want to integrate with their existing Vault setup and provide to their end users a declarative way to obtain Vault auth tokens. The end users (or services) could then use the credentials to read/write data directly to Vault.

@kksriram

We've got a proposal to use envelope encryption and have Vault encrypt the DEKs.

I will push this out to sig-auth as well.

This builds on the work in #41939 and #46460, and will likely end up being a new KMS provider based on #48574 that uses Vault.
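A minimal sketch of that envelope flow, assuming a transit key named k8s-kms (the key name is illustrative): data is sealed locally with a fresh DEK, and only the DEK goes to Vault's transit backend for encryption, so Vault never sees the data itself.

package envelope

import (
    "crypto/aes"
    "crypto/cipher"
    "crypto/rand"
    "encoding/base64"

    vaultapi "github.com/hashicorp/vault/api"
)

// envelopeEncrypt seals plaintext with a fresh DEK and wraps the DEK via
// Vault transit. Store the sealed bytes and the wrapped DEK together.
func envelopeEncrypt(vault *vaultapi.Client, plaintext []byte) (sealed []byte, wrappedDEK string, err error) {
    // 1. Generate a fresh 256-bit data encryption key (DEK).
    dek := make([]byte, 32)
    if _, err = rand.Read(dek); err != nil {
        return nil, "", err
    }

    // 2. Encrypt the data locally with the DEK (AES-GCM); these constructors
    // cannot fail for a fixed 32-byte key.
    block, _ := aes.NewCipher(dek)
    gcm, _ := cipher.NewGCM(block)
    nonce := make([]byte, gcm.NonceSize())
    rand.Read(nonce)
    sealed = gcm.Seal(nonce, nonce, plaintext, nil) // nonce prepended to ciphertext

    // 3. Have Vault transit encrypt (wrap) the DEK; transit expects base64.
    resp, err := vault.Logical().Write("transit/encrypt/k8s-kms", map[string]interface{}{
        "plaintext": base64.StdEncoding.EncodeToString(dek),
    })
    if err != nil {
        return nil, "", err
    }
    wrappedDEK = resp.Data["ciphertext"].(string) // e.g. "vault:v1:..."
    return sealed, wrappedDEK, nil
}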

@john-tipper

+1 for storing secrets in Vault.

I'm in a very large enterprise (& legally regulated) environment. I don't want to store some secrets in K8s and some elsewhere and I want to be able to rotate secrets easily. I'd like for service accounts to be authenticated in the same way that user accounts are.

@sunshinekitty

Hey all, until this is added to Kubernetes I've created a sync as a stop-gap. https://github.com/sunshinekitty/vaultingkube

@destijl
Copy link
Member

destijl commented Dec 6, 2017

There is now an integration with Vault that lets you authenticate to it with K8s service accounts. Pay particular attention to the security model section; it does give Vault some power over your cluster:
https://github.com/hashicorp/vault-plugin-auth-kubernetes

That's something we're intending to fix with this:
kubernetes/community#1460
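For reference, the login exchange that plugin documents looks roughly like this from inside a pod; the role name "demo" is an assumption, and the token path is the standard service account mount:

package vaultauth

import (
    "os"

    vaultapi "github.com/hashicorp/vault/api"
)

// loginWithServiceAccount exchanges the pod's service account JWT for a
// Vault token via the kubernetes auth backend.
func loginWithServiceAccount() (*vaultapi.Client, error) {
    client, err := vaultapi.NewClient(vaultapi.DefaultConfig()) // honors VAULT_ADDR
    if err != nil {
        return nil, err
    }
    // The kubelet mounts the service account token at this well-known path.
    jwt, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
    if err != nil {
        return nil, err
    }
    resp, err := client.Logical().Write("auth/kubernetes/login", map[string]interface{}{
        "role": "demo",
        "jwt":  string(jwt),
    })
    if err != nil {
        return nil, err
    }
    client.SetToken(resp.Auth.ClientToken) // subsequent requests use the Vault token
    return client, nil
}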

@liggitt liggitt added the kind/feature Categorizes issue or PR as related to a new feature. label Jan 6, 2018
@mfilotto

@kksriram as issue #49817 is now solved, does that mean we can now use Vault as a secret backend for Kubernetes?

@bgrant0607 bgrant0607 added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed priority/backlog Higher priority than priority/awaiting-more-evidence. labels Feb 26, 2018
@bgrant0607
Member

@mfilotto I assume the feature will be alpha in 1.10, but please watch kubernetes/enhancements#460 and the release notes for 1.10 and subsequent releases.

Closing this as a dupe of #49817

@destijl
Member

destijl commented Feb 27, 2018

There's some confusion on this point, let me try and clear it up. #49817 is about storing the keys to encrypt secrets in vault. It isn't a general vault integration. The remaining Vault integration work I see in this space is:

  1. Vault to update its auth backend to use the scoped JWTs that went into 1.10 with the TokenReview API
  2. Build a standard init container to handle delivering the secret from Vault to the workload using the token from 1). This solves for non-Vault-aware workloads.
  3. A way for Vault to tell external parties "this secret just changed" via pubsub or similar.
  4. A K8s controller that subscribes to the vault pubsub for Vault secrets in use inside k8s and restarts the relevant containers when secrets change.

I'll chat to our hashicorp contacts about this; 3 and 4 are just ideas at this point. This content will go into the updated version of "A Plan for Secrets" I want to get out in the next few weeks:
https://docs.google.com/document/d/1JAwPuZg47UhfRVlof-lMw08OJztunW8pvTNxDK3rCF8/edit#heading=h.vw9xyk1ib8nn

Look for that in sig-auth by the end of Q1.
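As a sketch of what item 2 above could look like (the secret path, key name, and file locations are all assumptions): an init container that reads one Vault secret and writes it into an emptyDir volume shared with the app container.

package main

import (
    "log"
    "os"

    vaultapi "github.com/hashicorp/vault/api"
)

func main() {
    // Assumes VAULT_ADDR and VAULT_TOKEN are already set, e.g. after a
    // kubernetes auth login with the pod's service account token (item 1).
    client, err := vaultapi.NewClient(vaultapi.DefaultConfig())
    if err != nil {
        log.Fatal(err)
    }
    sec, err := client.Logical().Read("secret/myapp/db")
    if err != nil || sec == nil {
        log.Fatalf("reading secret/myapp/db: %v", err)
    }
    password, _ := sec.Data["password"].(string)

    // /secrets is an emptyDir mounted by both this init container and the app.
    if err := os.WriteFile("/secrets/db-password", []byte(password), 0o400); err != nil {
        log.Fatal(err)
    }
}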
