
Kubernetes Secrets Syncing #7364

Closed
jweissig opened this issue Aug 26, 2019 · 46 comments

@jweissig
Contributor

We are exploring the use case of integrating Vault with the Kubernetes Secrets mechanism via a syncer process. This syncer could be used to periodically sync a subset of Vault secrets with Kubernetes so that secrets are always up-to-date for users without directly interacting with Vault.

We are seeking community feedback on this thread.

@jweissig changed the title from "Syncer Process" to "Kubernetes Secrets Syncing" on Aug 26, 2019
@mattrobenolt

Hi, yes. Is the thinking that this would be a two-way sync, or just syncing from Vault into Kube?

I was looking to write exactly this soonish: just something primitive to sync one way from Vault into Kube. Our goal is to use Vault as the source of truth, but the convenience of using secrets natively with Kube makes it really appealing.

So my ideal workflow is: people interact with Vault explicitly to manage secrets, and Kubernetes gets a read-only view of a subset of keys.

What feedback are you looking for here? Happy to help.

@johanngyger

johanngyger commented Aug 27, 2019

We did exactly that: we wrote our own syncer and blogged about it. Please have a look at https://github.com/postfinance/vault-kubernetes and https://itnext.io/effective-secrets-with-vault-and-kubernetes-9af5f5c04d06. It is a pragmatic approach: the blast radius is limited because secret paths can be specified, and secrets can be centrally managed in Vault.

Apart from that, @sethvargo says it is a terrible idea to sync from Vault to K8s secrets as they are inherently insecure. You need an extra KMS provider to encrypt K8s secrets at rest, for instance.

I envision a standardized interface, which would be a great way to plug Vault into K8s. Maybe CSI is the right one, maybe a higher-level secrets management interface (SMI?) would be better. In this scenario, Vault would take over the role of storing and providing K8s secrets without breaking existing APIs.

@mattrobenolt

Objectively, that’s true. But I don’t think that’s a concern in all use cases. If we treat everything running in Kubernetes as trusted, it’s the same risk tolerance. We are more concerned about mutations of secrets than reads. I agree, though, and in our case it’s only a subset of things that fall under this bucket anyway. Kube secrets can also be restricted with RBAC, I believe. Granted, the rules are different from Vault's, but access can at least be restricted at that level.
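To make the RBAC point concrete, here is a minimal sketch (all names are hypothetical) of a Role that only allows reading one named Secret in one namespace, bound to a single service account:

# Illustrative only: read access to a single named Secret in one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-app-secret
  namespace: my-app                        # hypothetical namespace
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["app-db-credentials"]  # hypothetical synced secret
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-app-secret
  namespace: my-app
subjects:
  - kind: ServiceAccount
    name: my-app                           # hypothetical service account
    namespace: my-app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: read-app-secret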

A CSI would be nice as well and could potentially make this work. Right now, a lot of our work relies on secrets being injected as environment variables, which AFAIK is not something that can be done with a CSI? If it can be, that’d be great. If not, we’d need to read them from disk and populate them into env vars at boot time.

@jefferai
Member

If we treat everything running in Kubernetes as trusted, it’s the same risk tolerance.

The question of course is: runtime, or at rest? The issue with syncing into Kube secrets is that it's (generally, unless you use a KMS provider) not encrypted at rest, so you've gone from storing secrets at rest in an encrypted location to an unencrypted one.

@mattrobenolt

mattrobenolt commented Aug 27, 2019

In my case, and I have to believe this is the default, syncing to the filesystem in a Pod goes into tmpfs, so it isn't persisted at rest within the Pod unless you do something out of the ordinary and copy the files elsewhere.

edit

Oh, are you referring to how Kubernetes stores its secrets internally? I guess that's a good question. I had assumed Kubernetes encrypts the storage itself, but I never looked into it. 🤔

edit2

Looking at https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/ you can configure them to be encrypted at the application level. Our disks themselves are already encrypted at rest through Google, but this is effectively something we'll make weaker by using Kube secrets unless we configure the EncryptionConfig on top of that.

This is good information!
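For reference, the application-level encryption from that doc is configured on the API server with something along these lines (a minimal sketch; the exact apiVersion/kind has changed across Kubernetes releases, and the key is a placeholder):

# Sketch of API-server encryption-at-rest config for Secrets,
# passed to kube-apiserver via --encryption-provider-config.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder key material
      - identity: {}   # fallback so existing unencrypted data can still be read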

@james-atwill-hs

Currently we use ServiceAccounts for services to authenticate to Vault. This (via templated policies) gives services specific access to a number of paths inside Vault to manage whatever secrets they need (SSH, AWS, and static secrets). Having a system running inside Kubernetes that has super-power access to Vault would be really suboptimal for us.

As it stands our services do not need to interact with Vault directly, instead they use an init container + sidecar to fetch and keep fresh various secrets. These containers are managed by tooling, and the secrets the services desire are described in a vault-config.yml file.

See: https://github.com/hootsuite/vault-ctrl-tool
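For a sense of what that kind of declaration looks like, here is a purely hypothetical sketch (not the actual vault-ctrl-tool schema; see the repo above for the real format):

# Hypothetical illustration only -- not the real vault-ctrl-tool config format.
version: 1
secrets:
  - vaultPath: secret/data/my-service/db      # hypothetical KV path
    output: /secrets/db.json                  # written by the init container / sidecar
  - vaultPath: aws/creds/my-service           # hypothetical dynamic AWS role
    output: /home/app/.aws/credentials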

@cypherfox

I would second the route of a CSI for a number of reasons:

  • Copying secrets to K8s will always massively increase the attack surface. I do not trust K8s! The copying creates the need to protect the data at rest in a way that is hard to misconfigure and secure by default. Vault does this (it's its raison d'être!).
  • A CSI can provide a mechanism for component authentication that does not rely on K8s configuration/integrity, such as SPIFFE. This is possible without the CSI, but everybody is reinventing the wheel.
  • It creates a more limited scope than all of K8s for a security audit.

For applications that are not Vault aware, a sidecar can provide the authentication to the CSI and populate secrets in the filesystem. IMHO, passing secrets in an environment variable is a dubious idea (kubectl describe pod?)

@sergiosalvatore

Hi everyone -- you may be interested in Pentagon which was built specifically to sync vault data with k8s secrets. Full disclosure: my team and I wrote Pentagon.

@chrisfjones

I love the idea of a vault->k8s syncer, please do it 👍
Even though some use cases would benefit more from a CSI plugin or init-container approach, there is still a great deal of value in the sync approach. For some use cases, making Vault a SPOF for fleet-wide pod startup is not great, so the availability tradeoff of syncing is worth it.
Furthermore, Kubernetes secrets can be made more secure via measures discussed above, such as encryption at rest and RBAC.

@antoniobeyah

I agree with this; what I would like to see:

  • vault secrets are synced to namespaces with a named version
  • no runtime dependency on the application pods
  • no special requirements for k8s service accounts

Essentially, I want k8s dealing with everything related to pod runtime with no external platform dependencies. It is nice to have all of the "inputs" to a pod statically declared in the manifest without having to jump through a ton of hoops to figure out where items within a pod are coming from (it is even clearer if you are running with a readonly root file system).
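For example (a sketch, with hypothetical names), a Deployment could consume a synced, versioned Secret purely through the manifest:

# Sketch only: the pod consumes a synced Secret by its versioned name.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels: {app: my-app}
  template:
    metadata:
      labels: {app: my-app}
    spec:
      containers:
        - name: app
          image: my-app:1.2.3                # placeholder image
          envFrom:
            - secretRef:
                name: my-app-secrets-v42     # hypothetical versioned name written by the syncer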

Another bonus:
Since these are just secrets that are referenced in the pod manifest, I can sync the secrets to k8s from outside the cluster itself and completely isolate the Vault instance from access by the k8s environment.

How this can be used:

I'm not aligned with the CSI approach; it feels like it adds a ton of extra moving parts to pod runtime, with questionable value depending on how your cluster is set up.

@daneharrigan

@jingweno and I have been looking at scenarios where secrets will need to be shared across multiple kubernetes namespaces. In the case of two deployments, unique namespaces for each, and a shared secret (basic auth for example), vault would need to sync to a k8s secret in each namespace.

I’m on board with leveraging k8s secrets since it’s a simple interface, and it allows the community to develop against it now and add a Vault syncer later down the road.

@tstoermer

Currently, we are using Vault Agent and Consul Template as the standard tooling in various scenarios, including in k8s as sidecar containers to provide the required secrets and perform lease renewal.

It would be great to have a managed injection of these as init containers (to enable clean startup of applications) and then as sidecars (to manage renewal of tokens and leases).
That would reuse the existing tooling and strengthen their maintenance.

@thefirstofthe300

Funny, I am just hitting this problem for the first time and am trying to decide how best to solve this particular issue. Here's my usecase:

  1. We are running applications that have zero knowledge of vault but can read secrets in from env vars or from the filesystem with no issues.
  2. These applications cannot be guaranteed to re-read credentials from disk each time an authenticated call is made to an API; consequently, updating the secret should result in a new deployment.
  3. The end user should be capable of providing a fully fleshed out configuration to a cluster administrator to audit and then apply to synchronize the requested secrets into the user's namespace.

My preferred solution (which I was looking to write over the next month or two) would be to create a controller with two CRDs to implement the syncing solution.

The first CRD would be cluster scoped and provide the configuration to access Vault.
The second CRD would be namespace-scoped and used to tell the controller which secret to fetch from Vault and load into the cluster. It would also configure all of the secret-specific rotation information.
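Something like this, to make it concrete (all groups, kinds, and fields here are invented for illustration):

# Hypothetical cluster-scoped resource: how the controller reaches Vault.
apiVersion: sync.example.io/v1alpha1       # invented group/version
kind: VaultConnection
metadata:
  name: primary
spec:
  address: https://vault.example.com:8200
  authPath: auth/kubernetes
---
# Hypothetical namespaced resource: what to sync and how to rotate it.
apiVersion: sync.example.io/v1alpha1
kind: VaultSecretSync
metadata:
  name: db-credentials
  namespace: my-app
spec:
  vaultPath: secret/data/my-app/db
  targetSecretName: my-app-db              # Secret created in this namespace
  rotationPeriod: 24h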

At this point, the secrets would all be synchronized to the cluster, but they still would not trigger a new deployment rollout to ensure that the application had the correct set of credentials cached.

Consequently, I envision using an annotation to tell the application which secret to pull from the namespace. The secret would need to have a unique identifier that changes each time the secret is rotated. The controller would then update all deployments/statefulset/etc that reference the secret specified in the annotation with the new secret name, thus triggering a rollout.

Again, this is how I'd go about solving the problem. I am not sold on the idea of using a CSI plugin as it requires the application to be aware of changes to the file mounted into the pod (at least, that's my understanding of how the CSI works). I'm not going to say it's not something that should be investigated or even implemented; I'm just saying I don't foresee it solving my usecase.

I'm more a fan of the k8s syncer method for those of us who are running legacy applications that are fairly dumb and don't really pay attention to whether or not the secrets are changed after their initial boot.

@olemarkus

olemarkus commented Sep 1, 2019

https://github.com/DaspawnW/vault-crd can already do this, but some key features are missing. For example, anyone who can provision the Vault resource can access all secrets that Vault-CRD itself has access to. It would be nice if this were namespace-aware, or if one could provision different CRD controllers with specific access to Vault. It also doesn't support dynamic secrets.

Allowing for multiple sync controllers that can be configured with different Vault permissions watching a specific set of namespaces would be important, I think.

As for Kubernetes secrets being unsafe: many still use ServiceAccounts for Vault auth, and those tokens are stored as secrets. With other auth mechanisms you will still rely on some kind of secret for identity towards Vault.

One can achieve encryption-at-rest for secrets by using etcd with TLS and encrypted storage.

@fcgravalos

fcgravalos commented Sep 4, 2019

We implemented https://github.com/tuenti/secrets-manager at Tuenti. It is basically a custom controller which reconciles a CRD called SecretDefinition. A SecretDefinition would look like the example below:

apiVersion: secrets-manager.tuenti.io/v1alpha1
kind: SecretDefinition
metadata:
  name: secretdefinition-sample
spec:
  name: supersecretnew
  type: Opaque
  keysMap:
    foo:
      path: secret/data/pathtosecret1
      encoding: base64
      key: key1
    bar:
      path: secret/data/pathtosecret1
      key: key2

secrets-manager uses Vault AppRole authentication to connect to Vault and currently supports the kv1 and kv2 engines. I'm looking into the possibility of implementing other Vault backends (dynamic secrets would be a killer feature) and exposing them as classes, which is similar to the Ingress Controller approach.

With secrets-manager you can "compose" secrets, so a single Kubernetes secret could map to multiple Vault paths/keys, supporting base64 encoding (this is important since Kubernetes will store them in base64).
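For context, the definition above would produce a single Kubernetes Secret whose keys map to the listed Vault paths/keys, roughly like this (a sketch, not secrets-manager's exact output):

# Rough sketch of the resulting Secret (data values are base64-encoded in the API).
apiVersion: v1
kind: Secret
metadata:
  name: supersecretnew
type: Opaque
data:
  foo: <base64 of key1 at secret/data/pathtosecret1>
  bar: <base64 of key2 at secret/data/pathtosecret1>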

@james-atwill-hs

Tools that have cluster-wide access to Vault secrets (like https://github.com/tuenti/secrets-manager) are a non-starter for us. Also, the further we go down the road of dynamic credentials (databases, aws, etc.), the more important it becomes that tooling only cycles to new credentials when the lease is about to expire; comparing dynamic credentials to the ones kept inside of Kubernetes will just cause new credentials to be constantly created, which isn't really appropriate.

@fcgravalos

fcgravalos commented Sep 4, 2019

Hi @james-atwill-hs!

As I said, secrets-manager uses AppRole, and we use policies to control which paths are allowed to be read by secrets-manager.

About dynamic secrets: that is neither implemented nor designed yet. But I think it's feasible to implement the logic to detect when the secret TTL is close to expiring, as we do with token renewal.

@thefirstofthe300

@fcgravalos I believe James's point is not that secrets-manager has to have permissions on Vault but that secrets-manager needs cluster-wide access to secrets to be able to install them in all the required namespaces.

I'm assuming that secrets-manager is able to install secrets in multiple namespaces. If it's not, I'd totally be willing to help get support for both namespacing and dynamic credential backends implemented. I've been wanting to dig into writing operators for a while now anyway, and I'm at a point where I need it.

I'd also suggest adding support for configuring secrets-manager using a namespaced CRD so that each namespace could have its own isolated identity, allowing for more granular control over secrets synchronization into namespaces.

@tstoermer

Same requirement for us: each application must have its own Vault access, ensuring it can only access the secrets it is allowed to. This is also very important for Vault auditing, to identify each application's secret usage.

Regarding dynamic credentials, I think leasing is the important part, e.g. for databases:
As many applications have problems with changing db users during runtime, we make use of the leasing mechanism. A db user gets dynamically created on pod startup and is valid as long as its lease is renewed. When the application is shut down, the Vault token is revoked, which cleans up the db user. In that way we have

  • automated rotation of db credentials for each restart/deployment
  • auditing on usage of db users (including renew of lease in context of the application)
  • each pod has its own database user; if one gets leaked it can easily be cleaned up (delete the pod to create a new one with new secrets)

Of course, pki secrets will unfortunately not work that way.

@fcgravalos

Hey @thefirstofthe300! Glad to see you are open to contribute!

We could check if there's a way a controller can reconcile objects in a given namespace but I am not sure if that would be possible tbh.

@dannyk81

dannyk81 commented Sep 5, 2019

@thefirstofthe300 unfortunately CRDs are not namespace-local objects. An option to support namespace-local CRDs was discussed in kubernetes/kubernetes#65551, and although there was interest, it was not implemented (perhaps in the future 🙏).

So, although you could install multiple secrets-managers, the CRD objects are still going to be cluster-wide.

@tstoermer, the kind of requirement you are suggesting sounds like a sidecar approach, perhaps something like https://github.com/uswitch/vault-creds will work for you.

A similar approach is discussed here: https://www.hashicorp.com/blog/whats-next-for-vault-and-kubernetes ("Injecting Vault secrets into Pods via a sidecar").


secrets-manager was designed to reconcile secrets in a more centralized way; dynamic credentials are indeed an interesting capability to add.

@james-atwill-hs

I'd much rather have something that injects init containers/sidecars into pods (like https://github.com/uswitch/vault-creds or https://github.com/hootsuite/vault-ctrl-tool) than one central controller that has more access than it needs, regardless of namespace scoping.

To be honest, I'd rather investigate a Secrets Provider and Secrets Provider Controller (ala Ingress / Ingress Controller); that's a huge undertaking though.

@olemarkus

If you install multiple secrets-managers, you can specify per secrets-manager in which namespace to look for SecretDefinition objects. And this can be enforced through RBAC. RBAC would also control in which namespaces a controller could read/write Secret objects.

@dannyk81

dannyk81 commented Sep 5, 2019

@james-atwill-hs a sidecar approach is appealing, together with an admission controller to inject the sidecar into the pods (much like Istio does with its proxy sidecar); however, you still need to define which secrets your applications (pods) need, where to store them, etc., so this will need to be logic in the admission controller, I suppose.

To be honest, I'd rather investigate a Secrets Provider and Secrets Provider Controller (ala Ingress / Ingress Controller); that's a huge undertaking though.

Here, I'm confused 😄 since what you are describing is what secrets-manager does: the SecretDefinitions are equivalent to Ingress objects (and are namespaced, etc.), and secrets-manager is the controller that watches these objects and updates the secrets accordingly.

An Ingress Controller has the concept of an ingress class and can also be limited to watching Ingresses in specific namespaces; combined with RBAC as @olemarkus describes, this can provide the same kind of isolation. But a controller is still a controller: it's scoped for more than just a single Pod.

@james-atwill-hs

Here, I'm confused 😄 since what you are describing is what secrets-manager does,

Sort of; my understanding is that secrets-manager has enough access to Vault to provide secrets for multiple pods. If there are different Vault policies in place for service1 vs service2, secrets-manager would need to have access to the superset. This means having a second set of policies in place in secrets-manager to prevent service2 from requesting paths that are only supposed to be accessible to service1. It also means trusting secrets-manager with all the secrets for both service1 and service2.

Right now with init/sidecar, secrets from Vault come in straight from Vault over TLS and are only materialized within the cgroup of the pod. The pod authenticates as itself and the policies Vault gives it are managed inside Vault. ServiceAccount tokens are the defacto way for services to identify themselves in Kubernetes, and once we have bound serviceaccounts it becomes even less perilous to send a serviceaccount jwt to Vault too.

So, we use admission controllers and injecting containers because that's all we have. What I'm saying as a Secrets Provider / Secrets Provider Controller is something that would limit the exposure of secrets available in Vault to all the surrounding infrastructure.

Maybe it's something that visits pods and does the heavy lifting, maybe it's a Vault plugin that pushes encrypted Secrets into Kubernetes when a Kubernetes service authenticates and only that service has the decryption key. Dunno.

What it's not (for us), is another single point of risk that has a superset of access to Vault.

@fcgravalos

fcgravalos commented Sep 5, 2019

Well, if you use a sidecar, there are two options:

  1. Inject the secret from Vault into the pod filesystem directly. Policies for the sidecars would be really hard to maintain, though.

  2. Create a secret with the k8s API. This does not fix the issue, since even if a compromised pod can only get secrets from its own path in Vault, it could probably read other secrets in the same namespace from the k8s API. And in any case it is still a real PITA maintaining all those Vault policies.

A vault plugin sounds appealing

@thefirstofthe300

@thefirstofthe300 unfortunately CRDs are not namespace-local objects, an option to support namespace-local CRDs was discussed here kubernetes/kubernetes#65551 and although there was interest, it was not implemented (perhaps in the future)

Right, I may have phrased what I was saying poorly. I understand that the CRD itself can't be namespaced. It's an extension of the API server, and consequently it isn't easy to isolate the object definition to a namespace, since that's not how the API server is designed.

I was referring to namespacing the vault configuration that the controller would use. This would allow the controller to pull secrets from Vault into a namespace using an identity that has an isolated set of permissions. Users who can create secrets configurations in the namespace would only be able to use the identity associated with the namespace they are located in.

Sort of; my understanding is that secrets-manager has enough access to Vault to provide secrets for multiple pods. If there are different Vault policies in place for service1 vs service2, secrets-manager would need to have access to the superset. This means having a second set of policies in place in secrets-manager to prevent service2 from requesting paths that are only supposed to be accessible to service1. It also means trusting secrets-manager with all the secrets for both service1 and service2.

@james-atwill-hs But services never actually ask secrets-manager for their secrets; they pull them from the secrets objects that secrets-manager creates.

Right now with init/sidecar, secrets from Vault come in straight from Vault over TLS and are only materialized within the cgroup of the pod. The pod authenticates as itself and the policies Vault gives it are managed inside Vault. ServiceAccount tokens are the defacto way for services to identify themselves in Kubernetes, and once we have bound serviceaccounts it becomes even less perilous to send a serviceaccount jwt to Vault too.

I would argue that this is no more secure than software using an identity with a combination of all permissions granted to the service accounts inside that namespace. If I'm wrong about any of the below, feel free to correct me. There are a lot of pieces here and I could very easily be missing something.

  1. If a person has the ability to get secrets in a namespace, they have access to service account tokens
  2. If a person has the ability to deploy a pod to a namespace, they have the ability to attach any service account to the pod and transitively have access to get any service account token in the namespace. However, they can't read all the secrets in the namespace.
  3. Given 1 and 2, anybody with the ability to deploy a pod into a namespace effectively has the accumulation of all permissions to Vault granted to the namespace's SAs.
  4. If someone has compromised a node or has exploited the API server, they effectively have full access to Vault since they can get the token of any service account.

Instead of granting Vault access to each service account in the namespace, why not have a series of identities that a controller can use to synchronize Vault secrets to k8s secrets? Each identity would be scoped to the superset of permissions needed for that namespace and only used when synchronizing the secrets for the secrets definitions in that namespace. This provides a good UX for friendly users who just want to fetch the secrets that they already have access to anyway, while keeping the effective attack surface essentially the same.

@fcgravalos

fcgravalos commented Sep 7, 2019

@thefirstofthe300 regarding Vault configuration, I think this is exactly what we do when we deploy secrets-manager. We use a role whose policy rules allow a particular path in Vault.

We have 10+ clusters, and each secrets-manager's Vault permissions are scoped to its cluster's path in Vault.

We can do the same with namespaces; if multiple secrets-managers are deployed per cluster, we just need a way for each of them to watch only the SecretDefinitions in its own namespace.

The ingress controller seems to be able to do it:

https://kubernetes.github.io/ingress-nginx/user-guide/cli-arguments/

There's a --watch-namespace flag for it

@daneharrigan

I was thinking along the same lines as @Jogy with SMI (secret management interface). Could Kubernetes Secrets be rewritten with a pluggable store? It defaults to etcd, but optionally with Vault through the SMI.

@james-atwill-hs

@fcgravalos writes:

Well, if you use a sidecar, there are two options: inject the secret from Vault into the pod filesystem directly. Policies for the sidecars would be really hard to maintain, though

Containers running in a pod can share the same temporary filesystem. This is how the init container and sidecar interact with whatever service container you have. For vault-ctrl-tool, you can specify how you want your secrets outputted: everything from "just write out the vault token and I'll do the rest" to regularly writing ~/.aws/credentials and filling in templated configuration files. The risk is that the secrets are "on disk" in the pod; the win is that, as a developer, if you can read a file, you can use Vault.

@thefirstofthe300 writes:

If a person has the ability to get secrets in a namespace, they have access to service account tokens

With RBAC turned on, no one person should have that much power.

If a person has the ability to deploy a pod to a namespace, they have the ability to attach any service account to the pod and transitively have access to get any service account token in the namespace. However, they can't read all the secrets in the namespace.

We believe that a Service Account is the way to identify a Service. So this is where admission controllers come into play. We enforce a naming convention for service accounts and services so that service "A" cannot impersonate service "B" by including a service account token for "B".

If someone has compromised a node or has exploited the API server, they effectively have full access to Vault since they can get the token of any service account.

If someone gains access to the material used to authenticate to Vault, they gain the ability to authenticate to Vault and therefore also gain access to the secrets that service has. This is true of any authentication process. If you are using dynamic credentials (which is what you should be striving for), then once the breach is detected, you can quickly revoke any of the credentials and create new ones. If you're using static credentials then you have a lot of manual work as well.

And yes, if you gain access to a node and can become root, you may gain the ability to dump memory and sift through it to find secrets. Again, if your secrets are short lived then your exposure is limited.

@daneharrigan writes:

I was thinking along the same lines as @Jogy with SMI (secret management interface). Could Kubernetes Secrets be rewritten with a pluggable store? It defaults to etcd, but optionally with Vault through the SMI.

Agree something like this is where we should be going. Currently Vault supports using Service Account Tokens to authenticate, so the scope of policies is per-service account. I strongly think that's the right scope to be aiming for. Operators that have the access of multiple services (and are pulling a wide scope of secrets from Vault into Kubernetes) seem suboptimal to me.

@afirth

afirth commented Sep 25, 2019

This proposal would be extremely helpful in the "cluster per service" model which is becoming common now that it's easy to provision clusters on the various cloud providers. Bootstrapping the secrets is currently the most painful part of that process, in my opinion. Of course, there are security tradeoffs for other use cases which might make this approach not viable for them. They can use the other mechanisms instead. Thanks for all the links to tooling that already does this. I think the fact that the tooling exists in several forms shows the vault team the use cases exist. 👍

@fcgravalos

Just in case it sounds interesting to someone: we added watch-namespaces functionality to secrets-manager

tuenti/secrets-manager#39

@Phylu
Contributor

Phylu commented Oct 31, 2019

We currently use the following setup:

Pod:

Vault Kubernetes Authenticator gets a Vault token based on a Kubernetes Service Account and writes the token to Kubernetes mount 1. Consul Template takes the token and connects to Vault; it fetches secrets such as database credentials and GCP service accounts and writes them to Kubernetes mount 2. The main container picks up the secrets and uses them.
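Roughly, the pod looks like this (a sketch; images and paths are placeholders):

# Sketch of the described pod: an authenticator init container and a
# consul-template sidecar sharing two in-memory volumes with the app.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  serviceAccountName: my-app
  volumes:
    - name: vault-token
      emptyDir: {medium: Memory}   # "kubernetes mount 1"
    - name: secrets
      emptyDir: {medium: Memory}   # "kubernetes mount 2"
  initContainers:
    - name: vault-authenticator
      image: vault-kubernetes-authenticator:latest   # placeholder image
      volumeMounts:
        - {name: vault-token, mountPath: /var/run/secrets/vault}
  containers:
    - name: consul-template
      image: consul-template:latest                  # placeholder image
      volumeMounts:
        - {name: vault-token, mountPath: /var/run/secrets/vault}
        - {name: secrets, mountPath: /secrets}
    - name: app
      image: my-app:1.2.3                            # placeholder image
      volumeMounts:
        - {name: secrets, mountPath: /secrets, readOnly: true}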

One thing we experience is trouble when some pods/pets run longer than the Vault-defined max_ttl. Then Consul Template cannot renew its token, and there is no way to automatically kill the whole Kubernetes pod.

So I am looking forward to an implementation that can re-authenticate using the Kubernetes service account when the max_ttl has expired.

@asaintsever

Talend Vault Sidecar Injector supports this use case (disclaimer: I am the author of this component).
We inject both Vault Agent and Consul Template as sidecars (not init containers), so the Vault token is continuously refreshed for you underneath.

@james-atwill-hs

Vault Agent Template: Vault Agent now supports rendering templates containing Vault secrets to disk, similar to Consul Template [GH-7652]

Looks like Vault 1.3 will do away with needing Consul Template, as Vault Agent will do it for you.

@asaintsever

Oh great, thanks for pointing that out. We'll then be able to inject only one sidecar to do all the work!

@fcgravalos

secrets-manager v1.0.2 is a stable release. Namespace restriction is included in this release.

https://github.com/tuenti/secrets-manager/releases/tag/v1.0.2

@blakepettersson

How about the approach of a mutating webhook? It seems to allow for specifying the K8S secrets as Vault keys, while it defers the fetching of the actual secret until deployment time. I'm probably missing something here so feel free to let me know what that could be...

@joemiller
Contributor

joemiller commented Dec 11, 2019 via email

@estahn

estahn commented Dec 12, 2019

@blakepettersson @joehillen

How about the approach of a mutating webhook?

We had a fair amount of negative experience and drama with the mutating webhook. Just a reminder, the mutating webhook will mutate any created pod. While in theory it's great to have every pod with its own credentials, it wasn't scalable for us, because:

  1. The different secret backends need to be able to provide the throughput; AWS Aurora and AWS Redshift failed on this for us (e.g., replica lag due to blocking).
  2. You have to handle different failure scenarios, like crash loops, which will create massive amounts of users.
  3. If you use multiple secret backends per app (which I assume is normal), then point 2 gets exponentially worse.

IMO, syncing to Kubernetes secrets sounds good (even if not optimal), and using a strategy like https://github.com/stakater/Reloader should solve the pod rotation issue.
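For the rotation part, Reloader drives restarts from an annotation on the workload, roughly like this (a sketch; check the Reloader README for the exact annotation keys):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    # Reloader watches annotated workloads and rolls them when a referenced
    # Secret/ConfigMap changes (see the project's documentation).
    reloader.stakater.com/auto: "true"
spec:
  selector:
    matchLabels: {app: my-app}
  template:
    metadata:
      labels: {app: my-app}
    spec:
      containers:
        - name: app
          image: my-app:1.2.3                # placeholder image
          envFrom:
            - secretRef:
                name: synced-from-vault      # hypothetical Secret kept in sync from Vault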

@joehillen
Contributor

@estahn wrong joe

@jweissig
Contributor Author

FYI: we just announced a new Vault + Kubernetes integration that enables applications with no native HashiCorp Vault logic built in to leverage static and dynamic secrets sourced from Vault. I suspect this will be of interest to folks in here, since there was chat about init containers, sidecars, service accounts, etc.

Blog: https://hashicorp.com/blog/injecting-vault-secrets-into-kubernetes-pods-via-a-sidecar
Docs: https://www.vaultproject.io/docs/platform/k8s/injector/index.html
Video: https://www.youtube.com/watch?v=xUuJhgDbUJQ

@bonifaido
Contributor

bonifaido commented Jan 31, 2020

@estahn

Just a reminder, the mutating webhook will mutate any created pod.

This is just not true. MutatingWebhookConfiguration offers fine-grained control over which pods to mutate.
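For reference, that scoping is done with selectors on the webhook registration, along these lines (a sketch; names are placeholders):

# Sketch: a mutating webhook restricted by namespace and object selectors,
# so only opted-in namespaces/pods are mutated.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: vault-secrets-injector               # placeholder name
webhooks:
  - name: pods.vault-injector.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        name: vault-injector                 # placeholder service
        namespace: vault
        path: /mutate
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    namespaceSelector:
      matchLabels:
        vault-injection: enabled             # only namespaces that opt in
    objectSelector:
      matchLabels:
        vault-injection: enabled             # only pods that opt in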

@estahn

estahn commented Jan 31, 2020

@bonifaido

This is just not true.

I don’t recall the exact mechanism, but my primary point was about secret backends not being resilient to pod churn, especially in situations with crash-looping pods, which create thousands of users.

We have used bank-vaults and contributed the mutating webhook to the project. Unfortunately we had to remove it due to the impact on our workload.

@patpicos
Contributor

I think both injection and replication are needed capabilities. Not every secret gets loaded into a pod/deployment.

Here's another use case: Anthos Config Management, for example, has a CRD/controller listening for RootSync objects, and the configuration requires a reference to git credentials stored as a k8s Secret. Therefore, the injector does not work.

Yes, I could do the imperative steps of:

  • vault kv get
  • kubectl create secret
  • create the RootSync (rough sketch below)

...but a direct integration, without having to deploy yet another secret solution (SOPS, External Secrets, your flavor of tool), would be ideal.
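A rough sketch of what those steps produce today (field names per the Config Sync docs; they may vary by version, and the repo and credentials are placeholders):

# The git credentials must exist as a plain Kubernetes Secret before the
# RootSync that references them can be applied.
apiVersion: v1
kind: Secret
metadata:
  name: git-creds
  namespace: config-management-system
type: Opaque
stringData:
  username: git-sync-bot                     # placeholder
  token: <personal-access-token>             # placeholder
---
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  git:
    repo: https://example.com/org/config-repo   # placeholder repo
    branch: main
    auth: token
    secretRef:
      name: git-creds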

@ncabatoff
Collaborator

We reserve github issues for bug reports and feature requests, which this doesn't appear to be. As such, I'm going to close this and suggest that you continue the discussion at https://discuss.hashicorp.com/c/vault/30.
