
Container Storage Interface (CSI) plugin #7365

Closed
jweissig opened this issue Aug 26, 2019 · 13 comments
@jweissig
Contributor

The Container Storage Interface (CSI) plugin could expose secrets on a volume within a pod, enabling the injection of secrets into a running pod via a CSI plugin.
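
For illustration, a pod consuming such a plugin might mount secrets through an inline CSI volume along these lines (the driver name and attributes are hypothetical, not a shipped API):

```yaml
# Hypothetical sketch: a pod mounting secrets via an inline CSI volume.
# The driver name and volumeAttributes are illustrative only.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx   # placeholder image
      volumeMounts:
        - name: vault-secrets
          mountPath: /mnt/secrets
          readOnly: true
  volumes:
    - name: vault-secrets
      csi:
        driver: secrets.example.com         # hypothetical CSI driver name
        readOnly: true
        volumeAttributes:
          secretPath: "secret/data/myapp"   # hypothetical attribute
```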

We are seeking community feedback on this thread.

@james-atwill-hs

Currently we use ServiceAccounts to authenticate to Vault to fetch secrets. The creation of ServiceAccounts and their assignment into specific pods is strictly controlled and monitored.

Can you elaborate on how a pod would authenticate to the plugin, and describe what it needs?

@oliviabarrick

Is there a way to support environment variables in this model?

@tamalsaha

Environment variables can't be supported with a CSI driver; you can only inject secrets as files. Your entrypoint script can then turn those into environment variables.
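
For example, a container spec along these lines could export the mounted file as an env var at startup (illustrative fragment; the names and paths are made up, and the volume is assumed to be a CSI secrets mount as above):

```yaml
# Illustrative fragment: the entrypoint exports the mounted secret file
# as an environment variable before exec'ing the real application.
containers:
  - name: app
    image: myapp:latest   # placeholder image
    command: ["/bin/sh", "-c"]
    args:
      - export DB_PASSWORD="$(cat /mnt/secrets/db-password)" && exec /app/server
    volumeMounts:
      - name: secrets          # assumed CSI secrets volume
        mountPath: /mnt/secrets
        readOnly: true
```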

You can find Vault csi driver here: https://github.com/kubevault/csi-driver

@tam7t
Contributor

tam7t commented Aug 28, 2019

Some aspects to consider are:

  1. The authentication model - I believe the CSI driver design has a single deployment/statefulset handling volumes for all namespaces. I am not clear how difficult it would be to limit access on a per-pod-identity basis.
  2. Whether the volumes should support writes (or multiple readers/multiple writers).
  3. Portability (see kubernetes-sigs/secrets-store-csi-driver#42, "Ensure 'Pod Portability' while using secrets-store-csi-driver"), but also whether it is possible to maintain pod portability when the process is accessing a kv secret's latest value, a kv secret pinned at a specific version, or a dynamic secret. It may make sense for the CSI driver to support multiple StorageClass configurations where the StorageClass determines how to contact Vault and how to resolve versioning - this way the PVC and Pod definitions can be portable across clusters/secret types (a sketch of this idea follows the list).
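
A hypothetical sketch of that StorageClass idea, with every parameter name invented for illustration:

```yaml
# Hypothetical sketch of point 3: connection and versioning details live in
# the class, so PVC and Pod specs stay portable across clusters/secret types.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vault-kv-pinned
provisioner: secrets.example.com                  # hypothetical CSI driver
parameters:
  vaultAddress: "https://vault.example.com:8200"  # hypothetical parameter
  secretEngine: "kv"                              # hypothetical parameter
  versionPolicy: "pinned"                         # hypothetical: latest | pinned | dynamic
```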

@ritazh

ritazh commented Aug 28, 2019

@james-atwill-hs

> Currently we use ServiceAccounts to authenticate to Vault to fetch secrets.

For the secrets-store-csi-driver, there is a discussion to pass the ServiceAccounts of the pod to the csi driver:
kubernetes-sigs/secrets-store-csi-driver#23 (comment) Feel free to chime in if you have any feedback/concerns.

cc @seh @anubhavmishra

@tam7t
Thanks for the feedback. Here are a few comments regarding the secrets-store-csi-driver. Would welcome any additional feedback you have.

  1. What does pod identity mean here for Vault? Currently the azure provider already works with the aad pod identity solution to only allow access to keyvault for pods that have specific identities.
  2. Currently, the volumes support multiple reads
  3. The portability issue (kubernetes-sigs/secrets-store-csi-driver#42, "Ensure 'Pod Portability' while using secrets-store-csi-driver") is currently being implemented/reviewed via kubernetes-sigs/secrets-store-csi-driver#58 ("Add secretproviderclasses crd"). For each csi-driver volume, users can pass in provider-specific parameters via kind: SecretProviderClass to ensure pod portability (an example manifest follows below).
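
For illustration, a SecretProviderClass for a Vault provider might look like this (the schema has evolved since this discussion, so treat the field names as indicative rather than definitive):

```yaml
# Sketch of a SecretProviderClass for the Vault provider; field names are
# based on later versions of the provider and may differ from the CRD as
# originally proposed.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: vault-db-creds
spec:
  provider: vault
  parameters:
    vaultAddress: "https://vault.example.com:8200"  # illustrative address
    roleName: "app"                                 # Vault Kubernetes-auth role
    objects: |
      - objectName: "db-password"
        secretPath: "secret/data/db"
        secretKey: "password"
```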

@seh

seh commented Aug 28, 2019

If the cluster administrator grants a CSI driver permission to read pods, service accounts, and secrets across the cluster, then a CSI driver can inspect the pod to figure out which service account it uses, read that service account to get its default secret name, and read that secret to get the service account token with which to authenticate to Vault.
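
A minimal sketch of that grant in RBAC terms (the role name is a placeholder):

```yaml
# Minimal sketch of the cluster-wide read access described above; a matching
# ClusterRoleBinding to the driver's service account is assumed.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: csi-secrets-reader   # placeholder name
rules:
  - apiGroups: [""]
    resources: ["pods", "serviceaccounts", "secrets"]
    verbs: ["get", "list", "watch"]
```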

Using a reflector or informer is tempting here for the cache, but if we have to run these in a daemon on every machine in the cluster, we don't want them caching all pods in memory. Perhaps we could just cache the service accounts and secrets, and read pods on demand as necessary to learn of the associated service account.

For our multitenant clusters, since it's the pod author who is requesting reading of these Vault secrets, it must be the pod's service account—or, less ideally, an assumed IAM role conferred by something like kiam—with which we authenticate with Vault to read these secrets. The CSI driver itself should not be able to authenticate with Vault to read secrets that it would hand to these pods, or at least it shouldn't try to do so.

@vj396 is also interested in this topic.

@james-atwill-hs

> Is there a way to support environment variables in this model?

Also worth mentioning that environment variables are immutable, so passing in temporary credentials (AWS STS creds, for example) won't work.

@tam7t
Contributor

tam7t commented Aug 28, 2019

@ritazh

> 1. What does pod identity mean here for Vault? Currently the azure provider already works with the aad pod identity solution to only allow access to keyvault for pods that have specific identities.

All secrets are fetched by the node process, so that process is somewhat privileged. I think it's worth documenting the access paths to ensure that the desired security properties are maintained.

> 3. Portability issue deislabs/secrets-store-csi-driver#42 is currently being implemented/reviewed via deislabs/secrets-store-csi-driver#58. For each csi-driver volume, users can pass in provider-specific parameters via kind: SecretProviderClass to ensure pod portability.

I'll take a look at the PR! I'd just like to highlight here that the rotation properties of different Vault backends may be an additional parameter worth including in something like SecretProviderClass. There is currently some pain around rotating secrets in K8S: secret changes are visible immediately on the filesystem, but users who have written applications to read secrets at startup (or from env vars) would really like to tie secret changes to rolling updates.

It would be nice if the same Pod manifest worked whether the referenced secret was a static KV secret or a dynamic one, and if it supported flexible rollout strategies for secret changes.
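
For reference, the usual workaround today for tying secret changes to rolling updates is to hash the secret material into a pod-template annotation, so that any change forces a new rollout; a sketch with illustrative names:

```yaml
# One common workaround (outside the CSI driver itself): record a hash of the
# secret material as a pod-template annotation; changing it triggers a
# rolling update. The annotation key and hash value are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
      annotations:
        checksum/secrets: "9f86d081884c7d65..."  # updated by CI/Helm when the secret changes
    spec:
      containers:
        - name: app
          image: myapp:latest   # placeholder image
```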

@mayakacz

mayakacz commented Sep 4, 2019

cc @immutableT

@AndreasLudviksen

AndreasLudviksen commented Mar 3, 2020

What is the status of this?
And what is the relation between this initiative and the KubeVault CSI plugin?

In order to support rolling read-only keys in Linux pods with HashiCorp Vault, what is your recommended approach? We want to mount secrets as files on the pod's filesystem, not in etcd or environment variables. We run on AKS, and we will not use Consul.

@malnick
Contributor

malnick commented Mar 6, 2020

@AndreasLudviksen - Thanks for reaching out. If CSI is a preferred route, please take a look at the project being spearheaded by @ritazh and our provider for it: https://github.com/kubernetes-sigs/secrets-store-csi-driver

Our recommended approach for your use case is our Vault agent side-car injector: https://www.vaultproject.io/docs/platform/k8s/injector

The injector can work with a Vault running inside your K8s cluster or externally. It will inject secrets into your pod via an in-memory volume, mimicking the same UX that native K8s secrets have. Take a look at the docs and let me know if you have any questions.
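
As a quick illustration of that UX, the injector is driven by pod annotations along these lines (the role and secret path are illustrative; see the docs above for the full set):

```yaml
# Example pod annotations for the Vault agent injector. The role name and
# secret path are illustrative; the secret is rendered to
# /vault/secrets/db-creds inside the pod.
apiVersion: v1
kind: Pod
metadata:
  name: app
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/role: "myapp"   # Vault Kubernetes-auth role (illustrative)
    vault.hashicorp.com/agent-inject-secret-db-creds: "secret/data/db"
spec:
  containers:
    - name: app
      image: myapp:latest   # placeholder image
```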

@tamalsaha

@AndreasLudviksen - The KubeVault project is a community project maintained by https://appscode.com that provides an end-to-end management experience for Vault using CRDs. You can find some details here:

@tomhjp
Contributor

tomhjp commented Feb 23, 2021

I think the original question in this issue is answered. Any further questions, feel free to open an issue in https://github.com/hashicorp/secrets-store-csi-driver-provider-vault.

tomhjp closed this as completed Feb 23, 2021