Support running Vault Agent cache in sidecar container #73

Closed
lawliet89 opened this issue Feb 7, 2020 · 2 comments · Fixed by #132
Labels: enhancement (New feature or request), injector (Area: mutating webhook service)

Comments

@lawliet89
Contributor

https://www.vaultproject.io/docs/agent/caching/index.html

@tvoran added the enhancement and injector labels on Feb 8, 2020
@hamishforbes

It would be helpful to have annotations that enable listener {} and cache {} blocks in the sidecar agent.

Something like

vault.hashicorp.com/agent-listen: bool (default: false)
vault.hashicorp.com/agent-listen-tls-disable: bool (default: true)
vault.hashicorp.com/agent-listen-address: string (default: 127.0.0.1:8200)
vault.hashicorp.com/agent-cache-auto-auth: bool (default: false)
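
For reference, here is a minimal sketch of the agent configuration those annotations might render, based on the listener and cache stanzas in the Vault Agent caching docs linked above (how exactly the injector would emit this is an assumption):

```hcl
# Cache proxied requests and attach the auto-auth token to them
cache {
  use_auto_auth_token = true
}

# Plain-HTTP listener bound to localhost for the application container
listener "tcp" {
  address     = "127.0.0.1:8200"
  tls_disable = true
}
```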

I'm migrating a number of services to k8s; previously they were able to access Vault via an agent listening on plain HTTP bound to 127.0.0.1.

I can reproduce this setup in k8s, but only by using the configmap method to add a listener, which works but is very verbose and repetitive across many services.
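
To illustrate the verbosity: with the configmap method, each service has to ship a full agent config roughly like the sketch below (the role, paths and Vault address are placeholders, and the exact stanzas the injector expects from the configmap are an assumption), whereas the proposed annotations would reduce this to a couple of lines of pod metadata:

```hcl
# config.hcl supplied via a per-service configmap -- repeated for every service
pid_file        = "/home/vault/pidfile"
exit_after_auth = false

auto_auth {
  method "kubernetes" {
    mount_path = "auth/kubernetes"
    config = {
      role = "my-app"                       # placeholder role name
    }
  }

  sink "file" {
    config = {
      path = "/home/vault/.vault-token"
    }
  }
}

cache {
  use_auto_auth_token = true
}

listener "tcp" {
  address     = "127.0.0.1:8200"
  tls_disable = true
}

vault {
  address = "https://vault.example.com:8200" # placeholder Vault address
}
```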

@lawliet89
Contributor Author

lawliet89 commented May 6, 2020

#132 implements this. However, I have kept TLS disabled for now, because there is currently no way to mount arbitrary volumes.
