[OpenShift] CrashLoopBackOff when starting DS vault-csi-provider #113
I'm not too familiar with OpenShift, but this seems like another permissions issue. The CSI driver and provider communicate over a unix socket within a hostPath volume.
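Roughly, the relevant part of the provider DaemonSet looks like this (the volume name, mount path, and hostPath below are the common secrets-store-csi-driver defaults, not values taken from this deployment):

```yaml
# Sketch of how the provider exposes its unix socket to the CSI driver.
# Paths and names are the usual defaults and may differ in your install.
spec:
  template:
    spec:
      containers:
        - name: vault-csi-provider
          volumeMounts:
            - name: providervol
              mountPath: /provider        # the provider creates its .sock file here
      volumes:
        - name: providervol
          hostPath:
            path: /etc/kubernetes/secrets-store-csi-providers
```

If the pod isn't allowed to write to that hostPath directory, the provider fails at startup, which would be consistent with the CrashLoopBackOff.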
Hi @tomhjp, this was my assumption too. I tried solving it by assigning SCCs to allow the CSI pod to run, but neither of the SCCs I tried worked.
Hello, I ran into this issue today.
@devops-42, @tomhjp: it seems that something is missing in the CSI provider manifest for OpenShift. Patching the daemonset with one additional line fixed the issue for me.
Regards,
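For anyone else hitting this, one way to apply such a change, assuming the missing piece is a privileged securityContext on the provider container (the daemonset name and namespace below are placeholders), is:

```shell
# Hedged example: mark the provider container as privileged.
# Adjust the daemonset name and namespace to match your install.
oc patch daemonset vault-csi-provider -n vault --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/securityContext", "value": {"privileged": true}}]'
```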
Thanks for the help debugging this. We recently added some documentation to our website to help explain the steps required to install the Vault CSI Provider on OpenShift: https://www.vaultproject.io/docs/platform/k8s/csi/installation#installation-on-openshift

However, be aware that it requires going against OpenShift's recommendations because of the privileged pods and writeable hostPath volumes required, so we have a notice in that documentation with the same warning. For the same reason, we don't currently plan to add official support in the Helm chart. There is no such issue with the Agent injector, though, so hopefully that covers most of the requirements on OpenShift.
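For comparison, switching to the Agent injector needs none of those privileged settings; with the hashicorp/vault chart the values would look roughly like this (illustrative only):

```yaml
# Illustrative values for using the Agent injector instead of the CSI provider.
global:
  openshift: true   # enables the chart's OpenShift-specific settings
injector:
  enabled: true
csi:
  enabled: false
```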
Hi,

I tried to install Vault with the CSI provider enabled on OpenShift using Helm and the following values.yml:
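(For illustration, a minimal values.yml that enables the CSI provider with the hashicorp/vault chart looks roughly like the sketch below; the exact file used here may have contained more.)

```yaml
# Illustrative minimal values.yml enabling the CSI provider; not the exact
# file from this report.
global:
  openshift: true
csi:
  enabled: true
```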
After deployment I checked which pods were running in the namespace. Only the vault pod was shown, but no pod for the vault-csi-provider. Looking at the events, some SCC issues had occurred, so I added the privileged SCC to get the pod started:
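(Granting the SCC can be done with a command along these lines; the namespace and service account names below are placeholders.)

```shell
# Grant the privileged SCC to the CSI provider's service account.
# Replace the namespace and service account names with your own.
oc adm policy add-scc-to-user privileged \
  system:serviceaccount:vault:vault-csi-provider
```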
Now the pod could start, but it immediately runs into an error state. The pod's log output is:
How could this be solved?
Thanks for your help in advance!
Cheers
Matthias