
/var/lib/kubelet/pods is mounted on /var/lib/kubelet but it is not a shared mount #1017

Closed
pentago opened this issue Aug 4, 2022 · 6 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@pentago

pentago commented Aug 4, 2022

I just installed the Helm chart on my local k3d cluster on a Mac and noticed this error, which keeps the secrets-store pod in a CreateContainerError state.

Error: failed to generate container "69815de67a72a047842260d67a1d83fa61f6cf073a3cbf4b09fc1166926cec26" spec: failed to generate spec: path "/var/lib/kubelet/pods" is mounted on "/var/lib/kubelet" but it is not a shared mount

Any suggestions on how to resolve this? Is it even possible on local clusters?
Thanks in advance!
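For context on this class of error: the CSI driver needs bidirectional (shared) mount propagation under the kubelet directory so that volumes it mounts become visible to the kubelet, and the container runtime refuses to start the pod when the underlying mount is private. A minimal sketch for checking this on a node, using /proc/self/mountinfo (shared mounts carry a "shared:N" tag); the helper name is mine, not from the thread:

```shell
# check_propagation PATH: report whether the mount containing PATH is shared,
# by finding the mountinfo entry whose mount point (field 5) is the longest
# prefix of PATH and looking for a "shared:N" optional field on that line.
check_propagation() {
  awk -v p="$1" '
    { mp = $5
      if (index(p, mp) == 1 && length(mp) > best) { best = length(mp); line = $0 } }
    END { if (line ~ /shared:[0-9]+/) print "shared"; else print "not shared" }
  ' /proc/self/mountinfo
}

check_propagation /var/lib/kubelet
```

On an affected k3d node this is expected to print "not shared"; `findmnt -o TARGET,PROPAGATION /var/lib/kubelet` gives the same information where findmnt is available.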

@pentago pentago added the kind/bug Categorizes issue or PR as related to a bug. label Aug 4, 2022
@nilekhc
Contributor

nilekhc commented Aug 4, 2022

@pentago Could you check which --root-dir the kubelet is using on this cluster? You can go to any node and run ps aux | grep kubelet.

Perhaps this is similar to Azure/secrets-store-csi-driver-provider-azure#101. Could you try this?

@pentago
Author

pentago commented Aug 4, 2022

I'm not getting anything useful:

/ # ps aux|grep kubelet
  935 1000     /metrics-server --cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s
33652 0        /csi-node-driver-registrar --v=5 --csi-address=/csi/csi.sock --kubelet-registration-path=/var/lib/kubelet/plugins/csi-secrets-store/csi.sock
33675 0        grep kubelet
/ #

I'm guessing the reason is the local k3d/k3s cluster.
Anyone with experience on those?
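A likely explanation for the empty grep above: in k3s the kubelet is not a separate binary but runs embedded inside the k3s process, so `ps aux | grep kubelet` matches nothing. A sketch of how one could look for it from the host instead (the node container name below is a placeholder, not from the thread):

```shell
# k3d runs each node as a Docker container; the kubelet lives inside the
# embedded k3s process there. "k3d-mycluster-server-0" is a hypothetical
# node name -- list yours with: docker ps | grep k3d
node=k3d-mycluster-server-0
if command -v docker >/dev/null 2>&1 && docker inspect "$node" >/dev/null 2>&1; then
  # [k]3s avoids the grep matching its own command line
  docker exec "$node" sh -c 'ps aux | grep [k]3s'
else
  echo "node $node not found; run this on the machine hosting the k3d cluster"
fi
```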

@aramase
Member

aramase commented Aug 9, 2022

@pentago Have you tried asking at https://github.com/k3s-io/k3s?

@aramase
Member

aramase commented Oct 28, 2022

Closing this due to inactivity. Please feel free to reopen if you have any questions.

/close

@k8s-ci-robot
Contributor

@aramase: Closing this issue.

In response to this:

Closing this due to inactivity. Please feel free to reopen if you have any questions.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@Dougniel

@pentago Have you tried asking at https://github.com/k3s-io/k3s?

You were right: k3d-io/k3d#1063. A workaround is here
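For readers without access to the linked workaround, a hedged sketch of a generic fix for this class of error (an assumption on my part, not necessarily the workaround referenced above): remount the kubelet directory with shared propagation inside each k3d node container, which requires util-linux mount with --make-rshared support in the node image.

```shell
# "k3d-mycluster-server-0" is a placeholder; repeat for every server and
# agent node container in the cluster (docker ps | grep k3d).
node=k3d-mycluster-server-0
if command -v docker >/dev/null 2>&1 && docker inspect "$node" >/dev/null 2>&1; then
  # Mark the mount (and everything below it) shared so CSI bind mounts
  # under /var/lib/kubelet/pods propagate to the kubelet's namespace.
  docker exec "$node" mount --make-rshared /var/lib/kubelet
else
  echo "node $node not found; run this on the machine hosting the k3d cluster"
fi
```

Note this change does not survive a node container restart, so it would need to be reapplied (or baked into cluster creation) each time.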
