@BonzTM It should not be the case; we cover static provisioning in our E2E test cases to make sure we don't break the support and that it keeps working. Please feel free to open an issue if you face any problems. If you are getting "permission denied", you might be hitting a core CephFS kernel issue; please feel free to look through the open issues.
-
With the deprecation of the in-tree CephFS storage driver in Kubernetes 1.28, we're forced to use the CephFS CSI driver with static provisioning in order to access existing CephFS mounts/paths.
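For context, static provisioning with the CephFS CSI driver means creating the PersistentVolume by hand and pointing it at an existing CephFS path, rather than letting the provisioner create the volume. A minimal sketch of such a PV, assuming the ceph-csi driver; the `clusterID`, `fsName`, `rootPath`, and secret name/namespace are placeholders for cluster-specific values:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-static-pv
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  csi:
    driver: cephfs.csi.ceph.com        # the ceph-csi CephFS driver name
    volumeHandle: cephfs-static-pv     # any unique ID for this volume
    nodeStageSecretRef:                # secret holding CephFS user credentials
      name: csi-cephfs-secret          # placeholder name
      namespace: ceph-csi              # placeholder namespace
    volumeAttributes:
      clusterID: my-cluster-id         # placeholder: your Ceph cluster ID
      fsName: myfs                     # placeholder: your CephFS filesystem name
      rootPath: /existing/path         # placeholder: the pre-existing CephFS path
      staticVolume: "true"             # tells the driver this volume is unmanaged
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
```

A PVC then binds to it by setting `volumeName: cephfs-static-pv`; since the driver did not provision the volume, it will not delete or resize it.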
One thing I've noticed over the last several months is that the CephFS CSI driver seems to be less reliable. About once a week, various apps in several of my clusters lose access to their configured CephFS paths. The fix is usually to restart either the app or the node; I don't yet have definitive data on which of the two is actually required. Is this something others are seeing?
Are there any specific tuning parameters I should be setting, or is this expected? I never experienced any connectivity loss with the old native storage driver.