PersistentVolumeClaim is pending when using Shared File System with csi #4012
The minimum Ceph version required is 14.2.2: https://github.com/ceph/ceph-csi#support-matrix
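To confirm what the cluster is actually running (from the Rook toolbox pod, assuming one is deployed):

$ ceph versions   # reports the Ceph version of every daemon in the cluster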
I use ceph/ceph:v14.2.4-20190917, as in https://github.com/rook/rook/blob/v1.1.1/cluster/examples/kubernetes/ceph/cluster.yaml.
Looks like a CephFS issue, @ajarr PTAL.
After I reset the CephCluster, the errors changed.
Can you restart the CephFS provisioner pods and try again?
I tried; it goes back to the previous error.
@dingsongjie, what are your Ceph filesystem's data pools? Check with:
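(Presumably the standard listing command; myfs and its pools below are just the rook example names:)

$ ceph fs ls
name: myfs, metadata pool: myfs-metadata, data pools: [myfs-data0 ]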
You can issue the above command from the Ceph toolbox pod/container. If the pool named in your StorageClass is not a CephFS data pool, then you hit the error you see.
For your reference, I've successfully done it with Ceph 14.2.4, using the YAMLs as attached. I encountered another error, shown below, and fixed it by deleting and recreating all of the csi-cephfsplugin-provisioner pods:
@zhangxpp so you're able to create CephFS PVCs and you're not facing any issues?
Yes, correct. @ajarr
Restarting the CephFS provisioner should solve your problem.
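One way to do that (a sketch, assuming the default rook-ceph namespace and the stock provisioner label; the deployment recreates the pods automatically):

$ kubectl -n rook-ceph delete pod -l app=csi-cephfsplugin-provisioner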
Having the same issue. CentOS 7, Linux kernel 5.3.8-1, K8s 1.14.8, Rook 1.1.7 (official chart), Ceph 14.2.4-20191112. The error is:
InvalidValue: [Errno 22] error in setxattr
It looked like this for me too. After checking the filesystem's data pools as suggested by @ajarr, it turned out the pool name in the StorageClass was wrong; after adjusting that and restarting the provisioner, it worked for me.
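A quick way to compare the two sides (a sketch; rook-cephfs is an assumed StorageClass name, substitute your own):

$ kubectl get storageclass rook-cephfs -o jsonpath='{.parameters.pool}'  # pool the StorageClass provisions into
$ ceph fs ls                                                             # data pools that actually exist (run in the toolbox)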
I have the same pool values:
(I don't know why there is an extra space inside the data pools array.)
And I still get these errors.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.
I ran into a similar problem:
Fixed by creating the csi subvolume group manually via the toolbox:
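(The command, run inside the toolbox pod and assuming the rook example filesystem name myfs, was presumably the one a later comment quotes:)

$ ceph fs subvolumegroup create myfs csi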
Ran into this issue with rook v1.6.0.
Ran into this issue with rook v1.7.3. The fix for me, as mentioned above by @tomassrnka, was to manually create the subvolumegroup using:

$ kubectl exec -n rook-ceph -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- bash
$ ceph fs subvolumegroup create myfs csi
Also ran into this issue with rook v1.7.6. Fixed by restarting cephfs-csi-provisioner pods. |
Rook v1.8.9, Ceph v15.2.13: still observing this issue. Why is it closed?
There are a number of reasons a PVC can be stuck in the Pending state, most commonly that the CSI driver is not running properly, or that Ceph is not healthy or not configured correctly. See the CSI troubleshooting guide.
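A reasonable first step (a sketch; substitute your own namespace and claim name) is to read the PVC's events and the provisioner logs:

$ kubectl -n <namespace> describe pvc <pvc-name>   # the Events section explains why provisioning stalls
$ kubectl -n rook-ceph logs -l app=csi-cephfsplugin-provisioner -c csi-provisioner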
Version: 1.1.1
PVC logs: