Upgrade v1.11.10 to v1.12 - CephFS mounting issues #12843
Comments
@voarsh2 what is the cephcsi version in the cluster?
I did not customise the cephcsi images in v1.11.10, and when trying to upgrade to v1.12 they are csi-provisioner:v3.4.0 and cephcsi:v3.8.0. On the Rook Ceph Cluster Helm chart I did add mount options for RBD and CephFS (the discard mount option, roughly as sketched below); I am not sure if this might be causing issues with the upgrade?
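Roughly, this is where the discard option sits in my rook-ceph-cluster Helm values (a simplified sketch; pool and filesystem names are placeholders and the exact layout may differ from my real values file):

```yaml
# Sketch of rook-ceph-cluster values with the discard mount option added.
# Names are illustrative, not taken from my actual values file.
cephBlockPools:
  - name: ceph-blockpool
    storageClass:
      enabled: true
      mountOptions:
        - discard          # trim freed blocks on the RBD image
cephFileSystems:
  - name: ceph-filesystem
    storageClass:
      enabled: true
      mountOptions:
        - discard          # the option the v1.12 breaking change affects for CephFS
```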
@voarsh2 see the breaking change here for CephFS PVCs: https://github.com/rook/rook/blob/release-1.12/Documentation/Upgrade/rook-upgrade.md#breaking-changes-in-v112
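One way to see which existing CephFS PVs still carry mount options (generic kubectl/jq; the driver name assumes the default rook-ceph operator namespace, adjust if yours differs):

```bash
# List PVs provisioned by the CephFS CSI driver that still have spec.mountOptions set.
kubectl get pv -o json | jq -r '
  .items[]
  | select(.spec.csi.driver == "rook-ceph.cephfs.csi.ceph.com")
  | select(.spec.mountOptions != null)
  | .metadata.name'
```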
I'm seeing the exact same issue; it existed in version 1.11, and I've upgraded to 1.12 to see if it was fixed, but it's still there.
CephFS:
StorageClass:
ceph status (from the toolbox):
Versions:
myfs status:
@psavva you don't have the exact error (invalid) here; it looks to be some other problem. Please check https://rook.io/docs/rook/latest/Troubleshooting/ceph-csi-common-issues/, it may help you.
@psavva what is the kernel version on the node?
Hi @Madhu-1 Thank you very much for getting back to me.
@psavva can you run the ceph mount command manually on the cephfsplugin container, like:
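Something along these lines (monitor addresses, subvolume path, filesystem name, CephX user and key are all placeholders; take the real values from the PV's volumeAttributes and the CSI node secret):

```bash
# Manual CephFS kernel mount from inside the csi-cephfsplugin container.
# Every value below is a placeholder - substitute your monitors, subvolume
# path, CephX user and key before running.
mkdir -p /tmp/testmnt
mount -t ceph \
  10.0.0.1:6789,10.0.0.2:6789,10.0.0.3:6789:/volumes/csi/csi-vol-<uuid>/<dir> \
  /tmp/testmnt \
  -o name=csi-cephfs-node,secret=<cephx-key>,mds_namespace=cephfs
```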
Hi @Madhu-1, I've deployed the direct-mount pod and did the following: I created a CephFS volume using the StorageClass below.
Definition of the StorageClass rook-cephfs
I've created a PVC, with the resultant PV created:
Results of creating the PVC and the CSI creating the PV:
I've tried mounting the volume as follows, using the direct-mount pod.
and the result is:
and finally, these are the running pods. Please take note of the rook-ceph-mds-cephfs-a and b pods running:
Results of ceph status
It says you have both MDS daemons in standby mode. Tagging @travisn for help.
Are these 2 MDS daemons being used by anyone?
How can I check? Here are the logs:
and
The MDS status looks valid. There are two filesystems, each with one active and one standby, for a total of 4 mds pods. So the four mds pods are expected.
Any ideas why it doesn't mount?
Could anything in your cluster have changed outside of rook? Network? Kernel? Mounting issues are usually some environmental issue like that, but @Madhu-1 can speak more to those issues.
This is a new cluster setup on Hetzner; it's literally a few weeks old. I think there is a bug here, as I'm not the only one facing this.
I'll give this a try tomorrow morning and report back here |
Same issue with rook 1.12.4 + hostNetwork: true.
I haven't gotten around to looking at this in more detail. TL;DR: do I need to edit all CephFS PVs? Here's a sample CephFS PV I have:
Here's a PVC:
Not seeing any mount options to remove, so I am not sure what to make of it. With that in mind, I go back to the original point: all I need to do is remove and recreate the storage class and remove the mount option in the helm chart (I use discard)? If that's true, can I add back the discard option after the upgrade? I kind of need discard on...
Yes, you need to remove it from the already-created PVs, and you need to recreate the storageclass as well. Removing it from the helm values and doing a helm upgrade can only update the storageclass, not the existing PVs.
discard is not required for the CephFS storageclass; it is only needed for the RBD storageclasses.
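For the already-created PVs, one way to strip the option (the PV name is a placeholder; try it on a test PV first):

```bash
# Remove the mountOptions field from an existing CephFS PV.
kubectl patch pv <cephfs-pv-name> --type=json \
  -p='[{"op": "remove", "path": "/spec/mountOptions"}]'

# Then delete the storageclass and let the helm upgrade recreate it without
# mountOptions (the name matches the storageclass used earlier in this thread).
kubectl delete storageclass rook-cephfs
```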
There is no mount option on the CephFS PVC or PV... so nothing to do?
Noted. I've removed it from the helm chart (rook-ceph-cluster). |
I'm going to mark this issue as resolved. |
Is this a bug report or feature request?
Deviation from expected behavior:
Volume mount should work but does not
Expected behavior:
Mounting should work
How to reproduce it (minimal and precise):
RKE2 (Kubernetes v1.24) on Ubuntu 22.04 VMs, upgrade Rook from v1.11.10 to v1.12
File(s) to submit:
cluster.yaml, if necessary
Logs to submit:
Pod logs after upgrading and trying to mount volumes:
Cluster Status to submit:
Output of krew commands, if necessary
To get the health of the cluster, use
kubectl rook-ceph health
To get the status of the cluster, use
kubectl rook-ceph ceph status
For more details, see the Rook Krew Plugin
Environment:
Kernel (uname -a):
Rook version (rook version inside of a Rook Pod): v1.12
Ceph version (ceph -v): 17.2.6
Kubernetes version (kubectl version): v1.24
Storage backend status (ceph health in the Rook Ceph toolbox):