Expected behavior:
I think users would rightly expect that if they specify a WAL size for an OSD, Rook should configure the OSD to use that size. In the case here, a user is trying to leave extra space on the NVMe device for adding more OSDs to the cluster later.
Deviation from expected behavior:
Instead, Rook assumes ceph-volume will do the right thing and passes along no user-requested WAL size, so ceph-volume uses the entirety of the NVMe device.
How to reproduce it (minimal and precise):
Set walSizeMB on a per-device basis in the storage config.
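For context, per-device WAL sizing in the CephCluster CR looks roughly like the fragment below (node and device names are hypothetical; sizes are given as strings per the Rook storage configuration settings):

```yaml
# Fragment of a CephCluster CR storage section (names/sizes hypothetical):
storage:
  nodes:
    - name: node-a
      devices:
        - name: "sdb"
          config:
            metadataDevice: "nvme0n1"  # fast device intended to hold the WAL/DB
            walSizeMB: "2048"          # per-device WAL size that Rook currently does not pass on
```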
I can imagine a scenario where a user makes a typo in one of the configs and specifies that one OSD should have a larger or smaller WAL than the rest. Catching this case would be hard, and it seems to me that if configs are set on individual devices, maybe we shouldn't use ceph-volume lvm batch on all the devices and should instead create OSDs one-by-one.
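To make the note above concrete, here is a sketch of the two provisioning styles (device paths are hypothetical, and the commands are echoed rather than executed, since they would provision real disks):

```shell
# Illustration only: device paths are hypothetical.
DATA_DEV=/dev/sdb
WAL_PART=/dev/nvme0n1p1

# One-by-one provisioning: 'ceph-volume lvm prepare' accepts an explicit WAL
# device via --block.wal, so pointing it at a pre-sized partition or LV gives
# exact control over the WAL size.
echo ceph-volume lvm prepare --bluestore --data "$DATA_DEV" --block.wal "$WAL_PART"

# Batch provisioning (what Rook issues today): with no size hint passed,
# 'ceph-volume lvm batch' carves up the entire fast device among the OSDs.
echo ceph-volume lvm batch --bluestore /dev/sdb /dev/sdc --wal-devices /dev/nvme0n1
```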
BlaineEXE changed the title to "Ceph: ceph-volume provisioning does not use WAL size from CephCluster CR" on Apr 6, 2020.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
File(s) to submit:
cluster.yaml snippet

OSD prepare pod logs:
osd-prepare-pod.log
ceph-volume command issued