
Ceph: ceph-volume provisioning does not use WAL size from CephCluster CR #5026

Closed
BlaineEXE opened this issue Mar 13, 2020 · 4 comments
@BlaineEXE (Member)

Is this a bug report or feature request?

  • Bug Report

Expected behavior:
I think users would rightly expect that if they specify a WAL size for an OSD, Rook should configure the OSD to use that size. In this case, the user is trying to leave extra space on the NVMe device so that more OSDs can be added to the cluster later.

Deviation from expected behavior:
Instead, Rook does not pass the user's requested WAL size to ceph-volume and assumes ceph-volume will do the right thing; as a result, ceph-volume uses the entire NVMe device.

How to reproduce it (minimal and precise):
Set walSizeMB on a per-device basis in the storage config.

File(s) to submit:

cluster.yaml snippet

# ...
  storage:
    config: null
    devices:
    - config:
        metadataDevice: nvme0n1
        walSizeMB: "65536"
      name: sdb
    - config:
        metadataDevice: nvme0n1
        walSizeMB: "65536"
      name: sdc
    - config:
        metadataDevice: nvme0n1
        walSizeMB: "65536"
      name: sdd
    - config:
        metadataDevice: nvme0n1
        walSizeMB: "65536"
      name: sde
    - config:
        metadataDevice: nvme0n1
        walSizeMB: "65536"
      name: sdf
    - config:
        metadataDevice: nvme0n1
        walSizeMB: "65536"
      name: sdg
# ...
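
To spell out the intended layout (my own arithmetic, not taken from the prepare logs): walSizeMB: "65536" should mean roughly a 64 GiB WAL per OSD (treating MB as MiB), so the six OSDs together should only reserve part of nvme0n1:

    65536 MB ≈ 64 GiB per OSD
    64 GiB × 6 OSDs ≈ 384 GiB reserved on nvme0n1

The remainder of the NVMe device should stay free for OSDs added later, which is exactly the space ceph-volume is currently consuming.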

OSD prepare pod logs:
osd-prepare-pod.log

ceph-volume command issued

2020-03-13 21:52:15.791784 I | exec: Running command: stdbuf -oL ceph-volume lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/sdf /dev/sdg /dev/sdb /dev/sdc /dev/sdd /dev/sde --db-devices /dev/nvme0n1 --report
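
For comparison, if Rook forwarded the requested WAL size, I would expect the batch call to carry it explicitly. This is an illustration only: ceph-volume lvm batch does have a --block-wal-size option, but the exact value and units Rook should pass here are my assumption (bytes shown below).

    # illustrative only; the flag value/units are an assumption, not what Rook issues today
    stdbuf -oL ceph-volume lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/sdf /dev/sdg /dev/sdb /dev/sdc /dev/sdd /dev/sde --db-devices /dev/nvme0n1 --block-wal-size 68719476736 --report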

Notes:

  • I can imagine a scenario where a user accidentally makes a typo in one of the configs and specifies that one OSD should have a larger or smaller WAL than the rest. Catching this case would be hard, and it seems to me that if configs are set on individual devices, maybe we shouldn't use ceph-volume lvm batch on all the devices and should instead create OSDs one-by-one (a rough sketch of that follows below).
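
As a rough sketch of what per-device provisioning could look like, assuming the WAL logical volume is pre-created at the requested size (the volume group and LV names below are made up for illustration, and this is not what Rook does today):

    # carve a dedicated, pre-sized WAL LV for sdb out of the NVMe device (vg/lv names are hypothetical)
    vgcreate ceph-wal /dev/nvme0n1
    lvcreate -L 64G -n wal-sdb ceph-wal
    # prepare the OSD against the data device and the pre-sized WAL LV
    ceph-volume lvm prepare --bluestore --data /dev/sdb --block.wal ceph-wal/wal-sdb

Doing this per device would also make it easier to honor (or reject) a config where one device requests a different WAL size than the rest.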
BlaineEXE added this to the 1.3 milestone Mar 13, 2020
BlaineEXE changed the title from "Ceph: ceph-volume provisioning does not use" to "Ceph: ceph-volume provisioning does not use WAL size from CephCluster CR" Apr 6, 2020
stale bot commented Jul 5, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

stale bot added the wontfix label Jul 5, 2020
stale bot removed the wontfix label Jul 6, 2020
BlaineEXE removed this from the 1.3 milestone Apr 12, 2021
travisn removed the keepalive label Oct 27, 2021
github-actions bot commented

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.


github-actions bot commented Jan 3, 2022

This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.

github-actions bot closed this as completed Jan 3, 2022

JamesDChilds commented Apr 13, 2024

@travisn @BlaineEXE I'm still having this issue today. Was there some solution that resulted in this getting closed?
