ceph-volume lvm batch is not creating OSDs on partitions in latest Nautilus v14.2.13 and Octopus v15.2.8 #6849
Comments
Looks related to this change in c-v: ceph/ceph#38280
Ran into this issue. What should I do for now?
If you need to create OSDs on partitions, you'll need to use Ceph v14.2.12 or v15.2.7 while we are following up on the issue.
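For a Rook-managed cluster, one way to apply this pin is to point the CephCluster at the older image. A minimal sketch, assuming the default rook-ceph namespace and CephCluster name (both are assumptions; adjust to your deployment):

```
# Pin the Ceph image to the last release that still creates OSDs on partitions.
# "rook-ceph" (namespace and cluster name) are Rook defaults, not a given.
kubectl -n rook-ceph patch cephcluster rook-ceph --type merge \
  -p '{"spec":{"cephVersion":{"image":"ceph/ceph:v15.2.7"}}}'
```

The equivalent declarative change is setting spec.cephVersion.image to ceph/ceph:v15.2.7 in the CephCluster manifest.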
Not sure what the right fix is for this in 1.5; #4879 is appealing but has its own limitations...
I'm also affected by this problem. Is there a workaround?
I am also looking for a potential fix for this!
@hasheddan @valkmit what about #6849 (comment)? Is it not feasible for you?
@leseb with v15.2.7 I get a different error. Full log. Tail of the log:
I don't know what's going on; could you please open a bug at https://tracker.ceph.com/ under the ceph-volume component?
@leseb Thanks, I will do so after my registration is approved. Could it be that it's not allowed to have metadata pools on partitions? See https://tracker.ceph.com/issues/47966#note-3
Could be, but I'm not sure; c-v's partition support is still confusing.
@travisn does this issue affect upgrades as well? Let's say I am using:
with this setup I am running on raw partitions. Will I be able to upgrade to, for example:
Asking in a shorter way:
@DjangoCalendar You can upgrade and use your existing OSDs. But if you want to set up a fresh OSD, it won't work on partitions.
Hi, I have rook-ceph-v1.5.8 and I'm getting this issue if I use anything other than the ceph/ceph:v15.2.7 image in my CephCluster definition.
It seems there is a regression after v15.2.7 :(. But in fact, isn't it purely a Ceph issue? To summarize for others coming here: use ceph/ceph:v15.2.7 for the CephCluster image:
Let me summarize (hopefully) once and for all. Problem: ceph-volume lvm is no longer creating OSDs on partitions as of Nautilus v14.2.13 and Octopus v15.2.8 onward. The error can be seen in the osd prepare job logs and resembles:
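To find the error in your own cluster, the osd prepare job logs can be pulled as follows — a sketch assuming the default rook-ceph namespace and Rook's app=rook-ceph-osd-prepare pod label (adjust if your deployment differs):

```
# List the osd-prepare pods, then dump their logs to look for the ceph-volume failure.
# Namespace and label below are Rook defaults; change them if customized.
kubectl -n rook-ceph get pods -l app=rook-ceph-osd-prepare
kubectl -n rook-ceph logs -l app=rook-ceph-osd-prepare --all-containers
```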
Solutions:
I'm converting this to a Discussion so people can keep the conversation going while the latest solution stays highlighted.
This issue was moved to a discussion.
You can continue the conversation there. Go to discussion →
Is this a bug report or feature request?
Deviation from expected behavior:
In the latest Nautilus v14.2.15 and Octopus v15.2.8, `ceph-volume lvm batch` is not allowing an OSD to be created on a raw partition. In the integration tests we are seeing this with the following in the osd prepare job:
Expected behavior:
Raw partitions have been working and are expected to continue working.
How to reproduce it (minimal and precise):
Attempt to create an OSD on a partition with v15.2.8.
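A minimal reproduction sketch, assuming a blank scratch disk at /dev/sdb (a hypothetical device; this is destructive, so adjust to your environment):

```
# Create a single partition spanning the scratch disk (destructive!).
sgdisk --new=1:0:0 /dev/sdb

# Preview what batch would do, then prepare an OSD on the partition.
# Per this issue, the partition is refused on v14.2.13+/v15.2.8+ but works on v15.2.7.
ceph-volume lvm batch --report /dev/sdb1
ceph-volume lvm batch --prepare /dev/sdb1
```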