
bluestore/NVMEDevice.cc: fix ceph_assert() when enable SPDK with 64KB kernel page size #24817

Merged 2 commits on Nov 8, 2018

Conversation

@tone-zhang (Contributor) commented Oct 30, 2018

When SPDK is enabled in Ceph with a 64KB kernel page size, a
ceph_assert() is triggered in NVMEDevice.

This patch fixes the problem.

Fixes: http://tracker.ceph.com/issues/36624

Signed-off-by: tone.zhang <tone.zhang@arm.com>

  • References tracker ticket
  • Updates documentation if necessary
  • Includes tests for new functionality or reproducer for bug

@@ -99,7 +101,7 @@ class SharedDriverData {
     ctrlr(c),
     ns(ns_) {
     sector_size = spdk_nvme_ns_get_sector_size(ns);
-    block_size = std::max(CEPH_PAGE_SIZE, sector_size);
+    block_size = std::max<uint64_t>(CEPH_NVME_BLK_SIZE, sector_size);
A contributor commented:
We can probably just use spdk_nvme_ns_get_extended_sector_size(ns) for the block size: since we are performing I/O on an NVMe device, the block size is the sector size plus the metadata.

@tchaikov (Contributor) commented Oct 30, 2018:

@tone-zhang (Author) replied:

@tchaikov yes, we can assign block_size from spdk_nvme_ns_get_extended_sector_size(ns) and remove the sector_size member. Thanks for the comments. I will update the change soon.

-    block_size = std::max(CEPH_PAGE_SIZE, sector_size);
-    size = ((uint64_t)sector_size) * spdk_nvme_ns_get_num_sectors(ns);
+    block_size = spdk_nvme_ns_get_extended_sector_size(ns);
+    size = block_size * spdk_nvme_ns_get_num_sectors(ns);
@tchaikov (Contributor) commented:

@tone-zhang sorry for the confusion. I don't think we can calculate the size as block_size * spdk_nvme_ns_get_num_sectors(ns); instead, we can use either

size = spdk_nvme_ns_get_size(ns)

or

size = sector_size * spdk_nvme_ns_get_num_sectors(ns);

because the block size is the unit of addressing, which takes the metadata overhead into account, while the "size" here is the total capacity we can use for data storage.

@tone-zhang (Author) replied:

Thanks. I think spdk_nvme_ns_get_size(ns) is better. I will update the code.

src/os/bluestore/NVMEDevice.cc
… kernel page size

When SPDK is enabled in Ceph with a 64KB kernel page size, a
ceph_assert() is triggered in NVMEDevice. In the SPDK NVMe driver, the
block size should be the sector size of the NVMe device, not the
system page size. Get the correct sector size via the SPDK API.

This patch corrects the NVMe block size and fixes the problem.

Fixes: http://tracker.ceph.com/issues/36624

Signed-off-by: tone.zhang <tone.zhang@arm.com>
In SharedDriverData, the sector_size member is redundant. After
picking up the latest version of SPDK 18.07, the block_size member
takes the same role, so remove sector_size.

Signed-off-by: tone.zhang <tone.zhang@arm.com>
@tone-zhang (Author) commented:

@batrick @tchaikov Hi Patrick and Kefu, could you please review the update? Thanks a lot!

@tchaikov tchaikov merged commit da0401a into ceph:master Nov 8, 2018
@tone-zhang (Author) commented:
@tchaikov Kefu, thanks a lot! ;)
