
osd: support create osd with metadata partition #13314

Merged
merged 1 commit into rook:master from the partition branch on Feb 8, 2024

Conversation

@microyahoo (Member) commented Dec 4, 2023

Which issue is resolved by this Pull Request:
Resolves #13280

Currently, when Rook provisions OSDs (in the OSD prepare job), it effectively runs a ceph-volume (c-v) command such as the following:

```console
ceph-volume lvm batch --prepare <deviceA> <deviceB> <deviceC> --db-devices <metadataDevice>
```

However, `ceph-volume lvm batch` only supports whole disks and LVM logical volumes, not disk partitions. We can resort to `ceph-volume lvm prepare` to implement this.
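For illustration, a per-device call in the prepare path could look roughly like the sketch below; the device paths are hypothetical placeholders and the exact arguments Rook builds may differ:

```console
# Hypothetical sketch: prepare a single bluestore OSD whose DB is placed on a partition
ceph-volume lvm prepare --bluestore --data /dev/sdb --block.db /dev/sdc1
```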

Signed-off-by: Liang Zheng zhengliang0901@gmail.com

Checklist:

  • Commit Message Formatting: Commit titles and messages follow guidelines in the developer guide.
  • Reviewed the developer guide on Submitting a Pull Request
  • Pending release notes updated with breaking and/or notable changes for the next minor release.
  • Documentation has been updated, if necessary.
  • Unit tests have been added, if necessary.
  • Integration tests have been added, if necessary.

@microyahoo force-pushed the partition branch 2 times, most recently from a98e74f to 95a99d6 (December 4, 2023 08:38)
@microyahoo changed the title from "[WIP] feature: support create osd with metadata partition" to "[WIP] osd: support create osd with metadata partition" (Dec 4, 2023)
@satoru-takeuchi marked this pull request as draft (December 5, 2023 20:54)
@satoru-takeuchi added the do-not-merge (DO NOT MERGE :)) label (Dec 5, 2023)
@satoru-takeuchi (Member)

@travisn With this change, there will be a difference in the supported configurations depending on whether metadataDevice is shared by multiple OSDs or not (ref). IMO, it's acceptable, and updating the supported configurations is enough. Could you tell me your opinion?

@travisn (Member) left a comment

@travisn With this change, there will be a difference in the supported configurations depending on whether metadataDevice is shared by multiple OSDs or not (ref). IMO, it's acceptable, and updating the supported configurations is enough. Could you tell me your opinion?

This PR doesn't have a description, so I'm trying to understand from the linked issue... So if the metadataDevice is a partition, we must use lvm prepare instead of lvm batch? Then this change seems reasonable.

I would just ask the following for maintainability:

  1. Tests are needed in the CI
  2. More comments in the code changes and in the commit description to describe the change.

@satoru-takeuchi (Member)

@travisn

This PR doesn't have a description, so I'm trying to understand from the linked issue...

Oh, I should have explained this PR in detail.

So if the metadataDevice is a partition, we must use lvm prepare instead of lvm batch?

Correct.

I would just ask the following for maintainability:

Tests are needed in the CI
More comments in the code changes and in the commit description to describe the change.

Yes, I agree with you. I'm going to comment on it later.

@microyahoo Please wait a while for me to review this; I don't have enough time this week.

@microyahoo (Member, Author) commented Dec 6, 2023

@travisn With this change, there will be a difference in the supported configurations depending on whether metadataDevice is shared by multiple OSDs or not (ref). IMO, it's acceptable, and updating the supported configurations is enough. Could you tell me your opinion?

This PR doesn't have a description, so I'm trying to understand from the linked issue... So if the metadataDevice is a partition, we must use lvm prepare instead of lvm batch? Then this change seems reasonable.

I will add a more detailed description to the PR.

I would just ask the following for maintainability:

  1. Tests are needed in the CI

If this solution works well, I will add unit tests and integration tests.

  1. More comments in the code changes and in the commit description to describe the change.

OK.

@microyahoo marked this pull request as ready for review (December 15, 2023 10:05)
@microyahoo changed the title from "[WIP] osd: support create osd with metadata partition" to "osd: support create osd with metadata partition" (Dec 15, 2023)
@travisn (Member) commented Dec 15, 2023

@microyahoo Thanks for all the unit tests. I will wait for @satoru-takeuchi to review it and suggest if integration tests are needed.

@microyahoo (Member, Author)

@microyahoo Thanks for all the unit tests. I will wait for @satoru-takeuchi to review it and suggest if integration tests are needed.

Yes, integration tests are also needed; I will add them ASAP. Do the integration tests need to be merged together with this PR, or can they be merged separately in another PR?

@satoru-takeuchi (Member)

Do the integration tests need to be merged together with this PR,

Yes, please add an integration test.

@microyahoo (Member, Author)

Do the integration tests need to be merged together with this PR,

Yes, please add an integration test.

👌

@microyahoo (Member, Author) commented Dec 18, 2023

Hi @satoru-takeuchi, @travisn, I noticed the GitHub CI runner has only one 64G disk besides the root disk. My test case is similar to osd-with-metadata-device, but I have to create a partition on another disk as a metadata device first. How can I add one more disk to the CI runner? Thanks.

https://github.com/rook/rook/actions/runs/7220671504/job/19674086946?pr=13314

```console
+ sudo lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda       8:0    0   64G  0 disk 
└─sda1    8:1    0   64G  0 part 
sdb       8:16   0   86G  0 disk 
├─sdb1    8:17   0 85.9G  0 part /
├─sdb14   8:30   0    4M  0 part 
└─sdb15   8:31   0  106M  0 part /boot/efi
```

@satoru-takeuchi (Member)

@microyahoo I'll answer tomorrow or the day after tomorrow. Please wait for a while.

@satoru-takeuchi (Member)

@microyahoo You can create an OSD on top of a loop device if you set `ROOK_CEPH_ALLOW_LOOP_DEVICES: "true"` in operator.yaml. Could you try this feature?
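For reference, a minimal sketch of enabling that setting on a running deployment, assuming the operator reads it from the rook-ceph-operator-config ConfigMap shipped in operator.yaml (the ConfigMap name and namespace are assumptions here, not taken from this PR):

```console
# Assumed: ROOK_CEPH_ALLOW_LOOP_DEVICES lives in the rook-ceph-operator-config ConfigMap
kubectl -n rook-ceph patch configmap rook-ceph-operator-config \
  --type merge -p '{"data":{"ROOK_CEPH_ALLOW_LOOP_DEVICES":"true"}}'
```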

@microyahoo (Member, Author) commented Dec 21, 2023

@microyahoo You can create an OSD on top of a loop device if you set `ROOK_CEPH_ALLOW_LOOP_DEVICES: "true"` in operator.yaml. Could you try this feature?

Yes, we can use a loop device to create an OSD, but it will be LVM, not a partition type; I'm not sure if I understand correctly. I will try it first, thanks.
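As a rough sketch of the loop-device approach (the backing file path and size below are arbitrary placeholders, not values taken from the CI scripts):

```console
# Create a sparse backing file and expose it as a loop device for the OSD data
truncate -s 10G /tmp/osd-data.img
sudo losetup --find --show /tmp/osd-data.img   # prints the allocated device, e.g. /dev/loop0
```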

@satoru-takeuchi (Member)

I meant using a loop device for the OSD and using a partition on the 64 GiB disk as the metadata device. IIRC, a partition will be created if you remove the `--wipe-only` flag. You can do it by changing

```console
tests/scripts/create-bluestore-partitions.sh --disk "$BLOCK" --wipe-only
```

to

```console
tests/scripts/create-bluestore-partitions.sh --disk "$BLOCK"
```

@microyahoo force-pushed the partition branch 4 times, most recently from 891466c to 459a6cb (December 23, 2023 13:26)
@satoru-takeuchi (Member)

The review is in progress. I'll send feedback next week.

@satoru-takeuchi (Member) left a comment

I left some comments about the integration tests and documentation, but the review is still in progress. I'll make some other comments about the Go code later.

Review comment threads (outdated, resolved) on:

  • .github/workflows/canary-integration-test.yml
  • tests/scripts/github-action-helper.sh
  • Documentation/CRDs/Cluster/ceph-cluster-crd.md
@satoru-takeuchi (Member) commented Feb 1, 2024

I'm reviewing and testing this PR. Please wait for a while. Probably I'll finish the review tomorrow.

Review comment threads on:

  • Documentation/CRDs/Cluster/ceph-cluster-crd.md (outdated)
  • pkg/daemon/ceph/osd/volume.go (outdated)
  • pkg/daemon/ceph/osd/volume_test.go
@microyahoo force-pushed the partition branch 2 times, most recently from 8ecd944 to 97ac157 (February 2, 2024 07:37)
@microyahoo (Member, Author) commented Feb 2, 2024

Test cases failed because BLOCK was not found:

```console
++ sudo lsblk --paths
++ awk '/14G/ || /64G/ {print $1}'
++ head -1
+ : ''
```
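For context, the failing trace corresponds to a pipeline along these lines (reconstructed from the trace above; the variable name is taken from the error description and the real helper script may differ):

```console
# Pick the first 14G or 64G block device as the test disk; empty output means no match
BLOCK=$(sudo lsblk --paths | awk '/14G/ || /64G/ {print $1}' | head -1)
```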

@subhamkrai (Contributor)

Test cases failed because BLOCK was not found:

```console
++ sudo lsblk --paths
++ awk '/14G/ || /64G/ {print $1}'
++ head -1
+ : ''
```

Probably #13675 will fix this.

@microyahoo force-pushed the partition branch 2 times, most recently from cacd0f4 to b3c0f3c (February 6, 2024 02:57)
mergify bot commented Feb 7, 2024

This pull request has merge conflicts that must be resolved before it can be merged. @microyahoo please rebase it. https://rook.io/docs/rook/latest/Contributing/development-flow/#updating-your-fork

Currently, when Rook provisions OSDs (in the OSD prepare job), it effectively runs a
ceph-volume (c-v) command such as the following:
```console
ceph-volume lvm batch --prepare <deviceA> <deviceB> <deviceC> --db-devices <metadataDevice>
```
but `ceph-volume lvm batch` only supports whole disks and LVM logical volumes, not disk partitions.

We can resort to `ceph-volume lvm prepare` to implement this.

Signed-off-by: Liang Zheng <zhengliang0901@gmail.com>
@microyahoo (Member, Author)

Hi @satoru-takeuchi, all test cases passed. PTAL, thanks.

@satoru-takeuchi (Member)

@travisn I approved this PR. Could you also review it, since you left a change request to add tests and improve documentation? In my view, it doesn't break the behavior of existing clusters and is well tested. The documentation is also well updated.

@travisn (Member) left a comment

Looks good with all the testing, thanks.

@satoru-takeuchi merged commit f1aa4f3 into rook:master (Feb 8, 2024)
51 checks passed
mergify bot added a commit that referenced this pull request Feb 8, 2024
osd: support create osd with metadata partition (backport #13314)

Successfully merging this pull request may close this issue: support create osd with different metadata devices

5 participants