ceph-volume: fix raw listing when finding OSDs from different clusters #40979
Merged
Conversation
When listing OSDs on a host with two OSDs that share the same ID, the output gets overwritten by the last listed device, so only a single OSD shows up. See the ceph-volume.log, which correctly parsed both disks:

```
[2021-04-22 09:44:21,391][ceph_volume.devices.raw.list][DEBUG ] Examining /dev/sda1
[2021-04-22 09:44:21,391][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/sda1
[2021-04-22 09:44:21,418][ceph_volume.process][INFO ] stdout {
[2021-04-22 09:44:21,418][ceph_volume.process][INFO ] stdout "/dev/sda1": {
[2021-04-22 09:44:21,418][ceph_volume.process][INFO ] stdout "osd_uuid": "423bf64d-f241-4f4b-a589-25a66fc836d1",
[2021-04-22 09:44:21,418][ceph_volume.process][INFO ] stdout "size": 6442450944,
[2021-04-22 09:44:21,418][ceph_volume.process][INFO ] stdout "btime": "2021-04-22T09:32:55.894961+0000",
[2021-04-22 09:44:21,418][ceph_volume.process][INFO ] stdout "description": "main",
[2021-04-22 09:44:21,418][ceph_volume.process][INFO ] stdout "bfm_blocks": "1572864",
[2021-04-22 09:44:21,418][ceph_volume.process][INFO ] stdout "bfm_blocks_per_key": "128",
[2021-04-22 09:44:21,418][ceph_volume.process][INFO ] stdout "bfm_bytes_per_block": "4096",
[2021-04-22 09:44:21,418][ceph_volume.process][INFO ] stdout "bfm_size": "6442450944",
[2021-04-22 09:44:21,418][ceph_volume.process][INFO ] stdout "bluefs": "1",
[2021-04-22 09:44:21,419][ceph_volume.process][INFO ] stdout "ceph_fsid": "d3cd4b72-5342-4fd3-96ec-a6e581261eab",
[2021-04-22 09:44:21,419][ceph_volume.process][INFO ] stdout "kv_backend": "rocksdb",
[2021-04-22 09:44:21,419][ceph_volume.process][INFO ] stdout "magic": "ceph osd volume v026",
[2021-04-22 09:44:21,419][ceph_volume.process][INFO ] stdout "mkfs_done": "yes",
[2021-04-22 09:44:21,419][ceph_volume.process][INFO ] stdout "osd_key": "AQDGQoFg+XHqJBAAw9ZQmtrnotHCLI0Nc2to6A==",
[2021-04-22 09:44:21,419][ceph_volume.process][INFO ] stdout "ready": "ready",
[2021-04-22 09:44:21,419][ceph_volume.process][INFO ] stdout "whoami": "0"
[2021-04-22 09:44:21,419][ceph_volume.process][INFO ] stdout }
[2021-04-22 09:44:21,419][ceph_volume.process][INFO ] stdout }
[2021-04-22 09:44:21,419][ceph_volume.devices.raw.list][DEBUG ] Examining /dev/sda2
[2021-04-22 09:44:21,419][ceph_volume.process][INFO ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/sda2
[2021-04-22 09:44:21,445][ceph_volume.process][INFO ] stdout {
[2021-04-22 09:44:21,445][ceph_volume.process][INFO ] stdout "/dev/sda2": {
[2021-04-22 09:44:21,445][ceph_volume.process][INFO ] stdout "osd_uuid": "c7c66bbd-7b38-4dcd-ad6d-3769c516f2fe",
[2021-04-22 09:44:21,446][ceph_volume.process][INFO ] stdout "size": 6442450944,
[2021-04-22 09:44:21,446][ceph_volume.process][INFO ] stdout "btime": "2021-04-22T09:32:21.814768+0000",
[2021-04-22 09:44:21,446][ceph_volume.process][INFO ] stdout "description": "main",
[2021-04-22 09:44:21,446][ceph_volume.process][INFO ] stdout "bfm_blocks": "1572864",
[2021-04-22 09:44:21,446][ceph_volume.process][INFO ] stdout "bfm_blocks_per_key": "128",
[2021-04-22 09:44:21,446][ceph_volume.process][INFO ] stdout "bfm_bytes_per_block": "4096",
[2021-04-22 09:44:21,446][ceph_volume.process][INFO ] stdout "bfm_size": "6442450944",
[2021-04-22 09:44:21,446][ceph_volume.process][INFO ] stdout "bluefs": "1",
[2021-04-22 09:44:21,446][ceph_volume.process][INFO ] stdout "ceph_fsid": "69c40cb1-22af-42e4-9d59-4a4468a2f58f",
[2021-04-22 09:44:21,446][ceph_volume.process][INFO ] stdout "kv_backend": "rocksdb",
[2021-04-22 09:44:21,446][ceph_volume.process][INFO ] stdout "magic": "ceph osd volume v026",
[2021-04-22 09:44:21,446][ceph_volume.process][INFO ] stdout "mkfs_done": "yes",
[2021-04-22 09:44:21,446][ceph_volume.process][INFO ] stdout "osd_key": "AQCkQoFgre9SKBAANgHH6scIb+IiyKxh6MhY0A==",
[2021-04-22 09:44:21,446][ceph_volume.process][INFO ] stdout "ready": "ready",
[2021-04-22 09:44:21,446][ceph_volume.process][INFO ] stdout "require_osd_release": "16",
[2021-04-22 09:44:21,446][ceph_volume.process][INFO ] stdout "whoami": "0"
[2021-04-22 09:44:21,446][ceph_volume.process][INFO ] stdout }
[2021-04-22 09:44:21,446][ceph_volume.process][INFO ] stdout }
```

However, only a single OSD gets listed by `ceph-volume raw list`:

```
[root@2b5a3b8bf31c /]# ceph-volume raw list
{
    "0": {
        "ceph_fsid": "69c40cb1-22af-42e4-9d59-4a4468a2f58f",
        "device": "/dev/sda2",
        "osd_id": 0,
        "osd_uuid": "c7c66bbd-7b38-4dcd-ad6d-3769c516f2fe",
        "type": "bluestore"
    }
}
```

We now key the output by osd_uuid, so entries will never conflict:

```
[root@2b5a3b8bf31c /]# ceph-volume raw list
{
    "423bf64d-f241-4f4b-a589-25a66fc836d1": {
        "ceph_fsid": "d3cd4b72-5342-4fd3-96ec-a6e581261eab",
        "dev": "/dev/sda1",
        "osd_id": 0,
        "osd_uuid": "423bf64d-f241-4f4b-a589-25a66fc836d1",
        "type": "bluestore"
    },
    "c7c66bbd-7b38-4dcd-ad6d-3769c516f2fe": {
        "ceph_fsid": "69c40cb1-22af-42e4-9d59-4a4468a2f58f",
        "dev": "/dev/sda2",
        "osd_id": 0,
        "osd_uuid": "c7c66bbd-7b38-4dcd-ad6d-3769c516f2fe",
        "type": "bluestore"
    }
}
```

Fixes: https://tracker.ceph.com/issues/50478

Signed-off-by: Sébastien Han <seb@redhat.com>
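The underlying bug is a plain dict-key collision: when the listing report is keyed by `osd_id`, two OSDs from different clusters that both happen to have ID 0 map to the same key, and the second insert silently replaces the first. Below is a minimal Python sketch of the before/after keying, using the label data from the log above; the `labels` list and the dict comprehensions are illustrative only, not the actual ceph-volume code (the real logic lives in ceph-volume's raw `list` subcommand):

```python
# Labels as ceph-bluestore-tool show-label reported them (from the log above).
labels = [
    {"device": "/dev/sda1", "osd_id": 0,
     "osd_uuid": "423bf64d-f241-4f4b-a589-25a66fc836d1",
     "ceph_fsid": "d3cd4b72-5342-4fd3-96ec-a6e581261eab"},
    {"device": "/dev/sda2", "osd_id": 0,
     "osd_uuid": "c7c66bbd-7b38-4dcd-ad6d-3769c516f2fe",
     "ceph_fsid": "69c40cb1-22af-42e4-9d59-4a4468a2f58f"},
]

# Before the fix: keyed by osd_id -- both OSDs have ID 0,
# so the second one overwrites the first.
by_id = {str(lbl["osd_id"]): lbl for lbl in labels}
assert len(by_id) == 1  # only /dev/sda2 survives

# After the fix: keyed by osd_uuid -- UUIDs are unique per OSD,
# so nothing collides even across clusters.
by_uuid = {lbl["osd_uuid"]: lbl for lbl in labels}
assert len(by_uuid) == 2  # both OSDs are listed
```

Keying by UUID also keeps the output stable when OSD IDs are reused across clusters, which is exactly the multi-cluster scenario this PR targets.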
liewegas approved these changes on Apr 22, 2021