failed to deploy osd/db on nvme with other db logical volume #625

Closed
pubyun opened this issue Nov 3, 2023 · 5 comments
pubyun commented Nov 3, 2023

We have a running Ceph cluster (17.2.7) with SATA OSDs and their DBs on NVMe.
We inserted some new SATA disks into a host, and their status shows as AVAILABLE.
When we apply osd-spec.yml, the orchestrator does not create the new OSDs automatically.

ceph orch device ls

HOST             PATH          TYPE  DEVICE ID                               SIZE   AVAILABLE  REJECT REASONS
h172-18-100-100  /dev/nvme0n1  ssd   INTEL SSDPF2KX038TZ_PHAC1036009Z3P8AGN  3840G             LVM detected, locked
h172-18-100-100  /dev/sdb      hdd   ST16000NM000G-2K_ZL2CB8ZR               16.0T             Insufficient space (<10 extents) on vgs, LVM detected, locked
h172-18-100-100  /dev/sdc      hdd   ST16000NM000G-2K_ZL2CB0J2               16.0T             Insufficient space (<10 extents) on vgs, LVM detected, locked
h172-18-100-100  /dev/sdd      hdd   ST16000NM000G-2K_ZL2CBFSF               16.0T             Insufficient space (<10 extents) on vgs, LVM detected, locked
h172-18-100-100  /dev/sde      hdd   ST16000NM000G-2K_ZL2CAYQB               16.0T             Insufficient space (<10 extents) on vgs, LVM detected, locked
h172-18-100-100  /dev/sdf      hdd   ST16000NM000G-2K_ZL2CBEMC               16.0T  Yes
h172-18-100-100  /dev/sdg      hdd   ST16000NM000G-2K_ZL2C427J               16.0T  Yes
h172-18-100-100  /dev/sdh      hdd   ST16000NM000G-2K_ZL2CAZCZ               16.0T  Yes
h172-18-100-100  /dev/sdi      hdd   ST16000NM000G-2K_ZL2CBM7M               16.0T  Yes

osd-spec.yml:

service_type: osd
service_id: osd-spec
placement:
  host_pattern: '*'
spec:
  objectstore: bluestore
  block_db_size: 73014444032
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0

ceph orch apply osd -i osd-spec.yml --dry-run
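As a hedged aside (not part of the original report): the reject reasons in the device listing above are truncated, so it may help to see the orchestrator's full view of why /dev/nvme0n1 is being skipped before applying the spec:

# show the complete reject reasons instead of the truncated column above
ceph orch device ls --wide
# same data in machine-readable form, if you want to inspect it further
ceph orch device ls --format json-pretty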


pubyun commented Nov 3, 2023

Here is some related information:

ceph-volume sets the DB device to unavailable, so additional OSDs cannot use the DB device:
https://www.suse.com/support/kb/doc/?id=000019599
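If that KB article describes what is happening here (my assumption, not confirmed in this issue), ceph-volume's own inventory on the host should report the NVMe device as unavailable with an LVM-related reject reason:

# run on h172-18-100-100; check the "available" and "rejected reasons" fields
ceph-volume inventory /dev/nvme0n1
# or via cephadm, which runs the same inventory inside the container
cephadm ceph-volume -- inventory /dev/nvme0n1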


pubyun commented Nov 3, 2023

We can create and activate the OSD manually with:

ceph-volume lvm prepare --no-systemd --bluestore --data /dev/sdh \
    --block.db /dev/nvme0n1 --block.db-size 73014444032

However, we have to SSH into the host and run this command for each OSD. If we need to add many OSDs, it takes a lot of time.
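A minimal sketch of how that per-OSD work could be scripted, assuming the four still-available HDDs from the device listing above and the same DB size; the final cephadm activation step is my assumption about the workflow, not something stated in this issue:

# hedged sketch: repeat the manual prepare for each remaining available HDD
for dev in /dev/sdf /dev/sdg /dev/sdh /dev/sdi; do
  ceph-volume lvm prepare --no-systemd --bluestore \
    --data "$dev" \
    --block.db /dev/nvme0n1 --block.db-size 73014444032
done
# assumption: have cephadm pick up and start the prepared OSDs on this host
ceph cephadm osd activate h172-18-100-100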


dvanders commented Jul 3, 2024

Please send ceph questions to the ceph-users mailing list.

dvanders closed this as completed Jul 3, 2024