
ceph-disk: support lvm2 volumes #13994

Closed · wants to merge 2 commits

Conversation


@wind0204 commented Mar 16, 2017

Hi, this is a very naive and simple implementation; I hope a Ceph developer will polish it up a bit.

I still need to do a 'partprobe' and a 'udevadm trigger' on every boot-up so that 1) the device-mapper virtual block devices for the LVM partitions show up and 2) the udev rule populates /dev/disk/by-partuuid, which is where the symbolic link of the filestore journal prepared by ceph-disk points.
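In shell terms, that boot-time workaround looks roughly like this (a sketch; it assumes the OSD data and journal live on LVM logical volumes, and the final 'udevadm settle' is an extra step not mentioned above):

# Re-read the partition tables so the partitions on the logical volumes show up.
partprobe
# Re-run the udev rules so /dev/disk/by-partuuid gets populated.
udevadm trigger
# Wait until udev has finished processing the queued events (extra step, not in the original description).
udevadm settle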

A related mailing list thread: https://www.mail-archive.com/ceph-users@lists.ceph.com/msg36362.html

Signed-off-by: Gunwoo Gim (a.k.a. Nicho1as) <wind8702@gmail.com>

make filestore roughly support lvm2 volumes
Signed-off-by: Gunwoo Gim (a.k.a. Nicho1as) <wind8702@gmail.com>
@liewegas changed the title from "make filestore roughly support lvm2 volumes" to "ceph-disk: support lvm2 volumes" on Mar 24, 2017

@wind0204 (Author) commented Apr 5, 2017

I'm afraid I have to report that, after rebooting two OSD nodes, I found the OSDs don't get up and running via the udev rules in /lib/udev/rules.d/95-ceph-osd.rules; I had to run the commands myself:

thename=hdd1 && /usr/sbin/ceph-disk --log-stdout -v trigger /dev/mapper/vg--${thename}-lv--${thename}p1
thename=hdd2 && /usr/sbin/ceph-disk --log-stdout -v trigger /dev/mapper/vg--${thename}-lv--${thename}p1
thename=hdd3 && /usr/sbin/ceph-disk --log-stdout -v trigger /dev/mapper/vg--${thename}-lv--${thename}p1
thename=hdd4 && /usr/sbin/ceph-disk --log-stdout -v trigger /dev/mapper/vg--${thename}-lv--${thename}p1
thename=hdd5 && /usr/sbin/ceph-disk --log-stdout -v trigger /dev/mapper/vg--${thename}-lv--${thename}p1
thename=hdd6 && /usr/sbin/ceph-disk --log-stdout -v trigger /dev/mapper/vg--${thename}-lv--${thename}p1
thename=hdd7 && /usr/sbin/ceph-disk --log-stdout -v trigger /dev/mapper/vg--${thename}-lv--${thename}p1
thename=hdd8 && /usr/sbin/ceph-disk --log-stdout -v trigger /dev/mapper/vg--${thename}-lv--${thename}p1
thename=hdd9 && /usr/sbin/ceph-disk --log-stdout -v trigger /dev/mapper/vg--${thename}-lv--${thename}p1
thename=hdd10 && /usr/sbin/ceph-disk --log-stdout -v trigger /dev/mapper/vg--${thename}-lv--${thename}p1
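
For reference, the same ten invocations can be collapsed into a loop (a sketch; it assumes the vg--hddN/lv--hddN device-mapper naming used above):

# Trigger ceph-disk for each of the ten LVM-backed OSD partitions.
for i in $(seq 1 10); do
    thename=hdd${i}
    /usr/sbin/ceph-disk --log-stdout -v trigger /dev/mapper/vg--${thename}-lv--${thename}p1
done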

@wind0204 force-pushed the pr-lvm-support branch 2 times, most recently from 8533148 to be63ff6, on April 6, 2017

@wind0204 (Author) commented Apr 6, 2017

I made the second commit a couple of hours ago with an error in it, and just now I force-pushed an updated version of that commit for the sake of a concise git log. The problem of the udev rule not being triggered when a ceph-osd LVM2 partition is added to the system is now fixed, but the fix I just uploaded is very naive; please check out the message of the commit: 8bc6d83

The lvm2 partitions don't come with the environment variable 'DEVTYPE'
set to 'partition'; they instead come with the variable set to 'disk'.
So this makeshift patch makes the udev rule less strict by stopping it
from checking ENV{DEVTYPE}.
I believe this is a very naive approach; a preferable solution would be
making the lvm2 partitions come with 'DEVTYPE' set correctly.

Signed-off-by: Gunwoo Gim (a.k.a. Nicho1as) <wind8702@gmail.com>
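
For reference, one way to check which DEVTYPE udev assigns to a given device-mapper node (a sketch; the mapper path is just an example taken from the setup above):

# Query udev's properties for an LVM2 partition device and filter for DEVTYPE;
# per the commit message above, this prints DEVTYPE=disk rather than DEVTYPE=partition.
udevadm info --query=property --name=/dev/mapper/vg--hdd1-lv--hdd1p1 | grep ^DEVTYPE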
@smithfarm (Contributor) commented:

Needs rebase


@ghost commented Apr 18, 2017

Would you be so kind as to add integration tests as well, to demonstrate it works as expected? See qa/workunits/ceph-disk. And let me know if you need help with it ;)

@alfredodeza (Contributor) commented:

Hey @wind0204, we are going to have a separate tool for deploying OSDs from LVM volumes; ceph-disk is not going to support this.

The documentation is at http://docs.ceph.com/ceph-lvm/master/index.html and we should have a release in the next few months. Let us know if you have any ideas/feedback.

Thank you!
