ceph-disk: enable --runtime ceph-osd systemd units
If ceph-osd@.service is enabled for a given device (say /dev/sdb1 for osd.3), ceph-osd@3.service will race with ceph-disk@dev-sdb1.service at boot time. Enabling ceph-osd@3.service is not necessary at boot time, because ceph-disk@dev-sdb1.service calls "ceph-disk activate /dev/sdb1", which calls "systemctl start ceph-osd@3".

The systemctl enable/disable calls for ceph-osd@.service made by "ceph-disk activate" are changed to add the --runtime option, so that ceph-osd units are lost after a reboot. They are recreated when "ceph-disk activate" is called at boot time, so that "systemctl stop ceph" knows which ceph-osd@.service units to stop when a script or sysadmin wants to stop all ceph services.

Before enabling ceph-osd@.service (which happens at every boot), make sure any permanent enablement in /etc/systemd is removed, so that only the one added by "systemctl enable --runtime" in /run/systemd remains. This matters when upgrading an existing cluster: otherwise the situation would be even worse than before, because ceph-disk@.service would race against two ceph-osd@.service enablements (one in /etc/systemd and one in /run/systemd).

Fixes: http://tracker.ceph.com/issues/17889
Signed-off-by: Loic Dachary <loic@dachary.org>
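The sequence described above can be sketched as the systemctl commands "ceph-disk activate" would effectively run after this change. This is an illustrative sketch, not the literal code of ceph-disk; the unit names assume the /dev/sdb1 / osd.3 example from the message.

```shell
# Illustrative sketch of the post-change activation sequence for osd.3
# (assumed example device /dev/sdb1, as in the commit message above).

# Remove any permanent enablement left under /etc/systemd by an older
# version, so it cannot race with the runtime-enabled unit at boot.
systemctl disable ceph-osd@3

# Enable the unit for this boot only: the symlink lives under
# /run/systemd and vanishes at reboot, so ceph-osd@3.service will not
# race with ceph-disk@dev-sdb1.service at the next boot.
systemctl enable --runtime ceph-osd@3

# Start the OSD now; "systemctl stop ceph" can still find and stop it
# thanks to the runtime enablement.
systemctl start ceph-osd@3
```

The runtime enablement is recreated on every boot by "ceph-disk activate", so nothing persistent needs to exist in /etc/systemd.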
Comment on 539385b:
Is it possible this is causing issues for directory-based OSDs, for which ceph-disk@ instances do not exist?