Currently, we can define a crush location for each host, but only crush roots and crush rules are created (see https://github.com/ceph/ceph-ansible/blob/master/roles/ceph-mon/tasks/crush_rules.yml). This pull request automates the remaining routines to provide a complete solution:
1) Create the rack-type crush bucket defined in {{ ceph_crush_rack }} for each OSD host. If it is not defined by the user, a rack named 'default_rack_{{ ceph_crush_root }}' is added and used in the next steps.
2) Move the rack-type crush buckets defined in {{ ceph_crush_rack }} into the crush roots defined in {{ ceph_crush_root }} of each OSD host.
3) Move each OSD host into the rack-type crush bucket defined in its {{ ceph_crush_rack }}.
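The three steps above map onto plain `ceph osd crush` commands. A minimal task sketch, assuming the monitor group is named `mons` and that `ceph_crush_rack` / `ceph_crush_root` are set per OSD host (the variable names and delegation target are illustrative, not the actual PR implementation):

```yaml
# Hypothetical sketch of the three steps, delegated to one monitor.
- name: 1) create the rack-type crush bucket
  command: >
    ceph osd crush add-bucket
    {{ ceph_crush_rack | default('default_rack_' + ceph_crush_root) }} rack
  delegate_to: "{{ groups['mons'][0] }}"

- name: 2) move the rack bucket under its crush root
  command: >
    ceph osd crush move
    {{ ceph_crush_rack | default('default_rack_' + ceph_crush_root) }}
    root={{ ceph_crush_root }}
  delegate_to: "{{ groups['mons'][0] }}"

- name: 3) move the host bucket into the rack
  command: >
    ceph osd crush move {{ ansible_hostname }}
    rack={{ ceph_crush_rack | default('default_rack_' + ceph_crush_root) }}
  delegate_to: "{{ groups['mons'][0] }}"
```

Both `add-bucket` and `move` are idempotent-friendly in practice, but a real task would add `changed_when`/`failed_when` handling around reruns.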
If the user customizes {{ osd_crush_location }} with variable names other than {{ ceph_crush_rack }} and {{ ceph_crush_root }}, the solution will obviously fail. Host-type buckets are also assumed to be named after the host's {{ ansible_hostname }}. As a weak workaround we can add warning comments in mons.yml, but for a proper solution we need to parse each host's {{ osd_crush_location }} for "root=([^\s]+)" and similar patterns. Do you know any examples of doing such things in Ansible scripts? Another problem is that {{ osd_crush_location }} is defined in osds.yml and is therefore not available while crush_rules.yml runs on the mons. Maybe we should move these crush routines to the osds? I assume we would then have to fetch the admin key there temporarily to execute the crush commands, right?
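On the parsing question: Ansible's `regex_search` filter can pull the `root=`/`rack=` values out of a free-form crush location string. A sketch, assuming {{ osd_crush_location }} looks like "root=ssd rack=r1 host=osd01" (the fact names are made up for illustration):

```yaml
# Illustrative only: extract root= and rack= from osd_crush_location.
# regex_search with a group argument returns a list of matched groups.
- name: parse crush root and rack out of osd_crush_location
  set_fact:
    parsed_crush_root: "{{ osd_crush_location | regex_search('root=(\\S+)', '\\1') | first }}"
    parsed_crush_rack: "{{ osd_crush_location | regex_search('rack=(\\S+)', '\\1') | first }}"
```

These facts could then replace the hard-coded {{ ceph_crush_root }} / {{ ceph_crush_rack }} variable names, so a user-customized location string still works.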
If we move execution to the OSD hosts (see 1), it will be easy to parallelize by removing 'run_once: true' and the iteration over the osd group's hosts, but I have concerns about whether this will work properly on large clusters where every host tries to execute crush commands simultaneously. What do you think about it?
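One way to get parallelism without letting every host hit the monitors at once is to bound the batch size with the play-level `serial` keyword. A sketch, assuming the crush tasks live in a hypothetical `crush_location.yml` task file:

```yaml
# Run the crush tasks on OSD hosts ten at a time rather than all at once.
# The batch size is arbitrary and would need tuning per cluster size.
- hosts: osds
  serial: 10
  tasks:
    - import_tasks: crush_location.yml  # hypothetical task file
```

This keeps per-host parallelism for everything else in the play while capping concurrent `ceph osd crush` invocations.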
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Hello,
Draft implementation is in #2194.
The questions and concerns above still stand.