adds purge support for the lvm_osds osd scenario #1797
Conversation
@@ -321,6 +326,22 @@
      - ceph_disk_present.rc == 0
      - ceph_data_partlabels.rc == 0

    - name: remove osd logical volumes
can you add a comment here so that we know this needs to go away once `ceph-volume lvm zap` becomes available?
sure
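A sketch of what that requested comment might look like on the new task (the task body and variable names here are illustrative, not the exact code from this PR):

```yaml
# NOTE: this manual lvremove should go away once
# `ceph-volume lvm zap` becomes available.
- name: remove osd logical volumes
  command: "lvremove -f {{ item.data_vg }}/{{ item.data }}"
  with_items: "{{ lvm_volumes }}"
```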
@jharriga would you mind taking a look at this please?
Force-pushed from 0aa0e6f to 6696cd0
@@ -77,15 +77,15 @@
    - not osd_auto_discovery
    - lvm_volumes|length == 0

-   - name: make sure the lvm_volumes variable is a dictionary
+   - name: make sure the lvm_volumes variable is a list
      fail:
        msg: "lvm_volumes: must be a dictionary"
this needs to be updated
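Presumably the fail message should be updated to match the new task name, along these lines (a sketch, not the final code):

```yaml
- name: make sure the lvm_volumes variable is a list
  fail:
    msg: "lvm_volumes: must be a list of dictionaries"
```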
The lvm_volumes variable is now a list of dictionaries that represent each OSD you'd like to deploy using ceph-volume. Each dictionary must have the following keys: data, journal and data_vg. Each dictionary can also optionally provide a journal_vg key.

The 'data' key represents the lv name used for the OSD and the 'data_vg' key is the vg name that the given lv resides on. The 'journal' key is either an lv, device or partition. The 'journal_vg' key is optional and must be the vg name for the journal lv if given. This key is mainly used for purging of the journal lv if purge-cluster.yml is run.

For example:

    lvm_volumes:
      - data: data_lv1
        journal: journal_lv1
        data_vg: vg1
        journal_vg: vg2
      - data: data_lv2
        journal: /dev/sdc
        data_vg: vg1

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
Force-pushed from 6696cd0 to 68203c5
This also adds a new testing scenario for purging lvm osds Signed-off-by: Andrew Schoen <aschoen@redhat.com>
Force-pushed from 68203c5 to bed5757
jenkins test luminous-ansible2.3-update_docker_cluster
The failure in
I just ran this against my cluster and it completed the purge with no errors.

One observation: I attempted to reinstall the cluster after the purge and it failed - no LVs.
@jharriga Yes, when purging it does

This is also how purge works for regular disks: the partitions get destroyed. The one difference, and where we diverge (although temporarily), is that there is no facility to create/re-create LVs, unlike ceph-disk which will create them again.
Just wanted to check that the current lack of LV create/re-create support is temporary.
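Until that support lands, the LVs have to be recreated by hand before redeploying. A minimal sketch using Ansible's lvg/lvol modules, with hypothetical device, vg, and lv names and sizes:

```yaml
# Hypothetical example: recreate the volume group and data LV
# that purge removed, before re-running the playbook.
- name: recreate the osd volume group
  lvg:
    vg: vg1
    pvs: /dev/sdb

- name: recreate the osd data logical volume
  lvol:
    vg: vg1
    lv: data_lv1
    size: 50g
```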
To add this support, restructuring of `lvm_volumes` was necessary. The new format provides more flexibility in how you define your lvm-based OSDs.

This also adds a `purge_lvm_osds` testing scenario to ensure it works correctly.

Fixes: #1787