tests: adds a purge_cluster_collocated scenario #1215
Conversation
This PR needs ceph/ceph-build#605 for the …

    --extra-vars="ireallymeanit=yes fetch_directory={changedir}/fetch"
    purge_cluster_collocated: ansible-playbook -vv -i {changedir}/hosts {toxinidir}/{env:PLAYBOOK:site.yml.sample} --extra-vars="fetch_directory={changedir}/fetch"
    purge_cluster_collocated: testinfra -n 4 --sudo -v --connection=ansible --ansible-inventory={changedir}/hosts {toxinidir}/tests/functional/tests
do we have tests that check actual cluster purging?
what it's testing here is that after purging we can bring the cluster back up in the correct state.
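For context, the scenario's full command sequence in tox.ini could look roughly like this (a sketch assembled from the snippets in this PR; the exact ordering and surrounding settings are assumptions):

```ini
# Sketch only: factor-conditional commands for the purge scenario.
# Deploy the cluster, purge it, redeploy it, then re-run the functional tests.
commands=
    ansible-playbook -vv -i {changedir}/hosts {toxinidir}/{env:PLAYBOOK:site.yml.sample} --extra-vars="fetch_directory={changedir}/fetch"
    purge_cluster_collocated: ansible-playbook -vv -i {changedir}/hosts {toxinidir}/infrastructure-playbooks/purge-cluster.yml --extra-vars="ireallymeanit=yes fetch_directory={changedir}/fetch"
    purge_cluster_collocated: ansible-playbook -vv -i {changedir}/hosts {toxinidir}/{env:PLAYBOOK:site.yml.sample} --extra-vars="fetch_directory={changedir}/fetch"
    purge_cluster_collocated: testinfra -n 4 --sudo -v --connection=ansible --ansible-inventory={changedir}/hosts {toxinidir}/tests/functional/tests
```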
@@ -39,4 +41,9 @@ commands=
    ansible-playbook -vv -i {changedir}/hosts {toxinidir}/tests/functional/setup.yml

    testinfra -n 4 --sudo -v --connection=ansible --ansible-inventory={changedir}/hosts {toxinidir}/tests/functional/tests

    purge_cluster_collocated: ansible-playbook -vv -i {changedir}/hosts {toxinidir}/infrastructure-playbooks/purge-cluster.yml --extra-vars="ireallymeanit=yes fetch_directory={changedir}/fetch"
    purge_cluster_collocated: ansible-playbook -vv -i {changedir}/hosts {toxinidir}/{env:PLAYBOOK:site.yml.sample} --extra-vars="fetch_directory={changedir}/fetch"
this wasn't immediately apparent to me (two calls to ansible-playbook: one to purge, another one to set everything up again)
can you add a simple comment just before this?
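A comment like the one the reviewer is asking for might look like this (hypothetical wording; this assumes tox skips lines beginning with # inside commands):

```ini
commands=
    # purge the cluster, then run site.yml again to verify it redeploys cleanly
    purge_cluster_collocated: ansible-playbook -vv -i {changedir}/hosts {toxinidir}/infrastructure-playbooks/purge-cluster.yml --extra-vars="ireallymeanit=yes fetch_directory={changedir}/fetch"
    purge_cluster_collocated: ansible-playbook -vv -i {changedir}/hosts {toxinidir}/{env:PLAYBOOK:site.yml.sample} --extra-vars="fetch_directory={changedir}/fetch"
```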
Using failed_when will still throw an exception and stop the playbook if the file you're trying to include doesn't exist.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
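One common pattern that avoids this is to stat the file first and gate the include on its existence (a sketch only, not necessarily the fix in this PR; the file and task names here are made up):

```yaml
# Instead of failed_when on the include itself, check for the file first,
# so a missing file cannot abort the play.
- name: check whether the optional task file exists
  stat:
    path: "{{ playbook_dir }}/extra-tasks.yml"   # hypothetical path
  register: extra_tasks_file

- include: extra-tasks.yml
  when: extra_tasks_file.stat.exists
```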
In my testing, zapping the OSD disks deleted the journal partitions, making the 'zap ceph journal partitions' task fail because the partitions it had found earlier no longer existed. This moves the task that finds the journal partitions to after 'zap osd disks', so it only catches partitions ceph-disk might have missed.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
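The reordering described above might be sketched like this (task names and shell commands are illustrative, not the PR's exact code):

```yaml
# Zap the OSD disks first; with journal collocation the journals live on
# these disks and are usually destroyed along with the data partitions.
- name: zap osd disks
  shell: ceph-disk zap "{{ item }}"
  with_items: "{{ devices }}"

# Only now look for journal partitions, so we find just the leftovers that
# the zap missed, instead of partitions that no longer exist.
- name: get remaining ceph journal partitions
  shell: blkid -o device -t PARTLABEL="ceph journal"
  register: ceph_journal_partitions
  failed_when: false

- name: zap ceph journal partitions
  shell: sgdisk --zap-all "{{ item }}"
  with_items: "{{ ceph_journal_partitions.stdout_lines }}"
```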
This scenario brings up a 1 mon, 1 osd cluster using journal collocation, purges the cluster, and then verifies it can redeploy the cluster.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
Branch updated from e46475e to 0ce18da.
This test verifies that you can purge a cluster using journal collocation and then redeploy it. Some minor fixes to purge-cluster.yml were needed for the playbook to work.