
qa: add openstack functional test to CI #1346

Merged
merged 6 commits into master Sep 5, 2018

Conversation

tserong
Member

@tserong tserong commented Sep 5, 2018

If we don't initially delete the openstack pools, it means QA infra
can pre-create these pools with the right number of PGs for the test
environment.

Fixes: #1322
Signed-off-by: Tim Serong <tserong@suse.com>
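The idea in the description can be sketched as follows: QA infra pre-creates the openstack pools with a pg_num sized for the test environment, so the functional test finds them already in place rather than creating them with defaults that may be too large. This is a minimal illustrative sketch, not the actual QA infra code; the pool names and the pg_num value of 16 are assumptions.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: pre-create the openstack pools with a small,
# environment-appropriate pg_num before the functest runs.
# Pool names and pg_num are illustrative assumptions.

pre_create_openstack_pools() {
    local pg_num=$1
    local pool
    for pool in cloud-backups cloud-volumes cloud-images cloud-vms; do
        # On a real cluster this would run:
        #   ceph osd pool create "$pool" "$pg_num"
        echo "ceph osd pool create $pool $pg_num"
    done
}

pre_create_openstack_pools 16
```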

@smithfarm
Contributor

@tserong In the interests of expediency, I pushed the related qa changes into this PR.

@@ -42,6 +42,7 @@ function pre_create_pools {
sleep 10
POOLS="write_test"
test "$MDS" && POOLS+=" cephfs_data cephfs_metadata"
test "$OPENSTACK" && POOLS+=" smoketestcloud-backups smoketestcloud-volumes smoketestcloud-images smoketestcloud-vms"
Contributor

@tserong Prefix is set to "smoketest" in the functests right?
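The diff hunk above builds a space-separated POOLS string conditionally. A runnable sketch of how such a string could then be consumed is below; the creation loop body is an assumption for illustration, not the actual pre_create_pools implementation, and the `echo` stands in for the real ceph call.

```shell
#!/usr/bin/env bash
# Sketch of building and consuming a space-separated POOLS string like the
# one in the diff above. MDS/OPENSTACK flags and the loop body are
# illustrative assumptions.
POOLS="write_test"
MDS=1
OPENSTACK=1
test "$MDS" && POOLS+=" cephfs_data cephfs_metadata"
test "$OPENSTACK" && POOLS+=" smoketestcloud-backups smoketestcloud-volumes smoketestcloud-images smoketestcloud-vms"
for pool in $POOLS; do
    echo "ceph osd pool create $pool 16"   # placeholder for the real ceph call
done
```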

@smithfarm smithfarm force-pushed the wip-openstack-no-initial-clean branch from 27f83a5 to 023873e on September 5, 2018 10:09
@smithfarm
Contributor

@susebot run teuthology

@smithfarm smithfarm force-pushed the wip-openstack-no-initial-clean branch 2 times, most recently from 66575e5 to 550653a on September 5, 2018 10:52
@susebot
Collaborator

susebot commented Sep 5, 2018

Commit 023873e is NOT OK for suite suse:tier1.
Check tests results in the Jenkins job: http://158.69.90.90:8080/job/deepsea-pr/124/

@smithfarm
Contributor

@susebot run teuthology

@susebot
Collaborator

susebot commented Sep 5, 2018

Commit 550653a is NOT OK for suite suse:tier1.
Check tests results in the Jenkins job: http://158.69.90.90:8080/job/deepsea-pr/125/

Use this option when running ceph.functests.1node.openstack, to
pre-create the pools it needs.

Signed-off-by: Nathan Cutler <ncutler@suse.com>
Signed-off-by: Nathan Cutler <ncutler@suse.com>
Fixes: #1349
Signed-off-by: Nathan Cutler <ncutler@suse.com>
Signed-off-by: Nathan Cutler <ncutler@suse.com>
@smithfarm smithfarm force-pushed the wip-openstack-no-initial-clean branch from 5e743d9 to 349b7c0 on September 5, 2018 11:59
@smithfarm
Contributor

@susebot run teuthology

@susebot
Collaborator

susebot commented Sep 5, 2018

Commit 349b7c0 is NOT OK for suite suse:tier1.
Check tests results in the Jenkins job: http://158.69.90.90:8080/job/deepsea-pr/127/

@smithfarm
Contributor

smithfarm commented Sep 5, 2018

@tserong So, I now have a test (not merged yet) which deploys the testing cluster with pre-creation of the following pools:

  • smoketest-cloud-backups
  • smoketest-cloud-volumes
  • smoketest-cloud-images
  • smoketest-cloud-vms

However, ceph.functests.1node.openstack still fails because it tries to create pools without the prefix:

2018-09-05T13:20:43.822 INFO:teuthology.orchestra.run.target149202163224.stdout:----------
2018-09-05T13:20:43.822 INFO:teuthology.orchestra.run.target149202163224.stdout:          ID: apply ceph.openstack
2018-09-05T13:20:43.822 INFO:teuthology.orchestra.run.target149202163224.stdout:    Function: salt.state
2018-09-05T13:20:43.822 INFO:teuthology.orchestra.run.target149202163224.stdout:      Result: False
2018-09-05T13:20:43.822 INFO:teuthology.orchestra.run.target149202163224.stdout:     Comment: Run failed on minions: target149202163224.teuthology
2018-09-05T13:20:43.822 INFO:teuthology.orchestra.run.target149202163224.stdout:     Started: 13:20:07.099932
2018-09-05T13:20:43.823 INFO:teuthology.orchestra.run.target149202163224.stdout:    Duration: 9753.568 ms
2018-09-05T13:20:43.823 INFO:teuthology.orchestra.run.target149202163224.stdout:     Changes:
2018-09-05T13:20:43.823 INFO:teuthology.orchestra.run.target149202163224.stdout:              target149202163224.teuthology:
2018-09-05T13:20:43.823 INFO:teuthology.orchestra.run.target149202163224.stdout:                Name: ceph osd pool create cloud-images 128 - Function: cmd.run - Result: Changed Started: - 13:20:08.651968 Duration: 1023.903 ms
2018-09-05T13:20:43.823 INFO:teuthology.orchestra.run.target149202163224.stdout:                Name: ceph osd pool application enable cloud-images rbd || : - Function: cmd.run - Result: Changed Started: - 13:20:09.701576 Duration: 1120.76 ms
2018-09-05T13:20:43.823 INFO:teuthology.orchestra.run.target149202163224.stdout:                Name: /srv/salt/ceph/openstack/cache/glance.keyring - Function: file.managed - Result: Changed Started: - 13:20:10.835202 Duration: 84.564 ms
2018-09-05T13:20:43.824 INFO:teuthology.orchestra.run.target149202163224.stdout:                Name: ceph auth add client.glance -i /srv/salt/ceph/openstack/cache/glance.keyring - Function: cmd.run - Result: Changed Started: - 13:20:10.939214 Duration: 348.518 ms
2018-09-05T13:20:43.824 INFO:teuthology.orchestra.run.target149202163224.stdout:              ----------
2018-09-05T13:20:43.824 INFO:teuthology.orchestra.run.target149202163224.stdout:                        ID: cinder pool
2018-09-05T13:20:43.824 INFO:teuthology.orchestra.run.target149202163224.stdout:                  Function: cmd.run
2018-09-05T13:20:43.824 INFO:teuthology.orchestra.run.target149202163224.stdout:                      Name: ceph osd pool create cloud-volumes 128
2018-09-05T13:20:43.824 INFO:teuthology.orchestra.run.target149202163224.stdout:                    Result: False
2018-09-05T13:20:43.825 INFO:teuthology.orchestra.run.target149202163224.stdout:                   Comment: Command "ceph osd pool create cloud-volumes 128" run
2018-09-05T13:20:43.825 INFO:teuthology.orchestra.run.target149202163224.stdout:                   Started: 13:20:11.288480
2018-09-05T13:20:43.825 INFO:teuthology.orchestra.run.target149202163224.stdout:                  Duration: 1389.054 ms
2018-09-05T13:20:43.825 INFO:teuthology.orchestra.run.target149202163224.stdout:                   Changes:
2018-09-05T13:20:43.825 INFO:teuthology.orchestra.run.target149202163224.stdout:                            ----------
2018-09-05T13:20:43.825 INFO:teuthology.orchestra.run.target149202163224.stdout:                            pid:
2018-09-05T13:20:43.826 INFO:teuthology.orchestra.run.target149202163224.stdout:                                56561
2018-09-05T13:20:43.826 INFO:teuthology.orchestra.run.target149202163224.stdout:                            retcode:
2018-09-05T13:20:43.826 INFO:teuthology.orchestra.run.target149202163224.stdout:                                34
2018-09-05T13:20:43.826 INFO:teuthology.orchestra.run.target149202163224.stdout:                            stderr:
2018-09-05T13:20:43.826 INFO:teuthology.orchestra.run.target149202163224.stdout:                                Error ERANGE:  pg_num 128 size 2 would mean 842 total pgs, which exceeds max 800 (mon_max_pg_per_osd 200 * num_in_osds 4)
2018-09-05T13:20:43.826 INFO:teuthology.orchestra.run.target149202163224.stdout:                            stdout:
2018-09-05T13:20:43.826 INFO:teuthology.orchestra.run.target149202163224.stdout:                Name: ceph osd pool application enable cloud-volumes rbd || : - Function: cmd.run - Result: Changed Started: - 13:20:12.706459 Duration: 328.895 ms
2018-09-05T13:20:43.827 INFO:teuthology.orchestra.run.target149202163224.stdout:                Name: /srv/salt/ceph/openstack/cache/cinder.keyring - Function: file.managed - Result: Changed Started: - 13:20:13.035936 Duration: 66.728 ms
2018-09-05T13:20:43.827 INFO:teuthology.orchestra.run.target149202163224.stdout:                Name: ceph auth add client.cinder -i /srv/salt/ceph/openstack/cache/cinder.keyring - Function: cmd.run - Result: Changed Started: - 13:20:13.130548 Duration: 346.277 ms
2018-09-05T13:20:43.827 INFO:teuthology.orchestra.run.target149202163224.stdout:              ----------
2018-09-05T13:20:43.827 INFO:teuthology.orchestra.run.target149202163224.stdout:                        ID: cinder-backup pool
2018-09-05T13:20:43.827 INFO:teuthology.orchestra.run.target149202163224.stdout:                  Function: cmd.run
2018-09-05T13:20:43.827 INFO:teuthology.orchestra.run.target149202163224.stdout:                      Name: ceph osd pool create cloud-backups 128
2018-09-05T13:20:43.828 INFO:teuthology.orchestra.run.target149202163224.stdout:                    Result: False
2018-09-05T13:20:43.828 INFO:teuthology.orchestra.run.target149202163224.stdout:                   Comment: Command "ceph osd pool create cloud-backups 128" run
2018-09-05T13:20:43.828 INFO:teuthology.orchestra.run.target149202163224.stdout:                   Started: 13:20:13.477277
2018-09-05T13:20:43.828 INFO:teuthology.orchestra.run.target149202163224.stdout:                  Duration: 1181.821 ms
2018-09-05T13:20:43.828 INFO:teuthology.orchestra.run.target149202163224.stdout:                   Changes:
2018-09-05T13:20:43.828 INFO:teuthology.orchestra.run.target149202163224.stdout:                            ----------
2018-09-05T13:20:43.829 INFO:teuthology.orchestra.run.target149202163224.stdout:                            pid:
2018-09-05T13:20:43.829 INFO:teuthology.orchestra.run.target149202163224.stdout:                                56682
2018-09-05T13:20:43.829 INFO:teuthology.orchestra.run.target149202163224.stdout:                            retcode:
2018-09-05T13:20:43.829 INFO:teuthology.orchestra.run.target149202163224.stdout:                                34
2018-09-05T13:20:43.830 INFO:teuthology.orchestra.run.target149202163224.stdout:                            stderr:
2018-09-05T13:20:43.830 INFO:teuthology.orchestra.run.target149202163224.stdout:                                Error ERANGE:  pg_num 128 size 2 would mean 842 total pgs, which exceeds max 800 (mon_max_pg_per_osd 200 * num_in_osds 4)
2018-09-05T13:20:43.830 INFO:teuthology.orchestra.run.target149202163224.stdout:                            stdout:
2018-09-05T13:20:43.830 INFO:teuthology.orchestra.run.target149202163224.stdout:                Name: ceph osd pool application enable cloud-backups rbd || : - Function: cmd.run - Result: Changed Started: - 13:20:14.678635 Duration: 306.827 ms
2018-09-05T13:20:43.830 INFO:teuthology.orchestra.run.target149202163224.stdout:                Name: /srv/salt/ceph/openstack/cache/cinder-backup.keyring - Function: file.managed - Result: Changed Started: - 13:20:14.985918 Duration: 45.941 ms
2018-09-05T13:20:43.830 INFO:teuthology.orchestra.run.target149202163224.stdout:                Name: ceph auth add client.cinder-backup -i /srv/salt/ceph/openstack/cache/cinder-backup.keyring - Function: cmd.run - Result: Changed Started: - 13:20:15.050169 Duration: 322.157 ms
2018-09-05T13:20:43.830 INFO:teuthology.orchestra.run.target149202163224.stdout:              ----------
2018-09-05T13:20:43.831 INFO:teuthology.orchestra.run.target149202163224.stdout:                        ID: nova pool
2018-09-05T13:20:43.831 INFO:teuthology.orchestra.run.target149202163224.stdout:                  Function: cmd.run
2018-09-05T13:20:43.831 INFO:teuthology.orchestra.run.target149202163224.stdout:                      Name: ceph osd pool create cloud-vms 128
2018-09-05T13:20:43.831 INFO:teuthology.orchestra.run.target149202163224.stdout:                    Result: False
2018-09-05T13:20:43.831 INFO:teuthology.orchestra.run.target149202163224.stdout:                   Comment: Command "ceph osd pool create cloud-vms 128" run
2018-09-05T13:20:43.831 INFO:teuthology.orchestra.run.target149202163224.stdout:                   Started: 13:20:15.372821
2018-09-05T13:20:43.832 INFO:teuthology.orchestra.run.target149202163224.stdout:                  Duration: 1112.935 ms
2018-09-05T13:20:43.832 INFO:teuthology.orchestra.run.target149202163224.stdout:                   Changes:
2018-09-05T13:20:43.832 INFO:teuthology.orchestra.run.target149202163224.stdout:                            ----------
2018-09-05T13:20:43.832 INFO:teuthology.orchestra.run.target149202163224.stdout:                            pid:
2018-09-05T13:20:43.832 INFO:teuthology.orchestra.run.target149202163224.stdout:                                56794
2018-09-05T13:20:43.832 INFO:teuthology.orchestra.run.target149202163224.stdout:                            retcode:
2018-09-05T13:20:43.832 INFO:teuthology.orchestra.run.target149202163224.stdout:                                34
2018-09-05T13:20:43.833 INFO:teuthology.orchestra.run.target149202163224.stdout:                            stderr:
2018-09-05T13:20:43.833 INFO:teuthology.orchestra.run.target149202163224.stdout:                                Error ERANGE:  pg_num 128 size 2 would mean 842 total pgs, which exceeds max 800 (mon_max_pg_per_osd 200 * num_in_osds 4)
2018-09-05T13:20:43.833 INFO:teuthology.orchestra.run.target149202163224.stdout:                            stdout:
2018-09-05T13:20:43.833 INFO:teuthology.orchestra.run.target149202163224.stdout:                Name: ceph osd pool application enable cloud-vms rbd || : - Function: cmd.run - Result: Changed Started: - 13:20:16.507723 Duration: 315.254 ms
2018-09-05T13:20:43.833 INFO:teuthology.orchestra.run.target149202163224.stdout:
2018-09-05T13:20:43.833 INFO:teuthology.orchestra.run.target149202163224.stdout:              Summary for target149202163224.teuthology
2018-09-05T13:20:43.833 INFO:teuthology.orchestra.run.target149202163224.stdout:              -------------
2018-09-05T13:20:43.834 INFO:teuthology.orchestra.run.target149202163224.stdout:              Succeeded: 11 (changed=14)
2018-09-05T13:20:43.834 INFO:teuthology.orchestra.run.target149202163224.stdout:              Failed:     3
2018-09-05T13:20:43.834 INFO:teuthology.orchestra.run.target149202163224.stdout:              -------------
2018-09-05T13:20:43.834 INFO:teuthology.orchestra.run.target149202163224.stdout:              Total states run:     14
2018-09-05T13:20:43.834 INFO:teuthology.orchestra.run.target149202163224.stdout:              Total run time:    7.994 s
2018-09-05T13:20:43.834 INFO:teuthology.orchestra.run.target149202163224.stdout:  Name: verify users - Function: salt.state - Result: Changed Started: - 13:20:16.853992 Duration: 1584.631 ms
2018-09-05T13:20:43.834 INFO:teuthology.orchestra.run.target149202163224.stdout:----------
2018-09-05T13:20:43.835 INFO:teuthology.orchestra.run.target149202163224.stdout:          ID: verify pools
2018-09-05T13:20:43.835 INFO:teuthology.orchestra.run.target149202163224.stdout:    Function: salt.state
2018-09-05T13:20:43.835 INFO:teuthology.orchestra.run.target149202163224.stdout:      Result: False
2018-09-05T13:20:43.835 INFO:teuthology.orchestra.run.target149202163224.stdout:     Comment: Run failed on minions: target149202163224.teuthology
2018-09-05T13:20:43.835 INFO:teuthology.orchestra.run.target149202163224.stdout:     Started: 13:20:18.438923
2018-09-05T13:20:43.835 INFO:teuthology.orchestra.run.target149202163224.stdout:    Duration: 780.409 ms
2018-09-05T13:20:43.836 INFO:teuthology.orchestra.run.target149202163224.stdout:     Changes:
2018-09-05T13:20:43.836 INFO:teuthology.orchestra.run.target149202163224.stdout:              target149202163224.teuthology:
2018-09-05T13:20:43.836 INFO:teuthology.orchestra.run.target149202163224.stdout:                Name: rados lspools | grep '^cloud-images$' >/dev/null - Function: cmd.run - Result: Changed Started: - 13:20:18.847147 Duration: 104.929 ms
2018-09-05T13:20:43.836 INFO:teuthology.orchestra.run.target149202163224.stdout:              ----------
2018-09-05T13:20:43.836 INFO:teuthology.orchestra.run.target149202163224.stdout:                        ID: verify cloud-volumes exists
2018-09-05T13:20:43.836 INFO:teuthology.orchestra.run.target149202163224.stdout:                  Function: cmd.run
2018-09-05T13:20:43.836 INFO:teuthology.orchestra.run.target149202163224.stdout:                      Name: rados lspools | grep '^cloud-volumes$' >/dev/null
2018-09-05T13:20:43.837 INFO:teuthology.orchestra.run.target149202163224.stdout:                    Result: False
2018-09-05T13:20:43.837 INFO:teuthology.orchestra.run.target149202163224.stdout:                   Comment: Command "rados lspools | grep '^cloud-volumes$' >/dev/null" run
2018-09-05T13:20:43.837 INFO:teuthology.orchestra.run.target149202163224.stdout:                   Started: 13:20:18.952828
2018-09-05T13:20:43.837 INFO:teuthology.orchestra.run.target149202163224.stdout:                  Duration: 76.72 ms
2018-09-05T13:20:43.837 INFO:teuthology.orchestra.run.target149202163224.stdout:                   Changes:
2018-09-05T13:20:43.837 INFO:teuthology.orchestra.run.target149202163224.stdout:                            ----------
2018-09-05T13:20:43.837 INFO:teuthology.orchestra.run.target149202163224.stdout:                            pid:
2018-09-05T13:20:43.838 INFO:teuthology.orchestra.run.target149202163224.stdout:                                56981
2018-09-05T13:20:43.838 INFO:teuthology.orchestra.run.target149202163224.stdout:                            retcode:
2018-09-05T13:20:43.838 INFO:teuthology.orchestra.run.target149202163224.stdout:                                1
2018-09-05T13:20:43.838 INFO:teuthology.orchestra.run.target149202163224.stdout:                            stderr:
2018-09-05T13:20:43.838 INFO:teuthology.orchestra.run.target149202163224.stdout:                            stdout:
2018-09-05T13:20:43.838 INFO:teuthology.orchestra.run.target149202163224.stdout:              ----------
2018-09-05T13:20:43.838 INFO:teuthology.orchestra.run.target149202163224.stdout:                        ID: verify cloud-backups exists
2018-09-05T13:20:43.839 INFO:teuthology.orchestra.run.target149202163224.stdout:                  Function: cmd.run
2018-09-05T13:20:43.839 INFO:teuthology.orchestra.run.target149202163224.stdout:                      Name: rados lspools | grep '^cloud-backups$' >/dev/null
2018-09-05T13:20:43.839 INFO:teuthology.orchestra.run.target149202163224.stdout:                    Result: False
2018-09-05T13:20:43.839 INFO:teuthology.orchestra.run.target149202163224.stdout:                   Comment: Command "rados lspools | grep '^cloud-backups$' >/dev/null" run
2018-09-05T13:20:43.839 INFO:teuthology.orchestra.run.target149202163224.stdout:                   Started: 13:20:19.029960
2018-09-05T13:20:43.839 INFO:teuthology.orchestra.run.target149202163224.stdout:                  Duration: 87.481 ms
2018-09-05T13:20:43.839 INFO:teuthology.orchestra.run.target149202163224.stdout:                   Changes:
2018-09-05T13:20:43.840 INFO:teuthology.orchestra.run.target149202163224.stdout:                            ----------
2018-09-05T13:20:43.840 INFO:teuthology.orchestra.run.target149202163224.stdout:                            pid:
2018-09-05T13:20:43.840 INFO:teuthology.orchestra.run.target149202163224.stdout:                                57005
2018-09-05T13:20:43.840 INFO:teuthology.orchestra.run.target149202163224.stdout:                            retcode:
2018-09-05T13:20:43.840 INFO:teuthology.orchestra.run.target149202163224.stdout:                                1
2018-09-05T13:20:43.840 INFO:teuthology.orchestra.run.target149202163224.stdout:                            stderr:
2018-09-05T13:20:43.840 INFO:teuthology.orchestra.run.target149202163224.stdout:                            stdout:
2018-09-05T13:20:43.841 INFO:teuthology.orchestra.run.target149202163224.stdout:              ----------
2018-09-05T13:20:43.841 INFO:teuthology.orchestra.run.target149202163224.stdout:                        ID: verify cloud-vms exists
2018-09-05T13:20:43.841 INFO:teuthology.orchestra.run.target149202163224.stdout:                  Function: cmd.run
2018-09-05T13:20:43.841 INFO:teuthology.orchestra.run.target149202163224.stdout:                      Name: rados lspools | grep '^cloud-vms$' >/dev/null
2018-09-05T13:20:43.841 INFO:teuthology.orchestra.run.target149202163224.stdout:                    Result: False
2018-09-05T13:20:43.841 INFO:teuthology.orchestra.run.target149202163224.stdout:                   Comment: Command "rados lspools | grep '^cloud-vms$' >/dev/null" run
2018-09-05T13:20:43.841 INFO:teuthology.orchestra.run.target149202163224.stdout:                   Started: 13:20:19.117939
2018-09-05T13:20:43.842 INFO:teuthology.orchestra.run.target149202163224.stdout:                  Duration: 80.235 ms
2018-09-05T13:20:43.842 INFO:teuthology.orchestra.run.target149202163224.stdout:                   Changes:
2018-09-05T13:20:43.842 INFO:teuthology.orchestra.run.target149202163224.stdout:                            ----------
2018-09-05T13:20:43.842 INFO:teuthology.orchestra.run.target149202163224.stdout:                            pid:
2018-09-05T13:20:43.842 INFO:teuthology.orchestra.run.target149202163224.stdout:                                57029
2018-09-05T13:20:43.842 INFO:teuthology.orchestra.run.target149202163224.stdout:                            retcode:
2018-09-05T13:20:43.843 INFO:teuthology.orchestra.run.target149202163224.stdout:                                1
2018-09-05T13:20:43.843 INFO:teuthology.orchestra.run.target149202163224.stdout:                            stderr:
2018-09-05T13:20:43.843 INFO:teuthology.orchestra.run.target149202163224.stdout:                            stdout:
2018-09-05T13:20:43.843 INFO:teuthology.orchestra.run.target149202163224.stdout:
2018-09-05T13:20:43.843 INFO:teuthology.orchestra.run.target149202163224.stdout:              Summary for target149202163224.teuthology
2018-09-05T13:20:43.843 INFO:teuthology.orchestra.run.target149202163224.stdout:              ------------
2018-09-05T13:20:43.843 INFO:teuthology.orchestra.run.target149202163224.stdout:              Succeeded: 1 (changed=4)
2018-09-05T13:20:43.844 INFO:teuthology.orchestra.run.target149202163224.stdout:              Failed:    3
2018-09-05T13:20:43.844 INFO:teuthology.orchestra.run.target149202163224.stdout:              ------------
2018-09-05T13:20:43.844 INFO:teuthology.orchestra.run.target149202163224.stdout:              Total states run:     4
2018-09-05T13:20:43.844 INFO:teuthology.orchestra.run.target149202163224.stdout:              Total run time: 349.365 ms

Is that expected? Should I pre-create the non-prefixed pools as well?

Note: this is not the test run via the "susebot" comment(s) above; I ran it in a different teuthology instance. The whole log is: http://149.202.174.223/ubuntu-2018-09-05_12:51:14-suse:tier1-ses6---basic-openstack/60/teuthology.log
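For context, the ERANGE failures in the log above come from the mon's PG budget check: creating a pool adds pg_num * size PG instances, and the create is rejected when the projected cluster total would exceed mon_max_pg_per_osd * num_in_osds. A back-of-the-envelope reproduction of the arithmetic (not the actual mon code; the existing-PG count is inferred from the log):

```shell
#!/usr/bin/env bash
# Reproduce the ERANGE arithmetic from the log above. Not the actual mon
# implementation; "existing" is inferred from the log (842 projected minus
# the 128*2 the new pool would add).
mon_max_pg_per_osd=200
num_in_osds=4
pg_num=128
size=2
existing=586

limit=$(( mon_max_pg_per_osd * num_in_osds ))   # 800
projected=$(( existing + pg_num * size ))       # 842
if [ "$projected" -gt "$limit" ]; then
    echo "Error ERANGE: pg_num $pg_num size $size would mean $projected total pgs, which exceeds max $limit"
fi
```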

@tserong
Member Author

tserong commented Sep 5, 2018

Yeah, we need to create the non-prefixed pools too; /srv/salt/ceph/functests/1node/openstack/init.sls exercises both the prefixed and non-prefixed pools, to ensure both paths succeed.
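So the pre-creation step needs to cover both pool sets. A sketch of pre-creating both, wrapped in a function for reuse; pg_num and the exact prefixed spelling are assumptions (the thread shows both "smoketestcloud-*" and "smoketest-cloud-*" forms):

```shell
#!/usr/bin/env bash
# Sketch: pre-create both the non-prefixed and prefixed pool sets that the
# functest exercises. The "smoketest-" separator and pg_num 16 are
# illustrative assumptions; echo stands in for the real ceph call.
pre_create_both_sets() {
    local prefix pool
    for prefix in "" "smoketest-"; do
        for pool in cloud-backups cloud-volumes cloud-images cloud-vms; do
            echo "ceph osd pool create ${prefix}${pool} 16"
        done
    done
}

pre_create_both_sets
```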

@smithfarm
Contributor

smithfarm commented Sep 5, 2018

Note: once the openstack functests are passing, I have a SUSE/ceph.git wip branch - https://github.com/SUSE/ceph/tree/wip-qa-openstack - that needs to be merged together with this PR.

Signed-off-by: Nathan Cutler <ncutler@suse.com>
@smithfarm
Contributor

2018-09-05T15:19:27.507 INFO:teuthology.orchestra.run.target149202171022.stdout:  Name: make sure ceph cluster is healthy - Function: salt.state - Result: Changed Started: - 15:18:44.329511 Duration: 6933.875 ms
2018-09-05T15:19:27.507 INFO:teuthology.orchestra.run.target149202171022.stdout:  Name: apply ceph.openstack - Function: salt.state - Result: Changed Started: - 15:18:51.263761 Duration: 8151.888 ms
2018-09-05T15:19:27.507 INFO:teuthology.orchestra.run.target149202171022.stdout:  Name: verify users - Function: salt.state - Result: Changed Started: - 15:18:59.416068 Duration: 1507.183 ms
2018-09-05T15:19:27.507 INFO:teuthology.orchestra.run.target149202171022.stdout:  Name: verify pools - Function: salt.state - Result: Changed Started: - 15:19:00.923578 Duration: 725.534 ms
2018-09-05T15:19:27.507 INFO:teuthology.orchestra.run.target149202171022.stdout:  Name: clean environment at end - Function: salt.state - Result: Changed Started: - 15:19:01.649328 Duration: 7232.518 ms
2018-09-05T15:19:27.507 INFO:teuthology.orchestra.run.target149202171022.stdout:  Name: apply ceph.openstack (prefix=smoketest) - Function: salt.state - Result: Changed Started: - 15:19:08.882399 Duration: 8776.506 ms
2018-09-05T15:19:27.508 INFO:teuthology.orchestra.run.target149202171022.stdout:  Name: verify users (prefix=smoketest) - Function: salt.state - Result: Changed Started: - 15:19:17.659228 Duration: 1498.152 ms
2018-09-05T15:19:27.508 INFO:teuthology.orchestra.run.target149202171022.stdout:  Name: verify pools (prefix=smoketest) - Function: salt.state - Result: Changed Started: - 15:19:19.157859 Duration: 765.682 ms
2018-09-05T15:19:27.508 INFO:teuthology.orchestra.run.target149202171022.stdout:  Name: clean environment at end (prefix=smoketest) - Function: salt.state - Result: Changed Started: - 15:19:19.924144 Duration: 7136.902 ms
2018-09-05T15:19:27.508 INFO:teuthology.orchestra.run.target149202171022.stdout:
2018-09-05T15:19:27.508 INFO:teuthology.orchestra.run.target149202171022.stdout:Summary for target149202171022.teuthology_master
2018-09-05T15:19:27.509 INFO:teuthology.orchestra.run.target149202171022.stdout:-------------
2018-09-05T15:19:27.509 INFO:teuthology.orchestra.run.target149202171022.stdout:Succeeded: 80 (changed=79)
2018-09-05T15:19:27.509 INFO:teuthology.orchestra.run.target149202171022.stdout:Failed:     0
2018-09-05T15:19:27.509 INFO:teuthology.orchestra.run.target149202171022.stdout:-------------
2018-09-05T15:19:27.509 INFO:teuthology.orchestra.run.target149202171022.stdout:Total states run:     80
2018-09-05T15:19:27.509 INFO:teuthology.orchestra.run.target149202171022.stdout:Total run time:  193.978 s

Test result: PASS

@smithfarm smithfarm merged commit 8867ae9 into master Sep 5, 2018
@smithfarm smithfarm changed the title qa: don't initially delete openstack pools qa: add openstack functional test to CI Sep 5, 2018
@smithfarm
Contributor

backport: #1354
