test: Thrasher: update pgp_num of all expanded pools if not yet #13367
Conversation
This fixes the failure of http://pulpito.ceph.com/kchai-2017-02-11_07:28:54-rados-wip-16091-monc-in-parallel---basic-smithi/805110/. See also #13340.
still has
Force-pushed from 98d6f20 to b14149d
#13378 fixes the first part using the releases/luminous.yaml convention. Not sure about the other patches here?
Force-pushed from b14149d to 8163201
@liewegas i will drop the first commit. The other commits address the following warning; please see the commit message for more info. https://github.com/ceph/ceph/pull/13378/files#diff-e1cc51eb4bbd70baf5f2e815a28cdcebR9 addresses the warning, while the last three commits do it in another way. Either way, we still need 75fa968, which handles
Force-pushed from 8163201 to 359d8b1
Force-pushed from 359d8b1 to 04220ff
otherwise wait_until_healthy will fail after timing out on a warning like:

HEALTH_WARN pool cephfs_data pg_num 182 > pgp_num 172

Signed-off-by: Kefu Chai <kchai@redhat.com>
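The fix described above can be sketched as follows. This is a hedged, self-contained illustration (not the actual qa/tasks thrasher code): after a pool's pg_num is grown, its pgp_num must be raised to match, or wait_until_healthy spins on the "pg_num X > pgp_num Y" warning until the timeout. The `pools` dict and the printed CLI lines are assumptions for demonstration.

```python
def pgp_num_fixups(pools):
    """Return (pool, target_pgp_num) pairs for every pool whose pgp_num
    lags behind its pg_num.

    `pools` maps pool name -> {'pg_num': int, 'pgp_num': int}.
    """
    return [(name, p['pg_num'])
            for name, p in sorted(pools.items())
            if p['pgp_num'] < p['pg_num']]

# Example cluster state, mirroring the warning from the commit message:
pools = {
    'cephfs_data': {'pg_num': 182, 'pgp_num': 172},  # expanded, not yet synced
    'rbd': {'pg_num': 64, 'pgp_num': 64},            # already in sync
}
for name, target in pgp_num_fixups(pools):
    # a real teuthology task would issue this via the cluster manager
    # instead of printing the CLI command
    print('ceph osd pool set %s pgp_num %d' % (name, target))
# prints: ceph osd pool set cephfs_data pgp_num 182
```

Running this sync step before waiting for health clears the `HEALTH_WARN pool ... pg_num > pgp_num` condition, which is exactly what blocks wait_until_healthy.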
Force-pushed from 04220ff to de59b51
lgtm. fwiw i avoided the down_out_interval warning by just disabling that warning via a config option in luminous.yaml. this will avoid the problem in general, though!
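For context, suppressing that warning via a suite override might look like the fragment below. This is an illustrative sketch, not the exact luminous.yaml change; the option name `mon warn on osd down out interval zero` is the mon setting that controls this class of warning, but the surrounding override structure is assumed.

```yaml
# illustrative luminous.yaml-style override (structure assumed)
overrides:
  ceph:
    conf:
      mon:
        mon warn on osd down out interval zero: false
```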
…fore waiting for healthy
otherwise, "ceph health" complains with:

all OSDs are running luminous or later but the 'require_luminous_osds'
osdmap flag is not set

and the "ceph.restart" task will time out and fail upon seeing this warning. So we need to set the osdmap flag after upgrading all OSDs, and call "ceph.restart" again to check whether the cluster is healthy.

Signed-off-by: Kefu Chai <kchai@redhat.com>
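The ordering the commit message describes can be sketched as a small decision step. This is a hypothetical illustration, not the actual task code: `next_step` stands in for the point in the upgrade task where, after all OSDs run luminous, the flag must be set before re-checking health.

```python
# The health warning quoted in the commit message:
LUMINOUS_FLAG_WARNING = (
    "all OSDs are running luminous or later but the "
    "'require_luminous_osds' osdmap flag is not set")

def next_step(health_detail):
    """Given `ceph health` output, decide what the upgrade task should do.

    Returns the ceph CLI arguments to run next, or None if no action is
    needed and waiting for healthy can proceed.
    """
    if LUMINOUS_FLAG_WARNING in health_detail:
        # all OSDs are upgraded: set the osdmap flag, then re-check health
        return ['osd', 'set', 'require_luminous_osds']
    return None

# Example: the warning is present, so the flag must be set first.
print(next_step("HEALTH_WARN " + LUMINOUS_FLAG_WARNING))
# prints: ['osd', 'set', 'require_luminous_osds']
```

Only after this step returns None (the flag is set and the warning gone) does it make sense to restart and wait for the cluster to report healthy, matching the sequence in the commit message.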