Skip ha/drbd_passive test when not in TWO_NODES scenario #15324
Conversation
> One possible solution would be to create a different schedule for the 3 cluster node scenario, but this implies a PR to this repo, as well as a change to the job settings.

Don't we need a PR to add the "TWO_NODES=no" anyway? Anyway, if it passes your VR, LGTM.
It is already there:

If memory serves, TWO_NODES=no is used by other test modules and was added quite a while ago. I am simply re-using the same setting for this particular test module. In fact, I think L47 in ha/drbd_passive could be re-written safely as only:

But I am leaving the node checks just in case there is already a working test using it.
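For reference, a minimal sketch of what such a TWO_NODES-based skip check could look like. This is purely illustrative, not the merged code: `should_skip_drbd_passive` is a hypothetical helper, and the assumption that an unset TWO_NODES means a two-node cluster is mine.

```perl
use strict;
use warnings;

# Hypothetical helper: decide whether ha/drbd_passive should be skipped.
# Assumes the job setting TWO_NODES defaults to 'yes' when unset.
sub should_skip_drbd_passive {
    my (%settings) = @_;
    my $two_nodes = $settings{TWO_NODES} // 'yes';
    # Skip on cluster scenarios with more than two nodes (TWO_NODES=no)
    return $two_nodes eq 'no';
}
```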
tests/ha/filesystem.pm (outdated)

```perl
    $resource = 'drbd_active';
}
elsif ($tag eq 'skip_fs_test') {
```
Isn't this a duplicate of line 21?

And shouldn't it be at the beginning? Otherwise the drbd_passive tag would be evaluated before this skip, so the test would not be skipped.
> Isn't this a duplicate of line 21?
Not exactly the same. L21 would skip the test when not on drbd Maintenance Updates, but continue when testing a drbd MU, such as in https://openqa.suse.de/tests/9247409#step/drbd_passive/5. L44, on the other hand, skips the test in scenarios where it would not work (such as in 3 nodes), per L47-L51 in ha/drbd_passive.
> And shouldn't it be at the beginning? Otherwise the drbd_passive tag would be evaluated before this skip, so the test would not be skipped.
Not a problem. `$tag` would be one or the other, and never both: https://github.com/os-autoinst/os-autoinst-distri-opensuse/blob/master/lib/hacluster.pm#L458-L480

(This probably could've been implemented with `get_var()` and `set_var()` instead, but no idea why Loic went with a local file on the cluster nodes for these tags. Could be a candidate for a re-write in the future.)
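Roughly, a `get_var()`/`set_var()` based version could publish the tag through job settings instead of a file on the node. A sketch under my own assumptions: the in-memory `%job_settings` hash and the `HA_TAG` setting name are stand-ins, not the real openQA testapi or the actual hacluster.pm implementation.

```perl
use strict;
use warnings;

# Illustrative stand-in for openQA job settings; in a real test module
# these would be testapi::get_var / testapi::set_var.
my %job_settings;
sub set_var { my ($key, $value) = @_; $job_settings{$key} = $value }
sub get_var { my ($key, $default) = @_; $job_settings{$key} // $default }

# Writer side: record which tag the follow-up modules should act on
sub write_tag { set_var('HA_TAG', shift) }

# Reader side: $tag is one value or the other, never both
sub read_tag { get_var('HA_TAG', '') }
```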
But instead of two places where the test will return, there could be just one at the beginning of the if statement. Otherwise fine.
Ah, got it. Let me move both exit conditions to the same line and test.
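The shape of that change could look like the sketch below: both skip conditions evaluated once at a single early-exit point. The function and argument names are placeholders of mine, not the actual ha/filesystem code.

```perl
use strict;
use warnings;

# Sketch: one early-exit point covering both skip cases, instead of
# returning from two separate branches further down in the module.
sub filesystem_test_should_run {
    my (%args) = @_;
    # Skip if the drbd module flagged the test, or on >2 node scenarios
    return 0 if ($args{tag} // '') eq 'skip_fs_test'
        or ($args{two_nodes} // 'yes') eq 'no';
    return 1;
}
```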
New verification runs:
- Alpha Cluster on Milestone Build Validation: node 1, node 2 & support server
Cannot run VR with the drbd MU as the incident repository is gone: http://mango.qa.suse.de/tests/4893#step/iscsi_client/12
Should we merge with the new change, or should I roll back 7997ebe?
Merge this PR; you are writing the skip_fs_test tag in the first commit.
Also skip the related ha/filesystem test: if the ha/drbd_passive test module is skipped, there would be no block device on which to create the FS.
Branch updated: 7997ebe to 81dbecc
Currently the 3 cluster node scenario tested for Maintenance jobs always schedules the `ha/drbd_passive` test module, even though this particular test module is not designed to work in cluster scenarios with more than 2 nodes. As a result, whenever an MU job is triggered with a package that requires a drbd test, the 3 node scenario job fails: the module is skipped in the third node before any of the `barrier_wait()` calls, so the other nodes remain blocked in a `barrier_wait()` call until they reach MAX_JOB_TIME, which fails the whole scenario.

One possible solution would be to create a different schedule for the 3 cluster node scenario, but this implies a PR to this repo, as well as a change to the job settings. This PR instead modifies `ha/drbd_passive` so it is skipped when running in scenarios with the setting TWO_NODES=no. It also skips the test module `ha/filesystem`, which is usually scheduled right after `ha/drbd_passive`, as there would be no block device on which to test the filesystem creation.

P.S.: I also include here a small optimization in the `ha/vg` test module to avoid having multiple calls to `hacluster::read_tag`, which sends commands to the SUT.
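The `read_tag` optimization mentioned above amounts to calling it once and reusing the cached value, instead of triggering a command on the SUT for every check. A minimal sketch, assuming a stubbed `read_tag` (the real `hacluster::read_tag` runs a command on the SUT each time it is called; the counter and variable names here are illustrative):

```perl
use strict;
use warnings;

my $sut_calls = 0;
# Stub standing in for hacluster::read_tag, which would send a
# command to the SUT on every invocation.
sub read_tag { $sut_calls++; return 'drbd_passive' }

# Before: each check below would call read_tag() again (one SUT
# round-trip per check). After: read once, reuse the cached value.
my $tag      = read_tag();
my $resource = $tag eq 'drbd_passive' ? 'drbd_passive' : 'lun';
my $skip_fs  = ($tag eq 'skip_fs_test') ? 1 : 0;
```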