[Kuttl] Add test for OVN db cluster pod deletions #283
Conversation
@booxter thanks for the great points!
This is almost ready. Just a minor cleanup of the script logic, but otherwise this is great!
/lgtm
/hold waiting for post-beta floodgates to open
/lgtm
One small idea, though it doesn't need to be added in this patch, and maybe it doesn't make sense at all, which is also fine :) Maybe before the 05-delete-pods step you could also add a step to scale down to 1 replica (or 2), assert that this went fine, and then do the delete-pods step. Please let me know what you think about it.
Overall this patch looks good to me. Thanks for it. Great work :)
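A minimal sketch of such a scale-down step, assuming the OVNDBCluster resources are named ovndbcluster-nb and ovndbcluster-sb and expose a spec.replicas field (both names are assumptions for illustration, not confirmed by this PR):

apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
  # Scale both db clusters down to a single replica (assumed resource
  # names and field; adjust to the real CRs used by the operator).
  - command: oc patch ovndbcluster ovndbcluster-nb --type=merge -p '{"spec":{"replicas":1}}'
  - command: oc patch ovndbcluster ovndbcluster-sb --type=merge -p '{"spec":{"replicas":1}}'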
      test $? -eq 0
  - script: |
      $OVN_KUTTL_DIR/../common/scripts/check_cluster_status.sh sb 3
      test $? -eq 0
Before the cleanup in the next step we should also have a scenario for scale down, 3 -> 1, and ensure the cluster is healthy.
I added this after the deletes.
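A sketch of what the post-scale-down assert could look like, reusing the helper script from the fragment above (its arguments appear to be db-type and expected pod count; that TestAssert supports commands in the kuttl version used here is an assumption):

apiVersion: kuttl.dev/v1beta1
kind: TestAssert
commands:
  # Verify both db clusters report a healthy single-member cluster.
  - script: |
      $OVN_KUTTL_DIR/../common/scripts/check_cluster_status.sh nb 1
      test $? -eq 0
  - script: |
      $OVN_KUTTL_DIR/../common/scripts/check_cluster_status.sh sb 1
      test $? -eq 0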
@@ -0,0 +1,9 @@
apiVersion: kuttl.dev/v1beta1
kind: TestAssert
04-assert.yaml and 06-assert.yaml look the same, so we could keep a single copy and make the other a symlink.
done
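For reference, the symlink can be created with a one-liner (file names taken from the comment above):

ln -s 04-assert.yaml 06-assert.yaml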
@slawqo yes, I think it's a good idea to validate that deleting a pod in a single-member cluster also works. It may be included here or as a follow-up.
/hold cancel
# arguments: db-type: {nb, sb}, num-pods
# Check arguments
if [ $# -lt 2 ]; then
@booxter what do you say about those echos in the script? I added them so that if it fails there will be some info in the logs. But I'm not sure whether they're needed.
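A minimal sketch (not the actual repo script) of how the argument check and those diagnostic echos could fit together:

#!/bin/bash
# arguments: db-type: {nb, sb}, num-pods
# Check arguments; the usage message goes to stderr so it shows up in test logs.
if [ $# -lt 2 ]; then
    echo "Usage: $0 <db-type: nb|sb> <num-pods>" >&2
    exit 1
fi
DB_TYPE=$1
NUM_PODS=$2
echo "Checking ${DB_TYPE} cluster status, expecting ${NUM_PODS} pods"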
Build failed (check pipeline). Post https://review.rdoproject.org/zuul/buildset/38f2b396e8b34d198e52286d6cfc7bf1 ❌ openstack-k8s-operators-content-provider FAILURE in 12m 31s
recheck
test steps:
1. stand up a 3-replica OVN db cluster (both NB & SB).
2. confirm pods are all up.
3. confirm that the pods established the mesh.
4. delete all pods.
5. check that new pods are respawned.
6. check that they re-established the mesh with the correct number of pods and all connections are good.
related Jira: OSPRH-6135
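A sketch of the delete-pods step (step 4 above), assuming the db pods carry labels such as service=ovsdbserver-nb and service=ovsdbserver-sb (the selectors are assumptions for illustration, not taken from this PR):

apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
  # Delete every NB and SB db pod; the controller should respawn them.
  - command: oc delete pods -l service=ovsdbserver-nb
  - command: oc delete pods -l service=ovsdbserver-sb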
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: brfrenkel, slawqo
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Merged 0bb142a into openstack-k8s-operators:main
/lgtm