
Bug 1835497: Prevent sb/nb db readiness probe from being zombie #652

Merged

Conversation

vishnoianil
Contributor

The current implementation of the sb/nb db containers' readiness probe uses
exec to call ovn-appctl to check the status of the sb/nb db raft cluster.
This exec call is in turn executed by another exec call that runs the probe.
If the sb/nb db raft cluster is busy when the readiness probe fires, the
probe blocks until the raft cluster responds with success or failure, which
can take seconds (even longer than `periodSeconds`). The next readiness
probe then executes and kills the parent, which turns the child processes
into zombies that are never reaped. A single readiness probe failure
creates 3 zombie threads.
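The kill-the-parent behavior can be sidestepped by bounding the probe command itself. A minimal sketch of the idea, assuming a POSIX shell and coreutils `timeout` (this is an illustration, not the exact script changed in this PR):

```shell
# Without `exec`, the kill signal hits the wrapper shell and the inner
# command can be orphaned. With `exec`, the shell is replaced by the probed
# command, so the signal reaches it directly; `timeout` bounds the wait so
# a busy backend fails fast instead of outliving the probe period.
timeout 1 sh -c 'exec sleep 5'
echo "exit=$?"
```

Here `sleep 5` stands in for a slow `ovn-appctl` call; `timeout` kills the exec'd command after one second and reports exit status 124, so the probe fails promptly and leaves no lingering child.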

At higher scale, the raft cluster can stay busy for long durations
(10 to 100 seconds depending on the load), so the readiness probe fails
more frequently. That increases the number of zombie threads, slowly
eating up the pthread quota, and newly scheduled or rebooted pods
on the node may fail because they cannot create threads.
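A quick way to check for the accumulation described above (a hypothetical diagnostic, not part of this PR) is to count processes in zombie (`Z`) state on the node; a count that grows after each failed probe would confirm that killed probe parents are leaving children unreaped. Assumes a Linux node with procps `ps`:

```shell
# Count processes whose state column starts with Z (zombie / defunct).
zombies=$(ps -eo stat= | awk '$1 ~ /^Z/ {n++} END {print n+0}')
echo "zombies=$zombies"
```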

In the reported bug, the sb raft cluster was in an unhealthy state and
CNO ended up rebooting the node, but it failed to boot because it was not
allowed to create any more threads, and the raft cluster never recovered.

Signed-off-by: Anil Vishnoi <avishnoi@redhat.com>

@vishnoianil
Contributor Author

@dcbw @abhat @squeed PTAL, thanks

@vishnoianil
Contributor Author

/retest

@squeed
Contributor

squeed commented May 29, 2020

Good catch.

Why such a low timeout? Can you think of a better readiness probe?

@vishnoianil
Contributor Author

> Good catch.
>
> Why such a low timeout? Can you think of a better readiness probe?

@squeed
In normal conditions, probe (ovn-appctl) should return in millisecond time, so if it doesn't respond in 3 second that means it's busy and might be busy for longer duration. Given that the probe runs periodically with 10 second interval and we wait for 3 failures to mark the container in unhleathy state, that overall gives us 30 second to check on the cluster status and mark it unhealthy, which i believe is resonable time duration?

In my horizontal scale tests (increasing the number of nodes), I have seen the raft cluster get busy for 2-10 seconds with 200 nodes, so there is a good chance that ovn-appctl gets stuck for that duration, and more frequently. So rather than letting ovn-appctl stay stuck for a long time, I think leveraging `periodSeconds` gives us a fail-fast, check-again method, which I believe yields a more deterministic readiness status.

If you have any thoughts on improving it, please share; we can evaluate those as well.
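For reference, the timing discussed above maps onto the Kubernetes probe fields roughly like this (a sketch, not the exact manifest from this PR; the script name is a placeholder):

```yaml
readinessProbe:
  exec:
    command: ["/usr/bin/ovndb-readiness-check.sh"]  # hypothetical wrapper script
  timeoutSeconds: 3    # ovn-appctl normally answers in milliseconds
  periodSeconds: 10    # probe fires every 10 seconds
  failureThreshold: 3  # 3 consecutive failures ~= 30 seconds before unready
```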

@dcbw
Member

dcbw commented Jun 1, 2020

/approve
/lgtm

@openshift-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dcbw, vishnoianil

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot openshift-ci-robot added lgtm Indicates that a PR is ready to be merged. approved Indicates a PR has been approved by an approver from all required OWNERS files. labels Jun 1, 2020
@dcbw
Member

dcbw commented Jun 1, 2020

/bugzilla refresh

@openshift-ci-robot
Contributor

@dcbw: No Bugzilla bug is referenced in the title of this pull request.
To reference a bug, add 'Bug XXX:' to the title of this pull request and request another bug refresh with /bugzilla refresh.

In response to this:

/bugzilla refresh

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@dcbw
Member

dcbw commented Jun 1, 2020

/retest

@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

14 similar comments

@vishnoianil
Contributor Author

/retest


@openshift-ci-robot
Contributor

@vishnoianil: The following test failed, say /retest to rerun all failed tests:

Test name | Commit | Details | Rerun command
--- | --- | --- | ---
ci/prow/e2e-vsphere | 36c4a35 | link | /test e2e-vsphere

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.


@openshift-merge-robot openshift-merge-robot merged commit 93a0dae into openshift:master Jun 3, 2020