Bug 1835497 : Prevent sb/nb db readiness probe from being zombie #652
Conversation
The current implementation of the sb/nb db containers' readiness probe uses exec to call ovn-appctl to check the status of the sb/nb db raft cluster. That ovn-appctl call is itself spawned by the exec call that runs the probe. If the raft cluster is busy when the readiness probe fires, the probe hangs until the cluster responds with success or failure, which can take seconds (even longer than `periodSeconds`). The next readiness probe then runs and kills the parent, which leaves the child processes as zombies that are never reaped; a single readiness probe failure creates 3 zombie threads. At higher scale the raft cluster can stay busy for long stretches (10 to 100 seconds, depending on load), so the readiness probe fails more frequently; the zombie threads accumulate and slowly eat into the node's thread (pthread) quota, until newly scheduled or rebooted pods on the node can no longer create threads and fail. In the reported bug, the sb raft cluster was unhealthy and CNO ended up rebooting the node, but the reboot failed because no more threads could be created, and the raft cluster never recovered. Signed-off-by: Anil Vishnoi <avishnoi@redhat.com>
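For illustration, a readiness probe of roughly this shape reproduces the failure mode. This is a hypothetical sketch: the socket path, database name, and timing values below are assumptions, not the exact manifest from this repository.

```yaml
# Hypothetical sketch of the problematic probe shape; the socket path,
# database name, and timing values are illustrative assumptions.
readinessProbe:
  exec:
    command:
    - /bin/bash
    - -c
    # kubelet exec -> bash -> ovn-appctl. If ovn-appctl blocks on a busy
    # raft cluster for longer than periodSeconds, the next probe run can
    # kill the parent, leaving the orphaned children as unreaped zombies.
    - ovn-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound
  periodSeconds: 10
```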
/retest
Good catch. Why such a low timeout? Can you think of a better readiness probe?
@squeed In my horizontal scale experiments (increasing the number of nodes), I have seen the raft cluster stay busy in the 2-10 second range at 200 nodes, so there is a good chance that ovn-appctl gets stuck for that duration, and more frequently. So rather than letting ovn-appctl hang for a long time, I think leveraging the timeout is the better option. If you have any thoughts on improving it, please share; we can evaluate those as well.
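To make that concrete, here is a minimal sketch of the direction described above, assuming ovn-appctl accepts a `-T SECS` timeout flag (an assumption for illustration, along with the 5-second bound and the socket path): the status query gives up on a busy raft cluster before the next probe fires, so no probe run outlives its period.

```yaml
# Sketch only: bound the status query with ovn-appctl's own timeout
# (the -T flag and 5-second value are assumptions for illustration)
# so the probe exits before the next one fires.
readinessProbe:
  exec:
    command:
    - /bin/bash
    - -c
    - ovn-appctl -T 5 -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound
  periodSeconds: 10
  timeoutSeconds: 5
```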
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: dcbw, vishnoianil. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/bugzilla refresh
@dcbw: No Bugzilla bug is referenced in the title of this pull request. In response to this: /bugzilla refresh
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/retest
/retest Please review the full test history for this PR and help us cut down flakes.
14 similar comments
/retest Please review the full test history for this PR and help us cut down flakes.
2 similar comments
/retest |
/retest Please review the full test history for this PR and help us cut down flakes.
@vishnoianil: The following test failed, say /retest to rerun all failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/retest Please review the full test history for this PR and help us cut down flakes.
1 similar comment