Add an existence check before requeueing a node that failed in updateCIDRsAllocation #98679
Conversation
@Sophichia: Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected. Please follow our release note process to remove it. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Thanks for your pull request. Before we can look at it, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow the instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple of minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Welcome @Sophichia!
Hi @Sophichia. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: Sophichia. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/assign
/ok-to-test
```diff
@@ -192,8 +192,10 @@ func (r *rangeAllocator) worker(stopChan <-chan struct{}) {
 				return
 			}
 			if err := r.updateCIDRsAllocation(workItem); err != nil {
-				// Requeue the failed node for update again.
-				r.nodeCIDRUpdateChannel <- workItem
+				if !apierrors.IsNotFound(err) {
+					// Requeue the failed node for update again.
+					r.nodeCIDRUpdateChannel <- workItem
+				}
 			}
```
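For context on the check being added, here is a minimal, self-contained sketch of how `apierrors.IsNotFound` behaves (the node name is taken from the issue description below; everything else is illustrative, not code from this PR):

```go
package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// The class of error the API server returns once the node
	// object has been deleted.
	notFound := apierrors.NewNotFound(
		schema.GroupResource{Resource: "nodes"}, "worker-node-g8wd4")

	// IsNotFound matches only this error class, so transient failures
	// (timeouts, conflicts, ...) would still be requeued by the worker.
	fmt.Println(apierrors.IsNotFound(notFound))                // true
	fmt.Println(apierrors.IsNotFound(fmt.Errorf("timed out"))) // false
}
```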
Thinking that `updateCIDRsAllocation` should have a second return value, `shouldRetry bool`, as opposed to inspecting the error here. What do you think?
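A rough sketch of that suggestion, with placeholder types standing in for the real allocator internals (the `shouldRetry` return value and the `handle` helper are hypothetical, not code from this PR):

```go
package allocator

// nodeReservedCIDRs is a stand-in for the real work-item type.
type nodeReservedCIDRs struct{ nodeName string }

type rangeAllocator struct {
	nodeCIDRUpdateChannel chan nodeReservedCIDRs
}

// Hypothetical signature: the callee decides whether a failure is
// retryable, so the worker never inspects the error type itself.
func (r *rangeAllocator) updateCIDRsAllocation(data nodeReservedCIDRs) (shouldRetry bool, err error) {
	// On a NotFound error return (false, err);
	// on transient failures return (true, err).
	return false, nil
}

// handle shows the caller side: requeue only when asked to.
func (r *rangeAllocator) handle(workItem nodeReservedCIDRs) {
	if shouldRetry, err := r.updateCIDRsAllocation(workItem); err != nil && shouldRetry {
		r.nodeCIDRUpdateChannel <- workItem
	}
}
```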
On second thought, I think if the node is not found in `updateCIDRsAllocation`, we should attempt to release the CIDR, similar to what is done on lines 385 - 392:
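The referenced snippet was not captured here, but the idea would look roughly like the fragment below. This is a hedged sketch only: the `nodeLister` lookup, field names (`allocatedCIDRs`, `cidrSets`), and error handling are assumptions about the allocator's internals, not code from this PR.

```go
// Inside updateCIDRsAllocation, when fetching the node fails:
node, err := r.nodeLister.Get(data.nodeName)
if err != nil {
	if apierrors.IsNotFound(err) {
		// Node is gone: return its reserved CIDRs to the pool instead
		// of letting the worker requeue forever, mirroring the release
		// already done when the CIDR assignment itself fails.
		for idx, cidr := range data.allocatedCIDRs {
			if releaseErr := r.cidrSets[idx].Release(cidr); releaseErr != nil {
				klog.Errorf("Error releasing CIDR %v: %v", cidr, releaseErr)
			}
		}
		return nil // swallow the error so the item is not requeued
	}
	return err
}
_ = node
```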
/triage accepted
@Sophichia: The following tests failed; say /retest to rerun all failed tests.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Send feedback to sig-contributor-experience at kubernetes/community. /close
@fejta-bot: Closed this PR. In response to this:
What type of PR is this?
/kind bug
What this PR does / why we need it:
It checks whether the node still exists before requeueing it for the next round of `updateCIDRsAllocation`. Without this check, the `range_allocator` may keep requeueing `updateCIDRsAllocation` for a node that has already been removed.
Which issue(s) this PR fixes:
In one scale environment, we hit an issue where one of the control plane nodes was running at almost 100% CPU usage. The process consuming most of the CPU was `kube-controller-manager`. When checking the log of the `kube-controller-manager` pod deployed on this control plane node, it kept printing the same failure over and over. From that log, we can see the `range_allocator` retries `updateCIDRsAllocation` roughly every 20 microseconds. However, `worker-node-g8wd4` does not exist in this cluster anymore. The workaround for this issue is just to restart the pod.
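To make the failure mode concrete, a toy reproduction of the spin (names and the iteration cap are illustrative only; the real worker loops until shutdown and has no artificial limit):

```go
package main

import "fmt"

func main() {
	queue := make(chan string, 1)
	queue <- "worker-node-g8wd4" // node that was already removed

	// Always fails for a deleted node, like updateCIDRsAllocation does.
	update := func(node string) error {
		return fmt.Errorf("node %q not found", node)
	}

	for i := 0; i < 3; i++ { // capped here; the real loop never stops
		item := <-queue
		if err := update(item); err != nil {
			queue <- item // unconditional requeue => tight retry loop
			fmt.Println("requeued after:", err)
		}
	}
}
```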
Does this PR introduce a user-facing change?:
Nope.