Revert "Activate unschedulable pods only if the node became more schedulable" #70776
Conversation
@bsalamat: Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected, please follow our release note process to remove it. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: bsalamat The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/lgtm
Thanks @bsalamat.
/lgtm
Thanks @bsalamat.
/kind bug
/release-note-none
We have seen high CPU usage and increased latency after the PR that we are reverting here got merged:
Reverts #70366
I was a bit worried when the original PR went in, because our node updates are frequent and we spent a significant amount of time comparing node objects. For now, I am reverting the PR. We will try to find ways to optimize the algorithm further.
The main reason we worked on the original PR was to reduce the number of times the scheduler retries pods after receiving node updates. However, in situations where there are no unschedulable pods (which is the case in our scalability tests), we only spend time comparing node objects without getting anything in return. In those cases, we could have bypassed all the checks, as the scheduler wouldn't have found any pods to retry.
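The trade-off described above can be sketched as follows. This is a hypothetical simplification, not the actual scheduler code: the `Node` fields, function names, and the queue-length check are all illustrative assumptions. It shows why skipping the node comparison when the unschedulable queue is empty avoids the wasted work.

```go
package main

import "fmt"

// Node is a simplified stand-in for the Kubernetes v1.Node object
// (hypothetical fields, for illustration only).
type Node struct {
	Conditions map[string]bool // e.g. "Ready" -> true
	Taints     []string
}

// nodeBecameMoreSchedulable is a hypothetical sketch of the comparison the
// reverted PR performed on every node update: it compares relevant fields of
// the old and new node to decide whether retrying unschedulable pods could
// succeed. In a real cluster this comparison touches many fields and is
// relatively expensive.
func nodeBecameMoreSchedulable(oldNode, newNode *Node) bool {
	// A condition flipping to true (e.g. Ready) can make the node more schedulable.
	for cond, newVal := range newNode.Conditions {
		if newVal && !oldNode.Conditions[cond] {
			return true
		}
	}
	// Fewer taints can also make the node more schedulable.
	return len(newNode.Taints) < len(oldNode.Taints)
}

// shouldRetryPodsOnNodeUpdate sketches the bypass suggested above: when the
// unschedulable queue is empty, there is nothing to retry, so the node
// comparison can be skipped entirely.
func shouldRetryPodsOnNodeUpdate(unschedulableQueueLen int, oldNode, newNode *Node) bool {
	if unschedulableQueueLen == 0 {
		return false // nothing to retry; skip the expensive comparison
	}
	return nodeBecameMoreSchedulable(oldNode, newNode)
}

func main() {
	oldNode := &Node{Conditions: map[string]bool{"Ready": false}}
	newNode := &Node{Conditions: map[string]bool{"Ready": true}}
	fmt.Println(shouldRetryPodsOnNodeUpdate(0, oldNode, newNode)) // empty queue: comparison bypassed
	fmt.Println(shouldRetryPodsOnNodeUpdate(3, oldNode, newNode)) // pods waiting: compare nodes
}
```

In scalability tests with no unschedulable pods, the first branch would fire on every node update, which is exactly the work the revert removes.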
ref: #70708
/sig scheduling