Activate unschedulable pods only if the node became more schedulable #70366

Merged
merged 1 commit into kubernetes:master on Oct 30, 2018

@mlmhl
Contributor

mlmhl commented Oct 29, 2018

What type of PR is this?
/kind feature

What this PR does / why we need it:

This is a performance optimization for scheduler:

Move unschedulable pods to the active queue only if a node's scheduling-related properties have been updated. This PR considers node allocatable, node conditions, node taints, node labels, and Node.Spec.Unschedulable as scheduling-related properties (a sketch of such a check appears at the end of this description).

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #70316

Does this PR introduce a user-facing change?:

Scheduler only activates unschedulable pods if a node's scheduling-related properties change.
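
For illustration, below is a minimal sketch of the kind of check described above. The package, the function name nodeSchedulingPropertiesChanged, and the exact property comparisons are assumptions for this example, not necessarily what the merged commit uses.

    package queue

    import (
    	"reflect"

    	v1 "k8s.io/api/core/v1"
    )

    // nodeSchedulingPropertiesChanged reports whether any scheduling-related
    // property of a node changed between two versions of the object. The
    // scheduler would move unschedulable pods back to the active queue only
    // when this returns true. (Hypothetical sketch, not the merged code.)
    func nodeSchedulingPropertiesChanged(newNode, oldNode *v1.Node) bool {
    	return oldNode.Spec.Unschedulable != newNode.Spec.Unschedulable ||
    		!reflect.DeepEqual(oldNode.Status.Allocatable, newNode.Status.Allocatable) ||
    		!reflect.DeepEqual(oldNode.Status.Conditions, newNode.Status.Conditions) ||
    		!reflect.DeepEqual(oldNode.Spec.Taints, newNode.Spec.Taints) ||
    		!reflect.DeepEqual(oldNode.Labels, newNode.Labels)
    }
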
@wgliang

Member

wgliang commented Oct 29, 2018

@mlmhl please make the CI happy (it seems like you need to gofmt the code).

@wgliang

Member

wgliang commented Oct 29, 2018

And I think this PR needs an effective release note.

@bsalamat

Thanks so much for working on this so quickly. Given that nodes send updates every 10 seconds, in large clusters we receive hundreds of node updates per second (for example, 5,000 nodes reporting every 10 seconds amounts to roughly 500 updates per second). So it is important that the checks be very quick. That's the main reason I suggested a couple of changes to make the logic simpler. Of course, simpler logic is easier to maintain as well.

if reflect.DeepEqual(oldAllocatable, newAllocatable) {
	return false
}
for resource, newValue := range newAllocatable {

@bsalamat

bsalamat Oct 29, 2018

Contributor

What you have done makes sense, but in order to be quicker in checking node changes, and also to be conservative, I would return true as long as the allocatables have changed, no matter whether they are reduced or increased. In other words, remove this for loop and just return true.
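
As a sketch, the simplified check suggested here might look like the following (function name hypothetical; assumes the same imports as the sketch in the PR description):

    // Treat any change to the allocatable resources as potentially making
    // the node more schedulable, without inspecting individual resources.
    func nodeAllocatableChanged(newNode, oldNode *v1.Node) bool {
    	return !reflect.DeepEqual(oldNode.Status.Allocatable, newNode.Status.Allocatable)
    }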

@mlmhl

mlmhl Oct 30, 2018

Contributor

Done

	return false
}
healthyConditions := []v1.NodeConditionType{v1.NodeReady}

@bsalamat

bsalamat Oct 29, 2018

Contributor

Similar to my previous comment, I would return true as long as the old and new conditions are not equal. This will help reduce the chances of introducing bugs in the future when new node conditions are added.
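
A sketch of this simplified conditions check (hypothetical name, same assumptions as the sketch above):

    // Return true whenever the condition lists differ at all, instead of
    // special-casing specific condition types such as v1.NodeReady.
    func nodeConditionsChanged(newNode, oldNode *v1.Node) bool {
    	return !reflect.DeepEqual(oldNode.Status.Conditions, newNode.Status.Conditions)
    }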

@mlmhl

mlmhl Oct 30, 2018

Contributor

Done

@mlmhl

Contributor

mlmhl commented Oct 30, 2018

@bsalamat @wgliang All comments are addressed, PTAL :)

@bsalamat

/lgtm
/approve

Thanks, @mlmhl!

@k8s-ci-robot

Contributor

k8s-ci-robot commented Oct 30, 2018

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: bsalamat, mlmhl

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@mlmhl

Contributor

mlmhl commented Oct 30, 2018

/test pull-kubernetes-e2e-gce-100-performance

@k8s-ci-robot k8s-ci-robot merged commit fda41d1 into kubernetes:master Oct 30, 2018

18 checks passed

cla/linuxfoundation: mlmhl authorized
pull-kubernetes-bazel-build: Job succeeded.
pull-kubernetes-bazel-test: Job succeeded.
pull-kubernetes-cross: Skipped
pull-kubernetes-e2e-gce: Job succeeded.
pull-kubernetes-e2e-gce-100-performance: Job succeeded.
pull-kubernetes-e2e-gce-device-plugin-gpu: Job succeeded.
pull-kubernetes-e2e-gke: Skipped
pull-kubernetes-e2e-kops-aws: Job succeeded.
pull-kubernetes-e2e-kubeadm-gce: Skipped
pull-kubernetes-integration: Job succeeded.
pull-kubernetes-kubemark-e2e-gce-big: Job succeeded.
pull-kubernetes-local-e2e: Skipped
pull-kubernetes-local-e2e-containerized: Skipped
pull-kubernetes-node-e2e: Job succeeded.
pull-kubernetes-typecheck: Job succeeded.
pull-kubernetes-verify: Job succeeded.
tide: In merge pool.