
Add a feature to the scheduler to score fewer than all nodes in every scheduling cycle #66733

Merged
merged 4 commits into kubernetes:master from bsalamat:subset_nodes Aug 18, 2018

Conversation

@bsalamat
Member

@bsalamat bsalamat commented Jul 28, 2018

What this PR does / why we need it:
Today, the scheduler scores all of the nodes in the cluster in every scheduling cycle (every time the scheduling of a pod is attempted). This feature adds a mechanism to the scheduler that allows it to score fewer than all of the nodes in the cluster: the scheduler stops searching for more nodes once the configured number of feasible nodes has been found. This can improve the scheduler's performance in large clusters (several hundred nodes and larger).
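
A minimal sketch of that cutoff idea, assuming the percentage is exposed as percentageOfNodesToScore and using the minFeasibleNodesToFind lower bound introduced by this PR (an illustration, not the PR's exact code):

```go
const minFeasibleNodesToFind = 20 // lower bound; see the diff further down

// numFeasibleNodesToFind returns how many feasible nodes to collect before
// the search stops. It never returns fewer than minFeasibleNodesToFind.
func numFeasibleNodesToFind(percentageOfNodesToScore, numAllNodes int32) int32 {
	// For small clusters, or a percentage that is unset or 100, search all nodes.
	if numAllNodes <= minFeasibleNodesToFind ||
		percentageOfNodesToScore <= 0 || percentageOfNodesToScore >= 100 {
		return numAllNodes
	}
	numNodes := numAllNodes * percentageOfNodesToScore / 100
	if numNodes < minFeasibleNodesToFind {
		return minFeasibleNodesToFind
	}
	return numNodes
}
```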
This PR also adds a new structure to the scheduler's cache, called NodeTree, that allows the scheduler to iterate over nodes across the zones of a cluster. This is needed to avoid scoring the same set of nodes in every scheduling cycle.
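
A rough sketch of the zone round-robin idea behind NodeTree (illustrative; the real cache structure differs):

```go
// NodeTree sketch: nodes are grouped by zone, and Next() rotates across zones
// so consecutive scheduling cycles do not keep starting from the same nodes.
type NodeTree struct {
	zones    []string            // zone names in iteration order
	nodes    map[string][]string // zone name -> node names in that zone
	zoneIdx  int                 // which zone to draw the next node from
	nodeIdx  map[string]int      // per-zone cursor
	NumNodes int                 // total number of nodes in the tree
}

// Next returns the next node name, round-robining over zones.
func (t *NodeTree) Next() string {
	if len(t.zones) == 0 {
		return ""
	}
	zone := t.zones[t.zoneIdx%len(t.zones)]
	t.zoneIdx++
	names := t.nodes[zone]
	if len(names) == 0 {
		return ""
	}
	i := t.nodeIdx[zone] % len(names)
	t.nodeIdx[zone]++
	return names[i]
}
```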

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #66627

Special notes for your reviewer:
This is a large PR, but it is broken into a few logical commits. Reviewing will be easier if you go commit by commit.

Release note:

Add a feature to the scheduler to score fewer than all nodes in every scheduling cycle. This can improve performance of the scheduler in large clusters.
@k8s-github-robot
Contributor

@k8s-github-robot k8s-github-robot commented Jul 28, 2018

[MILESTONENOTIFIER] Milestone Pull Request: Up-to-date for process

@bsalamat

Pull Request Labels
  • sig/scheduling: Pull Request will be escalated to these SIGs if needed.
  • priority/important-soon: Escalate to the pull request owners and SIG owner; move out of milestone after several unsuccessful escalation attempts.
  • kind/feature: New functionality.
@bsalamat bsalamat force-pushed the bsalamat:subset_nodes branch from 2565a7b to cd0c0ad Jul 28, 2018
allNodes := int32(g.cache.NodeTree().NumNodes)
numNodesToFind := g.numFeasibleNodesToFind(allNodes) // cutoff for this cycle
numNodesProcessed := int32(0)
for numNodesProcessed < allNodes { // process nodes in batches until enough are found

@wgliang

wgliang Jul 28, 2018
Member

Why not do it all at once until filteredLen >= numNodesToFind?

@bsalamat

bsalamat Jul 28, 2018
Author Member

Once you send work to Parallelize, you cannot stop it in the middle.

@wgliang

wgliang Jul 28, 2018
Member

One solution is to add a control parameter to Parallelize; another is to define a new interface that supports stopping early.

@bsalamat

bsalamat Jul 28, 2018
Author Member

Parallelize is part of the client-go library. We cannot change its parameters, but I agree that having another Parallelize function, for example ParallelizeUntil(..., condition), would be useful. That should be done as a separate PR, though. Do you think you can add one that can be used here?

@wgliang

wgliang Jul 28, 2018
Member

OK, when this PR is merged I will help implement it. :)
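
For reference, a hedged sketch of what such a ParallelizeUntil helper could look like; the signature is illustrative and not necessarily what eventually landed in client-go:

```go
import "sync"

// ParallelizeUntil fans pieces out to workers, but checks a caller-supplied
// stop condition before each piece so the work can end early.
func ParallelizeUntil(workers, pieces int, doWorkPiece func(piece int), stop func() bool) {
	toProcess := make(chan int, pieces)
	for i := 0; i < pieces; i++ {
		toProcess <- i
	}
	close(toProcess)

	var wg sync.WaitGroup
	wg.Add(workers)
	for w := 0; w < workers; w++ {
		go func() {
			defer wg.Done()
			for piece := range toProcess {
				if stop != nil && stop() {
					return // condition met; stop draining pieces
				}
				doWorkPiece(piece)
			}
		}()
	}
	wg.Wait()
}
```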

@bsalamat bsalamat force-pushed the bsalamat:subset_nodes branch from cd0c0ad to c7826f3 Jul 28, 2018
@k8s-ci-robot k8s-ci-robot added size/XXL and removed size/XL labels Jul 28, 2018
@bsalamat bsalamat force-pushed the bsalamat:subset_nodes branch 3 times, most recently from 4f4e26c to 3875e5b Jul 28, 2018
@@ -45,6 +45,10 @@ import (
"k8s.io/kubernetes/pkg/scheduler/volumebinder"
)

const (
minFeasibleNodesToFind = 20

@wgliang

wgliang Jul 28, 2018
Member

Why is it 20? I think we need a comment to explain.

@bsalamat

bsalamat Jul 28, 2018
Author Member

20 is an arbitrary value. I added a comment to explain.

@@ -336,6 +341,20 @@ func (g *genericScheduler) getLowerPriorityNominatedPods(pod *v1.Pod, nodeName s
return lowerPriorityPods
}

// numFeasibleNodesToFind returns the number of feasible nodes that, once found,
// makes the scheduler stop searching for more feasible nodes.
func (g *genericScheduler) numFeasibleNodesToFind(allNodes int32) int32 {

@wgliang

wgliang Jul 28, 2018
Member

numAllNodes would be better.

@k8s-ci-robot k8s-ci-robot removed the lgtm label Aug 17, 2018
@jimangel
Member

@jimangel jimangel commented Aug 17, 2018

@bsalamat does this need/have a PR against the 1.12 docs branch? Looks like a major change. Thanks!

@@ -68,6 +67,8 @@ func (o *DeprecatedOptions) AddFlags(fs *pflag.FlagSet, cfg *componentconfig.Kub
fs.MarkDeprecated("hard-pod-affinity-symmetric-weight", "This option was moved to the policy configuration file")
fs.StringVar(&cfg.FailureDomains, "failure-domains", cfg.FailureDomains, "Indicate the \"all topologies\" set for an empty topologyKey when it's used for PreferredDuringScheduling pod anti-affinity.")
fs.MarkDeprecated("failure-domains", "Doesn't have any effect. Will be removed in future version.")
fs.Int32Var(&cfg.PercentageOfNodesToScore, "percentage-of-nodes-to-score", cfg.PercentageOfNodesToScore,

@sttts

sttts Aug 17, 2018
Contributor

Let's not add anything to the deprecated flags.

@bsalamat

bsalamat Aug 17, 2018
Author Member

Since the componentconfig is still alpha, these options are not quite "deprecated"! 😉
Anyway, I removed it.
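
For readers following along: the option stays on the scheduler's component config rather than the deprecated flag set. A hedged sketch of the field (struct trimmed, comment paraphrased):

```go
type KubeSchedulerConfiguration struct {
	// ... other fields elided ...

	// PercentageOfNodesToScore is the percentage of nodes that, once found
	// feasible, makes the scheduler stop searching for more feasible nodes.
	PercentageOfNodesToScore int32
}
```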

@sttts
Contributor

@sttts sttts commented Aug 17, 2018

One comment about the flag. Otherwise sgtm.

@bsalamat bsalamat force-pushed the bsalamat:subset_nodes branch from 95a6273 to 6fbfaa9 Aug 17, 2018
@bsalamat
Member Author

@bsalamat bsalamat commented Aug 17, 2018

@jimangel Yes, I will write/update docs.

@sttts
Contributor

@sttts sttts commented Aug 17, 2018

/lgtm
/approve

@k8s-ci-robot
Contributor

@k8s-ci-robot k8s-ci-robot commented Aug 17, 2018

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: bsalamat, fejta, k82cn, sttts

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@bsalamat bsalamat force-pushed the bsalamat:subset_nodes branch from 6fbfaa9 to 2860743 Aug 17, 2018
@k8s-ci-robot
Contributor

@k8s-ci-robot k8s-ci-robot commented Aug 17, 2018

New changes are detected. LGTM label has been removed.

@k8s-ci-robot k8s-ci-robot removed the lgtm label Aug 17, 2018
@bsalamat bsalamat added the lgtm label Aug 17, 2018
@k8s-github-robot
Contributor

@k8s-github-robot k8s-github-robot commented Aug 17, 2018

/test all [submit-queue is verifying that this PR is safe to merge]

@k8s-github-robot
Contributor

@k8s-github-robot k8s-github-robot commented Aug 18, 2018

Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here.

@k8s-github-robot k8s-github-robot merged commit 8c1bfeb into kubernetes:master Aug 18, 2018
17 of 18 checks passed
  • Submit Queue: Required GitHub CI test is not green: pull-kubernetes-kubemark-e2e-gce-big
  • cla/linuxfoundation: bsalamat authorized
  • pull-kubernetes-bazel-build: Job succeeded.
  • pull-kubernetes-bazel-test: Job succeeded.
  • pull-kubernetes-cross: Skipped
  • pull-kubernetes-e2e-gce: Job succeeded.
  • pull-kubernetes-e2e-gce-100-performance: Job succeeded.
  • pull-kubernetes-e2e-gce-device-plugin-gpu: Job succeeded.
  • pull-kubernetes-e2e-gke: Skipped
  • pull-kubernetes-e2e-kops-aws: Job succeeded.
  • pull-kubernetes-e2e-kubeadm-gce: Skipped
  • pull-kubernetes-integration: Job succeeded.
  • pull-kubernetes-kubemark-e2e-gce-big: Job succeeded.
  • pull-kubernetes-local-e2e: Skipped
  • pull-kubernetes-local-e2e-containerized: Skipped
  • pull-kubernetes-node-e2e: Job succeeded.
  • pull-kubernetes-typecheck: Job succeeded.
  • pull-kubernetes-verify: Job succeeded.
k8s-github-robot pushed a commit that referenced this pull request Sep 4, 2018
Kubernetes Submit Queue
Automatic merge from submit-queue (batch tested with PRs 67555, 68196). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.

Do not split nodes into batches when searching for feasible nodes; process them all at once

**What this PR does / why we need it**:
Do not split nodes into batches when searching for feasible nodes; process them all at once.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:
@bsalamat 

This is a follow-up PR to #66733.

#66733 (comment)

**Release note**:

```release-note
Do not split nodes into batches when searching for feasible nodes; process them all at once.
```
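
Combined with a helper like the ParallelizeUntil sketched earlier in this thread, the caller side of that follow-up could look roughly like this (podFitsOnNode, the Pod and Node types, and the worker count of 16 are illustrative, not the scheduler's real code):

```go
import "sync/atomic"

func findFeasibleNodes(pod *Pod, nodes []*Node, numNodesToFind int32) []*Node {
	var filteredLen int32
	filtered := make([]*Node, len(nodes)) // written sparsely, then truncated
	stop := func() bool { return atomic.LoadInt32(&filteredLen) >= numNodesToFind }
	checkNode := func(i int) {
		if podFitsOnNode(pod, nodes[i]) { // stands in for the real predicate run
			filtered[atomic.AddInt32(&filteredLen, 1)-1] = nodes[i]
		}
	}
	// Submit every node at once; workers bail out once enough nodes are found.
	ParallelizeUntil(16, len(nodes), checkNode, stop)
	return filtered[:atomic.LoadInt32(&filteredLen)]
}
```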
k8s-publishing-bot added a commit to kubernetes/client-go that referenced this pull request Sep 4, 2018
sttts pushed a commit to sttts/client-go that referenced this pull request Sep 5, 2018
k8s-publishing-bot added a commit to kubernetes/client-go that referenced this pull request Sep 6, 2018

Each of these mirrors the same follow-up change quoted above (Kubernetes-commit: a0b457d0e5ed54646fd01eac877efcea5be3216d).