
Optimize preferred spreading for hostname topology #89487

Merged
merged 2 commits into kubernetes:master on Mar 30, 2020

Conversation


@alculquicondor alculquicondor commented Mar 25, 2020

What type of PR is this?

/kind feature

What this PR does / why we need it:

Optimize spreading when using kubernetes.io/hostname as the topology key. We avoid calculating per-node counters during PreScore and do it in Score instead. This is faster because there are fewer map insertions and accesses.
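The idea can be sketched as follows (hypothetical simplified types for illustration; the real plugin works with k8s.io/api core types and framework.NodeInfo): when the topology key is kubernetes.io/hostname, every node is its own topology domain, so the per-domain pod count can be computed directly from the node's own pods at Score time instead of being inserted into a shared map during PreScore.

```go
package main

import "fmt"

// pod is a stand-in for the real *v1.Pod, carrying only labels.
type pod struct {
	labels map[string]string
}

// matchesSelector reports whether labels satisfy every key/value pair in
// selector (a simplified stand-in for labels.Selector).
func matchesSelector(labels, selector map[string]string) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

// countPodsOnNode counts, at Score time, how many pods already on the node
// match the selector. With kubernetes.io/hostname as the topology key each
// node is its own topology domain, so this count never needs to be
// accumulated into a per-topology-value map during PreScore.
func countPodsOnNode(podsOnNode []pod, selector map[string]string) int {
	count := 0
	for _, p := range podsOnNode {
		if matchesSelector(p.labels, selector) {
			count++
		}
	}
	return count
}

func main() {
	podsOnNode := []pod{
		{labels: map[string]string{"app": "web"}},
		{labels: map[string]string{"app": "db"}},
		{labels: map[string]string{"app": "web"}},
	}
	fmt.Println(countPodsOnNode(podsOnNode, map[string]string{"app": "web"})) // 2
}
```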

default spreading

BenchmarkTestSelectorSpreadPriority/100nodes-56         	    5161	    226371 ns/op
BenchmarkTestSelectorSpreadPriority/1000nodes-56        	     788	   1516032 ns/op

master

BenchmarkTestDefaultEvenPodsSpreadPriority/100nodes-56         	    2168	    489570 ns/op
BenchmarkTestDefaultEvenPodsSpreadPriority/1000nodes-56        	     378	   3022404 ns/op

this PR

BenchmarkTestDefaultEvenPodsSpreadPriority/100nodes-56         	    3805	    329776 ns/op
BenchmarkTestDefaultEvenPodsSpreadPriority/1000nodes-56        	     631	   1858747 ns/op

This brings the "topology spreading" implementation of default spreading closer to the performance of the legacy default spreading:

  • 0.3ms for 100 nodes, 1,000 pods (1.4x from default spreading)
  • 1.8ms for 1,000 nodes, 10,000 pods (1.2x from default spreading)

Which issue(s) this PR fixes:

Part of #84936

Special notes for your reviewer:

The first commit contains the optimization.
The second commit updates the benchmark to reflect scheduler code, which parallelizes score calculation.

Does this PR introduce a user-facing change?:

NONE

@k8s-ci-robot k8s-ci-robot added release-note-none Denotes a PR that doesn't merit a release note. kind/feature Categorizes issue or PR as related to a new feature. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Mar 25, 2020
@alculquicondor
Member Author

/priority important-longterm
/assign @Huang-Wei

cc @ahg-g

@k8s-ci-robot k8s-ci-robot added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Mar 25, 2020
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: alculquicondor

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 25, 2020
@Huang-Wei Huang-Wei left a comment


Thanks @alculquicondor ! Just some nits, otherwise LGTM.

@@ -82,3 +82,17 @@ func filterTopologySpreadConstraints(constraints []v1.TopologySpreadConstraint,
}
return result, nil
}

func countPodsMatchSelector(pods []*v1.Pod, selector labels.Selector, ns string) int {
Member:

nit: change int to int64 so we don't need to add explicit int64(num) conversion later.

Member Author:

I plan to reuse this in Filter, which uses int32 for whatever reason. int seems more portable.
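A self-contained sketch of the helper's shape under this trade-off (hypothetical simplified types; the real signature takes []*v1.Pod and a labels.Selector from k8s.io/apimachinery):

```go
package main

import "fmt"

// pod is a stand-in for the real *v1.Pod, keeping only the fields the
// helper inspects.
type pod struct {
	namespace string
	labels    map[string]string
}

// countPodsMatchSelector counts pods in namespace ns whose labels satisfy
// the selector. It returns int, which callers can convert to int32 (as in
// Filter) or int64 (as in Score) at the call site.
func countPodsMatchSelector(pods []pod, selector map[string]string, ns string) int {
	count := 0
	for _, p := range pods {
		if p.namespace != ns {
			continue
		}
		matched := true
		for k, v := range selector {
			if p.labels[k] != v {
				matched = false
				break
			}
		}
		if matched {
			count++
		}
	}
	return count
}

func main() {
	pods := []pod{
		{namespace: "default", labels: map[string]string{"app": "web"}},
		{namespace: "kube-system", labels: map[string]string{"app": "web"}},
		{namespace: "default", labels: map[string]string{"app": "db"}},
	}
	fmt.Println(countPodsMatchSelector(pods, map[string]string{"app": "web"}, "default")) // 1
}
```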

@@ -104,7 +107,7 @@ func (pl *PodTopologySpread) PreScore(
}

state := &preScoreState{
NodeNameSet: sets.String{},
NodeNameSet: make(sets.String, len(filteredNodes)),
Member:

Just out of curiosity: how much does this contribute to the performance boost?

Member Author:

Without the preallocation, the benchmark for 100 nodes remains at 0.3ms, but the 1,000-node case goes up to 2.1ms.
The profile shows 10% of CPU in sets.String.Insert, dominated by growing the hash table, which is not negligible.
I think it's fair to assume that if a user sets spreading, they expect all nodes to have the topology keys.
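The effect of the sized make discussed above can be sketched as follows: passing the expected entry count to make allocates the hash table once, so the inserts that follow never trigger a grow/rehash (the source of the ~10% CPU attributed to sets.String.Insert). The function name here is illustrative, not the plugin's actual code.

```go
package main

import "fmt"

// buildNodeNameSet fills a set of node names. The sized make allocates the
// map with room for len(nodes) entries up front, so the inserts below never
// force the runtime to grow and rehash the table.
func buildNodeNameSet(nodes []string) map[string]struct{} {
	set := make(map[string]struct{}, len(nodes))
	for _, n := range nodes {
		set[n] = struct{}{}
	}
	return set
}

func main() {
	nodes := []string{"node-a", "node-b", "node-c"}
	fmt.Println(len(buildNodeNameSet(nodes))) // 3
}
```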

gotList := make(framework.NodeScoreList, len(filteredNodes))
scoreNode := func(i int) {
n := filteredNodes[i]
score, _ := plugin.Score(context.Background(), state, pod, n.Name)
Member:

Instantiating context.Background() every time looks odd; can't we reuse the outer ctx? Check out wait.UntilWithContext:

func UntilWithContext(ctx context.Context, f func(context.Context), period time.Duration) {

Member Author:

we can... I just missed it 😅

This is closer to what happens in the core scheduler

Signed-off-by: Aldo Culquicondor <acondor@google.com>
@alculquicondor
Member Author

/retest

@alculquicondor
Member Author

/sig scheduling

@Huang-Wei
Member

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Mar 27, 2020
@alculquicondor
Member Author

/remove-sig scheduling

@k8s-ci-robot k8s-ci-robot removed the sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. label Mar 30, 2020
@alculquicondor
Member Author

/sig scheduling

@k8s-ci-robot k8s-ci-robot added sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Mar 30, 2020
@alculquicondor
Member Author

/retest

@k8s-ci-robot k8s-ci-robot merged commit 59c66da into kubernetes:master Mar 30, 2020
@k8s-ci-robot k8s-ci-robot added this to the v1.19 milestone Mar 30, 2020