
dra scheduler: fall back to SSA for PodSchedulingContext updates #120534

Merged
merged 1 commit into kubernetes:master from dra-scheduler-ssa-as-fallback on Oct 23, 2023

Conversation

@pohly (Contributor) commented Sep 8, 2023

What type of PR is this?

/kind feature

What this PR does / why we need it:

During scheduler_perf testing, roughly 10% of the PodSchedulingContext update operations failed with a conflict error. Using SSA would avoid that, but performance measurements showed that this causes a considerable slowdown (primarily because of the slower encoding with JSON instead of protobuf, but also because server-side processing is more expensive).

Therefore a normal update is tried first and SSA only gets used when there has been a conflict. Using SSA in that case instead of giving up outright is better because it avoids another scheduling attempt.
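As an editorial illustration (not the PR's literal code), the fallback strategy boils down to the pattern below; the concrete Update and server-side apply calls are passed in as functions, and only apierrors.IsConflict from k8s.io/apimachinery is assumed:

	package example

	import (
		"context"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
	)

	// publishWithFallback is a hypothetical helper: try the cheap,
	// protobuf-encoded Update first; only when it fails with a 409 Conflict
	// retry the same change via server-side apply, which is slower but cannot
	// conflict and therefore avoids another scheduling attempt.
	func publishWithFallback(ctx context.Context,
		update func(context.Context) error, // plain Update against the apiserver
		applySSA func(context.Context) error, // server-side apply of the same change
	) error {
		err := update(ctx)
		if err == nil || !apierrors.IsConflict(err) {
			return err // success, or an error that SSA would not help with
		}
		return applySSA(ctx)
	}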

Which issue(s) this PR fixes:

Related-to: #120502

Special notes for your reviewer:

On my machine, "stress" shows that the new test is stable:

45s: 236 runs so far, 1 failures (0.42%)

That one failure seems unrelated:

        found unexpected goroutines:
        [Goroutine 5614 in state sleep, with time.Sleep on top of the stack:
        goroutine 5614 [sleep]:
        time.Sleep(0x2aa6db101)
                /nvme/gopath/go-1.21.0/src/runtime/time.go:195 +0x125
        k8s.io/client-go/tools/events.(*eventBroadcasterImpl).attemptRecording(0xc00636a630, 0xc002f5a780)
                /nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/events/event_broadcaster.go:221 +0x77
        k8s.io/client-go/tools/events.(*eventBroadcasterImpl).recordToSink.func1()
                /nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/events/event_broadcaster.go:200 +0x3a
        created by k8s.io/client-go/tools/events.(*eventBroadcasterImpl).recordToSink in goroutine 5021
                /nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/events/event_broadcaster.go:173 +0xdb
        ]
 >
FAIL    k8s.io/kubernetes/test/integration/scheduler    10.704s

This is a known issue (#115514) that should be fixed; whether it occurs in practice still needs to be investigated. I double-checked that Shutdown() is called. In the end I got rid of this goroutine leak by not starting the event recorder in the first place - it isn't needed here.

Does this PR introduce a user-facing change?

dra: the scheduler plugin avoids additional scheduling attempts in some cases by falling back to SSA after a conflict

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

- [KEP]: https://github.com/kubernetes/enhancements/issues/3063

@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. kind/feature Categorizes issue or PR as related to a new feature. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Sep 8, 2023
@k8s-ci-robot k8s-ci-robot added sig/node Categorizes an issue or PR as relevant to SIG Node. sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. sig/testing Categorizes an issue or PR as relevant to SIG Testing. and removed do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Sep 8, 2023
@pohly (Contributor, Author) commented Sep 9, 2023

found unexpected goroutines:

This annoys me 😠

Perhaps it occurs more often in this new test because the scheduler keeps trying to schedule the pod right until everything is getting shut down. Let me try whether it helps to first delete the pod, give remaining events a bit of time to be delivered, and then shut down.

/hold

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Sep 9, 2023
@pohly pohly force-pushed the dra-scheduler-ssa-as-fallback branch from a3727c7 to 7b2ab3d Compare September 9, 2023 18:50
@pohly (Contributor, Author) commented Sep 11, 2023

found unexpected goroutines:

This annoys me 😠

Perhaps it occurs more often in this new test because the scheduler keeps trying to schedule the pod right until everything is getting shut down.

Fixed by not starting the problematic event sink.

/hold cancel

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Sep 11, 2023
// - Triggering this particular race is harder in E2E testing
// and harder to verify (needs apiserver metrics and there's
// no standard API for those).
func TestPodSchedulingContextSSA(t *testing.T) {
@pohly (Contributor, Author):

Triggering the race here isn't trivial either. I'd appreciate feedback on the current approach. In parallel I'll investigate whether error injection can be used to trigger the fallback.

@pohly (Contributor, Author):

I've pushed a second commit that does error injection at the http.RoundTripper level.

@aojea: you've done some work on integration testing and apiserver setup. Do my changes look okay to you?

/hold

I want to squash before merging.

@pohly (Contributor, Author):

It can, and I prefer the new approach. I just haven't squashed yet to simplify before/after comparisons (if anyone cares).

@pohly (Contributor, Author):

Scratch that - I also fixed some other issues, squashed, and force-pushed as a single commit.

@aojea (Member) commented Sep 12, 2023

Checking now, sorry. @pohly, I think you can do something like

diff --git a/test/integration/framework/test_server.go b/test/integration/framework/test_server.go
index 6b15b09b2d8..d288ff73ca8 100644
--- a/test/integration/framework/test_server.go
+++ b/test/integration/framework/test_server.go
@@ -52,8 +52,9 @@ AwEHoUQDQgAEH6cuzP8XuD5wal6wf9M6xDljTOPLX2i8uIp/C/ASqiIGUeeKQtX0
 
 // TestServerSetup holds configuration information for a kube-apiserver test server.
 type TestServerSetup struct {
-       ModifyServerRunOptions func(*options.ServerRunOptions)
-       ModifyServerConfig     func(*controlplane.Config)
+       ModifyServerRunOptions   func(*options.ServerRunOptions)
+       ModifyServerConfig       func(*controlplane.Config)
+       ModifyServerClientConfig func(*rest.Config)
 }
 
 type TearDownFunc func()
@@ -223,6 +224,7 @@ func StartTestServer(ctx context.Context, t testing.TB, setup TestServerSetup) (
                t.Fatal(err)
        }
 
+       setup.ModifyServerClientConfig(kubeAPIServerClientConfig)
        kubeAPIServerClient, err := client.NewForConfig(kubeAPIServerClientConfig)
        if err != nil {
                t.Fatal(err)

and then use it like

	_, kubeConfig, tearDownFn := framework.StartTestServer(ctx, t, framework.TestServerSetup{
		ModifyServerRunOptions: func(opts *options.ServerRunOptions) {
			// Disable ServiceAccount admission plugin as we don't have serviceaccount controller running.
			opts.Admission.GenericAdmission.DisablePlugins = []string{"ServiceAccount", "TaintNodesByCondition"}
			opts.APIEnablement.RuntimeConfig.Set("networking.k8s.io/v1alpha1=true")
		},
		ModifyServerClientConfig: func(c *clientrest.Config) {
			c.Wrap(func(rt http.RoundTripper) http.RoundTripper {
				return roundTripperFunc(func(req *http.Request) (*http.Response, error) {
					authorizationHeaderValues.append(req.Header.Values("Authorization"))
					return rt.RoundTrip(req)
				})
			})
		},
	})
	defer tearDownFn()

It seems cleaner and better because we can keep all the client mutations inside the constructor.

Another option is to build the client directly:

StartTestServer returns the rest.Config; isn't that much easier?
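
(Editorial note: roundTripperFunc is not defined anywhere in this thread; it is presumably the usual small adapter that lets a plain function satisfy http.RoundTripper, roughly like this sketch.)

	package example

	import "net/http"

	// roundTripperFunc adapts a plain function to the http.RoundTripper interface.
	type roundTripperFunc func(*http.Request) (*http.Response, error)

	func (f roundTripperFunc) RoundTrip(req *http.Request) (*http.Response, error) {
		return f(req)
	}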

@pohly (Contributor, Author):

Thanks for the suggestions. I removed all changes from test/integration/scheduler/scheduler_test.go.

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Sep 11, 2023
@bart0sh bart0sh added this to Triage in SIG Node PR Triage Sep 11, 2023
@@ -468,17 +495,23 @@ func UpdateNodeStatus(cs clientset.Interface, node *v1.Node) error {
func InitTestAPIServer(t *testing.T, nsPrefix string, admission admission.Interface) *TestContext {
_, ctx := ktesting.NewTestContext(t)
ctx, cancel := context.WithCancel(ctx)
-testCtx := TestContext{Ctx: ctx}
+testCtx := &TestContext{Ctx: ctx}
@pohly (Contributor, Author):

My goal was to avoid changing the signature of InitTestAPIServer.

Instead, the wrapper can be set after the TestContext and all servers have been created.

@aojea (Member) commented Sep 11, 2023

/assign @aojea

@sanposhiho (Member) commented

/assign


// RoundTrip, if set, will be called for every HTTP request going to the apiserver.
// It can be used for error injection.
RoundTrip func(transport http.RoundTripper, req *http.Request) (*http.Response, error)
Reviewer (Member):

StartTestServer(ctx context.Context, t testing.TB, setup TestServerSetup) (client.Interface, *rest.Config, TearDownFunc) {

It returns a rest.Config; you can copy it and create a new clientset from it with your round tripper:

	_, config, tearDownFn := framework.StartTestServer(ctx, t, framework.TestServerSetup{})
	defer tearDownFn()

	config.Wrap(func(rt http.RoundTripper) http.RoundTripper {
		return roundTripperFunc(func(req *http.Request) (*http.Response, error) {
			authorizationHeaderValues.append(req.Header.Values("Authorization"))
			return rt.RoundTrip(req)
		})
	})

	newClient, err := client.NewForConfig(config)
	if err != nil {
		t.Fatal(err)
	}

@pohly (Contributor, Author):

I hadn't found rest.Config.Wrap. That's indeed simpler and makes it possible to limit the changes to just test/integration/util/util.go. I've switched to that - please take another look.
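
(Editorial sketch, not the PR's actual test code: with rest.Config.Wrap, the error injection mentioned above could look roughly like the helper below. The function name injectConflictOnce, the URL-path check, and injecting exactly one failure are assumptions for illustration.)

	package example

	import (
		"io"
		"net/http"
		"strings"
		"sync/atomic"

		restclient "k8s.io/client-go/rest"
	)

	// roundTripperFunc is the same adapter as sketched earlier.
	type roundTripperFunc func(*http.Request) (*http.Response, error)

	func (f roundTripperFunc) RoundTrip(req *http.Request) (*http.Response, error) { return f(req) }

	// injectConflictOnce wraps the client config so that exactly one
	// PodSchedulingContext update is answered with a synthetic 409 Conflict,
	// forcing the scheduler plugin onto its SSA fallback path.
	func injectConflictOnce(config *restclient.Config) {
		var injected atomic.Bool
		config.Wrap(func(rt http.RoundTripper) http.RoundTripper {
			return roundTripperFunc(func(req *http.Request) (*http.Response, error) {
				if req.Method == http.MethodPut &&
					strings.Contains(req.URL.Path, "podschedulingcontexts") &&
					injected.CompareAndSwap(false, true) {
					body := `{"kind":"Status","apiVersion":"v1","status":"Failure","reason":"Conflict","code":409}`
					return &http.Response{
						StatusCode: http.StatusConflict,
						Header:     http.Header{"Content-Type": []string{"application/json"}},
						Body:       io.NopCloser(strings.NewReader(body)),
						Request:    req,
					}, nil
				}
				return rt.RoundTrip(req)
			})
		})
	}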

@k8s-ci-robot k8s-ci-robot added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Sep 15, 2023
@bart0sh bart0sh moved this from Triage to Needs Reviewer in SIG Node PR Triage Sep 15, 2023
@pohly pohly force-pushed the dra-scheduler-ssa-as-fallback branch from 4f2471f to 7cac1dc Compare September 15, 2023 13:05
@k8s-ci-robot k8s-ci-robot removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Sep 15, 2023
@pohly (Contributor, Author) commented Sep 15, 2023

/retest

1 similar comment
@pohly (Contributor, Author) commented Sep 15, 2023

/retest

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Sep 18, 2023
@k8s-ci-robot (Contributor)

LGTM label has been added.

Git tree hash: 161c352c8fbf6248f634fb57cf0fbaffff45b7ac

@@ -187,6 +188,41 @@ func (p *podSchedulingState) publish(ctx context.Context, pod *v1.Pod, clientset
logger.V(5).Info("Updating PodSchedulingContext", "podSchedulingCtx", klog.KObj(schedulingCtx))
}
_, err = clientset.ResourceV1alpha2().PodSchedulingContexts(schedulingCtx.Namespace).Update(ctx, schedulingCtx, metav1.UpdateOptions{})
Reviewer (Member):

Have you considered a regular patch? Is that any faster than SSA?

@pohly (Contributor, Author):

I haven't, because the guidance is to use SSA. I've shared CPU and memory profiles with @apelisse; he is looking into the performance aspect.

@alculquicondor (Member) commented

Overall, I'm ok with this change, but I wonder if somehow we can take these updates out of the critical path. Do they need to happen in the scheduling cycle?
I'm not saying that they should, but it's worth considering the implications, if you can.

@pohly (Contributor, Author) commented Sep 19, 2023

#120502 captures the discussion I had with @Huang-Wei about doing these updates in the background. The gist is that it would be possible, but it implies extending the scheduler framework in a non-trivial way because right now the plugin cannot handle errors when they occur outside of the normal callback functions - it's not clear yet how to do that.

We agreed to do this PR first and come back to the topic if it turns out to be a problem in practice (which would make it more important) and/or when there's time for further investigation.

@pohly (Contributor, Author) commented Sep 28, 2023

/hold cancel

The hold was for squashing, which was done.

I chatted again with @apelisse about whether it would make sense to wait for SSA performance enhancements and he said that those will take more time. Let's move ahead with this PR as an interim solution.

@alculquicondor: okay for approval?

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Sep 28, 2023
@alculquicondor (Member) commented

Was there any conclusion about moving these calls out of Reserve?

@pohly (Contributor, Author) commented Sep 28, 2023

The conclusion was "would be nice, but not absolutely required for beta" (but I intend to work on it anyway) and "not something that needs to be solved in this PR".

@alculquicondor (Member) commented

I need to look at the KEP, but I think whatever we can do to avoid impacting workloads that don't use DRA should be high on the priority list, or even be a blocker for beta.

@pacoxu pacoxu moved this from Needs Reviewer to Needs Approver in SIG Node PR Triage Oct 16, 2023
@Huang-Wei (Member) left a comment:

/lgtm
/approve

@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Huang-Wei, pohly

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Oct 23, 2023
@k8s-ci-robot k8s-ci-robot merged commit 5a4e792 into kubernetes:master Oct 23, 2023
18 checks passed
SIG Node PR Triage automation moved this from Needs Approver to Done Oct 23, 2023
@k8s-ci-robot k8s-ci-robot added this to the v1.29 milestone Oct 23, 2023