dra scheduler: fall back to SSA for PodSchedulingContext updates #120534
Conversation
This annoys me 😠 Perhaps it occurs more often in this new test because the scheduler keeps trying to schedule the pod right up until everything is shut down. Let me try whether it helps to first delete the pod, give remaining events a bit of time to be delivered, and then shut down. /hold
Force-pushed from a3727c7 to 7b2ab3d
Fixed by not starting the problematic event sink. /hold cancel
// - Triggering this particular race is harder in E2E testing
//   and harder to verify (needs apiserver metrics and there's
//   no standard API for those).
func TestPodSchedulingContextSSA(t *testing.T) {
Triggering the race here isn't trivial either. I'd appreciate feedback on the current approach. In parallel I'll investigate whether error injection can be used to trigger the fallback.
I've pushed a second commit that does error injection at the http.RoundTripper level.
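The wrapping relies on a small adapter that turns a plain function into an http.RoundTripper. A minimal sketch of such a helper (the actual helper in the test may differ; it needs only net/http):

// roundTripperFunc adapts a function to the http.RoundTripper interface,
// which is what client-go transport wrappers operate on.
type roundTripperFunc func(*http.Request) (*http.Response, error)

func (f roundTripperFunc) RoundTrip(req *http.Request) (*http.Response, error) {
	return f(req)
}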
@aojea: you've done some work on integration testing and apiserver setup. Do my changes look okay to you?
/hold
I want to squash before merging.
It can, and I prefer the new approach. I just haven't squashed yet to simplify before/after comparisons (if anyone cares).
Scratch that - I also fixed some other issues, squashed, and force-pushed as a single commit.
Checking now, sorry. @pohly, I think that you can do something like:
diff --git a/test/integration/framework/test_server.go b/test/integration/framework/test_server.go
index 6b15b09b2d8..d288ff73ca8 100644
--- a/test/integration/framework/test_server.go
+++ b/test/integration/framework/test_server.go
@@ -52,8 +52,9 @@ AwEHoUQDQgAEH6cuzP8XuD5wal6wf9M6xDljTOPLX2i8uIp/C/ASqiIGUeeKQtX0
// TestServerSetup holds configuration information for a kube-apiserver test server.
type TestServerSetup struct {
- ModifyServerRunOptions func(*options.ServerRunOptions)
- ModifyServerConfig func(*controlplane.Config)
+ ModifyServerRunOptions func(*options.ServerRunOptions)
+ ModifyServerConfig func(*controlplane.Config)
+ ModifyServerClientConfig func(*rest.Config)
}
type TearDownFunc func()
@@ -223,6 +224,7 @@ func StartTestServer(ctx context.Context, t testing.TB, setup TestServerSetup) (
t.Fatal(err)
}
+	if setup.ModifyServerClientConfig != nil {
+		setup.ModifyServerClientConfig(kubeAPIServerClientConfig)
+	}
kubeAPIServerClient, err := client.NewForConfig(kubeAPIServerClientConfig)
if err != nil {
t.Fatal(err)
and then use it like
_, kubeConfig, tearDownFn := framework.StartTestServer(ctx, t, framework.TestServerSetup{
	ModifyServerRunOptions: func(opts *options.ServerRunOptions) {
		// Disable ServiceAccount admission plugin as we don't have serviceaccount controller running.
		opts.Admission.GenericAdmission.DisablePlugins = []string{"ServiceAccount", "TaintNodesByCondition"}
		opts.APIEnablement.RuntimeConfig.Set("networking.k8s.io/v1alpha1=true")
	},
	ModifyServerClientConfig: func(c *clientrest.Config) {
		c.Wrap(func(rt http.RoundTripper) http.RoundTripper {
			return roundTripperFunc(func(req *http.Request) (*http.Response, error) {
				authorizationHeaderValues.append(req.Header.Values("Authorization"))
				return rt.RoundTrip(req)
			})
		})
	},
})
defer tearDownFn()
It seems cleaner and better, so we can have all the client mutations inside the constructor.
Another option is that you build the client directly. StartTestServer returns the rest.Config, isn't that much easier?
Thanks for the suggestions. I removed all changes from test/integration/scheduler/scheduler_test.go.
@@ -468,17 +495,23 @@ func UpdateNodeStatus(cs clientset.Interface, node *v1.Node) error {
func InitTestAPIServer(t *testing.T, nsPrefix string, admission admission.Interface) *TestContext {
	_, ctx := ktesting.NewTestContext(t)
	ctx, cancel := context.WithCancel(ctx)
-	testCtx := TestContext{Ctx: ctx}
+	testCtx := &TestContext{Ctx: ctx}
My goal was to not change the prototype of InitTestAPIServer. Therefore the wrapper can be set after the TestContext and all servers have been created.
/assign @aojea
Force-pushed from 4472576 to 5a9802d
/assign
// RoundTrip, if set, will be called for every HTTP request going to the apiserver.
// It can be used for error injection.
RoundTrip func(transport http.RoundTripper, req *http.Request) (*http.Response, error)
StartTestServer(ctx context.Context, t testing.TB, setup TestServerSetup) (client.Interface, *rest.Config, TearDownFunc) returns a rest.Config; you can copy it and create a new clientset for that with your roundtripper:
_, config, tearDownFn := framework.StartTestServer(ctx, t, framework.TestServerSetup{})
config.Wrap(func(rt http.RoundTripper) http.RoundTripper {
	return roundTripperFunc(func(req *http.Request) (*http.Response, error) {
		authorizationHeaderValues.append(req.Header.Values("Authorization"))
		return rt.RoundTrip(req)
	})
})
newClient, err := client.NewForConfig(config)
if err != nil {
	t.Fatal(err)
}
I hadn't found rest.Config.Wrap. That's indeed simpler and makes it possible to limit the changes to just test/integration/util/util.go. I've switched to that - please take another look.
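Roughly, the injection via rest.Config.Wrap can look like this (a sketch, not the exact test code; it reuses the roundTripperFunc adapter from above, needs io, net/http, strings, and sync/atomic, and fails only the first PodSchedulingContext update so that the fallback path gets exercised):

var failedOnce atomic.Bool
config.Wrap(func(rt http.RoundTripper) http.RoundTripper {
	return roundTripperFunc(func(req *http.Request) (*http.Response, error) {
		// Inject a single 409 Conflict for PodSchedulingContext updates;
		// all other requests go through unchanged.
		if req.Method == http.MethodPut &&
			strings.Contains(req.URL.Path, "podschedulingcontexts") &&
			failedOnce.CompareAndSwap(false, true) {
			return &http.Response{
				StatusCode: http.StatusConflict,
				Header:     http.Header{"Content-Type": []string{"application/json"}},
				Body: io.NopCloser(strings.NewReader(
					`{"kind":"Status","apiVersion":"v1","status":"Failure","reason":"Conflict","code":409}`)),
				Request: req,
			}, nil
		}
		return rt.RoundTrip(req)
	})
})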
During scheduler_perf testing, roughly 10% of the PodSchedulingContext update operations failed with a conflict error. Using SSA would avoid that, but performance measurements showed that this causes a considerable slowdown (primarily because of the slower encoding with JSON instead of protobuf, but also because server-side processing is more expensive). Therefore a normal update is tried first and SSA only gets used when there has been a conflict. Using SSA in that case instead of giving up outright is better because it avoids another scheduling attempt.
Force-pushed from 4f2471f to 7cac1dc
/retest
LGTM label has been added. Git tree hash: 161c352c8fbf6248f634fb57cf0fbaffff45b7ac
@@ -187,6 +188,41 @@ func (p *podSchedulingState) publish(ctx context.Context, pod *v1.Pod, clientset
	logger.V(5).Info("Updating PodSchedulingContext", "podSchedulingCtx", klog.KObj(schedulingCtx))
}
_, err = clientset.ResourceV1alpha2().PodSchedulingContexts(schedulingCtx.Namespace).Update(ctx, schedulingCtx, metav1.UpdateOptions{})
Have you considered a regular patch? Is that any faster than SSA?
I haven't because the guidance is to use SSA. I've shared CPU and memory profiles with @apelisse; he is looking into the performance aspect.
Overall, I'm ok with this change, but I wonder if somehow we can take these updates out of the critical path. Do they need to happen in the scheduling cycle?
#120502 captures the discussion I had with @Huang-Wei about doing these updates in the background. The gist is that it would be possible, but it implies extending the scheduler framework in a non-trivial way because right now the plugin cannot handle errors when they occur outside of the normal callback functions - it's not clear yet how to do that. We agreed to first do this PR and then come back to the topic if it turns out to be a problem in practice (= making it more important) and/or when there's time for further investigations.
/hold cancel
The hold was for squashing, which was done. I chatted again with @apelisse about whether it would make sense to wait for SSA performance enhancements and he said that those will take more time. Let's move ahead with this PR as an interim solution. @alculquicondor: okay for approval?
Was there any conclusion about moving these calls out of Reserve?
The conclusion was "would be nice, but not absolutely required for beta" (but I intend to work on it anyway) and "not something that needs to be solved in this PR".
I need to look at the KEP, but I think whatever we can do to avoid impacting workloads that don't use DRA should be high in the priority list or even be a blocker for beta.
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: Huang-Wei, pohly. The full list of commands accepted by this bot can be found here. The pull request process is described here.
What type of PR is this?
/kind feature
What this PR does / why we need it:
During scheduler_perf testing, roughly 10% of the PodSchedulingContext update operations failed with a conflict error. Using SSA would avoid that, but performance measurements showed that this causes a considerable slowdown (primarily because of the slower encoding with JSON instead of protobuf, but also because server-side processing is more expensive).
Therefore a normal update is tried first and SSA only gets used when there has been a conflict. Using SSA in that case instead of giving up outright is better because it avoids another scheduling attempt.
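In code, the fallback amounts to roughly the following sketch (illustrative only, not the exact plugin code; the function name and the "kube-scheduler-dra" field manager are made up for the example):

import (
	"context"
	"encoding/json"

	resourcev1alpha2 "k8s.io/api/resource/v1alpha2"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// publishWithFallback tries an ordinary update first and only switches to
// server-side apply when the optimistic-concurrency check fails with a conflict.
func publishWithFallback(ctx context.Context, clientset kubernetes.Interface, schedulingCtx *resourcev1alpha2.PodSchedulingContext) error {
	_, err := clientset.ResourceV1alpha2().PodSchedulingContexts(schedulingCtx.Namespace).
		Update(ctx, schedulingCtx, metav1.UpdateOptions{})
	if err == nil || !apierrors.IsConflict(err) {
		return err
	}

	// Conflict: send only the fields owned by this component as an apply
	// patch, so the apiserver merges them instead of rejecting the update.
	patch, marshalErr := json.Marshal(map[string]any{
		"apiVersion": "resource.k8s.io/v1alpha2",
		"kind":       "PodSchedulingContext",
		"metadata":   map[string]any{"name": schedulingCtx.Name},
		"spec": map[string]any{
			"selectedNode":   schedulingCtx.Spec.SelectedNode,
			"potentialNodes": schedulingCtx.Spec.PotentialNodes,
		},
	})
	if marshalErr != nil {
		return marshalErr
	}
	force := true
	_, err = clientset.ResourceV1alpha2().PodSchedulingContexts(schedulingCtx.Namespace).
		Patch(ctx, schedulingCtx.Name, types.ApplyPatchType, patch,
			metav1.PatchOptions{FieldManager: "kube-scheduler-dra", Force: &force})
	return err
}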
Which issue(s) this PR fixes:
Related-to: #120502
Special notes for your reviewer:
On my machine, "stress" shows that the new test is stable:
That one failure seems unrelated:
This is a known issue (#115514) and it should be fixed. I double-checked that Shutdown() is called. Needs to be investigated if it occurs in practice. I got rid of this goroutine leak by not starting the event recorder in the first place - it's not needed.
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: