
feature(scheduler): implement ClusterEventWithHint to filter out useless events #118551

Merged

merged 1 commit into kubernetes:master on Jun 26, 2023

Conversation

sanposhiho
Member

@sanposhiho sanposhiho commented Jun 8, 2023

What type of PR is this?

/kind feature

What this PR does / why we need it:

The EventsToRegister method in EnqueueExtension changed its return value from ClusterEvent to ClusterEventWithHint. ClusterEventWithHint allows each plugin to filter out useless events via a callback function named QueueingHintFn.
When the scheduling queue receives a cluster event, before moving each Pod from the unschedulable pod pool to activeQ/backoffQ, it calls the QueueingHintFn of every plugin that rejected that Pod in the previous scheduling cycle.
Depending on the values the QueueingHintFns return, the scheduling queue decides how to queue each Pod:

  • If at least one QueueingHintFn returns QueueImmediately, the Pod is queued to activeQ.
  • If no QueueingHintFn returns QueueImmediately and at least one plugin returns QueueAfterBackoff, the Pod is queued to backoffQ if it is still backing off, or to activeQ if its backoff has already finished.
  • If all QueueingHintFns return QueueSkip, the Pod is put back into the unschedulable pod pool.
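For reference, the new contract looks roughly like the following sketch (names follow the description above; the exact comments and signatures in the merged code may differ):

```go
// Sketch of the framework types described above. Assumes v1 is
// k8s.io/api/core/v1, and that ClusterEvent, QueueingHint, and the
// QueueSkip/QueueAfterBackoff/QueueImmediately values are defined in the
// scheduler framework package.
type ClusterEventWithHint struct {
	Event ClusterEvent
	// QueueingHintFn is called for each Pod this plugin rejected when Event
	// occurs. It is optional: a nil hint behaves like always returning
	// QueueAfterBackoff (see the migration notes below).
	QueueingHintFn QueueingHintFn
}

// QueueingHintFn inspects the event's old/new objects and tells the
// scheduling queue how to requeue the Pod.
type QueueingHintFn func(pod *v1.Pod, oldObj, newObj interface{}) QueueingHint
```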

Which issue(s) this PR fixes:

Part of #114297 (PoC: #117844)

Special notes for your reviewer:

/hold

This is a big change; we should involve another approver in the review.

Does this PR introduce a user-facing change?

Action required for custom scheduler plugin developers:
there is a breaking change in `EnqueueExtension` in the scheduling framework.
The `EventsToRegister` method in `EnqueueExtension` changed its return value from `ClusterEvent` to `ClusterEventWithHint`. `ClusterEventWithHint` allows each plugin to filter out useless events via a callback function named `QueueingHintFn`.
When the scheduling queue receives a cluster event, before moving each Pod from the unschedulable pod pool to activeQ/backoffQ, it calls the `QueueingHintFn` of every plugin that rejected that Pod in the previous scheduling cycle.
Depending on the values the `QueueingHintFn`s return, the scheduling queue decides how to queue each Pod:
- If at least one `QueueingHintFn` returns `QueueImmediately`, the Pod is queued to activeQ.
- If no `QueueingHintFn` returns `QueueImmediately` and at least one plugin returns `QueueAfterBackoff`, the Pod is queued to backoffQ if it is still backing off, or to activeQ if its backoff has already finished.
- If all `QueueingHintFn`s return `QueueSkip`, the Pod is put back into the unschedulable pod pool.

Registering appropriate QueueingHintFns reduces useless retries and thus improves the scheduler's overall performance.

**How can I migrate?**

For backward compatibility, a nil `QueueingHintFn` is treated as always returning QueueAfterBackoff.
So, if you just want to keep the existing behavior, you can register a `ClusterEventWithHint` with no `QueueingHintFn` in it, as shown in the sketch below.
But registering an appropriate `QueueingHintFn` is, of course, better from a scheduling performance perspective.
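A minimal migration sketch (`MyPlugin` and `isSchedulableAfterNodeChange` are hypothetical; `framework` stands for the scheduler framework package and `v1` for k8s.io/api/core/v1):

```go
// EventsToRegister now returns []ClusterEventWithHint instead of []ClusterEvent.
func (pl *MyPlugin) EventsToRegister() []framework.ClusterEventWithHint {
	return []framework.ClusterEventWithHint{
		// No QueueingHintFn: keeps the existing behavior, i.e. every
		// matching event is treated as QueueAfterBackoff.
		{Event: framework.ClusterEvent{Resource: framework.Pod, ActionType: framework.Delete}},
		// With a QueueingHintFn: clearly irrelevant Node events are skipped.
		{
			Event:          framework.ClusterEvent{Resource: framework.Node, ActionType: framework.Add | framework.Update},
			QueueingHintFn: pl.isSchedulableAfterNodeChange,
		},
	}
}

// isSchedulableAfterNodeChange is a hypothetical hint that skips events
// which cannot resolve this plugin's previous rejection of the Pod.
func (pl *MyPlugin) isSchedulableAfterNodeChange(pod *v1.Pod, oldObj, newObj interface{}) framework.QueueingHint {
	node, ok := newObj.(*v1.Node)
	if !ok {
		return framework.QueueAfterBackoff // unexpected object; fall back to the safe default
	}
	if node.Spec.Unschedulable {
		// A Node marked unschedulable cannot make this Pod schedulable.
		return framework.QueueSkip
	}
	return framework.QueueAfterBackoff
}
```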

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot k8s-ci-robot added do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. release-note Denotes a PR that will be considered when it comes time to generate release notes. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. kind/feature Categorizes issue or PR as related to a new feature. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Jun 8, 2023
@k8s-ci-robot k8s-ci-robot added area/test sig/node Categorizes an issue or PR as relevant to SIG Node. sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. sig/storage Categorizes an issue or PR as relevant to SIG Storage. sig/testing Categorizes an issue or PR as relevant to SIG Testing. approved Indicates a PR has been approved by an approver from all required OWNERS files. release-note-action-required Denotes a PR that introduces potentially breaking changes that require user action. and removed do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. release-note Denotes a PR that will be considered when it comes time to generate release notes. labels Jun 8, 2023
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 8, 2023
@sanposhiho
Member Author

/triage accepted
/priority important-longterm

We want to prioritize this because it'll contribute to efficient enqueueing overall along with other changes for #114297.

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Jun 8, 2023
@sanposhiho sanposhiho force-pushed the event-to-register branch 2 times, most recently from a6be258 to b298320, on June 8, 2023 01:03
Contributor

@pohly pohly left a comment


/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jun 26, 2023
@k8s-ci-robot
Contributor

LGTM label has been added.

Git tree hash: e6836322b5a947db2d3819d548f4cd63eac50290

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: alculquicondor, pohly, sanposhiho

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@sanposhiho
Member Author

/unhold

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jun 26, 2023
@k8s-ci-robot k8s-ci-robot merged commit d971407 into kubernetes:master Jun 26, 2023
15 checks passed
SIG Node CI/Test Board automation moved this from Archive-it to Done Jun 26, 2023
SIG Node PR Triage automation moved this from Needs Reviewer to Done Jun 26, 2023
@k8s-ci-robot k8s-ci-robot added this to the v1.28 milestone Jun 26, 2023
Member

@Huang-Wei Huang-Wei left a comment


Thanks @sanposhiho !

The logic looks good, especially honoring unschedulablePlugins before evaluating the schedulingHint, which guarantees the implementation stays performant. Some comments below.

pkg/scheduler/scheduler_test.go
```go
// It's rare that a plugin implements EnqueueExtensions but returns nil.
// We treat it as: the plugin is not interested in any event, and hence pod failed by that plugin
// cannot be moved by any regular cluster event.
if len(events) == 0 {
```
Member


Can we keep the equivalent logic present, and test it with a plugin that has empty events?

Member Author


We keep the equivalent logic in buildQueueingHintMap in scheduler.go, and it's tested here:
https://github.com/sanposhiho/kubernetes/blob/6f8d38406a7f16fc9cc9b72789a9b826105b1b54/pkg/scheduler/scheduler_test.go#L919

(But, for now, regardless of how we treat such plugins, we register all events into EventHandler: #118551 (comment))

I'll move the comment to buildQueueingHintMap to describe this case.

```go
// As converts two objects to the given type.
// Both objects must be of the same type. If not, an error is returned.
// nil objects are allowed and will be converted to nil.
func As[T runtime.Object](oldObj, newobj interface{}) (T, T, error) {
```
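For illustration, a hedged sketch of how such a helper could be used from a QueueingHintFn (the surrounding function and its logic are hypothetical; `As` is assumed to be exported from the framework package, per the file referenced below):

```go
// Hypothetical caller: convert the untyped event payload once, instead of
// hand-writing type assertions in every hint function.
func isSchedulableAfterPodDeleted(pod *v1.Pod, oldObj, newObj interface{}) framework.QueueingHint {
	// For Delete events, oldObj carries the deleted Pod and newObj is nil;
	// As converts nil to a typed nil per its contract above.
	deletedPod, _, err := framework.As[*v1.Pod](oldObj, newObj)
	if err != nil {
		return framework.QueueAfterBackoff // shouldn't happen for registered events
	}
	if deletedPod != nil && deletedPod.Spec.NodeName != "" {
		// A scheduled Pod went away, so resources may have been freed.
		return framework.QueueAfterBackoff
	}
	return framework.QueueSkip
}
```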
Member


hrm, what's the point of this function, and is it used anywhere?

Member Author


pkg/scheduler/framework/types.go
```go
//
// NOTE: this function assumes lock has been acquired in caller
func (p *PriorityQueue) requeuePodViaQueueingHint(logger klog.Logger, pInfo *framework.QueuedPodInfo, schedulingHint framework.QueueingHint, event string) string {
	if schedulingHint == framework.QueueSkip {
```
Member


This won't happen, right? On the caller side, the logic would have returned early when the hint is QueueSkip.

Member Author


Sorry for the confusion, but it can happen after #118438.

@sanposhiho
Member Author

@Huang-Wei Thanks for the additional reviews. I replied to some and followed up on the rest in #119077 🙏

```
@@ -78,6 +78,62 @@ const (
	WildCard GVK = "*"
)

type ClusterEventWithHint struct {
```
Member

@ahg-g ahg-g Jul 4, 2023


The name looks odd and too specific; why are we adding a new type? Why not place QueueingHintFn inside ClusterEvent?

Member Author


I'd agree with renaming it if we can find a better name, but I prefer to keep the ClusterEvent struct as it is (i.e., without the hint function), separate from the struct that carries both the event and the QueueingHintFn.
ClusterEvent literally represents the event itself; the function that helps the scheduling queue requeue Pods is not part of the event.

Member


I guess the question is: do we ever use ClusterEvent on its own?

Member


I think in some logic, like MoveAllToActiveOrBackoffQueue(), ClusterEvent is used without a hintFn.

@sanposhiho
Member Author

@ahg-g Thanks for the additional comments, I addressed them in #119077

```go
	continue
}

if h == framework.QueueImmediately {
```
Member


Revisit this logic: if a pod is rejected by several plugins, and only one plugin's queueingHint returns QueueImmediately, how can we tell the pod is probably schedulable? It might be rejected by another plugin again.

The more reasonable logic here would be: if all the unschedulable plugins return QueueImmediately, then we enqueue the pod for scheduling as soon as possible.

Member


We discussed this at some point. The thinking was the following:

Let's imagine that a pod is unschedulable for 2 reasons:

  • pod affinity
  • node resources

For the pod to become schedulable, two events need to happen, for example: a pod gets scheduled (resolving the affinity), and a pod finishes (freeing resources).
For each event, we will have the following responses from the hints: (skip, requeue) and (requeue, skip), so we should requeue in both cases (I'm not taking the "Immediately" part into consideration here, to simplify).

But rethinking it: after we observe the first event, the pod would be requeued and would have only one remaining reason for being unschedulable. Then all plugins (the only one left) could return QueueImmediately. I think you might be right. Thoughts @pohly @sanposhiho?

Member Author


Let me explain my take on this topic. cc @AxeZhan @ahg-g as we're talking about a similar thing in another thread.

What we need first is a clear definition of when to return QueueAfterBackoff and when to return QueueImmediately, because it's currently unclear (ref).
Then we should decide how to handle QueueImmediately based on that definition.


First, BackoffQ is a light way of keeping throughput high by preventing pods that are "permanently unschedulable" from blocking the queue. (quote from #117561 (comment))

Based on that, in my opinion, when to return QueueAfterBackoff vs. QueueImmediately should be decided not by the likelihood that the event makes this Pod schedulable, but by the reason the Pod is in the unschedQ now.
We can split the reasons a Pod is put back into the scheduling queue into these:

  • scheduling failures, like PodAffinity rejecting the Pod in Filter, NodeResourcesFit rejecting the Pod in Filter, etc.
  • non-scheduling failures, like DRA needing to wait for the claim to be provisioned, or for the schedulingcontext to be updated by the driver, etc.

So I'm thinking these map directly onto when to return QueueAfterBackoff (the former) and when to return QueueImmediately (the latter).

We should always force Pods to honor backoff if they were rejected by a scheduling failure.
In no case can we say that such a Pod will definitely get scheduled in the next scheduling cycle, because the cluster's situation keeps changing from moment to moment and any plugin can reject the Pod in the next cycle.

OTOH, it's OK to skip backoff for Pods that weren't rejected by a scheduling failure; such Pods have no obligation to go through backoffQ.

That's my thinking. Specifically, DRA is the only in-tree plugin that causes Pods to be pushed back into the scheduling queue by a non-scheduling failure, meaning DRA is the only plugin that can return QueueImmediately. Other plugins, like the NodeAffinity one we discussed, are all Filter plugins that cause Pods to be pushed back by scheduling failures, meaning they can only return QueueAfterBackoff. (See the sketch below.)

And, to answer @kerthcet's question at the top of this thread, I prefer to keep the current logic. If we go with my definition, we can assume plugins return QueueImmediately only when they think the Pod is in the unschedQ due to a non-scheduling failure, and again, Pods that failed for non-scheduling reasons have no obligation to go through backoffQ.

What do you all think?
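To make this distinction concrete, a minimal sketch (both hint functions are hypothetical; the second models a DRA-style wait on an external driver):

```go
// Scheduling failure (e.g., a Filter rejection): always honor backoff.
// Even if the event likely resolves the past rejection, the cluster keeps
// changing, so the Pod should still go through backoffQ.
func hintAfterFilterFailure(pod *v1.Pod, oldObj, newObj interface{}) framework.QueueingHint {
	return framework.QueueAfterBackoff
}

// Non-scheduling failure (e.g., waiting for a claim to be provisioned):
// the Pod was parked only to wait for an external driver, so there is no
// reason to make it back off.
func hintAfterClaimProvisioned(pod *v1.Pod, oldObj, newObj interface{}) framework.QueueingHint {
	return framework.QueueImmediately
}
```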

Member


Correct, DRA is the only plugin that could return QueueImmediately given that guidance.
However, if DRA was NOT the only reason the pod was unschedulable, does it make sense to queue immediately? Do we have the ability to distinguish that? I suppose not.

Member Author


if DRA was NOT the only reason why a pod is unschedulable, then does it make sense to queue immediately?

It makes sense if we follow my guidance above.
Regardless of whether DRA is the only reason, or other unschedulable plugins are registered for the Pod as well, we cannot be completely certain that the Pod will be scheduled successfully when an incoming event resolves DRA's past rejection. That depends on how each plugin is implemented and how the cluster situation changes, which we cannot fully control.

So the only thing we do is determine whether the past rejection was due to a scheduling failure or a non-scheduling failure, and decide to honor or skip backoff based only on that. We can skip backoff for non-scheduling failures because there is no reason for such Pods to experience backoff, not because the Pod is more likely to be schedulable in the next scheduling cycle.

Do we have the ability to distinguish that? I suppose not.

Currently, no. But given that QueueImmediately comes only from the DRA plugin among in-tree plugins, at least for now, we can rethink this improvement when a use case demands it.

Member


But "DRA needs to wait for the claim to be provisioned" is also a scheduling failure, and it can happen together with failures from other plugins. We have a similar situation in volume binding when the PV doesn't exist yet.

Another input: if one failed plugin returns QueueSkip, can we tell that the pod isn't worth requeueing, so that the hint is not overridden by QueueAfterBackoff? (I'm not considering QueueImmediately here.)

Member Author


DRA is different from PV: as you can see, it has Reserve() to reserve/allocate claims (it has WaitForFirstConsumer etc.):
https://github.com/kubernetes/kubernetes/blob/v1.27.1/pkg/scheduler/framework/plugins/dynamicresources/dynamicresources.go#L700

That's obviously not a scheduling failure; it's just waiting for the external resource driver to do something, while the scheduler has successfully decided where the Pod can go.

Another input: if one failed plugin returns QueueSkip, can we tell that the pod isn't worth requeueing, so that the hint is not overridden by QueueAfterBackoff?

No, we shouldn't do that. We need to remember that the unschedulable plugins (set by the Filter plugins) rejected some Nodes; it doesn't mean that all of them rejected all Nodes. So even if pluginA and pluginB are both registered as unschedulable plugins, we may only need to resolve the failure of one of them to get the Pod scheduled.

Member Author


I have one proposal from the discussion here: #119517.
PTAL when you get a chance. 🙏
