In SelectorSpreadPriority, consider all pods when scoring for zone #73711
Conversation
Hi @Ramyak. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Thanks for the PR!
Please add a release note to the PR description, instead of NONE, that explains the change.
/ok-to-test
/priority important-longterm
pkg/scheduler/testing/fake_lister.go
Outdated
@@ -53,6 +53,23 @@ func (f FakePodLister) List(s labels.Selector) (selected []*v1.Pod, err error) {
	return selected, nil
}

// Search returns []*v1.Pod matching all selectors
Looks like just returning selected []*v1.Pod is ok.
Updated comment. Thanks.
Sorry, I mean Search only needs to return []*v1.Pod; the , error seems redundant.
https://github.com/kubernetes/kubernetes/pull/73711/files#diff-6f5753a252698c11328724f8ed0b007cR51
error here is not optional. It is part of the interface definition.
all right. thanks
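For context, a minimal sketch of how such a lister interface could declare Search so that the error result is part of the contract even for implementations that never fail. This is illustrative only, assuming the method name and argument types from the discussion above; it is not the actual kubernetes source.

package scheduling

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// PodLister is an illustrative interface. Because Search is declared here
// with an error result, every implementation (including a fake lister that
// can never fail) must keep the error in its signature.
type PodLister interface {
	// List returns the pods matching a single selector.
	List(labels.Selector) ([]*v1.Pod, error)
	// Search returns the pods matching all of the given selectors.
	Search(selectors []labels.Selector) ([]*v1.Pod, error)
}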
	return true
}

return cache.FilteredList(matchAllSelctors, selectors[0])
Is the length of selectors always bigger than 0?
Yes, as far as I can tell it is checked in CalculateSpreadPriorityReduce, which calls Search.
Thanks for the review. Added a len(selectors) == 0 check here too.
Code looks good to me, but I didn’t check surroundings.
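To make the discussed guard concrete, here is a minimal self-contained sketch, assuming a simple slice-backed lister. The fakePodLister type and the loop below are illustrative; the PR's real implementation delegates to cache.FilteredList instead.

package scheduling

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// fakePodLister is a hypothetical slice-backed lister used only for illustration.
type fakePodLister []*v1.Pod

// Search returns the pods whose labels satisfy every selector. The empty-slice
// guard mirrors the len(selectors) == 0 check discussed above, so callers that
// pass no selectors get an empty result instead of an out-of-range panic.
func (f fakePodLister) Search(selectors []labels.Selector) ([]*v1.Pod, error) {
	if len(selectors) == 0 {
		return nil, nil
	}
	matchAllSelectors := func(p *v1.Pod) bool {
		for _, s := range selectors {
			if !s.Matches(labels.Set(p.Labels)) {
				return false
			}
		}
		return true
	}
	var selected []*v1.Pod
	for _, pod := range f {
		if matchAllSelectors(pod) {
			selected = append(selected, pod)
		}
	}
	return selected, nil
}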
maxCountByNodeName = result[i].Score
if len(selectors) > 0 {
	pods, _ := s.podLister.Search(selectors)
	if pods != nil && len(pods) > 0 {
You can drop the nil check; len() on a nil slice returns 0 in Go.
Done. Thanks.
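A tiny standalone snippet showing why the nil check is redundant in Go:

package main

import "fmt"

func main() {
	var pods []string // nil slice: never assigned, no backing array
	fmt.Println(pods == nil) // true
	fmt.Println(len(pods))   // 0: len of a nil slice is 0 by definition
	// Therefore `if len(pods) > 0` alone covers the nil case as well.
}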
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
@Ramyak can you please add a release note entry, just follow https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md
/test pull-kubernetes-e2e-gce-100-performance
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
@Ramyak would you rebase and resolve the conflicts, please?
Is it possible to reproduce the above in a unit test so that we know the effectiveness of the fix?
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: Ramyak
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@Ramyak: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What type of PR is this?
/kind bug
What this PR does / why we need it:
Predicates filter nodes. Existing pods on the filtered-out nodes will not be counted when calculating the max pods per zone, resulting in an imbalanced cluster.
If one zone is more loaded than the others, this makes it worse: more and more pods get scheduled into the same zone.
Fix: Consider all pods matching all the selectors when spreading across zones.
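As a rough sketch of the idea (not the scheduler's actual code), the per-zone counts used for normalization could be built from all pods matching the selectors, rather than only from pods on nodes that survived predicate filtering. The function below and the zoneOfPod helper are hypothetical.

package scheduling

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// zoneSpreadScores sketches the spreading idea: count matching pods per zone
// over the whole cluster, then give less loaded zones a higher 0-10 score so
// new replicas are steered toward them. zoneOfPod is a hypothetical helper
// that maps a pod to the failure-domain zone of its node.
func zoneSpreadScores(allPods []*v1.Pod, selectors []labels.Selector, zoneOfPod func(*v1.Pod) string) map[string]int64 {
	matchesAll := func(p *v1.Pod) bool {
		for _, s := range selectors {
			if !s.Matches(labels.Set(p.Labels)) {
				return false
			}
		}
		return true
	}

	// Count matching pods per zone across all pods, not only pods on the
	// nodes that passed predicate filtering.
	countByZone := map[string]int64{}
	var maxCount int64
	for _, p := range allPods {
		if !matchesAll(p) {
			continue
		}
		z := zoneOfPod(p)
		countByZone[z]++
		if countByZone[z] > maxCount {
			maxCount = countByZone[z]
		}
	}

	// Normalize: the most loaded zone scores 0; zones with fewer matching
	// pods score proportionally higher.
	scores := map[string]int64{}
	for z, c := range countByZone {
		scores[z] = 10 * (maxCount - c) / maxCount
	}
	return scores
}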
Which issue(s) this PR fixes:
Fixes #72916
Special notes for your reviewer:
Follow up to
PR: #72801
Issue: #71327
Does this PR introduce a user-facing change?:
/sig scheduling