Unify priority functions pattern #51455

Closed
guangxuli opened this Issue Aug 28, 2017 · 8 comments

@guangxuli
Contributor

guangxuli commented Aug 28, 2017

What happened:
Unify priority functions pattern by using map/reduce pattern. In current priority function implementation, there are two patterns:

  • map/reduce
  • one whole function

Three priority functions of the whole-function kind, CalculateSpreadPriority, CalculateInterPodAffinityPriority, and CalculateAntiAffinityPriority, still do not use the map/reduce pattern.
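
For reference, a minimal, self-contained sketch of the map/reduce shape is below; the types and signatures are simplified stand-ins for illustration, not the actual scheduler API:

```go
package main

import "fmt"

// HostPriority is a stand-in for the scheduler's per-node score type.
type HostPriority struct {
	Host  string
	Score int
}

// PriorityMapFunction scores one node at a time; PriorityReduceFunction
// normalizes the per-node scores once all of them are known.
type PriorityMapFunction func(pod, node string) HostPriority
type PriorityReduceFunction func(pod string, result []HostPriority)

// prioritizeNodes runs one map/reduce pair over all nodes, the way a unified
// PrioritizeNodes loop could treat every priority function uniformly.
func prioritizeNodes(pod string, nodes []string, mapFn PriorityMapFunction, reduceFn PriorityReduceFunction) []HostPriority {
	result := make([]HostPriority, 0, len(nodes))
	for _, n := range nodes {
		result = append(result, mapFn(pod, n)) // map: score each node independently
	}
	if reduceFn != nil {
		reduceFn(pod, result) // reduce: normalize scores across all nodes
	}
	return result
}

func main() {
	// Toy spreading example: more matching pods already on a node -> lower final score.
	podsOnNode := map[string]int{"node-a": 3, "node-b": 1, "node-c": 0}
	mapFn := func(pod, node string) HostPriority {
		return HostPriority{Host: node, Score: podsOnNode[node]}
	}
	reduceFn := func(pod string, result []HostPriority) {
		max := 0
		for _, r := range result {
			if r.Score > max {
				max = r.Score
			}
		}
		for i := range result {
			if max > 0 {
				result[i].Score = (max - result[i].Score) * 10 / max
			} else {
				result[i].Score = 10
			}
		}
	}
	fmt.Println(prioritizeNodes("my-pod", []string{"node-a", "node-b", "node-c"}, mapFn, reduceFn))
}
```

The point is that PrioritizeNodes would only need one loop over registered map/reduce pairs, instead of special-casing the whole-function priorities.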

I have already refactored CalculateSpreadPriority into CalculateSpreadPriorityMap/CalculateSpreadPriorityReduce and sent a PR (#51192) for it.

If all priority functions are refactored to the map/reduce pattern, we will benefit from:

  • a simpler PrioritizeNodes process
  • a simpler priority function registration process, etc.
  • removal of all deprecated pieces, e.g. PriorityFunctionFactory
  • unification of RegisterPriorityFunction and RegisterPriorityFunction2 (see the registration sketch below)

This is not an exhaustive list; I just hope to start the discussion.
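
As a rough illustration of the registration point above (hypothetical names, not the current factory code), a single registration entry point could look like:

```go
package main

import "fmt"

// Stand-in function types; the real scheduler types are more involved.
type PriorityMapFunction func(pod, node string) int
type PriorityReduceFunction func(pod string, scores map[string]int)

type priorityConfig struct {
	mapFn    PriorityMapFunction
	reduceFn PriorityReduceFunction
	weight   int
}

var priorityRegistry = map[string]priorityConfig{}

// RegisterPriority is the kind of single entry point that could replace both
// RegisterPriorityFunction and RegisterPriorityFunction2 once every priority is
// expressed as a map/reduce pair; reduceFn may be nil when no normalization
// step is needed.
func RegisterPriority(name string, mapFn PriorityMapFunction, reduceFn PriorityReduceFunction, weight int) {
	priorityRegistry[name] = priorityConfig{mapFn: mapFn, reduceFn: reduceFn, weight: weight}
}

func main() {
	RegisterPriority("SelectorSpreadPriority",
		func(pod, node string) int { return 0 }, // placeholder map function
		nil, // no reduce step for this example
		1)
	fmt.Println(len(priorityRegistry), "priority function(s) registered")
}
```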
What you expected to happen:

Unify the priority functions pattern if we can.
I am not sure every priority function fits the map/reduce pattern as currently defined, but CalculateSpreadPriority is definitely one that needs this refactoring.

/cc @k82cn @resouer @davidopp

@k82cn

Member

k82cn commented Aug 28, 2017

/sig scheduling

@resouer

Member

resouer commented Aug 30, 2017

I'd prefer to get input from @wojtek-t for further discussion.

@wojtek-t

Member

wojtek-t commented Aug 30, 2017

Yes - I've always wanted to do that but never really had time for it.

I'm happy to review your PR for spreading, but will not have time before code freeze (and I think we shouldn't rush for 1.8 with that).

Pod affinity/anti-affinity might be trickier to do, because they really depend on the whole state of the cluster. But maybe it's actually time to think about and solve pod affinity/anti-affinity more properly.
If someone has time to optimize this (I mean pod affinity/anti-affinity), I will very strongly support this effort (and I'm pretty sure @davidopp will agree with that) and am happy to provide feedback and/or reviews on that.

@k82cn

Member

k82cn commented Aug 31, 2017

@wojtek-t , one more input for this: add an interface for those algorithms that includes Name, ConditionChanged, and Mandatory. For example, the kubelet and the scheduler should both enforce mandatory predicates, and for the equivalence cache we can ask an algorithm whether a condition it cares about has changed.
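
Roughly something like the sketch below; all names are illustrative placeholders, not existing scheduler code:

```go
// Package scheduler sketch: every name here is illustrative, not existing code.
package scheduler

// ClusterEvent is a stand-in for whatever describes a change in cluster state.
type ClusterEvent string

// Algorithm is the kind of common interface being suggested: every predicate or
// priority could expose its name, whether it is mandatory, and whether a given
// change affects the conditions it depends on.
type Algorithm interface {
	// Name identifies the algorithm for registration and logging.
	Name() string
	// Mandatory reports whether the check must be enforced by both the kubelet
	// and the scheduler (mandatory predicates).
	Mandatory() bool
	// ConditionChanged reports whether the given event touches state this
	// algorithm depends on, so an equivalence cache knows when to invalidate.
	ConditionChanged(event ClusterEvent) bool
}
```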

@guangxuli

Contributor Author

guangxuli commented Aug 31, 2017

Thanks @wojtek-t.
Yes, pod affinity/anti-affinity does not seem to fit the map/reduce pattern directly: it has to traverse all relevant nodes in the cluster to compute the count for one pod. I tried adding the node list to the metadata, and also adding a new node-list parameter to the current Map function prototype to get all node information, but all of these attempts felt ugly.

@wojtek-t

Member

wojtek-t commented Aug 31, 2017

@wojtek-t , one more input for this: add an interface for those algorithms that includes Name, ConditionChanged, and Mandatory. For example, the kubelet and the scheduler should both enforce mandatory predicates, and for the equivalence cache we can ask an algorithm whether a condition it cares about has changed.

Sorry, I think I'm not really following. Can you clarify?

Yes, pod affinity/anti-affinity does not seem to fit the map/reduce pattern directly: it has to traverse all relevant nodes in the cluster to compute the count for one pod. I tried adding the node list to the metadata, and also adding a new node-list parameter to the current Map function prototype to get all node information, but all of these attempts felt ugly.

Yes - this one is tricky. I think no matter how hard we try, if we want to solve it efficiently, we need to consider all nodes at once. It seems that the workaround here might be to precompute the results in the precomputation phase and have the real map/reduce functions just grab those results from the metadata that is passed to them. I can't see any better option for doing this.
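
A minimal sketch of that precomputation idea, using simplified stand-in types rather than the real scheduler ones:

```go
package main

import "fmt"

// HostPriority is a stand-in for the scheduler's per-node score type.
type HostPriority struct {
	Host  string
	Score int
}

// priorityMetadata is computed once per scheduling cycle, before the map/reduce
// functions run, while the whole node list is still in hand.
type priorityMetadata struct {
	// affinityScores holds, per node, the pod affinity/anti-affinity result that
	// required looking at every node (and every existing pod) to compute.
	affinityScores map[string]int
}

// precomputeAffinity walks all nodes once and stores the expensive cluster-wide
// result in the metadata.
func precomputeAffinity(pod string, nodes []string) *priorityMetadata {
	scores := make(map[string]int, len(nodes))
	for i, n := range nodes {
		// Stand-in for the real cross-node affinity/anti-affinity computation.
		scores[n] = i
	}
	return &priorityMetadata{affinityScores: scores}
}

// interPodAffinityMap is the per-node map function: it only reads the
// precomputed result instead of re-traversing the cluster for every node.
func interPodAffinityMap(pod string, meta *priorityMetadata, node string) HostPriority {
	return HostPriority{Host: node, Score: meta.affinityScores[node]}
}

func main() {
	nodes := []string{"node-a", "node-b", "node-c"}
	meta := precomputeAffinity("my-pod", nodes) // precomputation phase, runs once
	for _, n := range nodes {
		fmt.Println(interPodAffinityMap("my-pod", meta, n)) // cheap per-node map calls
	}
}
```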

Anyway - if you have any better idea, let me know - I'm open to any suggestions.

@wojtek-t

Member

wojtek-t commented Aug 31, 2017

BTW - this issue is actually a duplicate of a long-standing one: #24246

We should probably close this and move this discussion there.

@guangxuli

Contributor Author

guangxuli commented Nov 2, 2017

will close this issue, per #24246.

@guangxuli guangxuli closed this Nov 2, 2017

k8s-merge-robot added a commit that referenced this issue Nov 19, 2017

Merge pull request #51192 from guangxuli/scheduler_priority_functions_map_reduce

Automatic merge from submit-queue (batch tested with PRs 51192, 55010). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Refactoring of priority function (CalculateSpreadPriority) by using the map/reduce pattern

**What this PR does / why we need it**:
Ref #24246, and more specifically #51455: this PR aims to unify the (deprecated-style) priority functions by using the map/reduce pattern.
This is the first step; my todo list (WIP):
- inter-pod-affinity priority function refactoring
- the priority function registration pattern
- removal of the deprecated priority function definition and all related logic, etc.

**Which issue this PR fixes**:

No specific issue; it just unifies the priority functions pattern.

**Special notes for your reviewer**:
none
**Release note**:
none

k8s-merge-robot added a commit that referenced this issue Jan 4, 2018

Merge pull request #56317 from guangxuli/refactor_priority_CalculateAntiAffinityPriority

Automatic merge from submit-queue (batch tested with PRs 57696, 57821, 56317). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Change priority function (CalculateAntiAffinityPriority) into the map/reduce pattern

**What this PR does / why we need it**:
Ref #24246, and more specifically #51455: this PR aims to unify the (deprecated-style) priority functions by using the map/reduce pattern.
The previous related PR is #51192.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
None
**Special notes for your reviewer**:

**Release note**:

```release-note
None
```