
Status of this incubator project? #104

Closed
u2mejc opened this issue Jul 10, 2018 · 10 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@u2mejc

u2mejc commented Jul 10, 2018

Hello! This feature seems fundamental to strong bin packing, but it's been months since the last update.

Is this project still active, and is there a timeline to have it merged into an official K8s release?

@aveshagarwal
Contributor

aveshagarwal commented Jul 10, 2018

Hello! This feature seems fundamental to strong bin packing, but it's been months since the last update.

I am not sure why you think so; the last release was 29 days ago: https://github.com/kubernetes-incubator/descheduler/releases/tag/v0.6.0

Is this project still active

Yes,

and is there a timeline to have it merged into an official K8s release?

I don't think there was ever a plan to merge this project into an official K8s release.

Is there something you are looking for that does not exist? Could you describe your exact concerns? Are you looking for the "strong bin packing" feature you mentioned?

@u2mejc
Author

u2mejc commented Jul 11, 2018

Hello! This feature seems fundamental to strong bin packing, but it's been months since the last update.

I am not sure why you think so; the last release was 29 days ago: https://github.com/kubernetes-incubator/descheduler/releases/tag/v0.6.0

Is this project still active

Yes,

Awesome, thanks Avesh! The GitHub front page of the project shows timestamps in months, not days, so the impression it gives is that this project hadn't been touched in 2 months.

and is there a timeline to have it merged into an official K8s release?

I don't think there was ever a plan to merge this project into an official K8s release.

Is there something you are looking for that does not exist? Could you describe your exact concerns? Are you looking for the "strong bin packing" feature you mentioned?

I have an internal ticket tracking this feature, with the impression (hope?) that it may mature into a built-in feature in K8s.

I was directed here after asking on Slack why K8s wasn't bin packing underutilized nodes. (Note: pod prioritization was also brought up as another option.)


I'm sure you're familiar, but for the sake of discussion: here are four nodes, each with 10GB of memory. Two nodes have 4GB requested by pods, the other two have 6GB requested (4+4+6+6 = 20GB requested out of 40GB available).

+--+ +--+ +--+ +--+
|  | |  | |  | |  |
|  | |  | |  | |  |
|  | |  | |  | |  |
|  | |  | |  | |  |
|  | |##| |  | |##|
|  | |##| |  | |##|
|##| |##| |##| |##|
|##| |##| |##| |##|
|##| |##| |##| |##|
|##| |##| |##| |##|
+--+ +--+ +--+ +--+

Now I'd like to schedule two new pods, each with a 7GB memory request. This is where I'm a little confused.

Without a descheduler (or higher pod priority), K8s will sit there saying "No nodes are available that match all of the predicates: Insufficient memory." Might the autoscaler then add two nodes to meet scheduling demand, another 20GB of capacity, leaving 26GB of unutilized RAM and a 50% increase in VM cost?

What I'd like to see (and expected before I started using K8s) is that the scheduler would see there is insufficient memory, and then check whether there would be sufficient memory if some pods were rescheduled.

It's my impression that you have written what I'm looking for, and that true bin packing could be achieved if this feature were rolled into a hook for the scheduler. Did I grok this correctly?
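To make the arithmetic above concrete: with two extra autoscaled nodes the cluster would have 6 × 10GB = 60GB of capacity against 4+4+6+6+7+7 = 34GB of requests, i.e. 26GB idle and 50% more nodes, whereas a repack fits the same 34GB onto the original four nodes. The following is a minimal Go sketch of that feasibility check only, not the descheduler's actual algorithm; it assumes, for simplicity, that the existing workload on each node can be moved as a single 4GB or 6GB unit.

// Illustrative sketch only: check (a) whether the two pending 7GB pods fit on
// the nodes as-is, and (b) whether they would fit if existing pods could be
// repacked with a first-fit-decreasing bin pack.
package main

import (
    "fmt"
    "sort"
)

const nodeCapacityGB = 10

// fitsAsIs places pending pods onto nodes using first-fit against the
// remaining free memory, without moving any existing pods.
func fitsAsIs(requestedPerNode, pending []int) bool {
    free := make([]int, len(requestedPerNode))
    for i, r := range requestedPerNode {
        free[i] = nodeCapacityGB - r
    }
    return placeAll(pending, free)
}

// fitsIfRepacked asks: if every pod (existing + pending) could be rescheduled,
// would a first-fit-decreasing packing fit them onto the same number of nodes?
func fitsIfRepacked(existingPods, pending []int, nodes int) bool {
    all := append(append([]int{}, existingPods...), pending...)
    sort.Sort(sort.Reverse(sort.IntSlice(all)))
    free := make([]int, nodes)
    for i := range free {
        free[i] = nodeCapacityGB
    }
    return placeAll(all, free)
}

// placeAll assigns each request to the first node with enough free memory.
func placeAll(requests, free []int) bool {
    for _, r := range requests {
        placed := false
        for i := range free {
            if free[i] >= r {
                free[i] -= r
                placed = true
                break
            }
        }
        if !placed {
            return false
        }
    }
    return true
}

func main() {
    requestedPerNode := []int{4, 4, 6, 6} // 20GB requested out of 40GB
    existingPods := []int{4, 4, 6, 6}     // simplification: one movable unit per node
    pending := []int{7, 7}                // the two new 7GB pods

    fmt.Println("fits without rescheduling:", fitsAsIs(requestedPerNode, pending))      // false
    fmt.Println("fits if pods are repacked:", fitsIfRepacked(existingPods, pending, 4)) // true
}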

@aveshagarwal
Contributor

The default scheduler in kube has a non-default priority function: MostResourceAllocation (https://github.com/kubernetes/kubernetes/blob/master/pkg/scheduler/algorithm/priorities/most_requested.go). This priority function favors nodes with the most requested resources. It would help at the time of pod admission, but not after that. So you could check whether enabling MostResourceAllocation helps you to some extent.

We have plans to add a strong bin packing algorithm to the descheduler that will work with the MostResourceAllocation priority function, but we have not done so yet.

@u2mejc
Author

u2mejc commented Jul 12, 2018

Awesome, thank you for the update. Yeah, MostResourceAllocation will help with initial scheduling, but as you mentioned, "not after that". Hopefully others will agree that saving thousands of dollars a year by improving bin packing would be a great feature request for K8s 1.12 or 1.13. Keep up the good work until then! 👍

@Evesy

Evesy commented Aug 2, 2018

We too would definitely be interested in something similar; in non-production environments we'd want to cram nodes as much as possible and have the descheduler remove pods from low-utilization nodes, in the hope that the cluster autoscaler will then remove those nodes.

@aveshagarwal Is MostResourceAllocation something that can be specified as part of the pod spec or something similar, or is it an option for the scheduler? I'd be interested in enabling it, but I'm not sure it's possible on GKE.

@aveshagarwal
Contributor

We too would definitely be interested in something similar; in non-production environments we'd want to cram nodes as much as possible and have the descheduler remove pods from low-utilization nodes, in the hope that the cluster autoscaler will then remove those nodes.

Yes, the descheduler could do that, but it does not do it now. Also, it is possible that you might not need the descheduler, because as soon as the cluster autoscaler removes nodes, any pods on those nodes would get recreated on the remaining nodes, I think.

@aveshagarwal Is MostResourceAllocation something that can be specified as part of the pod spec or something similar, or is it an option for the scheduler?

No, not the pod spec; you would have to modify the scheduler policy for that.
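For concreteness, a minimal sketch of what modifying the scheduler policy could look like, assuming the priority implemented in most_requested.go is registered under the name MostRequestedPriority and that kube-scheduler is pointed at the file with --policy-config-file. Supplying a policy like this replaces the scheduler's default predicates and priorities, so a real policy would need to list every predicate and priority you still want; the entries below are illustrative only, and on a managed control plane such as GKE you typically cannot change the scheduler's flags at all.

{
    "kind": "Policy",
    "apiVersion": "v1",
    "predicates": [
        {"name": "PodFitsResources"},
        {"name": "PodFitsHostPorts"},
        {"name": "MatchNodeSelector"}
    ],
    "priorities": [
        {"name": "MostRequestedPriority", "weight": 1}
    ]
}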

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 24, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 25, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

ingvagabund pushed a commit to ingvagabund/descheduler that referenced this issue Jan 5, 2024
WRKLDS-884: Source Makefile from github.com/openshift/build-machinery-go