Status of this incubator project? #104
Comments
I am not sure why you think so; the last release was 29 days ago: https://github.com/kubernetes-incubator/descheduler/releases/tag/v0.6.0
Yes, I don't think there was ever a plan to merge this project into an official K8 release. Is there something you are looking for that does not exist? Or could you tell us your exact concerns? Are you looking for the "strong bin packing" feature you mentioned before?
Awesome, thanks Avesh! The GitHub front page of the project shows activity by month, not by day, so the visual impression was that this project hadn't been touched in 2 months.
I have an internal ticket tracking this feature, with the impression (hope?) that it may mature into a built-in feature in K8. I was directed here after asking on Slack why K8 wasn't bin packing underutilized nodes. (Note: pod prioritization was also brought up as another option.) I'm sure you're familiar, but for the sake of discussion: here are four nodes, each with 10GB of memory. Two nodes have 4GB requested by pods, the other two have 6GB requested (4+4+6+6 = 20GB requested out of 40GB available).
Now I'd like to schedule two new pods, each with a 7GB memory request. This is where I'm a little confused. Without a descheduler (or higher pod priority), K8 will just sit there reporting insufficient memory. What I'd like to see (and what I expected before I started using K8) was that the scheduler would see there is insufficient memory, and then check whether there would be sufficient memory if some pods were rescheduled. It's my impression that you have written what I'm looking for, and that true bin packing could be achieved if this feature were rolled up in a hook for the scheduler? Did I grok this correctly?
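To make the scenario above concrete, here is a small illustrative sketch (not descheduler code, just a toy first-fit model): four hypothetical 10GB nodes holding 4, 4, 6, and 6 GB of requests, and a new 7GB pod. A scheduler that only places pods can never fit it, but repacking the existing pods frees whole nodes:

```python
CAPACITY = 10  # GB per node

def fits(node, pod):
    """A pod fits if the node's total requests stay within capacity."""
    return sum(node) + pod <= CAPACITY

def first_fit(nodes, pod):
    """Mimic a scheduler that only places new pods, never moves existing ones."""
    for node in nodes:
        if fits(node, pod):
            node.append(pod)
            return True
    return False

# Without rescheduling: free space is 6, 6, 4, 4 GB, so a 7GB pod fits nowhere.
nodes = [[4], [4], [6], [6]]
print(first_fit(nodes, 7))  # False

# With rescheduling (what a bin-packing descheduler could enable): pack 4+6
# onto one node and 4+6 onto another, leaving two nodes completely empty.
nodes = [[4, 6], [4, 6], [], []]
print(first_fit(nodes, 7) and first_fit(nodes, 7))  # True: both 7GB pods fit
```

The same total of 20GB is requested in both layouts; only the placement differs, which is exactly the gap a descheduler is meant to close.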
The default scheduler in kube has a non-default priority function, MostResourceAllocation (https://github.com/kubernetes/kubernetes/blob/master/pkg/scheduler/algorithm/priorities/most_requested.go). This priority function favors nodes with the most used resources. It would help at the time of pod admission, but not after that, so you could see whether enabling MostResourceAllocation helps you to some extent. We have plans to add a strong bin packing algorithm to descheduler that will work with the MostResourceAllocation priority function, but have not done so yet.
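For reference, enabling a non-default priority function was done through a kube-scheduler policy file at the time. The sketch below assumes the legacy policy API and uses the name `MostRequestedPriority`, which is how the function linked above was registered in the scheduler's policy configuration; treat this as an unverified illustration rather than a known-good config:

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "priorities": [
    {"name": "MostRequestedPriority", "weight": 1}
  ]
}
```

A file like this would be passed to kube-scheduler via its policy-config flag, replacing the default priority set, so any other priorities you still want must be listed alongside it.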
Awesome, thank you for the update. Yeah, MostResourceAllocation will help with initial scheduling, but as you mentioned, "not after that". Hopefully others will agree that saving thousands of dollars a year by improving bin packing would make a great feature request for K8 1.12 or 1.13. Keep up the good work until then! 👍
We too would definitely be interested in something similar; in non-production environments we'd want to cram nodes as much as possible, and have descheduler remove pods from low-utilised nodes in the hope that the cluster autoscaler will then remove those nodes. @aveshagarwal Is
Yes, descheduler could do that, but it does not do it now. Also, it is possible that you might not need descheduler, because as soon as the cluster autoscaler removes nodes, any pods on those nodes would get recreated on the remaining nodes, I think.
No, not in the pod spec; you would have to modify the scheduler policy for that.
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Hello! This feature seems fundamental to strong bin packing, but it's been months since the last update.
Is this project still active, and is there a timeline for merging it into an official K8 release?