Balanced resource allocation priority to include volume count on nodes. #60525
Scheduler balanced resource allocation priority to include volume count on nodes.
Changed the title from "[WIP] Balanced resource allocation priority to include volume count on nodes." to "Balanced resource allocation priority to include volume count on nodes." on Mar 3, 2018.
A sample usage of the structure can be found at https://github.com/ravisantoshgudimetla/kubernetes/blob/c6b9f133fe381f91394df02b6cf0f4c41dd5d6fe/pkg/scheduler/algorithm/priorities/resource_allocation.go. Please let me know your thoughts on the approach.
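To make the linked approach concrete, here is a minimal, hypothetical sketch of the idea behind `resource_allocation.go`: a single priority struct holds a name plus a scoring function, so CPU/memory scoring and volume-count scoring can share the same plumbing. All type and field names below are illustrative, not the actual Kubernetes API.

```go
package main

import "fmt"

// nodeResources is an illustrative stand-in for the per-node data the
// scheduler tracks; the real NodeInfo struct is richer than this.
type nodeResources struct {
	RequestedCPU    int64 // millicores requested by pods on the node
	AllocatableCPU  int64 // millicores the node can allocate
	AttachedVolumes int64 // volumes currently attached
	MaxVolumes      int64 // maximum attachable volumes for this node
}

// resourceAllocationPriority pairs a priority name with a scorer, so
// multiple allocation priorities can reuse the same structure.
type resourceAllocationPriority struct {
	Name   string
	scorer func(n nodeResources) int64
}

// balancedScorer favors nodes whose CPU fraction and volume fraction
// are close to each other: a smaller spread yields a higher score on
// a 0..10 scale.
func balancedScorer(n nodeResources) int64 {
	cpuFrac := float64(n.RequestedCPU) / float64(n.AllocatableCPU)
	volFrac := float64(n.AttachedVolumes) / float64(n.MaxVolumes)
	diff := cpuFrac - volFrac
	if diff < 0 {
		diff = -diff
	}
	return int64((1 - diff) * 10)
}

func main() {
	p := resourceAllocationPriority{Name: "BalancedResourceAllocation", scorer: balancedScorer}
	// A node with evenly used CPU and volumes scores higher than one
	// with heavy CPU use but almost no attached volumes.
	balanced := nodeResources{RequestedCPU: 500, AllocatableCPU: 1000, AttachedVolumes: 8, MaxVolumes: 16}
	skewed := nodeResources{RequestedCPU: 900, AllocatableCPU: 1000, AttachedVolumes: 1, MaxVolumes: 16}
	fmt.Println(p.Name, p.scorer(balanced), p.scorer(skewed))
}
```

The point of the shared struct is that adding volume count does not require a new priority pipeline, only a new scorer function plugged into the same shape.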
That's why the PR has grown to 39 files and counting :(.
I had started along similar lines, but putting this struct inside the NodeInfo struct would be cleaner. I will use this approach.
Good point. Until now, most of the fields in the struct have been updated at only one location. I will make sure we have locking in place for the transient node info.
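The locking concern above can be sketched as follows: a small struct holding transient, per-scheduling-cycle volume counters, guarded by a mutex so concurrent updates stay consistent. Names here are hypothetical, not the actual NodeInfo implementation.

```go
package main

import (
	"fmt"
	"sync"
)

// transientSchedulerInfo holds scratch data that is valid only for the
// duration of one scheduling cycle. The mutex protects it when several
// goroutines touch the same node concurrently.
type transientSchedulerInfo struct {
	mu                 sync.Mutex
	allocatableVolumes int // illustrative: volumes the node could still accept
	requestedVolumes   int // illustrative: volumes requested so far this cycle
}

// addVolumes records newly requested volumes under the lock.
func (t *transientSchedulerInfo) addVolumes(n int) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.requestedVolumes += n
}

// reset clears the transient fields between scheduling cycles.
func (t *transientSchedulerInfo) reset() {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.allocatableVolumes = 0
	t.requestedVolumes = 0
}

func main() {
	info := &transientSchedulerInfo{allocatableVolumes: 16}
	var wg sync.WaitGroup
	// Simulate concurrent updates from several goroutines; the mutex
	// keeps the counter exact.
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			info.addVolumes(1)
		}()
	}
	wg.Wait()
	fmt.Println(info.requestedVolumes)
}
```

Resetting between cycles matters as much as locking: stale transient counts from a previous cycle would skew the fractions just as badly as a data race.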
[APPROVALNOTIFIER] This PR is APPROVED
@k82cn @msau42 @ravisantoshgudimetla @bsalamat I am sorry I did not get a chance to comment on this before, but one big problem with what we merged in this PR is that it will only work for the Azure, AWS, and GCE volume types. It will not work for any other volume type.
Also, it sounds like this proposal set out to "balance PVCs on nodes", but because it requires a maximum volume limit, it will only EVER work for volume types that have such upper limits. In a nutshell, this proposal will probably not work for something like glusterfs, iscsi, etc. for the foreseeable future.
@gnufied I think this works for any volume type as long as the max-volumes limit parameter is set.
I think the limit comes from the maximum number of volumes that can be attached to a node, irrespective of volume type (glusterfs, iscsi, etc.). IOW, even if glusterfs provides unlimited volumes, there is still a limit on the number of volumes that can be attached to a machine, depending on the operating system and filesystem in use.
The problem I was talking about is that since Kubernetes has no knowledge of the applicable limits for volume types like glusterfs, balanced volume allocation will not work for them. Even though there is indeed a theoretical limit for glusterfs etc. (before it saturates the network), that information is not exposed inside Kubernetes, and there is no way for an admin to set those limits.
Discussed this offline with @jsafrane and @childsb, and we think it may be possible to return a "dummy" upper limit for those volume types (a high number, say 1000) so that the fractions this proposal requires can still be calculated. Do we really care about the real maximum volume count for the purpose of calculating the fractions? It appears to me that any high number should do the job and the design will still work. @ravisantoshgudimetla correct me if I am wrong.
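A quick sketch of the "dummy upper limit" idea discussed above: when a volume type has no known attach limit, substitute a large constant so a usable fraction can still be computed. The value 1000 is the illustrative placeholder mentioned in the comment, and the function names are hypothetical.

```go
package main

import "fmt"

// dummyMaxVolumes stands in for volume types (glusterfs, iscsi, ...)
// whose real attach limit is not exposed inside Kubernetes.
const dummyMaxVolumes = 1000

// volumeFraction returns attached/max; when no real limit is known
// (max <= 0), it falls back to the dummy constant.
func volumeFraction(attached, max int) float64 {
	if max <= 0 {
		max = dummyMaxVolumes
	}
	return float64(attached) / float64(max)
}

func main() {
	// Two nodes with unknown real limits: the node with fewer attached
	// volumes still gets a lower fraction, so the relative ordering the
	// balanced-allocation score depends on is preserved.
	fmt.Println(volumeFraction(10, 0), volumeFraction(40, 0))
}
```

One caveat worth noting: the ordering across nodes holds only if the same dummy value is used for every node with an unknown limit; mixing a dummy cap on one node with a real, much smaller cap on another would distort the comparison.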