Fix handling of resource management #67
Very good catch, and the cause of why my branch is failing after rebase :)
The committed fix is a temporary workaround.
I want to share my considerations regarding CPU resource management. For resource management, Kubernetes has requests and limits.

So then, for example, Nova has a CPU overcommitment ratio of 16:1, i.e. VMsPerHost = vCPUs * 16. Limits could be added per namespace; the default would then be set to the limit if there is no request for the pod, but we can't rely on that, and such a low default is probably unacceptable for us if we want the same ratio.

So for now I'm going to prepare a patch that sets all available vCPUs for each VM, with a CPU tuning tag carrying the specified shares (see the sketch below). As long as nothing is set, CPU time will obviously be spread equally among the VMs. There will be problems if we start to set CPU "requests" for some VMs and use defaults for others, and of course if we overload the host with the number of VMs created, but for now it's a POC, right? :) Do you agree with that?
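A minimal sketch of what such a patch could compute, assuming the kubelet's usual 1 CPU == 1024 shares conversion; the type and function names here are hypothetical illustrations, not Virtlet's actual domain-building code:

```go
package main

import (
	"fmt"
	"runtime"
)

// defaultShares mirrors the kernel's default cgroup cpu.shares weight.
const defaultShares = 1024

// cpuTuning holds the values that would be rendered into the libvirt
// domain XML as <vcpu>N</vcpu> and <cputune><shares>S</shares></cputune>.
type cpuTuning struct {
	VCPUs  int
	Shares int64
}

// tuningForPod gives every VM all host vCPUs; relative CPU time is then
// controlled purely via shares, so VMs without an explicit request
// (milliCPU == 0) all get the same default weight.
func tuningForPod(milliCPU int64) cpuTuning {
	shares := int64(defaultShares)
	if milliCPU > 0 {
		// Same scaling the kubelet applies: 1000 milliCPU -> 1024 shares.
		shares = milliCPU * defaultShares / 1000
	}
	return cpuTuning{VCPUs: runtime.NumCPU(), Shares: shares}
}

func main() {
	fmt.Printf("%+v\n", tuningForPod(0))    // no request: default weight
	fmt.Printf("%+v\n", tuningForPod(2500)) // pod requested 2.5 CPUs
}
```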
corresponding issue is Mirantis#67
Sorry for not answering earlier; the idea looks great and I like your implementation of it :)
As it's already merged, closing.
By default, if resource limits are not set, they will be set to zero:
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/api/v1alpha1/runtime/api.pb.go#L1118
which causes an attempt to create a VM with zero vCPUs.
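A hedged sketch of the kind of guard that avoids this, assuming the zero-means-unset semantics of the CRI LinuxContainerResources message linked above; the struct here only mimics the relevant fields, and the helper and default value are hypothetical:

```go
package main

import "fmt"

// linuxContainerResources mimics the CPU fields of the CRI
// LinuxContainerResources message; zero values mean "unset".
type linuxContainerResources struct {
	CpuShares int64
	CpuPeriod int64
	CpuQuota  int64
}

// defaultVCPUs is an assumed fallback when the pod specifies nothing.
const defaultVCPUs = 1

// vcpusFor derives a vCPU count for the VM. With no quota/period set we
// cannot infer a CPU count, so fall back to the default instead of
// passing 0 vCPUs on to libvirt.
func vcpusFor(res linuxContainerResources) int {
	if res.CpuQuota > 0 && res.CpuPeriod > 0 {
		if n := int(res.CpuQuota / res.CpuPeriod); n > 0 {
			return n
		}
	}
	return defaultVCPUs
}

func main() {
	fmt.Println(vcpusFor(linuxContainerResources{})) // 1, not 0
	fmt.Println(vcpusFor(linuxContainerResources{
		CpuQuota:  200000,
		CpuPeriod: 100000,
	})) // 2
}
```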