Fix handling of resource management #67

Closed
vefimova opened this issue Oct 5, 2016 · 5 comments

vefimova (Contributor) commented Oct 5, 2016

By default, if resource limits are not set, they are set to zero:
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/api/v1alpha1/runtime/api.pb.go#L1118

This causes an attempt to create a VM with zero vCPUs set:

```
libvirt_1  | 2016-10-05 08:47:48.169+0000: 5905: error : virDomainDefParseXML:15299 : internal error: CPU IDs in <numa> exceed the <vcpu> count
```
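
A minimal sketch of the kind of guard needed here, assuming a hypothetical `sanitizeVCPUCount` helper (not actual virtlet code): unset int64 fields in the CRI protobuf message arrive as zero, so a missing CPU limit must not be passed to libvirt as-is.

```go
package main

import "fmt"

// defaultVCPUCount and sanitizeVCPUCount are hypothetical names, not
// taken from the virtlet codebase; they illustrate falling back to a
// sane value when the CRI request left the CPU count at the protobuf
// zero value.
const defaultVCPUCount = 1

func sanitizeVCPUCount(requested int64) int64 {
	if requested <= 0 {
		return defaultVCPUCount // don't ask libvirt for a domain with 0 vCPUs
	}
	return requested
}

func main() {
	fmt.Println(sanitizeVCPUCount(0)) // 1: avoids the <numa>/<vcpu> error above
	fmt.Println(sanitizeVCPUCount(4)) // 4: explicit requests pass through
}
```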
vefimova self-assigned this Oct 5, 2016
jellonek (Contributor) commented Oct 5, 2016

Very good catch, and the reason my branch is failing after rebase :)
For dev purposes I'll hardcode some values there until you provide a PR with the correct fix.

vefimova (Contributor, Author) commented Oct 6, 2016

The committed fix is a temporary workaround.

vefimova (Contributor, Author) commented Oct 6, 2016

@jellonek @nhlfr

I'd like to share my thoughts on CPU resource management.

For resource management, Kubernetes has:

  • optional settings for CFS (the Completely Fair Scheduler):
    • quotaPeriod
    • cpuPeriod
      These cap CPU bandwidth (how much CPU time a group may use per period); they are hard limits, not relative weights.
  • sharesPerCPU: a relative value that sets a VM's weighted share of the available CPU resources;
    when no request is set it defaults to 2 shares (1 CPU = 1024 shares)

For example, Nova uses a CPU overcommit ratio of 16:1, i.e. VMsPerHost = vCPUs * 16.
But for virtlet, without setting sharesPerCPU explicitly, the effective ratio would be 512:1 (1024 / 2), which is obviously not acceptable at all.

Limits could be added per namespace, in which case the default would be set to the namespace limit when a pod has no request, but we can't rely on that. Such a low default is probably not what we want; to get the same 16:1 ratio we would need 64 shares (1024 / 16). We should probably propose a kubelet option for this, WDYT?
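
Just to spell the arithmetic out, a tiny illustrative snippet (the `sharesForRatio` helper is hypothetical, not part of any codebase):

```go
package main

import "fmt"

// cpu.shares is a relative weight where one full CPU corresponds to
// 1024 shares, so the per-VM share value for a target overcommit ratio
// is simply 1024 divided by that ratio.
func sharesForRatio(overcommitRatio int64) int64 {
	return 1024 / overcommitRatio
}

func main() {
	fmt.Println(sharesForRatio(16))  // 64: Nova-like 16:1 overcommit
	fmt.Println(sharesForRatio(512)) // 2: the Kubernetes default, hence 512:1
}
```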

So for now I'm going to prepare a patch that assigns all available vCPUs to each VM and adds a CPU tuning tag with the specified shares. As long as nothing is set, CPU time will obviously be spread equally among the VMs. There will be problems if we start setting CPU "requests" for some VMs while leaving others at the defaults, and of course if we overload the host with too many VMs, but for now it's a POC, right? :) Do you agree with that?
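
Roughly, the resulting libvirt domain XML would carry the weight through the <cputune><shares> element; a minimal sketch, assuming a hypothetical buildDomainCPUXML helper and an 8-vCPU host:

```go
package main

import "fmt"

// buildDomainCPUXML is a hypothetical helper illustrating the plan
// above: give the domain every host vCPU and express the pod's CPU
// weight via libvirt's <cputune><shares> element.
func buildDomainCPUXML(hostVCPUs, cpuShares int64) string {
	return fmt.Sprintf(
		"<vcpu>%d</vcpu>\n<cputune>\n  <shares>%d</shares>\n</cputune>",
		hostVCPUs, cpuShares)
}

func main() {
	fmt.Println(buildDomainCPUXML(8, 64)) // 8 vCPUs with a 16:1-equivalent weight
}
```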

vefimova added a commit to vefimova/virtlet that referenced this issue Oct 7, 2016
vefimova added a commit to vefimova/virtlet that referenced this issue Oct 7, 2016
jellonek (Contributor) commented Oct 7, 2016

Sorry for not answering earlier; the idea looks great and I like your implementation of it :)

vefimova added a commit to vefimova/virtlet that referenced this issue Oct 7, 2016
jellonek (Contributor) commented

As it's already merged, closing.
