[mesos] Report and enforce resource constraints #6828

When we launch pods, we'll want to confirm that constraints on memory, disk, CPU, and so on are honored. Frameworks like Mesos will translate these parameters.

Blocked by #5671

@ConnorDoyle @timothysc @jayunit100

xref mesosphere/kubernetes-mesos#447
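As a rough illustration of the kind of translation meant here — a minimal sketch, assuming modern k8s.io/api import paths (the 2015 tree used pkg/api); `toMesosScalars` is a hypothetical helper, not the actual kubernetes-mesos code:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// toMesosScalars converts Kubernetes quantities into the scalar units a
// Mesos framework advertises: fractional CPUs and megabytes of memory.
// Hypothetical helper for illustration only.
func toMesosScalars(rr v1.ResourceRequirements) (cpus float64, memMB float64) {
	if q, ok := rr.Limits[v1.ResourceCPU]; ok {
		cpus = float64(q.MilliValue()) / 1000.0 // 500m -> 0.5 cpus
	}
	if q, ok := rr.Limits[v1.ResourceMemory]; ok {
		memMB = float64(q.Value()) / (1024 * 1024) // bytes -> MB
	}
	return cpus, memMB
}

func main() {
	rr := v1.ResourceRequirements{
		Limits: v1.ResourceList{
			v1.ResourceCPU:    resource.MustParse("500m"),
			v1.ResourceMemory: resource.MustParse("128Mi"),
		},
	}
	cpus, mem := toMesosScalars(rr)
	fmt.Printf("cpus:%v mem:%vMB\n", cpus, mem) // cpus:0.5 mem:128MB
}
```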
Comments
jayunit, could you add a bit more detail here? Assigning priority 3 in the meantime.
I too would like to know the approach. We should validate requests and limits in resource requirements.
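A minimal sketch of the invariant that such validation would cover (requests must not exceed limits), using current k8s.io/api types; the quantities and the helper name are illustrative:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// validateRequirements checks the basic invariant that every request
// does not exceed its corresponding limit, when a limit is set.
func validateRequirements(rr v1.ResourceRequirements) error {
	for name, req := range rr.Requests {
		if lim, ok := rr.Limits[name]; ok && req.Cmp(lim) > 0 {
			return fmt.Errorf("request for %s (%s) exceeds limit (%s)", name, req.String(), lim.String())
		}
	}
	return nil
}

func main() {
	rr := v1.ResourceRequirements{
		Requests: v1.ResourceList{
			v1.ResourceCPU:    resource.MustParse("250m"),
			v1.ResourceMemory: resource.MustParse("64Mi"),
		},
		Limits: v1.ResourceList{
			v1.ResourceCPU:    resource.MustParse("500m"),
			v1.ResourceMemory: resource.MustParse("128Mi"),
		},
	}
	fmt.Println(validateRequirements(rr)) // <nil> — requests fit within limits
}
```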
@jayunit100 to get his attention, as I mistyped it above ;-)
Kubelets are deployed differently in different environments, so this e2e test confirms that kubelets themselves are properly honoring resources, and maybe helps to codify things around resource semantics as well.

The original motivation for this JIRA is specifically that we were brainstorming about how we could confirm that kube on Mesos honors resource restrictions the same way kube does, and to do this, we need a test which enforces resource restrictions. The Mesos folks @ConnorDoyle @timothysc will have better coordinates for specifying this test.
There is an e2e test already which checks that CPU resources are validated: https://github.com/kubernetes/kubernetes/blob/master/test/e2e/scheduler_predicates.go#L236
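For reference, a rough client-go sketch of that style of check — create a pod whose CPU request cannot fit on any node and confirm it stays Pending. This uses today's client-go API for illustration (the real e2e test uses the test framework's utilities), and the pod name, image, and oversized request are assumptions; a real test would poll or watch rather than sleep:

```go
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes KUBECONFIG points at a disposable test cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// A CPU request far beyond any node's capacity should leave the pod Pending.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "overcommit-cpu", Namespace: "default"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "registry.k8s.io/pause:3.9", // illustrative image
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{v1.ResourceCPU: resource.MustParse("1000")}, // 1000 cores
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Give the scheduler a chance to act, then confirm the pod was not scheduled.
	time.Sleep(30 * time.Second)
	got, err := client.CoreV1().Pods("default").Get(context.TODO(), "overcommit-cpu", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("phase=%s (expected Pending)\n", got.Status.Phase)
}
```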
Perhaps if we had a Mesos e2e test that ran the petstore example, we'd meet the test-for-containment goals of this ticket.
I still don't see what we need in addition to the test mentioned above. How does the petstore help? What we would really need in addition is a real OOM-triggering test.
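A sketch of what such an OOM-triggering test could launch — a container with a low memory limit running a workload that deliberately allocates past it. The image name, sizes, and pod name are assumptions, not from this thread:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod whose container tries to allocate ~128Mi under a 64Mi limit; the
	// kernel OOM killer should terminate it, and the test would then assert
	// status.ContainerStatuses[0].State.Terminated.Reason == "OOMKilled".
	oomPod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "oom-victim"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "hog",
				Image:   "polinux/stress", // assumed stress image
				Command: []string{"stress", "--vm", "1", "--vm-bytes", "128M"},
				Resources: v1.ResourceRequirements{
					Limits: v1.ResourceList{v1.ResourceMemory: resource.MustParse("64Mi")},
				},
			}},
		},
	}
	fmt.Println(oomPod.Name) // pod object ready to be created via the API
}
```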
We just spent a bunch of time on our scheduler with respect to Mesos "roles" support (think static resource reservations per node, per "role"). One point that came up was related to resource limits vs. requested. Right now the scheduler attempts to allocate resources according to max(limits, requested), with no way to accommodate situations when there are "requested" resources available but not "limits" quantities.

There's an open Mesos JIRA for allowing the scheduler to request additional resources for tasks over time, and that seems like a good fit for a use case wherein our scheduler could allocate "requested" resources and then, over time, allocate additional resources up to "limits". Another thought is to blend revocable and normal resources for tasks, such that "requested" resources come from the normal pool, and the difference (between requested and limits) comes from the revocable pool. A fly in the ointment is that the Mesos QoS controller treats the entire container (executor) as revocable in this case and will heartlessly kill the whole thing if revocable resources need to be reclaimed (see http://mesos.apache.org/documentation/latest/oversubscription/).
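For concreteness, the max(limits, requested) allocation described above amounts to a per-resource comparison like the following sketch; the function and variable names are mine, not the scheduler's:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// allocationFor returns, per resource, the larger of request and limit —
// the quantity the scheduler must find in a single Mesos offer today.
func allocationFor(rr v1.ResourceRequirements) v1.ResourceList {
	alloc := v1.ResourceList{}
	for name, req := range rr.Requests {
		alloc[name] = req
	}
	for name, lim := range rr.Limits {
		if cur, ok := alloc[name]; !ok || cur.Cmp(lim) < 0 {
			alloc[name] = lim
		}
	}
	return alloc
}

func main() {
	rr := v1.ResourceRequirements{
		Requests: v1.ResourceList{v1.ResourceCPU: resource.MustParse("250m")},
		Limits:   v1.ResourceList{v1.ResourceCPU: resource.MustParse("1")},
	}
	q := allocationFor(rr)[v1.ResourceCPU]
	fmt.Println(q.String()) // "1" — the limit wins when larger than the request
}
```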
@jayunit100 There are no sig labels on this issue. Please add a sig label by: |
/assign |
/close please reopen on kube-on-mesos |