
[mesos] Report and enforce resource constraints #6828

Closed
jayunit100 opened this issue Apr 14, 2015 · 13 comments
Labels: area/test, needs-sig, priority/awaiting-more-evidence

@jayunit100
Member

When we launch pods, we'll want to confirm that constraints on memory, disk, CPU, and so on are honored. Frameworks like Mesos will translate these parameters.
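
To make "translate these parameters" concrete, here is a minimal sketch of the shape of that mapping. The struct and function names are illustrative assumptions, not the actual kubernetes-mesos code; the real API uses resource.Quantity values:

```go
package main

import "fmt"

// podResources is a simplified stand-in for a pod's resource
// requirements (hypothetical struct for illustration).
type podResources struct {
	MilliCPU int64 // e.g. 500 for a "500m" request
	MemoryMB int64 // e.g. 256 for "256Mi"
}

// toMesosScalars shows the translation a framework performs: Kubernetes
// millicores become a fractional Mesos "cpus" scalar, and memory maps
// onto the "mem" scalar (megabytes).
func toMesosScalars(r podResources) (cpus, memMB float64) {
	return float64(r.MilliCPU) / 1000.0, float64(r.MemoryMB)
}

func main() {
	cpus, mem := toMesosScalars(podResources{MilliCPU: 500, MemoryMB: 256})
	fmt.Printf("cpus:%.2f mem:%.0f\n", cpus, mem) // cpus:0.50 mem:256
}
```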

Blocked by #5671

@ConnorDoyle @timothysc @jayunit100

xref mesosphere/kubernetes-mesos#447

@ghost added the area/test, area/test-infra, and priority/awaiting-more-evidence labels on Apr 14, 2015
@ghost

ghost commented Apr 15, 2015

jayunit, could you add a bit more detail here? Assigning priority 3 in the meantime.

@derekwaynecarr
Member

I too would like to know the approach. We should validate requests and limits in resource requirements.


@ghost

ghost commented Apr 15, 2015

@jayunit100 to get his attention, as I mistyped it above ;-)

@jayunit100
Member Author

Kubelets are deployed differently in different environments, so this e2e test confirms that kubelets themselves properly honor resource constraints, and maybe helps codify resource semantics as well. So, we can:

  • launch containers which confirm that the maximum resources available correspond to the constraints provided (see the sketch after this list),
  • launch several containers which run a CPU-intensive job, and verify that the ones with larger CPU allocations finish first,
    and so on.
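
As a hedged sketch of the first bullet, here is one way such a self-checking pod could be built. The import paths are today's API packages rather than the 2015 tree, and the busybox image and cgroup v1 path are assumptions, not a committed design:

```go
package e2e

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// limitCheckPod builds a pod whose container reads its own cgroup memory
// limit and exits non-zero if it does not match the declared limit; the
// e2e test would then assert the pod ends up in phase Succeeded.
func limitCheckPod(memLimit string) *v1.Pod {
	want := resource.MustParse(memLimit)
	script := fmt.Sprintf(
		"test \"$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)\" -eq %d",
		want.Value())
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "resource-limit-check"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "check",
				Image:   "busybox", // any image with a POSIX shell works
				Command: []string{"sh", "-c", script},
				Resources: v1.ResourceRequirements{
					Limits: v1.ResourceList{v1.ResourceMemory: want},
				},
			}},
		},
	}
}
```

The harness would create this pod and assert that it reaches phase Succeeded; a kubelet (or Mesos executor) that fails to apply the limit would surface as a Failed pod.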

The original motivation for this issue is that we were brainstorming about how we could confirm that kube-on-mesos honors resource restrictions the same way kube does; to do that, we need a test which enforces resource restrictions.

The mesos folks @ConnorDoyle @timothysc will have better coordinates for specifying this test.

@karlkfi
Contributor

karlkfi commented Aug 7, 2015

/cc @jdef This is a little old.
I think this was resolved by #11230. Can you confirm?

@jdef
Contributor

jdef commented Aug 7, 2015

@karlkfi #11230 does not implement an e2e test that confirms resource accounting is working as advertised, which is the subject of this issue.

@sttts
Contributor

sttts commented Aug 18, 2015

There is an e2e test already which checks that CPU resources are validated: https://github.com/kubernetes/kubernetes/blob/master/test/e2e/scheduler_predicates.go#L236
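
For readers who don't want to chase the link, the gist of that predicate check can be paraphrased as follows; this is a condensed illustration of the logic, not the verbatim test code:

```go
package main

import "fmt"

// fitsCPU paraphrases the predicate the cited test exercises: a pod fits
// on a node only if its CPU request plus what is already requested stays
// within the node's capacity.
func fitsCPU(capacityMilli, requestedMilli, podMilli int64) bool {
	return requestedMilli+podMilli <= capacityMilli
}

func main() {
	// A node with 2 cores and 1900m already requested cannot take a
	// 200m pod; the scheduler leaves it Pending, which the test asserts.
	fmt.Println(fitsCPU(2000, 1900, 200)) // false
}
```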

@karlkfi changed the title from E2E Tests: Resource constraints to [mesos] Report and enforce resource constraints on Sep 8, 2015
@jdef
Contributor

jdef commented Sep 24, 2015

Perhaps if we had a Mesos e2e test that ran the petstore example, we'd meet the test-for-containment goals of this ticket.

@sttts
Contributor

sttts commented Sep 24, 2015

I still don't see what we need beyond the test mentioned above. How does the petstore help?

What we would really need in addition is a real OOM-triggering test.
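
A hedged sketch of what such an OOM test could look like, under the same caveats as earlier (modern import paths; the stress image and its flags are assumptions, not settled choices):

```go
package e2e

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// oomPod builds a pod that deliberately allocates well past its memory
// limit; with enforcement working, the kernel OOM-kills the container,
// and the test can assert a terminated state with reason "OOMKilled".
func oomPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "oom-trigger"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "hog",
				Image:   "polinux/stress", // assumed stress image
				Command: []string{"stress", "--vm", "1", "--vm-bytes", "256M"},
				Resources: v1.ResourceRequirements{
					Limits: v1.ResourceList{
						v1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
			}},
		},
	}
}
```

Asserting on the OOMKilled termination reason (rather than just a Failed phase) is what distinguishes "the limit was enforced" from "the pod died for some other reason".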

@jdef
Contributor

jdef commented Nov 25, 2015

We just spent a bunch of time on our scheduler with respect to Mesos "roles" support (think static resource reservations per node, per "role"). One point that came up was related to resource limits vs. requests. Right now the scheduler attempts to allocate resources according to max(limits, requests), with no way to accommodate situations where "requested" quantities are available but "limits" quantities are not. There's an open Mesos JIRA for allowing the scheduler to request additional resources for tasks over time, and that seems like a good fit for a use case wherein our scheduler could allocate "requested" resources and then, over time, allocate additional resources up to "limits".
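
A minimal sketch of that allocation rule, with illustrative names rather than the actual kubernetes-mesos scheduler code:

```go
package scheduler

// allocFor sketches the rule described above: ask Mesos for
// max(limit, request) per resource, so a pod is only placed where its
// full limit fits up front.
func allocFor(requestMilli, limitMilli int64) int64 {
	if limitMilli > requestMilli {
		return limitMilli
	}
	return requestMilli
}
```

Under this rule, a pod with a 100m request and a 1000m limit consumes a full CPU from the offer even if it never uses it, which is exactly the gap the JIRA mentioned above would close.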

Another thought is to blend revocable and normal resources for tasks, such that "requested" resources come from the normal pool and the difference (between requests and limits) comes from the revocable pool. A fly in the ointment is that the Mesos QoS controller treats the entire container (executor) as revocable in this case and will heartlessly kill the whole thing if revocable resources need to be reclaimed (see http://mesos.apache.org/documentation/latest/oversubscription/).
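
The split itself is simple arithmetic; the sketch below just pins it down (again illustrative names, not actual code):

```go
package scheduler

// splitRevocable sketches the blend described above: the request comes
// from the normal pool, and the request-to-limit gap comes from the
// revocable pool.
func splitRevocable(requestMilli, limitMilli int64) (normal, revocable int64) {
	if limitMilli < requestMilli {
		limitMilli = requestMilli // treat a lower/missing limit as the request
	}
	return requestMilli, limitMilli - requestMilli
}
```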

@k8s-github-robot

@jayunit100 There are no sig labels on this issue. Please add a sig label by:
(1) mentioning a sig: @kubernetes/sig-<team-name>-misc
(2) specifying the label manually: /sig <label>

Note: method (1) will trigger a notification to the team. You can find the team list here.

@k8s-github-robot added the needs-sig label on May 31, 2017
@cmluciano

/assign

@cmluciano

/close please reopen on kube-on-mesos
