Bump the pod memory to higher levels to work on power #73016
Conversation
/sig testing
/assign @vishh
/priority important-longterm
/assign @Random-Liu
/sig api-machinery
/test pull-kubernetes-e2e-kops-aws
/cc @dims
/assign @yujuhong @kubernetes/sig-node-pr-reviews
/remove-sig api-machinery
I think the summary API test only checks that the numbers are in the right range (with no intention to track usage regression), so bumping the number should be fine.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: mkumatag, yujuhong
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Agree with the above, this is mostly meant to check "Is this about what we expect the pod to use?". The lower bound is also much more important than the upper bound for checking, so it is fine to raise, especially if it is failing on valid setups.
The assertion for the pod spec is working fine, but for the code located at https://github.com/kubernetes/kubernetes/blob/master/test/e2e_node/summary_test.go#L116:L118, IMO we should change the limit to
@mkumatag That sounds fine to me.
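The kind of range check being discussed can be sketched as a small standalone Go snippet. The function name and the byte values below are illustrative, not the actual bounds used in summary_test.go; the point is only that a too-low upper bound makes a legitimately higher-usage platform (such as 64K-page ppc64le) fail the assertion.

```go
package main

import "fmt"

// memoryWithinBounds mimics the shape of the summary test's range
// assertion: measured usage must fall inside [lower, upper].
// All values here are hypothetical, not taken from the real test.
func memoryWithinBounds(usageBytes, lowerBytes, upperBytes uint64) bool {
	return usageBytes >= lowerBytes && usageBytes <= upperBytes
}

func main() {
	const mb = 1024 * 1024
	// On a 64K-page system the same workload reports higher usage,
	// so an upper bound tuned for 4K pages produces a false failure.
	usage := uint64(90 * mb)
	fmt.Println(memoryWithinBounds(usage, 10*mb, 80*mb))  // tight upper bound: fails
	fmt.Println(memoryWithinBounds(usage, 10*mb, 120*mb)) // raised upper bound: passes
}
```

Raising only the upper bound preserves the lower-bound check, which (as noted above) is the more important of the two for catching a pod that is not doing the expected work.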
…16-upstream-release-1.12 Automated cherry pick of #73016: Bump the pod memory to higher levels to work on power
…16-upstream-release-1.11 Automated cherry pick of #73016: Bump the pod memory to higher levels to work on power
…16-upstream-release-1.13 Automated cherry pick of #73016: Bump the pod memory to higher levels to work on power
What type of PR is this?
/kind bug
What this PR does / why we need it:
On ppc64le architecture machines the default page size is 64K (vs 4K on Intel), which usually causes workloads to use more memory, so to run this test case successfully the pod memory limits need to be increased to a higher level.
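The page-size difference described above can be confirmed directly on a given machine; `getconf PAGESIZE` is a standard POSIX way to print it (typically 4096 on x86_64 hosts and 65536 on common ppc64le kernels):

```shell
# Print the system memory page size in bytes.
getconf PAGESIZE
```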
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?: