for every huge page resource, we need to remove it from allocatable memory when Updating Node Allocatable limit across pods #86758
Conversation
@boddumanohar I see you commented on the issue - do you mind weighing in here? From your comment, I'm having a little trouble determining if you were in favor of, or opposed to, this proposed change :)
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: mysunshine92. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing /approve in a comment.
/kind bug
Hi @mysunshine92, do we have tests for this change?
I have tested this on my k8s 1.13 cluster; it works.
/test pull-kubernetes-e2e-gce
Gotcha! Thoughts on whether unit tests could also be beneficial? The advantage of unit tests is that they run continuously, ensuring this feature keeps working in the future.
…emory when Updating Node Allocatable limit across pods
/test pull-kubernetes-e2e-gce
@mysunshine92: The following test failed, say /retest to rerun all failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/test pull-kubernetes-e2e-gce
/lgtm
@mysunshine92: you cannot LGTM your own PR. In response to this:
> /lgtm
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/priority backlog
/assign @vishh
This is exactly the same as we do when calculating node.Status.Allocatable[v1.ResourceMemory]
here: https://github.com/kubernetes/kubernetes/blob/0599ca2/pkg/kubelet/nodestatus/setters.go#L354-L366
I think I would prefer a test in order to avoid regressions, though.
Other than the comments this looks good to me
/priority important-longterm
@@ -190,6 +191,21 @@ func (cm *containerManagerImpl) getNodeAllocatableAbsoluteImpl(capacity v1.Resou
		}
		result[k] = value
	}

	// for every huge page reservation, we need to remove it from allocatable memory
	for k, v := range result {
Suggested change:
-	for k, v := range result {
+	for k, v := range capacity {
I believe this should be calculated by capacity not allocatable?
since memory usage does not include hugepage usage, here we should use allocatable
> since memory usage does not include hugepage usage, here we should use allocatable
Not sure if I follow. If we reserve huge page memory in system-reserved and/or kube-reserved, we would like to decrement by all huge page memory, not only allocatable. Or?
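To make the capacity-vs-allocatable question concrete, here is a minimal sketch with plain GiB integers. These are hypothetical numbers and a hypothetical helper, not the real kubelet code (which works with v1.ResourceList and resource.Quantity): once some huge page memory is itself covered by reservations, ranging over allocatable subtracts less than ranging over capacity.

```go
package main

import "fmt"

// subtractHugePages is a hypothetical helper: it removes a huge page
// total from allocatable memory (plain GiB ints stand in for the real
// resource.Quantity type).
func subtractHugePages(allocatableMem, hugePages int64) int64 {
	return allocatableMem - hugePages
}

func main() {
	capacityHuge := int64(2)                       // GiB of huge pages pre-allocated on the node
	reservedHuge := int64(1)                       // GiB of huge pages covered by system/kube reservations
	allocatableHuge := capacityHuge - reservedHuge // 1 GiB of huge pages left for pods

	// Ranging over allocatable (the PR's loop) removes only the
	// pod-allocatable huge page amount from 8 GiB of memory ...
	fmt.Println(subtractHugePages(8, allocatableHuge)) // 7

	// ... while ranging over capacity (the reviewer's suggestion)
	// removes all pre-allocated huge page memory.
	fmt.Println(subtractHugePages(8, capacityHuge)) // 6
}
```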
// for every huge page reservation, we need to remove it from allocatable memory
for k, v := range result {
	if v1helper.IsHugePageResourceName(k) {
		allocatableMemory := result[v1.ResourceMemory]
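For readers without the kubernetes tree checked out, the loop under review can be approximated in a self-contained sketch. This is an illustration, not the actual kubelet code: v1helper.IsHugePageResourceName essentially matches the `hugepages-` resource-name prefix, plain int64 byte counts stand in for resource.Quantity, and the map keys below are hypothetical sample data.

```go
package main

import (
	"fmt"
	"strings"
)

// isHugePageResourceName mimics v1helper.IsHugePageResourceName:
// huge page resources are named with the "hugepages-" prefix.
func isHugePageResourceName(name string) bool {
	return strings.HasPrefix(name, "hugepages-")
}

// removeHugePagesFromMemory subtracts every huge page resource from
// the "memory" entry, as the PR's loop does (simplified to int64 bytes).
func removeHugePagesFromMemory(result map[string]int64) {
	for k, v := range result {
		if isHugePageResourceName(k) {
			result["memory"] -= v
		}
	}
}

func main() {
	result := map[string]int64{
		"memory":        8 << 30, // 8 GiB total memory
		"hugepages-2Mi": 2 << 30, // 2 GiB pre-allocated as 2Mi huge pages
		"cpu":           4,
	}
	removeHugePagesFromMemory(result)
	fmt.Println(result["memory"] >> 30) // 6
}
```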
gentle ping @mysunshine92. Would be nice to get this into 1.18 😄
please add lgtm labels, thanks
ping @mysunshine92
@mysunshine92: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@mysunshine92: Closed this PR. In response to this:
> /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What type of PR is this?
/kind bug
What this PR does / why we need it:
we need to remove the huge page resources from allocatable memory when updating the Node Allocatable limit across pods.
here:
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/cm/node_container_manager_linux.go#L177
Which issue(s) this PR fixes:
Fixes #84426
Special notes for your reviewer:
Does this PR introduce a user-facing change?:
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: