Fix local isolation for pod requesting only overlay or scratch #47179
Conversation
Hi @ddysher. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with @k8s-bot ok to test. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
fits: false,
test: "request exceeds allocatable",
reasons: []algorithm.PredicateFailureReason{
	NewInsufficientResourceError(v1.ResourceStorageScratch, 18, 5, 20),
I copied this from other test cases, but I do not understand why v1.ResourceStorageScratch is used in the test instead of v1.ResourceStorageOverlay. @kubernetes/sig-scheduling-pr-reviews, does anyone have related knowledge on this?
v1.ResourceStorageScratch was added in v1.7 to represent the capacity of local storage for scratch space (e.g., used for emptyDir, and also for overlay and imagefs if no separate imagefs is set up). So ResourceStorageScratch represents the capacity, and ResourceStorageOverlay represents one type of request against it.
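The capacity-versus-request relationship described above can be illustrated with a minimal Go sketch. This is not the actual Kubernetes scheduler code; the type and function names here are illustrative, and it mirrors the test case's values (requested 18, used 5, capacity 20):

```go
package main

import "fmt"

// nodeStorage is a hypothetical stand-in for a node's local-storage view:
// ResourceStorageScratch corresponds to the total capacity, while overlay
// requests are one kind of demand counted against it.
type nodeStorage struct {
	scratchCapacity int64 // total local scratch space on the node
	scratchUsed     int64 // already-requested scratch (emptyDir, overlay, ...)
}

// fitsOverlayRequest reports whether an overlay request fits, returning the
// values an InsufficientResourceError-style failure would carry.
func fitsOverlayRequest(n nodeStorage, request int64) (fits bool, requested, used, capTotal int64) {
	if n.scratchUsed+request > n.scratchCapacity {
		return false, request, n.scratchUsed, n.scratchCapacity
	}
	return true, request, n.scratchUsed, n.scratchCapacity
}

func main() {
	// Mirrors the test case above: requested 18, used 5, capacity 20 -> does not fit.
	fits, req, used, capTotal := fitsOverlayRequest(nodeStorage{scratchCapacity: 20, scratchUsed: 5}, 18)
	fmt.Println(fits, req, used, capTotal) // prints: false 18 5 20
}
```

This is why the failure is reported against v1.ResourceStorageScratch even for an overlay request: the scratch capacity is the budget being exceeded.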
/approve
/lgtm.
@davidopp Could you please help approve this PR? Thanks!
@k8s-bot ok to test
/lgtm Thanks for the fix @ddysher
/release-note-none
hm.. let me check
lgtm; this PR adds a check for the storage request, returning true only if all requested resources (cpu, memory, storage, opaque) are zero. But there are some comments on the previous PR (#46456):
I'll approve this PR and create another one for those two cleanup comments.
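The check described above can be sketched in Go. This is an assumption-laden sketch, not the real predicates code: the struct and field names are invented stand-ins, but the logic shows the fix — a pod counts as requesting "no resources" only if CPU, memory, storage (overlay and scratch), and opaque resources are all zero:

```go
package main

import "fmt"

// resourceRequest is a hypothetical aggregate of a pod's requests.
type resourceRequest struct {
	MilliCPU       int64
	Memory         int64
	StorageOverlay int64
	StorageScratch int64
	OpaqueTotal    int64 // stand-in for the sum of opaque integer resources
}

// isZeroRequest returns true only when every tracked resource is zero.
// Before the fix, storage was omitted here, so an overlay-only pod was
// wrongly treated as requesting nothing and skipped the capacity check.
func isZeroRequest(r resourceRequest) bool {
	return r.MilliCPU == 0 && r.Memory == 0 &&
		r.StorageOverlay == 0 && r.StorageScratch == 0 &&
		r.OpaqueTotal == 0
}

func main() {
	fmt.Println(isZeroRequest(resourceRequest{StorageOverlay: 512})) // prints: false
	fmt.Println(isZeroRequest(resourceRequest{}))                    // prints: true
}
```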
/approve
/approve no-issue
emptyDirLimit: 15,
storageMedium: v1.StorageMediumMemory,
nodeInfo: schedulercache.NewNodeInfo(
	newResourcePod(schedulercache.Resource{MilliCPU: 2, Memory: 2, StorageOverlay: 5})),
add a description for this case.
Done. It's just a one-line change, so I force-updated the second commit.
lgtm, just add a description of the new case :).
Force-pushed from 1508248 to 3cecb07.
Description added, but it's not a new case though; the git diff is misleading.
/lgtm
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: ddysher, jingxu97, k82cn, vishh. Associated issue requirement bypassed by: k82cn. The full list of commands accepted by this bot can be found here.
@ddysher can you file an issue to get release approval?
Fix is low risk, approved for 1.7
CI seems to be stalled. /test pull-kubernetes-federation-e2e-gce
Automatic merge from submit-queue (batch tested with PRs 47883, 47179, 46966, 47982, 47945)
What this PR does / why we need it:
Fix overlay resource predicates for a pod requesting only overlay or scratch storage.
E.g. the following pod can pass the predicate even if overlay capacity is only 512Gi.
Similarly, the following pod will also pass the predicate.
Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): fixes #47798
Special notes for your reviewer:
Release note:
@jingxu97 @vishh @dashpole