Updated Sig-windows Memory Limits tests to not assume all nodes are the same #107477
Conversation
@NikhilSharmaWe: This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the `triage/accepted` label. The instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
(force-pushed from 9c27fd7 to 7927057)
```go
		},
	}
	f.PodClient().Create(&pod)
}
```
@jsturtevant Is this the right method for adding the image name and node selector to the Pod spec? If yes, could you please explain how to find the correct amount of memory to assign to each pod?
Yes, this is how you would create a pod. You will want to capture the result and verify the pod started:
kubernetes/test/e2e/windows/dns.go, lines 61 to 62 in 7dd03c8:

```go
testPod, err = f.ClientSet.CoreV1().Pods(f.Namespace.Name).Create(context.TODO(), testPod, metav1.CreateOptions{})
framework.ExpectNoError(err)
```
Add a memory limit like this:
kubernetes/test/e2e/windows/cpu_limits.go, lines 127 to 132 in 7dd03c8:

```go
Resources: v1.ResourceRequirements{
	Limits: v1.ResourceList{
		v1.ResourceMemory: memLimitQuantity,
		v1.ResourceCPU:    cpuLimitQuantity,
	},
},
```
You can get that by querying the node for allocatable memory; see the example in the `getTotalAllocatableMemory` function.
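A rough stdlib sketch of what that helper amounts to (the `node` struct and the values here are hypothetical stand-ins for `v1.Node` and its `Status.Allocatable[v1.ResourceMemory]` quantity, not the real test code):

```go
package main

import "fmt"

// node is a hypothetical stand-in for v1.Node: just a name and the
// allocatable memory in bytes from node.Status.Allocatable.
type node struct {
	name        string
	allocatable int64
}

// getTotalAllocatableMemory mirrors the idea of the helper the reviewer
// points to: sum allocatable memory across all nodes in the list.
func getTotalAllocatableMemory(nodes []node) int64 {
	var total int64
	for _, n := range nodes {
		total += n.allocatable
	}
	return total
}

func main() {
	nodes := []node{
		{"win-node-1", 4 << 30}, // 4 GiB
		{"win-node-2", 8 << 30}, // 8 GiB
	}
	fmt.Println(getTotalAllocatableMemory(nodes)) // prints 12884901888
}
```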
I think we only have to add `v1.ResourceMemory: status.Allocatable[v1.ResourceMemory]` in `Limits: v1.ResourceList`. Am I thinking correctly here? Do we also have to add a value for `v1.ResourceCPU`?
Yes, we only need memory in this case.
@jsturtevant In the case of `pod` we can add `v1.ResourceMemory: status.Allocatable[v1.ResourceMemory]` for each `node.Status`, but what should be set as the memory limit for `failurePod`?
It can be anything greater than what is left over in the total allocatable memory in the cluster for Windows, since we have used it all up. Something like 500MB or 1GB should be good.
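The arithmetic behind that suggestion can be sketched in plain Go (the node sizes here are hypothetical; the real test reads them from `node.Status.Allocatable`):

```go
package main

import "fmt"

func main() {
	// Hypothetical allocatable memory per Windows node, in bytes.
	nodeAllocatable := []int64{4 << 30, 8 << 30}

	// The test deploys one pod per node, each requesting that node's
	// full allocatable memory, so nothing is left over in the cluster.
	var total, used int64
	for _, alloc := range nodeAllocatable {
		total += alloc
		used += alloc
	}
	leftover := total - used // 0: the cluster is fully committed

	// failurePod only needs a limit larger than the leftover; a fixed
	// 500 MiB (or 1 GiB) is enough and keeps the test simple.
	failureLimit := int64(500 << 20)
	fmt.Println(failureLimit > leftover) // prints true
}
```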
(force-pushed from 2050e35 to e60d904)
test/e2e/windows/memory_limits.go (outdated):

```go
		},
	},
}
pod, err = f.ClientSet.CoreV1().Pods(f.Namespace.Name).Create(context.TODO(), &pod, metav1.CreateOptions{})
```
@jsturtevant I think `&pod` should be the correct argument for this case and `&failurePod` for the next one, but it is showing an error saying that a pointer type should not be used. Yet in this example a pointer to the `v1.Pod` type is used as the argument. What is the cause of the error here?
@jsturtevant Could you please give your thoughts on the above comment?
Suggested change:

```diff
-pod, err = f.ClientSet.CoreV1().Pods(f.Namespace.Name).Create(context.TODO(), &pod, metav1.CreateOptions{})
+pod, err = f.ClientSet.CoreV1().Pods(f.Namespace.Name).Create(context.TODO(), pod, metav1.CreateOptions{})
```
This, together with the change in https://github.com/kubernetes/kubernetes/pull/107477/files#r798791328, should work.
```go
ginkgo.By(fmt.Sprintf("Deploying %d pods with mem limit %v, then one additional pod", allocatablePods, memPerPod))

// these should all work
pods := newMemLimitTestPods(allocatablePods, imageutils.GetPauseImageName(), podType, strconv.FormatInt(memPerPod, 10))
```
@jsturtevant Yes, true. Could you please help me with this error: `test/e2e/windows/memory_limits.go:132:3: ineffectual assignment to pod (ineffassign)`?
From https://github.com/gordonklaus/ineffassign: an assignment is ineffectual if the variable assigned is not thereafter used. You are assigning `pod` but it is not used again; you can use `_` instead, for example.
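A minimal stdlib sketch of the lint rule and the blank-identifier fix (the `makePod` helper here is a hypothetical stand-in for the e2e `Create` call, not real test code):

```go
package main

import "fmt"

// makePod stands in for a call that returns a result and an error,
// where only the error is checked afterwards.
func makePod() (string, error) {
	return "mem-test-pod", nil
}

func main() {
	// ineffassign would flag this, because pod is never read again:
	//   pod, err := makePod()
	//   if err != nil { ... }

	// Fix: discard the unused result with the blank identifier.
	_, err := makePod()
	if err != nil {
		fmt.Println("create failed:", err)
		return
	}
	fmt.Println("pod created") // prints: pod created
}
```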
(force-pushed from 32e2051 to 003864a)
(force-pushed from 003864a to c34fe0d)
@jsturtevant I think the PR is ready to merge; please let me know if it needs further improvements.
/cc
/assign
```go
	framework.ExpectNoError(err)
}
failurePod := &v1.Pod{
	Spec: v1.PodSpec{
```
Both this and the other pod spec are missing the pod name and container name (see the original pod spec definition for reference). I would suggest this one be called something like "mem-failure-pod" and the other pods be "mem-test-".
```go
},
for _, node := range nodeList.Items {
	status := node.Status
	pod := &v1.Pod{
		Spec: v1.PodSpec{
```
This pod spec should also include a `NodeName: node.Name` field to make it deploy to that node.
(force-pushed from 1f92873 to 54584fe)
Thanks for sticking with this one! The tests passed on a cluster with different sized VMs.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: jsturtevant, NikhilSharmaWe. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/test pull-kubernetes-node-e2e-containerd
unrelated flakes
What type of PR is this?
/kind bug
What this PR does / why we need it:
Updates the SIG-Windows memory limits tests to not assume all nodes are the same.
Which issue(s) this PR fixes:
Fixes #106608
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: