test: e2e: HPA ContainerResource - Lower requests b/c multiple containers will leave pending pods on existing test infra #104441
Conversation
@jsturtevant: This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/cc @viveksyngh
@jsturtevant: GitHub didn't allow me to request PR reviews from the following users: viveksyngh. Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign @bskiba
I'm in favor of using the same values for Linux and Windows, but if for some reason anyone objects, we could leave the Linux values as default and use these updated values for Windows.
@krzysied, @josephburnett would one of you be able to take a look?
/assign |
  targetCPUUtilizationPercent: 20,
  minPods:                     1,
  maxPods:                     5,
  firstScale:                  3,
  firstScaleStasis:            stasis,
- cpuBurst:                    700,
+ cpuBurst:                    500,
Why was every other value divided by 2 while this one was changed differently?
(I understand it works anyway because we set the max pods; I want to understand why "scaling" the test was not an option.)
Maybe I misunderstood this value; I thought it was the amount of CPU put onto the pods to force them to scale, and that as long as this value is above the CPU request it would cause scaling. Leaving it at 700 would work as well, but I thought I should dial it back to keep it in line with the other values.
I want to understand why "scaling" the test was not an option
I don't understand what you mean by "scaling" the test?
I confirmed this value is the amount of CPU to consume on the container:
rc.ConsumeCPU(scaleTest.cpuBurst)
I thought that burst number was calculated to trigger exactly 5 replicas (even if max pods number were higher). But it seems that it was previously adjusted for 7 replicas which is above the limit. My bad.
By "scaling" the test I meant dividing each CPU value by 2, so the ratios would be preserved.
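To make the replica math in this thread concrete, here is a small sketch of the HPA calculation the reviewers are reasoning about: replicas needed is roughly the total consumed CPU divided by the per-pod target (request times target percentage), capped at maxPods. The per-container request values (500m before the change, 250m after) are assumptions for illustration; the actual values live in the e2e test source.

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas sketches the core HPA calculation: the number of replicas
// needed so that average utilization per pod drops to the target percentage
// of the container's CPU request, capped at maxPods.
func desiredReplicas(burstMilli, requestMilli, targetPercent, maxPods int) int {
	// Millicores each pod may consume before the HPA wants to scale out.
	targetPerPod := float64(requestMilli) * float64(targetPercent) / 100.0
	n := int(math.Ceil(float64(burstMilli) / targetPerPod))
	if n > maxPods {
		return maxPods // the HPA never scales past maxPods
	}
	return n
}

func main() {
	// Assumed per-container request of 500m before the change, 250m after.
	// Old burst of 700m would call for 7 replicas, above the maxPods of 5.
	fmt.Println(desiredReplicas(700, 500, 20, 5))
	// New burst of 500m against the halved request is still capped at 5.
	fmt.Println(desiredReplicas(500, 250, 20, 5))
}
```

Under these assumed requests, both the old and new burst values end up pinned at maxPods, which matches the observation above that the burst had previously been tuned for 7 replicas, beyond the limit.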
/lgtm
/assign @bskiba
/approve |
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: bskiba, jsturtevant, krzysied. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
What type of PR is this?
/kind failing-test
What this PR does / why we need it:
It lowers the CPU requests for the HPA test that exercises sidecar container resource consumption.
The existing test
Kubernetes e2e suite.[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5
works on the current infra with the same CPU request values. Since the new test adds a second container to each pod, it doubles the amount of CPU required, which causes pods to go to Pending and the test to time out.
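The scheduling arithmetic behind the Pending pods can be sketched as follows. The scheduler sums the requests of every container in a pod, and a node only admits pods while that sum fits in its allocatable CPU, so adding a sidecar with the same request halves the number of pods per node. The node capacity and request values here are illustrative assumptions, not the actual test-infra sizes.

```go
package main

import "fmt"

// podsThatFit illustrates why doubling per-pod CPU requests leaves pods
// Pending: per-pod demand is the sum of all container requests, and a node
// admits pods only while that demand fits within its allocatable CPU.
func podsThatFit(nodeAllocatableMilli, containerRequestMilli, containersPerPod int) int {
	perPod := containerRequestMilli * containersPerPod
	return nodeAllocatableMilli / perPod
}

func main() {
	// Assumed node with 2000m allocatable CPU.
	fmt.Println(podsThatFit(2000, 500, 1)) // single container: 4 pods fit
	fmt.Println(podsThatFit(2000, 500, 2)) // add a sidecar: only 2 pods fit
	fmt.Println(podsThatFit(2000, 250, 2)) // lowered requests: back to 4
}
```

Lowering the per-container request restores the original pods-per-node headroom without resizing the VMs, which is the trade-off this PR makes.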
Which issue(s) this PR fixes:
Fixes #104427
Special notes for your reviewer:
Another option considered was increasing the testing infra VM size. I assume others might have smaller VMs for testing as well, and that change would break them. This PR tests the same functionality but does so without requiring an infrastructure change.
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:
/sig testing
/sig autoscaling
/sig windows