e2e: Fix ResourceConsumer unstable request interval #108104
Conversation
Setting a new consumption target in autoscaling.ResourceConsumer caused the internal sleep duration between consumption requests to reset. The next consumption would then get delayed, starting after a gap of 0-30s.
@pbetkier: This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the appropriate label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Welcome @pbetkier!
Hi @pbetkier. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. Once the patch is verified, the new status will be reflected. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/sig autoscaling
/ok-to-test
Someone from SIG autoscaling needs to look at this, I don't know this code. |
For a better explanation, here is a sample of logs before the fix (requests should be sent every 30s; irrelevant logs omitted):
and after the fix:
/lgmt
/approve
/lgtm

@pohly got lgtm from autoscaling member, could you approve?
/approve |
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: jbartosik, pbetkier, pohly. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval in a comment.
The Kubernetes project has merge-blocking tests that are currently too flaky to consistently pass. This bot retests PRs for certain kubernetes repos according to the following rules:
You can:
/retest
What type of PR is this?
/kind bug
What this PR does / why we need it:
Setting a new consumption target in autoscaling.ResourceConsumer (e.g. a call to rc.ConsumeCPU(...)) caused the internal sleep duration between consumption requests to reset. The next consumption would then be delayed, starting after a gap of 0-30s.

This bug is not visible in the current e2e tests, because they only loosely validate the scaling behavior, waiting a long time for the number of replicas to eventually reach a given target. Even if the gap in resource consumption causes a bad recommendation, that only makes the test run a bit longer. However, it impacts the tests I'm about to add for #102369, as they will validate exactly how the target is reached.
Which issue(s) this PR fixes:
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: