rearrange parallelism in AKS machine pool e2e tests #4874
Conversation
@@ -47,109 +47,92 @@ func AKSMachinePoolSpec(ctx context.Context, inputGetter func() AKSMachinePoolSp
	input := inputGetter()
	var wg sync.WaitGroup

	originalReplicas := map[types.NamespacedName]int32{}
Most of the changes in this file are indentation, so I'd suggest ticking the "hide whitespace" checkbox.
Codecov Report: All modified and coverable lines are covered by tests ✅

@@ Coverage Diff @@
##             main    #4874      +/-   ##
==========================================
+ Coverage   62.01%   62.03%   +0.02%
==========================================
  Files         201      201
  Lines       16860    16878      +18
==========================================
+ Hits        10455    10470      +15
- Misses       5622     5625       +3
  Partials      783      783

☔ View full report in Codecov by Sentry.
/cherry-pick release-1.15
Skipping 1.14 because I haven't seen the flakes there, this will probably not cherry-pick cleanly, and I'm lazy.
@nojnhuh: once the present PR merges, I will cherry-pick it on top of release-1.15 in a new PR and assign it to you. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
The AKS job failure looks like the same flake I've been seeing recently, and these changes definitely helped narrow it down! I have a fix for that open in #4875. In the meantime,
/lgtm
LGTM label has been added. Git tree hash: 544128ceca1d553fecf79d1dee8299175a28e3b0
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: jackfrancis. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/retest
@nojnhuh: new pull request created: #4877. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
I have been seeing some flakes in the e2e tests for AKS machine pool scaling recently, and it's hard to know exactly what's going wrong because of the way Ginkgo parallelizes these tests (I think).
The tests are currently structured like this:
It seems that when Ginkgo sees a failure in any of these steps, the test continues to the next step anyway, which can produce misleading error messages, and the collected artifacts don't reflect the state of the resources at the time the test failed.
This PR reformulates the tests to work like this:
This way, Ginkgo does short-circuit when any of the intermediate steps fail because all the steps are done in the same goroutine for an individual MachinePool.
Which issue(s) this PR fixes (optional; in fixes #<issue_number> format, will close the issue(s) when the PR is merged):
Fixes #
Special notes for your reviewer:
TODOs:
Release note: