Deflake TestExpectationsOnRecreate #93617
Conversation
/retest
/retest
/retest
/skip
cc @kubernetes/sig-apps-pr-reviews
@liggitt thanks for looking into the test and finding the cause!
I've sketched an alternative in #93725 while I was debugging the test, handling the addNode case (https://github.com/kubernetes/kubernetes/pull/93725/files#diff-54638f25c4f816a4ba21c67ef036c363R504).
I am also fine with going with this PR and picking that up or addressing it as a follow-up.
/approve
if actual := dsc.queue.Len(); actual != expected {
    t.Fatalf("expected queue len to remain at %d, got %d", expected, actual)
}
time.Sleep(10 * time.Millisecond)
Wonder if we need the wait; it gets some chance at catching a potential error (but we are only oscillating between 0 and 1). Wouldn't the same be seen with go test -count=100 -parallel=100? Although the cost isn't high.
I'm mostly trying to avoid hotlooping the CPU
t.Fatal("Unexpected item in the queue")
// process updates DS, update adds to queue
waitForQueueLength(1, "updated DS")
ok = dsc.processNextWorkItem()
might be better to just do
dsc.queue.Get()
dsc.queue.Done(key)
to avoid possible side effects of sync, although those should be constant on the second run
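The suggestion is to pop the item and mark it done without invoking the controller's sync. A minimal sketch of those Get/Done semantics, using a toy queue rather than client-go's real workqueue:

```go
package main

import "fmt"

// toyQueue mimics the Get/Done shape of a work queue just enough to show the
// drain pattern; it is not the real client-go implementation.
type toyQueue struct {
	items      []string
	processing map[string]bool
}

func newToyQueue() *toyQueue {
	return &toyQueue{processing: map[string]bool{}}
}

func (q *toyQueue) Add(item string) { q.items = append(q.items, item) }

// Get pops the next item and marks it in-flight. The second return value
// signals "nothing to get" in this toy, loosely like workqueue's shutdown flag.
func (q *toyQueue) Get() (string, bool) {
	if len(q.items) == 0 {
		return "", true
	}
	item := q.items[0]
	q.items = q.items[1:]
	q.processing[item] = true
	return item, false
}

// Done marks the in-flight item finished.
func (q *toyQueue) Done(item string) { delete(q.processing, item) }

func (q *toyQueue) Len() int { return len(q.items) }

func main() {
	q := newToyQueue()
	q.Add("default/ds")

	// Drain the item without running any sync logic, as suggested above.
	key, _ := q.Get()
	q.Done(key)

	fmt.Printf("drained %q, queue len now %d\n", key, q.Len())
}
```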
to avoid possible side effects of sync, although those should be constant on second run
I actually wanted to exercise that and verify a requeue doesn't occur, since a second sync is really what happens
I suppose I prefer a separate test case but it's fine
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: liggitt, tnozicka. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/lgtm
/retest
@liggitt: The following test failed, say
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
What type of PR is this?
/kind flake
What this PR does / why we need it:
Fixes a flake in TestExpectationsOnRecreate caused by the test not accounting for a requeue triggered by the DS being updated while the initial create was still being processed.
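A toy illustration of the flake's mechanism, with simplified, hypothetical names (simulateFirstSync stands in for the controller's sync path): processing the initial create mutates the DS, the update event handler re-enqueues the same key, and an assertion expecting an empty queue right after the first sync can therefore observe length 1.

```go
package main

import "fmt"

// simulateFirstSync is illustrative, not controller code: it pops the create
// key, pretends the sync updates the DS, and re-enqueues the key the way an
// update event handler would.
func simulateFirstSync(queue []string) []string {
	key := queue[0]
	queue = queue[1:]
	// The sync updates the DS; the update handler enqueues the key again.
	queue = append(queue, key)
	return queue
}

func main() {
	queue := []string{"default/ds"} // initial create enqueued
	queue = simulateFirstSync(queue)
	// The flaky assertion expected len == 0 here; the requeue makes it 1.
	fmt.Printf("queue len after first sync: %d\n", len(queue))
}
```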
Which issue(s) this PR fixes:
xref #93605
fixes #93604
Does this PR introduce a user-facing change?:
/sig apps
/cc @tnozicka @janetkuo