Flakey tests #10807
This is caused by #10768. Until kit is fixed or reverted, this will continue to happen. From your unit test logs:
https://github.com/argoproj/argo-workflows/actions/runs/4592799925/jobs/8110154390 It's not related to kit. Are you looking somewhere else?
I was looking at the failed e2e test, apologies.
Also a flaky e2e test, test-executor: https://github.com/argoproj/argo-workflows/actions/runs/4608012677/jobs/8143253843
Flaky test-cli https://github.com/argoproj/argo-workflows/actions/runs/4639608340/jobs/8210973276
test-cli: https://github.com/argoproj/argo-workflows/actions/runs/4663229967/jobs/8254446014
@GeunSam2 Would you like to take a look at this one? It seems pretty consistent. Another example: https://github.com/argoproj/argo-workflows/actions/runs/4670440612/jobs/8270253757
Okay, I'll look into why the hooks tests are failing.
Another one: #11384
Hooks tests are very flaky. Disabled them for now. Need to investigate potential bugs:
cc @toyamagu-2021 Would you like to help us debug these since you added these tests? (after you wrap up with the UI issues)
Yes, of course.
Also, this is technically a duplicate of #9027. They've got different flakes listed in each, but we could consolidate them into one issue.
I think we should really prevent these flaky tests from being merged in the first place. @terrytangyuan and @agilgur5, what are your opinions on running the test suite in parallel 10 (or so) times and only allowing merging when the tests pass in all runs? If we can launch the jobs in parallel, we shouldn't suffer any wait time increases. We would probably need to pay for the extra compute, but I suspect it'd be cheaper than the person hours that go into dealing with flaky tests.
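For concreteness, running the same suite N times in parallel can be expressed as a GitHub Actions matrix. This is only a sketch of the idea under discussion, not existing CI config; the job name and `make test-e2e` target are assumptions:

```yaml
# Hypothetical sketch: fan the same E2E job out 10 times via a matrix.
jobs:
  e2e:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false          # let all attempts finish so every flake is visible
      matrix:
        run: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    steps:
      - uses: actions/checkout@v4
      - name: Run E2E suite (attempt ${{ matrix.run }})
        run: make test-e2e      # hypothetical target name
```

Each matrix entry becomes its own job, so the attempts run concurrently (subject to the plan's concurrency limits discussed below).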
Ostensibly yes, but the average wait time would increase, since some jobs queue longer than others, some wait on the network longer, etc. This would probably put us over the limit of parallel jobs more frequently, causing more queueing as well.
I don't think this would actually help solve the problem. We're taking somewhat inaccurate flaky tests, usually caused by race conditions, and taking an even more inaccurate approach to them: "run all tests more times". Most PRs don't change the tests much, if at all, but they would fail more often with a change like this, which would cause a lot of hours of investigation or confusion due to existing flakes that were not caused by new code. That's the current biggest issue, and this would increase it. I think we should be more precise in our approach. So if we wanted to take an approach like this, I would recommend one of:
I presume we would be paying for more capacity here, but I can't see the time increasing by that much; sure, some pipelines will take a bit longer, but that's fine as long as the wait times generally stay similar to what we have now.
I see where you are coming from; I kind of elided the fact that by the time we implement this, we should have no more flaky tests.
This is effectively what I am saying, I suppose, but to be more precise: some kind of flaky test detection would be nice to have as well.
As far as I know, we're not currently paying anything and are on the free plan. There are concurrency limits that apply per plan (and I believe they apply to the entire GH org, not per repo). If we run 140+ (10 * (13 E2Es + 1 unit test suite)) more jobs per run, we will almost certainly hit that limit, which will cause queueing, i.e. some parallel jobs will end up running sequentially, which will definitely increase wait times. It also may increase wait times across the entire org.
Tall ask -- will this ever be true? 😅
I wrote that suggestion to only run the new tests themselves multiple times. But actually, rethinking this, neither of these would be correct; a source code change can cause a flake in an existing test, e.g. if a new unhandled race was introduced. That exact scenario has already happened multiple times. I'm still thinking a nightly or weekly job would make more sense than one on each PR.
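A nightly job of that kind could be a scheduled workflow that repeats the suite off the critical path. Again a sketch only, with hypothetical names and an assumed `make test-e2e` target:

```yaml
# Hypothetical sketch: scheduled flake hunt that never blocks PRs.
name: nightly-flake-check
on:
  schedule:
    - cron: "0 3 * * *"        # every night at 03:00 UTC
jobs:
  e2e:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        run: [1, 2, 3, 4, 5]   # repeat the suite to amplify the flake signal
    steps:
      - uses: actions/checkout@v4
      - name: Run E2E suite
        run: make test-e2e     # hypothetical target name
```

Since scheduled runs happen off-hours, the extra jobs compete less with PR jobs for the org's concurrency limit.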
Pre-requisites
What happened/what you expected to happen?
Unit tests failed: https://github.com/argoproj/argo-workflows/actions/runs/4592799925/jobs/8110154390
Version
latest
Paste a small workflow that reproduces the issue. We must be able to run the workflow; don't enter a workflow that uses private images.
See recent CI builds
Logs from the workflow controller
Logs from your workflow's wait container