proposal: testing: flag to continue testing after -failfast failure #61009
This seems like it is only relevant in packages where the tests are very slow. In most of the code I work on, running all the tests again would be quicker than typing out all the right flags to selectively re-run the tests that weren't run the first time. In addition, if you're making edits to the code to fix the first failure, it's possible to introduce new failures, so it seems to me that I'd want to run all the tests again anyway (even if they're slow). I also think this is a fairly special-purpose flag that wouldn't be used much but would add to the already considerable weight of documentation for the testing package.
Yes, this change is aimed at packages with slow tests. Integration tests for the package I'm working on right now take 15-20 minutes to run in the environments we generally test in.
The point here is to reduce this to a single flag.
Running all the tests again once makes sense. But that's not what happens in the situations this is aimed at; the early tests end up getting run far more times than the later tests.
This could be documented like:
In addition to the points above, I will also say that the fact that this proposed flag is intertwined with the existing -run and -failfast flags seems potentially confusing.
In this workflow:
you run the tests once each time you change something, which is generally the best approach IMO. Each edit is an opportunity to break a previously-passing test. If your tests take many minutes to run, then I could see why you'd want to avoid re-running the tests after every change. Maybe you want to re-test only the just-fixed test and then only do one big re-run at the end, or something like that. But that workflow seems fairly specialized and not something we need to support specially. If I were facing a similar situation, some options I'd think about would include:
This feature isn't aimed at you. For people who are already using -failfast on slow test suites, it would cut down the number of redundant re-runs.
I agree this might be confusing. Another approach could be to have a new, standalone flag rather than changing the behavior of -run.
Yes. Exactly that.
I propose that part of that specialization is because we don't have this feature. People's testing habits are based on the features available to them.
It's a long list. And yes, that's what I'm doing now. My current plan is to put together a shell script that does a bunch of go test -run invocations.
My integration tests are mostly of similar length. There are no slow tests to omit or t.Skip. I'd be doing it for every test before the now-passing one, which just brings us back to the original problem. Also, this isn't just about the project I'm working on right now. The same problem exists in half the closest dozen related projects (mostly written by substantially different people / teams / etc).
Also, a lot of the suggested alternatives become much more tedious/difficult when the tests are spread across multiple packages.
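For concreteness, the shell-script workaround mentioned above might look roughly like the sketch below. The test names and package path are hypothetical, and the list of remaining tests has to be written out and kept in order by hand, which is exactly the tedium the proposal is trying to remove.

```sh
#!/bin/sh
# Hypothetical workaround: re-run the failing test and every test after it,
# one invocation per test, stopping at the first failure.
set -e
for t in TestD TestE TestF; do
    go test -run "^${t}\$" .
done
```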
How should this handle tests that call t.Parallel?
I don't think making a test parallelizable affects the order in which tests are launched, and it's that order that this option would respect. Do you have a particular scenario in mind?
There is also an analogous interaction with package patterns.
For parallel tests, there could be some overlap. If TestA, TestB, TestC are run in parallel and TestB fails, then starting over at TestB would run TestC again. Running tests across packages, as long as -failfast doesn't stop other packages from testing, would also produce some repeat tests. However, both of these cases would still produce fewer repeat tests than running all tests again.
Right now parallel tests work by starting the test and then immediately suspending it. Once all the non-parallel tests have completed, the parallel tests are started, with the number of tests run concurrently controlled by the -parallel flag.
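To make that concrete, here is a minimal sketch (hypothetical test names) of how top-level parallel tests behave: each one starts in declaration order, pauses at t.Parallel, and only resumes after the sequential tests finish, with concurrency capped by -parallel.

```go
package example

import "testing"

// Sequential test: runs to completion before any paused parallel test resumes.
func TestSequential(t *testing.T) {
	// ... assertions ...
}

// These two are started in order but immediately pause at t.Parallel.
// They resume after the sequential tests are done, subject to the -parallel
// limit, so "the first failing test" is less well defined for them.
func TestParallelA(t *testing.T) {
	t.Parallel()
	// ... assertions ...
}

func TestParallelB(t *testing.T) {
	t.Parallel()
	// ... assertions ...
}
```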
Thanks. That wasn't clear to me. You're right that this would not fit cleanly here, but it would still reduce the number of tests run in most such cases.
The motivation here is unclear, or at least unconvincing. If you want to see all the other test failures, why use -failfast in the first place? |
I've added a "Goal" section to the original post to clarify motivation, as follows:
My desired workflow, which I believe would be the fastest path to success in the scenarios I'm concerned with, is as follows:
In a perfect world, step 6 wouldn't re-run the tests that passed in the last iteration of step 5, but that would be more complex and less valuable.
Problem
After using go test -failfast to find the first failing test, then fixing the failure, there is no straightforward way to resume the tests from the point of failure. I have to either run all the earlier tests again, or enumerate the list of tests that come after the failing test.
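For illustration (hypothetical test names): with today's tools, resuming from the failure means writing out the remaining tests in a -run pattern by hand, e.g.:

```
go test -failfast -run '^(TestD|TestE|TestF)$'
```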
Goal
Reduce the total time to resolve multiple bugs that each affect one or more tests but not most or all tests. Reduce the number of times each test is run.
Proposal
Add a flag such that -run TestFoo will not only run all tests matching TestFoo but also all tests after the first such match.
Example
I didn't want to run TestA or TestB again here, or TestC an extra time. I would rather have done something like
(I acknowledge that's not a good name for the flag)
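For illustration only, with a made-up flag name (not the one from the original post): a single command could then resume at the first match and run everything declared after it.

```
# -runfrom is a hypothetical name for the proposed flag.
# TestA, TestB and TestC are skipped; TestD and every later test run,
# still stopping at the first failure because of -failfast.
go test -failfast -run TestD -runfrom
```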