proposal: testing: way to list tests and benchmarks without running them #17209
I would like this too for Go's distributed build system. It would make its test sharding more efficient.
I propose this: we add -test.dryrun or something like this to the testing package, and it will output the list of matched tests and benchmarks. I think it's more useful than simply listing all tests/benchmarks/examples.
@minux, I'm not sure I see the difference. Why is that more useful? It sounds like you're just proposing a different flag name.
The key difference is that it displays only the matched names, so it will help debug -test.run/-test.bench problems too.
I see. That's fine. I like the proposed behavior, at least, even if I'm not crazy about the name "dry run". But I suppose dryrun is consistent with a bunch of other tools.
I agree being able to do filtering would be nice, and I agree dryrun seems a bit off, but as long as the functionality is there I am OK. If it is implemented with the filtering, I guess the best place to try to implement it would be in RunTests, RunExamples and runBenchmarkInternal then?
One complication is subtests and sub-benchmarks. In general, we can't know whether a test has any subtests without running it. That is, to query all the tests, we basically have to run all of them. One compromise solution is this: Any opinions on this?
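To make the complication concrete, here is an illustrative test (not from the thread) whose subtest names are only known at run time, so no tool can enumerate them without executing the parent test; caseNames is a hypothetical helper:

```go
package dynamic_test

import (
	"fmt"
	"os"
	"testing"
)

// caseNames stands in for any runtime source of names (a file, an
// environment variable, generated data); it is a hypothetical helper.
func caseNames() []string {
	if os.Getenv("RUN_BIG") != "" {
		return []string{"small", "medium", "large"}
	}
	return []string{"small"}
}

// TestDynamic's subtest names depend on runtime data, so listing them
// requires running TestDynamic itself.
func TestDynamic(t *testing.T) {
	for i, name := range caseNames() {
		t.Run(fmt.Sprintf("%d-%s", i, name), func(t *testing.T) {
			// ... actual assertions would go here ...
		})
	}
}
```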
cc @mpvl Agree that this can't work with subtests. Otherwise I think it makes sense.
Indeed it won't work with subtests and only partially with benchmarks. I thus think that the "-dryrun" flag is a bit misleading. You can achieve the same today by using "-benchtime=1ns". This also more accurately indicates what is going on in reality. BTW, we actually thought for a moment about having the trial be N=0 instead of N=1, which would have made this more feasible, but it caused too many incompatibilities.
Having a feature that just lists top-level tests, benchmarks, and examples without running them might still make sense, but adding -dryrun to display either matched subtests or sub-benchmarks doesn't make sense to me, especially as you can't do anything more than what you can do today.
I could imagine doing something like only "probing" tests/benchmarks if static analysis shows that they will call Run. This would go a long way toward providing the wanted functionality, but I'm not sure that we want to go there.
I don't understand why adding -dryrun doesn't do anything that you can't do today. If that's the case, we might as well just close this issue, because one can always run the test binary with -test.v -test.bench=. -test.benchtime=1ns and parse the output to see which tests/benchmarks/examples are run.
Maybe I misunderstood: to just show the top-level tests/benchmarks, yes, it makes sense, but not to list sub-benchmarks. I find it misleading to start running tests/benchmarks as soon as the pattern includes a '/', for example. It is not a dry run anymore at that point, and you can simulate the same by using --benchtime=1ns.
@minux Well, that still runs the tests, which is not wanted. A test that takes more than a trivial amount of time will block.
For my use case no filtering is needed, and if filtering is required I think grep or other Unix utilities would be just fine. Like you said, filtering for subtests really is just showing the match for the top level anyway. I think a simple loop over the testing M's tests, benchmarks, and examples is all that is really needed. Here is all I was originally imagining in m.Run(), roughly sketched below.
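A minimal sketch of such a loop, assuming the names are available via the exported testing.InternalTest, testing.InternalBenchmark, and testing.InternalExample types; the listAll helper and its wiring into m.Run() are invented for illustration:

```go
package listsketch

import (
	"fmt"
	"testing"
)

// listAll prints one name per line for every test, benchmark, and example
// handed to the test binary, after which the caller would exit 0 without
// running anything. The helper and its wiring into m.Run() are assumed;
// the element types are the exported ones the generated test main passes
// to the testing package.
func listAll(tests []testing.InternalTest, benchmarks []testing.InternalBenchmark, examples []testing.InternalExample) {
	for _, t := range tests {
		fmt.Println(t.Name)
	}
	for _, b := range benchmarks {
		fmt.Println(b.Name)
	}
	for _, e := range examples {
		fmt.Println(e.Name)
	}
}
```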
I am not sure I see anything more than this is really necessary. There are examples in other languages / test packages; in GTest: https://github.com/google/googletest/blob/master/googletest/docs/AdvancedGuide.md#listing-test-names This is just used by our internal test runner, which is only passed the compiled binary, to get a list of tests to run.
The problem is not about filtering. It's that it's impossible to list subtests/sub-benchmarks without running the top-level tests and benchmarks. Just listing top-level tests is not that useful, because it's a matter of go list -f "{{.TestGoFiles}} {{.XTestGoFiles}}" and grep to achieve the same without any new code.
Except my test runner doesn't actually interact with source code. It interacts with the test binary.
You can also use nm(1) on the test binary and look for the proper text symbols if you only need top-level tests and benchmarks.
I can't get Examples that have output (i.e. they will be run like a test), but it is better than nothing, I guess.
Subtests make this impossible to do at subtest granularity, but we could still have -list=pattern list all the tests - Test*, Benchmark*, and Example* - that match. That seems fine.
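For the filtering variant, a hedged sketch of matching top-level names against a -list pattern with -run-style regexp semantics; the listMatching helper is hypothetical and is not the code from the CL referenced below:

```go
package listsketch

import (
	"fmt"
	"regexp"
	"testing"
)

// listMatching prints the top-level test names matching pattern, one per
// line, in the spirit of the proposed -list=pattern. The helper itself
// is invented for illustration.
func listMatching(pattern string, tests []testing.InternalTest) error {
	re, err := regexp.Compile(pattern)
	if err != nil {
		return fmt.Errorf("invalid -list pattern: %v", err)
	}
	for _, t := range tests {
		if re.MatchString(t.Name) {
			fmt.Println(t.Name)
		}
	}
	return nil
}
```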
CL https://golang.org/cl/41195 mentions this issue.
Proposed feature enhancement for discussion:
The proposal is to add two new flags, 'test.list' and 'test.listbench'. When the test binary is executed with one of them, it will list all tests (including examples with output set) or benchmarks.
The tests will be printed to stdout, one test per line, and the binary will then exit with an exit code of 0. No tests would be run, and no other operation performed. It would be nice to include some filtering, but it's not necessary for my use case. This would probably be easiest to add at the top of M.Run().
Use case: we have a custom test runner that unifies all languages' test output, and part of the procedure is test discovery. Right now there isn't a great way to get a list of the tests that are included in a test binary.
Not included in this feature enhancement are any modifications to the 'go test' tool to pass the setting through.
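For concreteness, a sketch of how the two proposed flags might be declared; only the flag names "test.list" and "test.listbench" come from the proposal text, everything else is assumed:

```go
package listsketch

import "flag"

// Hypothetical declarations for the proposed flags; only the flag names
// come from the proposal text above.
var (
	listTests = flag.Bool("test.list", false, "list tests and examples with output, then exit")
	listBench = flag.Bool("test.listbench", false, "list benchmarks, then exit")
)
```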