proposal: testing: programmatic sub-test and sub-benchmark support #12166
Support for table-driven tests and benchmarks could be improved by allowing subtests and subbenchmarks to be spawned programmatically, for example via a Run method on T and B in the testing package. This would enable a variety of use cases.
Semantically, subtests and subbenchmarks would behave exactly the same as the current top-level counterparts. In fact, the top-level counterparts could be implemented as subtests/benchmarks of a single root. Details would need to be worked out in a design doc.
@josharian: browsing through the issues: #12145, #9268 (subtests). For many of the related requests Russ mentions, no GitHub issues were filed (that I'm aware of). This includes things such as the ability to Fail a single iteration of a table-driven test, various ways to run things in parallel, and better ways to structure benchmarks, among others.
I have read the design doc and find this idea very intriguing. I have test code that could be improved with the use of the proposed features.
I find the discussion of subbenchmarks in the proposal somewhat lacking in context. The example refers to variables
I would like to see some discussion of how subtests can be accomplished with the existing testing package. How far can we get without this proposal? I have some ideas from my own experience, but I think the proposal should make the comparison more direct.
In short, I like what I see, but we should be thorough.
@ChrisHines: a commonly suggested alternative using the existing testing package is to factor the shared logic into helper functions and then write top-level test functions that simply pass the parameters. This recovers most of the functionality, but results in considerably larger and more awkward code. It is possible to generate such code, but that is not necessarily less awkward. I'll update the doc.
See https://go-review.googlesource.com/#/c/2322/4/unicode/norm/normalize_test.go for a rather elaborate example; the benchmark code in particular becomes considerably shorter and more data-driven.
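As a rough sketch of the data-driven benchmark idea using only today's API: the existing testing.Benchmark entry point can be called in a loop over input sizes. Under the proposal, the loop body would instead become a named sub-benchmark inside a single top-level Benchmark function. The function name and sizes here are mine, purely illustrative.

```go
package main

import (
	"fmt"
	"strings"
	"testing"
)

// benchUpper measures strings.ToUpper on an input of length n using
// the existing programmatic entry point testing.Benchmark.
func benchUpper(n int) testing.BenchmarkResult {
	s := strings.Repeat("a", n)
	return testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			strings.ToUpper(s)
		}
	})
}

func main() {
	// Today: one explicit invocation (or one top-level Benchmark
	// function) per size. The proposal would let a single Benchmark
	// function spawn a named sub-benchmark per size instead.
	for _, n := range []int{16, 256} {
		r := benchUpper(n)
		fmt.Printf("n=%d ran=%v\n", n, r.N > 0)
	}
}
```

The benefit of folding this into sub-benchmarks is that the harness, rather than user code, owns naming, filtering with -bench, and result reporting for each size.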
The use of Run for tests in this CL is excessive and unnecessary but was done to explore its boundaries a bit.
Thanks for writing this up, and my apologies for not getting a chance to review it sooner.
This proposal makes multiple significant changes and additions to the testing package. The overall effect, it seems to me, is to move away from the current minimalist spirit of package testing. I'd like to keep that spirit but still get most of the benefits of this proposal. To that end, I have a few suggestions: