proposal: testing: add support for individual test function execution - expose testing.M.Tests #53701
Duplicate of #27927
@seankhliao the original issue #27927 was only closed due to "age". It was and still is useful. Please note that many other frameworks and languages have the feature requested in #53701, which is to enable code to run before and after each individual test executes. Why does the Go team feel this feature is not useful in the Go context?
Reopening. That said, this proposal needs to clarify the type of the elements of the exposed test list.
We've talked in the past - #28592 (comment) - about exposing a list of tests and letting TestMain reorder and filter it. I'm not as sure about exposing .Run itself. What if the test calls t.Parallel? It doesn't seem like TestMain should be getting in the way of actual test execution. Note that you can get this kind of per-test setup and teardown already by making a single test - func Test for example - and giving it subtests that you invoke with t.Run, checking the result. Parallel still throws a wrench into all of this.
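A minimal sketch of the workaround described above, with `setup`, `teardown`, `testFoo`, and `testBar` as placeholder helpers (none of them part of the testing package):

```go
package mypkg_test

import "testing"

func testFoo(t *testing.T) { /* ... */ }
func testBar(t *testing.T) { /* ... */ }

func setup(name string)             { /* per-test setup */ }
func teardown(name string, ok bool) { /* per-test teardown */ }

// One parent test gives every subtest common setup/teardown, and
// t.Run's boolean result is available for per-test handling.
// As noted above, this breaks down if a subtest calls t.Parallel,
// because t.Run then returns before the subtest completes.
func Test(t *testing.T) {
	subtests := []struct {
		name string
		fn   func(*testing.T)
	}{
		{"Foo", testFoo},
		{"Bar", testBar},
	}
	for _, st := range subtests {
		setup(st.name)
		ok := t.Run(st.name, st.fn) // false if the subtest failed
		teardown(st.name, ok)
	}
}
```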
This proposal has been added to the active column of the proposals project.
If this issue is about per-test setup and teardown, it seems like that is already provided by wrapping t.Run. If that's not right, and there's more to do here, what exactly should we do? What API should we add and why?
Looking at the initial example, there is an idea for inspecting the result of individual tests. I've long wanted a way to do that for my autograder tool. Currently, I have to carefully insert code before every t.Error and t.Fatal. Having a way to inspect the result of a test programmatically would let me do this once instead of at every test failure location. Not sure if that's what the OP wanted with the proposal.
@meling, if you call the tests with t.Run (as subtests), t.Run returns a boolean reporting whether the test passed.
It sounds like maybe there's nothing to do here.
@rsc the use of sub-tests still means repeating boilerplate around every test. So for example, if we wanted to forward the test log for all individual tests (or all failed tests) to a communication service such as Slack, and do this immediately as each test executes: at present we have to parse the log at the end of the run and forward it then. Rather than interacting with unit test output as one whole logged file, we could interact with each test's result and log output individually. The benefit sought by this proposal is to avoid adding lots of boilerplate code inside our tests and around our sub-tests, and instead define common behaviours at a top level, which improves testing effectiveness and would make our CI/CD smoother.
It seems like for this kind of use, it would work to use 'go test -json', which does stream test results out as they happen, with a parent process that reads the JSON and sends whatever notifications are appropriate.
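A sketch of that suggestion, assuming a hypothetical `notify` helper standing in for a Slack (or similar) client: a parent process runs `go test -json` and reacts to each test result as it happens.

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
	"os/exec"
)

// event mirrors the JSON emitted by `go test -json` (cmd/test2json).
type event struct {
	Action  string // "run", "output", "pass", "fail", "skip", ...
	Package string
	Test    string
	Output  string
	Elapsed float64
}

func main() {
	cmd := exec.Command("go", "test", "-json", "./...")
	out, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	logs := map[string]string{} // accumulated output per test
	sc := bufio.NewScanner(out)
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Test == "" {
			continue // non-JSON build output or package-level event
		}
		key := ev.Package + "." + ev.Test
		switch ev.Action {
		case "output":
			logs[key] += ev.Output
		case "fail":
			notify(key, logs[key]) // forward the failed test's log immediately
		}
	}
	cmd.Wait()
}

// notify is a placeholder; a real version might post to a Slack webhook.
func notify(test, output string) { fmt.Fprintf(os.Stderr, "FAIL %s\n%s", test, output) }
```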
@rsc I just provided that as one example. Other cross-cutting concerns would be automatic memory-usage checks, goroutine leak detection, function timings, log stitching, etc. What you describe is doing things at the command line, outside of the running unit test code. There are a lot of useful things that can be done by interacting with each individual test (before and after each test) from inside the running Go test process. Our unit tests call other servers/microservices and we generate unique test trace ids for each unit test; it would be very useful for us to stitch those logs together and report them all with the one unit test log. Please understand this is just one use case/example. Many/most other languages' unit testing frameworks have this ability; I am confused why this type of feature is not seen as important for the Go language.
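For reference, one of those cross-cutting concerns, goroutine leak detection, can be sketched with today's API by wrapping t.Run; the `runChecked` helper and its crude sleep-based settling are assumptions, not anything the testing package provides:

```go
package mypkg_test

import (
	"runtime"
	"testing"
	"time"
)

// runChecked wraps a subtest with a rough goroutine-leak check.
// It returns t.Run's result, i.e. whether the subtest passed.
func runChecked(t *testing.T, name string, fn func(*testing.T)) bool {
	before := runtime.NumGoroutine()
	ok := t.Run(name, fn)
	// Give goroutines started by the subtest a moment to exit.
	time.Sleep(100 * time.Millisecond)
	if after := runtime.NumGoroutine(); after > before {
		t.Errorf("%s: goroutine count grew from %d to %d (possible leak)", name, before, after)
	}
	return ok
}
```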
This proposal doesn't seem like it is converging to something concrete. @mattdbush, can you state what the API is that you think we should add? And note that any API that appears to operate on individual tests needs to account for the Parallel method, which ends up letting multiple tests run at the same time.
@rsc the proposal is to expose the internal list of test functions, which is defined as `[]testing.InternalTest`. At present that list is private to the `testing` package, so `TestMain` cannot iterate over it. Once exposed, tests could be run explicitly with per-test setup and teardown, along these lines:

```go
func TestMain(m *testing.M) {
	setupAll()
	// Proposed: expose the internal test list to TestMain.
	var tests []testing.InternalTest = m.Tests
	// Proposed: opt in to running tests directly with pre- and post-test operations.
	m.EnableExplicitRun()
	code := 0
	for _, test := range tests {
		// individual test setup
		setup(test)
		// Proposed: run one test by name and report whether it passed
		// (today's m.Run takes no arguments and runs every test).
		result := m.Run(test.Name, test.F)
		if result {
			// handle passed test, discard resources etc.
		} else {
			// example scenario - forward log for immediate notification/action;
			// preserve/copy resources for troubleshooting inspection.
			code = 1
		}
		// individual test cleanup / post-test handling
		teardown(test)
	}
	teardownAll()
	os.Exit(code)
}
```

Go coders can then choose to use the internal opaque execution, which is useful and adequate for many situations, but there can be extra value/control in running the tests directly/explicitly, and it means cross-cutting pre/post-test behaviour can be defined once in `TestMain` rather than repeated in every test.
@mattdbush, you can already do setupAll/teardownAll in TestMain.
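That is, the package-level variant already works today (`setupAll` and `teardownAll` being placeholder helpers):

```go
package mypkg_test

import (
	"os"
	"testing"
)

func setupAll()    { /* package-wide setup, e.g. start a database */ }
func teardownAll() { /* package-wide teardown */ }

func TestMain(m *testing.M) {
	setupAll()
	code := m.Run() // run all of the package's tests
	teardownAll()
	os.Exit(code)
}
```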
Based on the discussion above, this proposal seems like a likely decline. |
If for each individual test the framework could optionally execute a `setup` and a `teardown` function, this would enable Go tests to have these functions optionally assigned in the `TestMain` function. The log forwarding was just one example; other examples/use cases (a sketch of such hooks follows this list) are:

- automatic memory-usage checks
- goroutine leak detection
- function timings
- log stitching (correlating logs from external services with each individual test)
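A hypothetical sketch of what such optional hooks might look like if the proposal were adopted; neither `Setup` nor `Teardown` exists on `testing.M` today, and the signatures are assumptions:

```go
func TestMain(m *testing.M) {
	// Hypothetical fields, shown only to illustrate the proposal.
	m.Setup = func(name string) {
		// runs before each individual test, e.g. snapshot goroutine count
	}
	m.Teardown = func(name string, passed bool) {
		// runs after each individual test, e.g. forward the log on failure
	}
	os.Exit(m.Run())
}
```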
It seems like there's an assumption inherent in this proposal that all tests belonging to a particular package will have something in common, and that it's profitable to factor out whatever that is into some common location. If so: can you say more about why an entire package is the right granularity for whatever common behavior you want to factor out? Isn't it possible, even likely, that there will be at least a few tests that are somehow different from the others, for which the setup/teardown would be unnecessary or inappropriate?

I think there are some other possible options too, which would have different benefits and drawbacks.
If there is some need to "program with tests" (to use test cases as data so that we can metaprogram them) then I think it would be good to establish what the most useful level is to do that at, rather than just assuming that a whole package is the right answer because it happens to align with the granularity of today's `TestMain`.

Separately, the need to support `t.Parallel` seems hard to reconcile with this design: the current design hinges on the main loop belonging to `TestMain` rather than to the `testing` package itself.
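To illustrate the granularity point: with today's API, test cases can already be treated as data at whatever level fits, opting individual cases in or out of a shared fixture. In this sketch, `startFixture`/`stopFixture` and the case list are assumptions:

```go
package mypkg_test

import "testing"

func testParse(t *testing.T)  { /* ... */ }
func testServer(t *testing.T) { /* ... */ }
func startFixture()           { /* e.g. start a test server */ }
func stopFixture()            { /* tear it down */ }

func TestAll(t *testing.T) {
	cases := []struct {
		name         string
		needsFixture bool
		fn           func(*testing.T)
	}{
		{"Parse", false, testParse},
		{"Server", true, testServer}, // only this case opts in to the fixture
	}
	for _, c := range cases {
		c := c
		t.Run(c.name, func(t *testing.T) {
			if c.needsFixture {
				startFixture()
				t.Cleanup(stopFixture)
			}
			c.fn(t)
		})
	}
}
```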
No change in consensus, so declined. |
The purpose of this request was to have the test execution exposed and not to use sub-tests, because sub-tests are grouped under functional topics and the test rig surrounding them is assumed at those topics. What is raised here is cross-cutting pre-test execution and post-test assessment, which is topic (function) agnostic. Using sub-tests means repeating pre- and post-test boilerplate code.
Expose the collection of defined `Tests` for individual test function execution so that boilerplate code can be added around the execution of each test (see the `TestMain` example earlier in the thread).