
proposal: testing: add support for individual test function execution - expose testing.M.Tests #53701

Open
mattdbush opened this issue Jul 6, 2022 · 12 comments

@mattdbush commented Jul 6, 2022

Expose the collection of defined tests so that individual test functions can be executed directly, allowing boilerplate code to be added around the execution of each test.

Example:

func TestMain(m *testing.M) {
	setupAll()
	code := 0
	for _, t := range m.Tests { // m.Tests is the proposed API: the collection of defined tests
		setup(t)
		result := t.Run()
		if result.Passed {
			// handle passed test, discard resources etc.
		}
		if result.Failed {
			// example scenario: forward log for immediate notification/action,
			// preserve/copy resources for troubleshooting inspection.
			code = 1
		}
		teardown(t)
	}
	teardownAll()
	os.Exit(code)
}
@gopherbot added this to the Proposal milestone Jul 6, 2022
@seankhliao (Member) commented Jul 6, 2022

Duplicate of #27927

@seankhliao marked this as a duplicate of #27927 Jul 6, 2022
@seankhliao closed this as not planned Jul 6, 2022
@mattdbush (Author) commented Jul 8, 2022

@seankhliao the original issue #27927 was only closed due to "age". It was, and still is, useful. Please note that many other frameworks and languages have the feature requested in #53701, which is to enable code to be run before and after each individual test executes. Why do you / the Go team feel this feature is not useful in the Go context?

@ianlancetaylor (Contributor) commented Jul 8, 2022

Reopening.

That said, this proposal needs to clarify the type of the elements of m.Tests. The sample code is calling t.Run, but that is clearly not the method testing.T.Run.

@ianlancetaylor reopened this Jul 8, 2022
@ianlancetaylor added this to Incoming in Proposals Jul 8, 2022
@rsc (Contributor) commented Jul 13, 2022

We've talked in the past - #28592 (comment) - about exposing a list of tests and letting TestMain reorder and filter it. I'm not as sure about exposing .Run itself. What if the test calls t.Parallel? It doesn't seem like TestMain should be getting in the way of actual test execution.

Note that you can get this kind of per-test setup and teardown already by making a single test - func Test for example - and giving it subtests that it runs with t.Run, checking the result (as in the sketch below). Parallel still throws a wrench into all of this.
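
[Editor's note] A minimal sketch of the wrapper pattern described above, using only the existing testing API. The setup, teardown, and notifyFailure helpers, and the testFoo/testBar bodies, are hypothetical stand-ins for whatever per-test boilerplate a project needs; they are not part of the testing package.

package example_test

import "testing"

// Hypothetical per-test boilerplate.
func setup(name string)         {}
func teardown(name string)      {}
func notifyFailure(name string) {}

// Hypothetical real test bodies.
func testFoo(t *testing.T) { /* ... */ }
func testBar(t *testing.T) { /* ... */ }

// Test is the single top-level test; each real test runs as a subtest,
// so the shared setup/teardown wraps every one of them.
func Test(t *testing.T) {
	subtests := []struct {
		name string
		fn   func(*testing.T)
	}{
		{"Foo", testFoo},
		{"Bar", testBar},
	}
	for _, st := range subtests {
		setup(st.name)
		passed := t.Run(st.name, st.fn) // reports whether the subtest succeeded
		if !passed {
			notifyFailure(st.name) // e.g. forward the subtest's log immediately
		}
		teardown(st.name)
	}
}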

@rsc (Contributor) commented Jul 13, 2022

This proposal has been added to the active column of the proposals project
and will now be reviewed at the weekly proposal review meetings.
— rsc for the proposal review group

@rsc moved this from Incoming to Active in Proposals Jul 13, 2022
@rsc (Contributor) commented Jul 20, 2022

If this issue is about per-test setup and teardown, it seems like that is already provided by wrapping t.Run.
In that case there's nothing more to do here.

If that's not right, and there's more to do here, what exactly should we do? What API should we add and why?

@meling commented Jul 20, 2022

Looking at the initial example, there is an idea for inspecting the result of individual tests. I’ve long wanted a way to do that for my autograder tool. Currently, I have to carefully insert code before every t.Error and t.Fatal. Having a way to inspect the result of a test programmatically would allow doing my thing only once… instead of for every test failure location.

Not sure if that’s what OP wanted with the proposal.

@rsc (Contributor) commented Jul 27, 2022

@meling, if you call the tests with t.Run (as subtests), t.Run returns a boolean reporting whether the subtest succeeded.
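
[Editor's note] A small hedged sketch of that idea for the autograder use case: the boolean result of each t.Run can be collected and inspected in one place, instead of adding code at every t.Error/t.Fatal site. The testParser/testEval bodies are hypothetical placeholders.

package grader_test

import "testing"

// Hypothetical student-facing test bodies.
func testParser(t *testing.T) { /* ... */ }
func testEval(t *testing.T)   { /* ... */ }

func TestAssignments(t *testing.T) {
	results := map[string]bool{}
	results["Parser"] = t.Run("Parser", testParser)
	results["Eval"] = t.Run("Eval", testEval)

	// One place to inspect every outcome programmatically.
	for name, passed := range results {
		if !passed {
			t.Logf("subtest %s failed", name)
		}
	}
}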

@rsc (Contributor) commented Jul 27, 2022

It sounds like maybe there's nothing to do here.

@mattdbush (Author) commented Jul 28, 2022

@rsc t.Run is used in the context of running sub-tests that are specific to a topic or functional area. The test startup and shutdown code wanted by this proposal is for common, cross-cutting concerns, and we would want those functions applied to all unit tests and potentially all sub-tests.

For example, suppose we wanted to forward the test log for every individual test (or every failed test) to a communication service such as Slack, immediately as each test executes. At present we have to parse the log and forward it at the end of the test run. Rather than interacting with the test output as one big logged file, we could interact with the results and log output of each test individually.

The benefit sought by this proposal is to avoid adding lots of boilerplate code inside our tests and around our sub-tests, and instead define common behaviours at a top level, which improves testing effectiveness and would make our CI/CD smoother.

@rsc (Contributor) commented Aug 3, 2022

> For example, suppose we wanted to forward the test log for every individual test (or every failed test) to a communication service such as Slack, immediately as each test executes. At present we have to parse the log and forward it at the end of the test run. Rather than interacting with the test output as one big logged file, we could interact with the results and log output of each test individually.

It seems like for this kind of use, it would work to use 'go test -json', which does stream test results out as they happen, with a parent process that reads the JSON and does whatever notifications are appropriate.
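
[Editor's note] A minimal sketch of that approach: a parent process runs 'go test -json' and reacts to each result as it streams out. The event struct mirrors the JSON fields documented for cmd/test2json; notify is a hypothetical placeholder for posting to Slack, collecting timings, etc.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// event mirrors the JSON emitted by 'go test -json' (see cmd/test2json).
type event struct {
	Action  string
	Package string
	Test    string
	Output  string
}

func main() {
	cmd := exec.Command("go", "test", "-json", "./...")
	out, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(out)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip non-JSON lines (e.g. build errors)
		}
		if e.Test != "" && (e.Action == "pass" || e.Action == "fail") {
			notify(e) // react per test, as each result arrives
		}
	}
	if err := cmd.Wait(); err != nil {
		os.Exit(1)
	}
}

func notify(e event) {
	// Hypothetical: forward to Slack, record timings, stitch logs, etc.
	fmt.Printf("%s: %s.%s\n", e.Action, e.Package, e.Test)
}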

@mattdbush (Author) commented Aug 4, 2022

@rsc I provided that as just one example. Other cross-cutting concerns would be automatic memory-usage checks, goroutine leak detection, function timings, log stitching, etc. What you describe is done at the command line, outside the running test code. There is a lot of useful work that can be done by interacting with each individual test (before and after each test) from inside the running Go test process. Our unit tests call other servers/microservices and we generate unique test trace ids for each unit test; it would be very useful for us to stitch those logs together and report them all alongside the one unit test's log. Please understand this is just one use case/example.

Many, if not most, other languages' unit testing frameworks have this ability; I am confused why this type of feature is not seen as important for Go.
