proposal: testing: expose testing.T constructor so a test with subtests can be verified to fail #39903
Another limitation of the […]
To summarize briefly, #28021 was about how to test a testing helper that takes a `*testing.T`. The answer was to make it take a `testing.TB` instead, and then pass any mock implementation of `testing.TB` (embedding a nil `testing.TB` to satisfy the private method), or to define a custom interface that has only the methods needed (such as `Errorf` and `Fatal`). The problem raised in this issue is that the mock-implementation strategy does not work for helpers that use `t.Run`, because there is no way to write an interface for `Run` compatible with `testing.T`'s `Run`, unless `Run` takes a `*testing.T`. There are at least two possibilities: leave things as they are, or make `*testing.T` mockable in some way.
It's unclear to me exactly how much complexity option 2 would introduce, and how much benefit it would provide. Without that, it's hard to do the cost-benefit analysis to see whether we should do it. What would a mockable `*testing.T` mean exactly? How many changes are involved?
I haven't tried to actually do this, and my understanding of the current implementation may not be correct, but my impression is that it might not involve any changes to testing.T itself. Instead there would be a new function like this:
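For instance, such a function might have roughly this shape. This is purely a sketch of a hypothetical API; `RunDetached` is the name used later in this thread, not an existing `testing` function:

```go
// Hypothetical addition to package testing (sketch only):
//
// RunDetached runs fn with a fresh *testing.T that is not attached to
// any currently running test, then reports the resulting state.
func RunDetached(fn func(t *testing.T)) (failed, skipped bool)
```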
I'm imagining that […]
So, my original description isn't quite right. This would not actually be a constructor, because you wouldn't be doing this:

```go
fakeTestingT := testing.NewMockTesting()
MyTestFunction(fakeTestingT)
if fakeTestingT.Failed() { ... }
```

Instead, it would be more like this:

```go
var result *testing.T = testing.RunDetached(MyTestFunction)
if result.Failed() { ... }
```

Or, alternately:

```go
failed, skipped := testing.RunDetached(MyTestFunction)
if failed { ... }
```
To verify that a test expected to fail actually fails, you can run the test binary as a subprocess and execute the subtest in question using the […]. That achieves the goal of running the test using a fresh […].

It's not exactly a smooth API, but the general pattern could be encapsulated in a package or function to make it smoother, and it would sidestep a lot of other subtle details.
@bcmills Unless I'm misunderstanding, I don't see how you can encapsulate that in a way that can just be called from test code; it would require you to design your whole build around the fact that you're going to be using this, since you would need to have built a standalone test binary. It also means that the subtest you're running would need to have its inputs baked in (assuming they aren't just simple values that could be passed in an environment variable). That is, if what you want is to verify that […].

It would get the job done, but it's elaborate enough that I think there would no longer be an advantage over just writing the relevant tests against some other abstraction instead of […].
@bcmills As for […]
Well, there is at least one other relevant mutable global, […].
@eli-darkly, the usual pattern that @bcmills was alluding to is to do something like:
One benefit of this is that you don't have to find all the ways a test might fail hard, like panicking in a newly started goroutine, which you can't recover from. The OS takes care of that.
@rsc I understood the general idea, although I didn't realize that […]. Also, like I said, it means that you need a separate top-level test method for every kind of input that is supposed to cause a failure (unless it can be entirely represented in terms of environment variables), so that you must have a top-level […].
Well, at least with regard to panics, I don't consider that to be an advantage. Test code that I'm verifying in this way should not panic; if it does, I want that to bring everything down, so that I realize something's badly wrong. I don't want that to just look like a test failure.

I don't want to keep arguing this at length, just wanted to be clear about what I meant. I realize that for many people this is just not a big deal and the workarounds that have been suggested are fine; I just wanted to raise it as a possibility, in case I was right that it wouldn't be terribly hard to implement in the standard library. Thanks for the thoughts and I'll keep an eye on this issue.
@eli-darkly, I agree it's a little bit of a pain to make a subprocess, but there really isn't a clear way to address the problem otherwise right now. Until we get a clearly right design, we typically leave well enough alone. That seems like the best thing to do here.
Based on the discussion above, this seems like a likely decline. |
No change in consensus, so declined. |
This is the same proposal as #28021, but I'd like to raise it again because the proposed solution (passing in a `testing.TB` instead of a `*testing.T`) isn't applicable to my use case.

## 1. Basic scenario (same as previous proposal)
I've defined a component interface which may have many implementations. All of the implementations are expected to adhere to the same contract for various standard operations. So I've created a base test suite that, given an implementation instance, runs a standard set of contract tests against it. Something like this:
Now, if I'm going to tell everyone to use this test suite to validate their implementations, I want to make sure it is actually testing what it's supposed to be testing. So I have a test suite for the test suite (TSftTS for short). It creates a mock implementation of `MyInterface` which is guaranteed to adhere to the contract, and it runs `RunStandardTests` on that, which should pass.

However, I also need to make sure the test suite fails when it's supposed to fail, by instrumenting my mock implementation of `MyInterface` to break the contract in various ways. But I can't run this logic against the `*testing.T` that is passed into the TSftTS, because even though I would be able to detect the failure with `t.Failed()`, it would still cause the TSftTS itself to fail, which is the opposite of what I want.

## 2. The previously proposed solution
Change `RunStandardTests` to take a `testing.TB` instead of a `*testing.T`. Create a mock implementation of `testing.TB` which records failures, run the test suite against the deliberately-bad component with this mock implementation, and verify its state afterward.

## 3. My new concern
There are many tests in this test suite. So, I've used `Run` to produce nicely organized results: […]

Unfortunately, for obvious reasons the `testing.TB` interface does not have a `Run(string, testing.TB)` method.

## Possible solutions
### Preferred
What would be nicest from my point of view is, as the original issue reporter requested, a way to create a `testing.T` that is not coupled to the current test environment, so that if it fails, it sets its own `Failed()` state to true but does not cause the parent test to fail.

### Workaround
One workaround would be to define yet another interface.
That ought to work, but it's ungainly. It also means that if I have any test helpers that take a `*testing.T`, which are also used by other test code that uses `*testing.T`, I would have to create new versions of them that take `MyTesting` instead.
instead.(I'm using go1.13.7. The rest of my environment details aren't relevant)