Tests should pass if test case source provides 0 test cases #1933

Closed

@jnm2 opened this issue Dec 20, 2016 · 25 comments

Comments

@jnm2 (Contributor) commented Dec 20, 2016

I'm in a situation where I need to run a set of integration tests against multiple databases. It's been extremely helpful to use TestCaseSource to give separate test cases for each object in certain database tables. In fact, this is what motivated the move to NUnit in the first place.

The only issue is that it's perfectly valid for tables to be empty. In that scenario, I would expect that no test cases would exist or be run. Surprisingly, and problematically, even though I'm returning an empty IEnumerable<TestCaseData> from the test case source, NUnit still attempts to run the test anyway as a single case with no arguments. And of course the test fails, since the arguments are necessary.

Is this helpful behavior or a bug? If this is helpful behavior to some people, can I globally configure NUnit to always treat 0 test cases as 0 test cases, which matches my intuition and needs?
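For illustration, here is a minimal sketch of the setup being described (names are hypothetical; the source stands in for a database table that may legitimately be empty):

```csharp
using System.Collections.Generic;
using NUnit.Framework;

public class CustomerTests
{
    // Hypothetical source: in the real scenario this would enumerate
    // rows from a database table, which may legitimately be empty.
    public static IEnumerable<TestCaseData> CustomerIds()
    {
        yield break; // zero test cases
    }

    [TestCaseSource(nameof(CustomerIds))]
    public void CustomerIsValid(int customerId)
    {
        // With an empty source, NUnit (as of this issue) still runs this
        // method once with no arguments, and it fails for that reason.
        Assert.That(customerId, Is.GreaterThan(0));
    }
}
```

The expectation argued for in this issue is that the method above would simply produce zero test cases and the run would pass.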

@CharliePoole (Member)

If memory serves (I can check the code tomorrow) NUnit is actually creating a fake test so that there will be a place to report the error back to you. This may not be the best way to handle the problem.

@jnm2 (Contributor, Author) commented Dec 21, 2016

@CharliePoole @nunit/contributors I added failing tests for the expected behavior. Just double checking that this is the correct goal. jnm2@2a7e5a2

@CharliePoole (Member)

@nunit/core-team @nunit/framework-team We need to agree whether this issue is a bug or just how NUnit is intended to work. @jnm2 has already created a simple PR to fix it if we want it fixed.

I see no reason to fail for zero test cases supplied except possibly in the case of theories. What do others think?

@oznetmaster (Contributor)

I see no reason to fail for zero tests supplied in any case. It should be a noop.

@CharliePoole (Member)

In the case of theories, which are not dealt with in the PR, we define passing to mean at least one test case passed all assumptions and all of those that passed the assumptions also passed the assertions. It's a higher standard than normal tests. It seems to me that not finding any data at all is pretty much the same as finding data where none of it qualifies for use under the stated assumptions.

I think theories can take care of themselves here, just as they normally do with their altered definition of success. We should add a test to make sure we don't break that.

@CharliePoole (Member)

A while back we had a discussion about whether fixtures with no tests should fail and decided that was not a cause for failure. I think that parameterized method suites should be treated the same way.

@ChrisMaddock (Member)

I'd agree that zero test cases should not fail.

I think this issue is similar: #1856. However, I think @jnm2's approach of making it pass by default is a better solution than a duplicate attribute.

@jnm2 (Contributor, Author) commented Dec 22, 2016

I think theories can take care of themselves here, just as they normally do with their altered definition of success. We should add a test to make sure we don't break that.

We already have TheoryWithNoDatapointsIsNotRunnable and it is doing its job, as I just found out.

@CharliePoole (Member)

OK, that's as I had hoped. My only quibble is with the Any statement used in the query. Since the code above that point just figured out how many builders there are and iterated through them, it seems silly to waste a Linq query on figuring it out all over again. Does testing builders.Count do the job by itself as predicted?

@CharliePoole (Member)

@rprouse I'd like to be sure that you agree with us on the behavior before OKing the merge...

@jnm2 (Contributor, Author) commented Dec 22, 2016

Does testing builders.Count do the job by itself as predicted?

No. There are two problems with it.

  1. The check must be before this line: https://github.com/jnm2/nunit/blob/56283ed961bdb9f685864408208bd121c3132d2b/src/NUnitFramework/framework/Internal/Builders/DefaultTestCaseBuilder.cs#L140 - easy fix, included in this commit: eb4c6f7

  2. It causes TheoryWithNoDatapointsIsNotRunnable and UnsupportedNullableTypeArgumentWithNoDatapointsAreNotRunnable to fail.
    I changed the tests to show the real problem in this commit: b5c3e36

@jnm2 (Contributor, Author) commented Dec 22, 2016

To deal with the two failing theory tests, we have two options:

  1. Rely on theories with zero cases to not be turned into suites and to fail due to missing arguments. This is what has been done in the past. For this we must detect TheoryAttribute in the builders collection (or introduce bool ITestBuilder.FailIfZeroCases or similar which I think is superior).

  2. Create suites of all ITestBuilder tests, including theories with zero cases, and implement explicit logic to fail theories with zero test cases. I think this is much better separation of concerns. We can have a helpful error message in this case too, rather than "arguments not provided." This allows us to keep the builders.Count > 0 logic.

@jnm2 (Contributor, Author) commented Dec 22, 2016

Correction: I'm no longer convinced that leaving zero-case theories as suites is the best experience. It's probably better to turn them into single tests, as they always have been. I just wish a better message were available than "arguments not provided," such as "no test cases were conclusive."

I'm on the fence. Opinions?

@immeraufdemhund
I've never used a theory so I have no input.

@CharliePoole (Member)

"All test cases were inconclusive" is a Theory-specific failure (not error) message that we don't want to lose. It should only occur when there were test cases and all of them failed their assumptions. Theories are suites. They are defined to fail when any test case fails or all cases are inconclusive. The message should never occur for tests that are not theories. If it does, it's a bug, although maybe a different one from this.

@jnm2 (Contributor, Author) commented Dec 22, 2016

The message should never occur for tests that are not theories. If it does it's a bug, although maybe a different one from this.

I agree. I did not mean to suggest that this message would occur for non-theory tests. Edited.

@CharliePoole (Member)

@jnm2 Theory is an example of a type of test that has its own rules for failure and success. Ideally, those rules should always be encapsulated in the attribute itself. Any time we are using the Type of a specific attribute to make a decision in the framework, we should consider it a smell. We may be violating that encapsulation. This seems to be such a case. 😄

OTOH, the code we are looking at is responsible for building single test cases and suites of parameterized test cases (which includes Theory). It should result in a non-runnable test case or suite iff it is unable to build a test that will run. Currently, we do that indirectly by letting either of the two lower-level methods we call generate a non-runnable test depending on whether or not the right number of arguments are found. It seems as if allowing a parameterized test with no data to pass might be breaking this strategy so that we need a new one.

Just thinking out loud. I'm going to look more closely at the code before continuing.

@CharliePoole (Member)

@jnm2 You do seem to find those issues that look simple but have depths. 😄

@jnm2 (Contributor, Author) commented Dec 22, 2016

@CharliePoole My first idea was, have TheoryAttribute override BuildFrom and if there are zero cases, set the suite to NotRunnable with SkipReason "There are no test cases." I'm liking it. The existing tests pass.
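A rough sketch of that idea (member names approximate NUnit internals; `BuildParameterizedMethodSuite` is a hypothetical helper, and this is not taken from the actual commit):

```csharp
// Illustrative only: after building the suite for a [Theory], mark it
// not-runnable if it contains zero test cases, with a readable reason.
TestSuite BuildTheorySuite(IMethodInfo method, Test parentSuite)
{
    // Hypothetical helper standing in for the normal suite-building path.
    TestSuite suite = BuildParameterizedMethodSuite(method, parentSuite);

    if (suite.TestCaseCount == 0)
    {
        suite.RunState = RunState.NotRunnable;
        suite.Properties.Set(PropertyNames.SkipReason, "There are no test cases.");
    }

    return suite;
}
```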

@jnm2 You do seem to find those issues that look simple but have depths. 😄

This thought has come to mind not infrequently. What is it about me that my workflow usually exercises all the paths less traveled no matter what the library is? :-p
But in the end everyone wins here. =)

@jnm2 (Contributor, Author) commented Dec 22, 2016

Ooh, and now I'm up against the fact that treating [Theory] with no parameters as [Test] is implemented as a side effect of this tests.Count > 0 check as well.

I dunno, is it me? O.o

@jnm2 (Contributor, Author) commented Dec 22, 2016

Okay, the TheoryAttribute needs a way to cause the parameterized method suite (which is built after the fact) to be not runnable.

  • I'd hate to change the ITestBuilder.BuildFrom return type.
  • I'd hate to use an exception for this.
  • I'd hate to have a convention where a single non-runnable child test is elevated to replace the parameterized method suite.

@jnm2 (Contributor, Author) commented Dec 22, 2016

An exception is often appropriate, though. See CombiningStrategyAttribute.BuildFrom.

If there is some problem constructing the suite (which, for theories with zero cases, there is), it could throw an exception and have the exception message represented on the empty suite.

@jnm2 (Contributor, Author) commented Dec 22, 2016

I'm expecting some good critiques, but I just committed what I think is the basis of a good solution.

@rprouse (Member) commented Dec 22, 2016

@rprouse I'd like to be sure that you agree with us on the behavior before OKing the merge...

I also think zero test cases shouldn't fail the tests. I will throw one option out there, though: should/can we use the new warnings for this? That way it would not fail the test, but people who thought they had data would be warned.
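As a sketch of what the warning-based behavior could look like from a user's perspective — NUnit's Assert.Warn records a warning without failing the test; the actual change would live inside the framework rather than in user code like this:

```csharp
using NUnit.Framework;

public class WarningSketch
{
    [Test]
    public void EmptySourceWarnsInsteadOfFailing()
    {
        int caseCount = 0; // stand-in for "the test case source yielded no cases"
        if (caseCount == 0)
            Assert.Warn("Test case source provided no test cases.");
        // Result: a Warning outcome rather than Failed, so the run is not
        // broken, but someone who expected data is alerted.
    }
}
```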

@CharliePoole (Member)

@jnm2 I'll re-review.

@rprouse I like the concept. It would change the run result to warning in all our runners that understand warnings.
