
Implement Discovery-time filtering for NUnitLite #2876


Closed
CharliePoole opened this issue May 29, 2018 · 32 comments · Fixed by #2878


@CharliePoole
Member

Issue #2867 asks us to avoid running TestCaseSource code when the generated tests would be excluded by the filter that has been provided.

As a start on this, this issue will change the framework to implement discovery-time filtering, available when running under NUnitLite. This is a convenient place to begin, since the framework and NUnitLite are both part of the same repository and solution.
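
For concreteness, a minimal sketch (fixture and source names are made up) of the pattern #2867 is concerned with: the source method is enumerated while NUnit builds the test cases, even if the filter supplied to the runner would exclude every test it generates.

```csharp
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class ExpensiveSourceTests
{
    // Enumerated at discovery time, when the fixture's test cases are built,
    // regardless of any run-time filter the user supplied.
    public static IEnumerable<int> SlowCases()
    {
        // Imagine this reads a large data file or queries a service.
        yield return 1;
        yield return 2;
        yield return 3;
    }

    [TestCaseSource(nameof(SlowCases))]
    public void HandlesCase(int value)
    {
        Assert.That(value, Is.Positive);
    }
}
```

Discovery-time filtering means a fixture excluded by the pre-filter is never built, so its source methods never run.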

@CharliePoole
Member Author

We should consider how the various TSL syntax elements may map (or not) to a pre-filter. Some can be handled directly; others cannot, or present additional problems, and are marked as not supported below.

| TSL Element | Supported? | Notes |
|-------------|------------|-------|
| test        | Partial    | Only through the class name, which is not changeable by the user |
| name        | No         | In theory, this could be supported for classes only, but is unsupported for now |
| class       | Yes        | |
| namespace   | Yes        | |
| method      | Yes        | |
| cat         | No         | Requires examining attributes, which may be custom |
| id          | No         | Ids are only assigned as the tests are discovered and are non-deterministic |
| and         | Yes        | |
| or          | Yes        | |
| not         | Yes        | |

@CharliePoole
Member Author

Here's a question for @nunit/framework-team...

We have a defined FrameworkPackageSettings.LOAD, which we have never exposed to users. It is used in DefaultTestAssemblyBuilder to filter the candidate test fixture classes, allowing us to do a partial load of tests in a particular list of fixture classes or namespaces.

This code carried over from NUnit V2, where it was exposed to users via the command-line --fixture option. Up to now, it has been a sort of hidden feature of NUnit 3, usable only if you created your own runner and added a list of names to the settings dictionary passed to the framework when loading tests.
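
For anyone unfamiliar with that hidden path, a rough sketch of how a hand-rolled runner could use it today. The fixture and namespace names are made up, and the exact value type expected under the LOAD key is an assumption; a list of full names matches the description above.

```csharp
using System.Collections.Generic;
using System.Reflection;
using NUnit.Framework.Api;

public static class CustomRunner
{
    public static void Main()
    {
        var runner = new NUnitTestAssemblyRunner(new DefaultTestAssemblyBuilder());

        var settings = new Dictionary<string, object>
        {
            // "LOAD" is FrameworkPackageSettings.LOAD: only fixtures or
            // namespaces named here are built (assumed value shape: a list
            // of full names).
            ["LOAD"] = new List<string> { "My.Tests.SmokeFixture", "My.Tests.Critical" }
        };

        runner.Load(Assembly.GetExecutingAssembly(), settings);
    }
}
```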

I'm currently using it in the first commit for this issue, but I don't think I will want to continue. I'd prefer that pre-filters be represented in a more general way, similar to test filters, since that leaves much more room for expanding the capability in the direction we have discussed.

Question for the team: do you agree with the idea of "breaking" this capability in further commits to the PR that accompanies this issue?

@ChrisMaddock
Member

Yep. 🙂

@ChrisMaddock
Member

Although, actually, is there any reason this change can't be additive, and we can't just obsolete LOAD?

@CharliePoole
Member Author

We could, of course, but we tend to accumulate lots of old stuff. The big downside is having to keep it working along with the new approach as we did for run parameters.

@CharliePoole
Member Author

Also, the only thing we can deprecate is the property definition itself. That won't stop anyone who is just putting an entry into the settings dictionary with "LOAD" as the key.

@jnm2
Contributor

jnm2 commented May 31, 2018

I'm happy either way, though slightly happier with obsoletion since it ends up cleaner.

@CharliePoole
Member Author

I guess we first have to decide about the points I initially raised in the description. In particular, do we want a --prefilter option (or another name), working similarly to the --test option? The choices...

  1. No option - just do it all behind the scenes automatically.

  2. Option like --test taking a list of strings. In that case we could just keep using the existing implementation.

  3. Option like --where taking a simplified bit of TSL. The existing implementation would become unsupported.

  4. Both 2 and 3. We could then deprecate the old implementation while continuing to support it. We would have to allow for users who make use of both in the same command-line, which is an extra bit of work.

@CharliePoole
Member Author

OK, after experimenting a bit, I have a plan. I'd appreciate review from @nunit/framework-and-engine-team and anyone else. Here goes...

  1. Pre-filtering will be ultra-simple, using a list of names like the --test option. I considered doing an actual filter object and a variation of the test selection language, but the payback doesn't seem that great.

  2. In NUnitLite, you will be able to use the --prefilter option, which works similarly to the --test option (see the sketch after this list). The arguments are all strings and may represent namespaces, fixtures or methods. That's as refined as it gets; you can't specify arguments, for example.

  3. If you don't specify --prefilter but do specify a list of tests via --test, NUnitLite will automatically create prefilters from those tests as best it can. It will keep the test filter as well, in case you specified something narrower (like arguments) in your test selection.

  4. If you specify neither --prefilter nor --test, NUnitLite will look at your --where option and try to create a pre-filter. Note that this is not yet implemented in the current PR (as of 6/1/2018); see the PR for the latest status.

  5. My thinking is that after merging the PR, someone should port the NUnitLite behavior to the engine and console runner, so that the two work consistently.

  6. Most users will probably not use the --prefilter option and will just get better performance automatically. Advanced users may decide to make up for any deficiencies in the automatically generated pre-filtering by specifying prefilters manually.
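
To make item 2 concrete, here is a sketch of an embedded NUnitLite entry point using the proposed option, assuming --prefilter lands with the name and semantics described above; the test names are made up.

```csharp
using NUnitLite;

public static class Program
{
    public static int Main()
    {
        // --prefilter names namespaces, fixtures or methods; only matching
        // fixtures are built, so test case sources outside them never run.
        // --test then narrows what actually executes within what was built.
        return new AutoRun().Execute(new[]
        {
            "--prefilter=My.Tests.Critical",
            "--test=My.Tests.Critical.LoginFixture.Login_Succeeds"
        });
    }
}
```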

Comments?

@jnm2
Contributor

jnm2 commented Jun 9, 2018

This sounds good to me. I don't have a need to use --prefilter or --tests from the command line, so for me it's more about what happens when I run or debug a selection using Test Explorer or ReSharper.

@CharliePoole
Member Author

Right, but testing with the NUnitLite command-line is the only way we have to know if it's working correctly. Once it is, you or somebody on the adapter team could hook into that functionality.

@masaeedu

Can we preserve some mechanism by which all tests can be lazily discovered regardless of how things are started up (see #2914), or provide some property on TestExecutionContext that indicates "this session was started with filtering", so that code that expects all tests to have been discovered can bail?

@jnm2 changed the title from "Implement DIscovery-time filtering for NUnitLite" to "Implement Discovery-time filtering for NUnitLite" on Jun 28, 2018
@CharliePoole
Member Author

@masaeedu Are you asking for the current issue / PR to be changed? It seems more like a separate new feature to me.

@jnm2
Contributor

jnm2 commented Jun 29, 2018

@CharliePoole I brought this issue to his attention since this feature we're adding would invalidate his assumptions in his solution for #2914. If we add prefiltering, it looks like we'll leave this use case without a reliable solution.

@CharliePoole
Member Author

@jnm2 I understand, but I'm asking him what he's asking for: i.e. if he wants us to abandon the current PR because it breaks his assumptions. Personally, I wouldn't want to do that, but it's really up to the team.

In the answer to #2914, we gave him an ad hoc solution that worked at that moment, but hopefully it was clear that it was ad hoc and implementation-dependent. In the long term, any feature should work on all the tests, or on a subset if the user selects a subset. A feature that always looks at all the tests, even in the face of a --where clause, seems to me to be a design mistake, because it ignores what the user typed at the command line.

Basically, however, what I'm after is to decide whether to continue working on this issue.

@jnm2
Contributor

jnm2 commented Jun 29, 2018

@CharliePoole As far as I can see, both he and I are asking to discuss the opposite of abandoning this PR: an additional API which builds on this PR. It's not a separate new feature because it makes no sense unless integrated with this one.

It's really confusing to me that you'd bring up abandoning the PR. If we discuss the snapshot-testing use case, we might either:

  • decide that the use case in this form is flawed, in which case we continue with this PR
  • or, decide that there needs to be an escape hatch like TestContext.IsPrefiltered or whatever might be appropriate, in which case we continue with this PR and add the flag.
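
Purely to illustrate that second bullet: no such property exists today, and the name, shape, and helper below are all hypothetical.

```csharp
using NUnit.Framework;

[SetUpFixture]
public class SnapshotCleanup
{
    [OneTimeTearDown]
    public void PruneSnapshots()
    {
        // Hypothetical flag: skip pruning when discovery was prefiltered,
        // because the visible test tree is not the whole assembly.
        if (TestContext.IsPrefiltered)           // hypothetical property
            return;

        SnapshotStore.PruneUnseenEntries();      // hypothetical snapshot helper
    }
}
```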

@masaeedu

@CharliePoole From what I understand, the changes here would invalidate the approach suggested in #2914. When the user starts the runner with filtering, there would no longer be a way to list all test cases in the assembly. Since I'm writing snapshot tests on top of NUnit, I need either:

  • The ability to reliably list all testcases in the assembly
  • or the ability to recognize when there is a discrepancy between the tests available in the context and the tests present in the assembly (so I can bail on pruning the snapshot file)

I'm not very familiar with the codebase, so I don't know whether either approach requires that the PR be abandoned. If I had to hazard a guess, I'd say the second approach would be possible to build on top of this PR, so it seems that would be the ideal solution.

The first approach would be more complicated to implement in a way that satisfies both performance requirements and the use case, but should be possible if the tree of tests is lazily discovered (i.e. the .Tests property of each ITest is a lazy IEnumerable). If the lazy tree is traversed with filtering, you don't pay the cost of discovering the children of any tests that get filtered. At the same time, someone who wants to "pay for play" and force discovery of all tests can simply traverse the tree without any filtering.
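
For reference, walking the loaded tree looks roughly like this; note that today ITest.Tests is an eagerly populated IList<ITest>, not the lazy IEnumerable proposed above.

```csharp
using System.Collections.Generic;
using NUnit.Framework.Interfaces;

public static class TestTreeWalker
{
    // Yields every leaf test case reachable from the given node.
    public static IEnumerable<ITest> AllTestCases(ITest node)
    {
        if (!node.IsSuite)
        {
            yield return node;
            yield break;
        }

        foreach (ITest child in node.Tests)
            foreach (ITest testCase in AllTestCases(child))
                yield return testCase;
    }
}
```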

@jnm2
Contributor

jnm2 commented Jun 29, 2018

Would it make sense for the pruning to be done by a utility which takes control of the NUnit engine parameters and prevents filtering from happening, sort of like the way code coverage tools work? Or is that too heavy-handed?

@jnm2
Contributor

jnm2 commented Jun 29, 2018

In other words, pruning wouldn't happen in any runner except your own, which uses NUnitLite to discover all tests and prune, and you'd run that pruning tool via .\test.ps1 -t prune?

@masaeedu

masaeedu commented Jun 29, 2018

I'm not sure; it seems like there would be a large maintenance burden in trying to keep up with all the standard NUnit runner features if I had to write my own runner. Is it prohibitively difficult to just expose what filters (if any) were set in the TestExecutionContext in some standardized way?

@jnm2
Contributor

jnm2 commented Jun 29, 2018

@masaeedu If you used NUnitLite, you'd be writing a one-liner besides the code you'd write to do the pruning. You'd control the command-line args you pass (in-process) to NUnitLite, so you'd be able to prevent filtering.

It doesn't sound like it would be difficult to expose a TestContext.IsPrefiltered. The design of the API would be the most difficult thing, because its existence would imply that there's a similarly supported feature to obtain the discovery results. If we weren't sure we wanted to ever do the latter (and maybe we would), it casts doubt on the former.
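
Roughly the shape that "one-liner" could take (a sketch; the pruning helper is hypothetical):

```csharp
using NUnitLite;

public static class PruneTool
{
    public static int Main()
    {
        // Run NUnitLite in-process with arguments we control: no --test,
        // --where or --prefilter, so the whole assembly is discovered and run.
        int result = new AutoRun().Execute(new string[0]);

        SnapshotStore.PruneUnseenEntries();   // hypothetical pruning step
        return result;
    }
}
```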

@CharliePoole
Member Author

@jnm2 If he isn't using NUnitLite, then the whole problem is a no-op, at least for the moment.

@masaeedu What are you using to run tests?

@jnm2
Contributor

jnm2 commented Jun 29, 2018

@CharliePoole Understood, but I'm planning to push as hard as possible on getting VSTest and ReSharper using the prefilter and think about the console.

@CharliePoole
Member Author

@jnm2 (and also @masaeedu)

The solution I gave earlier was ad hoc and implementation-dependent. I thought I made that clear. If I didn't, I apologize.

I labeled the issue as a question, so it would be clear that we were just talking about a way you could currently use NUnit, not making promises for the future.

At any point, somebody could come along and post a bug that complains about the very "feature" you are trying to make use of here: "I specified a filter, but all the tests were discovered."

@CharliePoole
Member Author

@jnm2 Responding to your post "It's really confusing..."

I consider giving the test knowledge about how and why it is being run to be both unrelated to this issue and a bad idea. I don't really want to get into a big discussion of it with you, since I'm kind of half-way out of the team anyway. Therefore... the easiest thing may be to abandon the PR and let somebody else take over the branch.

I'd prefer not to do that because I prefer to finish stuff I start. But I prefer to finish it to the scope that I initially intended and not have other work piled onto it. As I said on the PR, I'll finish up according to your earlier comments. If the work is not acceptable or complete with respect to this new requirement, then you and the rest of the team will have to decide what to do with it.

There's no animus in this. I simply have to set some boundaries as to what I'm willing to work on, just as any member or contributor does.

@masaeedu

masaeedu commented Jun 29, 2018

@CharliePoole I'm using dotnet test on Linux, but the majority of developers working on the codebase use the Visual Studio test runner (possibly some also have ReSharper and use its test runner). If this isn't specific enough, please let me know and I can find whatever information you need.

Given that this approach is ad hoc and implementation-dependent, could you please relabel the issue as a feature request and reopen it? It seems the suggested solution is not adequate or supported for "obtaining a list of tests (accounting for generated test cases)".

@CharliePoole
Member Author

@masaeedu I'll let @jnm2 or one of the other folks who are sticking with the project decide whether to take this as a feature request and relabel it.

My two cents: It's not really a great idea to use running the tests as a way to explore what's there. We do have an option to explore the tests, and the default format produces a complete list of all test cases in XML. Is that useful to you or have I not completely understood your requirement?
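
For reference, with an embedded NUnitLite runner that exploration looks something like this (a sketch; the output file name is arbitrary):

```csharp
using NUnitLite;

public static class ExploreTests
{
    public static int Main()
    {
        // --explore lists the discovered tests, including generated cases,
        // as XML without running them; with no filter it covers the whole assembly.
        return new AutoRun().Execute(new[] { "--explore=all-tests.xml" });
    }
}
```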

@CharliePoole
Member Author

@masaeedu Based on how you are running tests, this issue and PR are not going to impact you one way or the other. However, any follow-up that puts a similar change into the engine would impact all runners that use it. I don't know whether R# uses the engine.

@jnm2
Contributor

jnm2 commented Jun 29, 2018

@CharliePoole Thanks for helping me understand, I appreciate the thought process. I was on the fence and thought it worthy of discussion by the team. (Not being aware of #2914, I had asked Asad to bring up the potential collision here so that he could receive (as I put it) more valuable opinions than mine. 😁)

I agree with the points you've made. Now I'm able to explain these things better to other users. Either way I can't wait to be able to take advantage of your PR! It'll make a real difference for me.

@masaeedu

masaeedu commented Jun 29, 2018

@CharliePoole With snapshot testing, there's a certain amount of "it should just work" that's needed, because if there are two or three explicit steps the user needs to take, it's easy to forget one of them and end up committing nonsense to the snapshot file.

If there's some API to explore the tests and get XML data, I'd still want to run that before any tests are run, in order to guarantee that people don't end up adding garbage data while they're writing and refactoring their tests (whether in terms of test names or argument sets). Alternatively, I'd want to write nothing to the snapshot file until I know "this is a session in which things aren't being filtered".

Other test runners like Jest or AVA follow the same principle, so this isn't a peculiarity of my implementation. When you run jest -t "somefilter", it will still notify you of deleted tests that don't match the filter and offer to remove them.

@masaeedu

I'll reiterate that I have no familiarity with the codebase or the contents of this PR, so which PR things are done in is all the same to me. It sounds like this PR specifically won't have any material impact on what I'm doing, so perhaps this discussion is off-topic here.

@CharliePoole
Member Author

@masaeedu Just to wrap this up... lots of things factor into how we subdivide work. In general, it works better to have the work in small packages so that we can deal with each one easily, accept or reject it as a whole, and - not the least of concerns - so that it only contains the amount of work that some person feels like doing that day. 😈

At the moment, I'm personally very interested in how small a piece of work can be, while remaining useful to somebody.
