Lots of warnings logged when filter matches no tests 'cat=BAZ' #929
Comments
I'll get this subdued for the release version.
@runehalfdan Just published 4.2.0-beta.2 with the fix. Are your tests split over multiple assemblies? Please be aware of the AssemblySelectLimit: if you have more than 2000 tests per assembly, your filters will be turned off, except for explicit and ignored tests. This applies when called from VS and from the command line with a list of tests. If this causes problems, just raise that limit way up. This was done to increase performance. If you just use categories, it will not be stopped. Your mention of test speed is very interesting; I would appreciate feedback on that when you check beta.2.
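If the limit is in play, it can be raised through a runsettings file; a minimal sketch, assuming the adapter's NUnit settings node (the value 10000 is only an illustration, not a recommendation):

```xml
<!-- sketch.runsettings: raise the threshold above which non-explicit
     filters are turned off for performance (default 2000) -->
<RunSettings>
  <NUnit>
    <AssemblySelectLimit>10000</AssemblySelectLimit>
  </NUnit>
</RunSettings>
```

Pass it to the run with `dotnet test --settings sketch.runsettings`.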
@OsirisTerje Yes, I don't think any individual assembly has more than 2000 tests. Would it be logged if the limit is exceeded? Running with a category filter that matches only ~10 tests out of ~3000 now takes over 7 minutes, vs. 30 seconds before with legacy discovery mode. The specific filters are … Could it be that scanned tests not matching the filter aren't cached? We make quite heavy use of TestCaseSource, TestFixtureSource and TestFixture(Type foo) for generic tests. The annoying logging of …
@runehalfdan Thanks for the info. I'll try to replicate your scenario.
It might be the size of the filter; a category filter will in this case be replaced with a test list, but to be sure I'll need to replicate this behavior.
@runehalfdan I tested the different scenarios with the repro code from an earlier issue, #497; the code is here: https://github.com/nunit/nunit3-vs-adapter.issues/tree/master/Issue497 , and I could not reproduce any slowness. I tried with both positive and negative filters, and with both the --filter command and NUnit.Where, and got the same results. Can you verify the repro, and see if there is something else on your side that triggers this? You can also add to the same repro if you like, since it generates 20000 tests. As you can see, we have had this issue before, so I would not rule out that there is some scenario we haven't found.
@runehalfdan I have uploaded a new beta, https://www.nuget.org/packages/NUnit3TestAdapter/4.2.0-beta.3 , which contains a bit more logging. Run it with `dotnet test Issue497_Filtering.csproj --filter=TestCategory!=Slow -v:n -l "Console;Verbosity=normal" -- NUnit.Verbosity=5 > log.txt`. You can change the filters to NUnit filters if you like; just add them after the Verbosity statement. The next section is near the bottom of the file: … If you can grab out this information in your case, I would assume it would point to where the issue might be, and then it should be possible to create some sort of repro for it. The important thing for getting the information out is to set the NUnit verbosity to 5, and then the logger setting to make it appear. You can change how you filter any way you like, including using a runsettings file, if you prefer.
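For reference, the verbosity and an NUnit filter can also be expressed in a runsettings file instead of on the command line; a sketch, assuming the adapter's NUnit settings node (the filter value is only an example):

```xml
<!-- sketch.runsettings: assumed equivalent of
     "-- NUnit.Verbosity=5 NUnit.Where=cat != Slow" -->
<RunSettings>
  <NUnit>
    <Verbosity>5</Verbosity>
    <Where>cat != Slow</Where>
  </NUnit>
</RunSettings>
```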
@OsirisTerje I updated to beta.3, set verbosity to 5, and captured logs both with and without legacy discovery mode (I'd rather not upload the logs here, but can share them by mail if you want).
This is exactly the same with and without legacy discovery mode
Neither mode reports Total Test/Total time (since no tests were executed). Time Elapsed is 00:00:00.99 for legacy mode, vs. 00:00:36.70 for 'normal'. (I'll try to look into the Issue497 tests to see if I can reproduce.)
The second block is the execution part, right? The filter there has been cancelled out, which might mean that the AssemblySelectLimit has been hit, but I am confused: when discovery doesn't hit anything, execution should not run at all. My mail is in my profile, so it would be interesting to see the logs.
I've played around with the nunit3-vs-adapter.issues/Issue497_Filtering projects now. Running with NUnit.Where=cat == FOO: … Running with NUnit.Where=cat == BAZ: … This could of course be the logging tripping up the timings; there are ~60000 entries of the form 'Test 0-21503 not found in cache'.
The second block is the execution block. The test assembly contains 84 tests (as reported by adding -t to dotnet test), so it should be nowhere near the assembly select limit.
Do you see a line saying … ? About your results above: I get the same, and for cat==BAZ I also see no tests being found, as expected. But then 1) the execution should not run either, there is no point in that, and 2) an empty filter is being used. I think this may actually be the bug here. I enabled the dumpfile, and a ton of tests were being run, which are then subsequently filtered off by the adapter, so they never reach the testhost. You can see this by adding … Thanks a ton!
Good catch. I managed to reproduce by making a test 'crash' with Environment.Exit(1); it was executed even when said test should have been filtered out.
@runehalfdan Published https://www.nuget.org/packages/NUnit3TestAdapter/4.2.0-beta.4 now. Should fix the latest findings. |
@OsirisTerje I have verified that test filters that match nothing are fast again (and don't silently execute the tests).
(This was found when testing issues in #912, and can be reproduced with the test code provided in #874. Registering as new since I suspect it is unrelated to those issues.)
Running this, which matches no tests, generates warnings (or at least yellow output, which I assume means warnings):
`dotnet test TestProject1.sln -v:q --logger="Console;verbosity=minimal" -- NUnit.Where="cat=BAZ"`
There are three logged warnings per test, so we get nine warnings of this form:
`Test 0-1001 not found in cache`
This logging slows things down a LOT, even when verbosity is 'quiet'. A test run that previously took about 30 seconds now takes more than 5 minutes.
We run each test project isolated: a .proj file runs the 'vstest' target in each test project, passing the filter in on the command line.
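A hedged sketch of what such a driver file can look like (the file name, project glob, and target name are assumptions for illustration, not our actual file; the filter forwarding on the command line is elided):

```xml
<!-- runtests.proj (hypothetical): invoke the VSTest target
     in each test project in isolation -->
<Project DefaultTargets="RunTests">
  <ItemGroup>
    <TestProject Include="tests/**/*Tests.csproj" />
  </ItemGroup>
  <Target Name="RunTests">
    <!-- Our setup passes the category filter in via the command line -->
    <MSBuild Projects="@(TestProject)" Targets="VSTest" />
  </Target>
</Project>
```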
(For background info: we have about 13000 tests in total now, in a codebase going back a long time, so we have traditionally used categories to select which tests to run when.)