Please implement marking ignored test cases with yellow sign rather than red for all Test Result providers (not just MSTest) #47
Comments
Hi, thanks for reporting this issue. If you see the yellow mark for backgrounds but not for individual tests, then I think we have a bug. Could you please attach a test result file that demonstrates the problem? Thanks
Hi, thanks for the quick response. I took a look at the NUnit report and found that NUnit marks tests for which no steps have been generated yet as inconclusive rather than ignored. What misled me was my Mighty Moose test runner, which printed the line: I can see this on a hello-world application, so I guess the current behavior is by design. Given this, is there a chance to treat inconclusive tests differently from failed ones? An alternative would be for me to mark each scenario with the @ignore attribute until I start automating it, but marking and unmarking such scenarios would disturb the flow somewhat...
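For reference, a sketch of how an NUnit 2.x result file distinguishes the two outcomes. The test names and attribute values here are illustrative, not copied from an actual Pickles example file; in particular, the `executed="False"` attribute on the inconclusive case follows the discussion later in this thread rather than a concrete report:

```xml
<!-- illustrative fragment in the style of an NUnit 2.x result file -->
<test-case name="AdditionFeature.PendingScenario"
           executed="False" result="Inconclusive" />
<test-case name="AdditionFeature.SkippedScenario"
           executed="False" result="Ignored">
  <reason><message><![CDATA[marked with @ignore]]></message></reason>
</test-case>
```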
I had a brief look at the code, and I noticed that we properly test this scenario only for MsTest. I changed the title of the issue to reflect this bit of information.
I will try to fix this issue, but I can't really guarantee a timeframe. If you need it fixed really fast, you can of course do it yourself and send a pull request ;-)
Hey, actually I can take a look. If you brief me a little on where to find the test for MsTest and which class or set of classes I'd need to look into, then I could try it out. From examining some of the files in the repository, the code looks clean, so hopefully the change will be reasonably easy.
Sure, here are some pointers:
Please let me know if there's anything else you need!
Thanks, I'll let you know when I have some results!
I took a look at the tests, and I can't find a test for inconclusive tests in MsTest either. The file results-example-mstest does not contain inconclusive tests... As I understand it, inconclusive TestResults are those that neither passed nor were executed, and the only test in the MsTest parser suite that asserts on WasExecuted being false is ThenCanReadBackgroundResultSuccessfully(), which covers the Background keyword, not an actual scenario. Is this correct? I'm asking just to confirm that I'm working with the same version of the code you mentioned. Oh, and one more thing: can I get the sources from which the example results were generated? Or should I just paste my test suite into the existing XML?
Well, me and my big mouth... I looked at the code and you're right: there's no test for ignored scenarios. The main thing about an inconclusive test result is the fact that it wasn't executed; it does not matter whether the test passed or failed. I don't think we have the sources that led to the example results. It should be possible to reconstruct them, though: the MSTest result file shows the steps, and it looks like a small extension to the generic calculator example that SpecFlow creates when you add a feature file.
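To make the distinction concrete, here is a minimal Python sketch of the mapping this issue asks for: anything that was not executed gets a yellow mark instead of a red one. This is not Pickles' actual C# implementation; the element and attribute names follow the NUnit 2.x report style discussed above, and the test names are made up for illustration.

```python
import xml.etree.ElementTree as ET

# Illustrative fragment in the style of an NUnit 2.x result file.
NUNIT_XML = """
<results>
  <test-case name="AdditionFeature.AddTwoNumbers" executed="True"
             success="True" result="Success" />
  <test-case name="AdditionFeature.AddManyNumbers" executed="True"
             success="False" result="Failure" />
  <test-case name="AdditionFeature.PendingScenario" executed="False"
             result="Inconclusive" />
</results>
"""

def colour_for(test_case):
    """Map an NUnit test-case element to a report colour.

    Tests that were not executed (ignored, inconclusive, pending)
    become yellow rather than red, which is the behaviour
    requested in this issue.
    """
    if test_case.get("executed") == "False":
        return "yellow"
    return "green" if test_case.get("success") == "True" else "red"

root = ET.fromstring(NUNIT_XML)
colours = {tc.get("name"): colour_for(tc) for tc in root.iter("test-case")}
```

With the sample data above, the pending scenario maps to yellow while the genuinely failing one stays red, so unautomated scenarios no longer look like regressions.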
How about I recreate the tests in a separate solution and commit it along with my fix when I finish? Then if someone needs to add another case in the future, they'd just need to add a test to the project, run a test runner, and copy over the new file.
Excellent idea! I set up a new repository for this purpose: https://github.com/picklesdoc/pickles-testresults Fork it and send a pull request when you're ready.
Fixed by the pull request from @grzesiek-galezowski
Hi,
Currently, Pickles cannot distinguish between ignored NUnit tests and failed ones in the HTML report, marking both with a red mark.
Distinguishing them would be useful, because ignored scenarios are usually the ones that we have put in the tool but haven't automated yet, and we don't want to confuse them with already-automated scenarios that fail (e.g. regressions).
I can see that a yellow mark is used, for example, for backgrounds. Can't we have the same for ignored scenarios?