
Please implement marking ignored test cases with a yellow sign rather than a red one for all Test Result providers (not just MSTest) #47

Closed
grzesiek-galezowski opened this issue Jul 25, 2013 · 12 comments

@grzesiek-galezowski

Hi,

Currently, Pickles cannot distinguish between ignored NUnit tests and failed ones in the HTML report: both get a red mark.

Being able to distinguish them would be useful, because ignored scenarios are usually the ones we have written down but not yet automated, and we don't want to confuse them with already automated scenarios that fail (e.g. regressions).

I can see that a yellow mark is already used in some cases, e.g. for backgrounds. Could we have the same for ignored scenarios?

@dirkrombauts
Member

Hi,

Thanks for reporting this issue. If you see the yellow mark for backgrounds but not for individual tests, then I think we have a bug. Could you please attach a test result file that demonstrates the problem? Thanks!

@grzesiek-galezowski
Author

Hi, thanks for the quick response.

I took a look at the NUnit report and found that NUnit marks tests whose steps have not been implemented yet as inconclusive rather than ignored.

What misled me was my Mighty Moose test runner, which printed the following lines:
Type: Test Ignored
Message: Ignored->(Nunit) xyz.AddTwoNumbers

I can reproduce this with a hello-world application, so I guess the current behavior is by design. Given that, is there a chance to treat inconclusive tests differently from failed ones? An alternative would be for me to mark each scenario with an @ignore tag until I start automating it, but marking and unmarking such scenarios would disturb the flow somewhat...
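
As far as I can tell, a scenario with unimplemented steps effectively boils down to something like the following under NUnit. This is just my reconstruction, not SpecFlow's literal generated code, and the class name and message text are made up:

```csharp
using NUnit.Framework;

[TestFixture]
public class AddingFeature
{
    // Rough stand-in for what the generated test effectively does when a
    // scenario has pending (unbound) steps: under NUnit the pending state
    // surfaces as an inconclusive test, not an ignored or failed one.
    [Test]
    public void AddTwoNumbers()
    {
        Assert.Inconclusive("One or more step definitions are not implemented yet.");
    }
}
```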

@dirkrombauts
Member

I had a brief look at the code, and I noticed that we properly test this scenario only for MsTest. I changed the title of the issue to reflect this bit of information.

@dirkrombauts
Member

I will try to fix this issue, but I can't really guarantee a timeframe. If you need it fixed really fast, you can of course do it yourself and send a pull request ;-)

@grzesiek-galezowski
Author

Hey, actually I can take a look. If you brief me a little on where to find the tests for MsTest and which class or classes I'd need to look into, I could try it out.

From what I've seen of the files in the repository, the code looks clean, so hopefully the change will be reasonably easy.

@dirkrombauts
Member

Sure, here are some pointers:

  • The logic for deciding the test result is in the GetResultFromElement method of the NUnitResults class in Pickles/TestFrameworks (see the sketch below).
  • MsTestResults is in the same directory as NUnitResults.
  • The tests for NUnitResults are in the WhenParsingNUnitResultsFile class in Pickles.Tests.
  • You'll find the tests for MsTestResults in the same directory.
  • results-example-nunit.xml contains the NUnit test result file that we use for testing. It doesn't contain a scenario for an inconclusive test, so you'll have to add one.

Please let me know if there's anything else you need!
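
To give you a head start, here is a sketch of the kind of mapping GetResultFromElement performs. The WasExecuted/WasSuccessful property names match what you'll find in the tests, but the attribute handling below is illustrative rather than a copy of the actual code:

```csharp
using System.Xml.Linq;

// Illustrative sketch only; the real classes in Pickles/TestFrameworks
// are more involved than this.
public class TestResult
{
    public bool WasExecuted { get; set; }
    public bool WasSuccessful { get; set; }
}

public static class NUnitResultsSketch
{
    // Boils an NUnit <test-case> element down to a TestResult. An
    // inconclusive test case comes out as "not executed", which is
    // exactly the information the yellow-vs-red decision needs.
    public static TestResult GetResultFromElement(XElement testCase)
    {
        return new TestResult
        {
            // NUnit 2.x writes executed="True"/"False" and
            // success="True"/"False" on each test-case element.
            WasExecuted = (string)testCase.Attribute("executed") == "True",
            WasSuccessful = (string)testCase.Attribute("success") == "True"
        };
    }
}
```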

@grzesiek-galezowski
Author

Thanks, I'll let you know when I have some results!

@grzesiek-galezowski
Author

I took a look at the tests and I can't find a test for inconclusive tests in MsTest either. The file results-example-mstest does not contain inconclusive tests...

As I understand it, inconclusive TestResults are those that are neither passed nor executed, and the only test in the MsTest parser suite that asserts on WasExecuted being false is ThenCanReadBackgroundResultSuccessfully(), which covers the Background keyword, not an actual scenario. Is this correct?

I'm asking just to confirm that I am working with the same version of the code you mentioned.

Oh, and one more thing: is there somewhere I can get the sources from which the example test results were generated? Or should I just paste my test suite into the existing XML?
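
For reference, here is roughly the kind of entry I'd expect to have to add to the MSTest example file, and how it could map onto the result object. The outcome values are my assumption from the TRX schema, and the test name is made up:

```csharp
using System;
using System.Xml.Linq;

class MsTestInconclusiveSketch
{
    static void Main()
    {
        // Hypothetical TRX fragment: each UnitTestResult element carries
        // an outcome attribute, and "Inconclusive" is one of its values.
        var result = XElement.Parse(
            @"<UnitTestResult testName=""AddTwoNumbers"" outcome=""Inconclusive"" />");

        string outcome = (string)result.Attribute("outcome");

        // An inconclusive result is neither passed nor executed.
        bool wasExecuted = outcome != "Inconclusive" && outcome != "NotExecuted";
        bool wasSuccessful = outcome == "Passed";

        Console.WriteLine($"WasExecuted={wasExecuted}, WasSuccessful={wasSuccessful}");
    }
}
```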

@dirkrombauts
Member

Well, me and my big mouth ... I looked at the code and you're right - there's no test for ignored scenarios.

The main thing about an inconclusive test result is the fact that it wasn't executed. It does not matter whether the test passed or failed.

I don't think we have the sources that led to the example results. It should be possible to reconstruct them, though: the MSTest result file shows the steps, and it looks like a small extension of the generic calculator example that SpecFlow creates when you add a feature file.
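
In other words, the check boils down to something like this (a sketch, reusing the TestResult shape from my earlier pointer):

```csharp
// The pass/fail flag is irrelevant once we know the test never ran.
public static bool IsInconclusive(TestResult result)
{
    return !result.WasExecuted;
}
```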

@grzesiek-galezowski
Author

How about I recreate the tests in a separate solution and commit it along with my fix when I finish it? Then if someone needs to add another case in the future, they'd just need to add a test to the project, run the test runner, and copy over the new result file.
I think it would help with adding new features or fixing bugs in the future.

@dirkrombauts
Member

Excellent idea! I set up a new repository for this purpose: https://github.com/picklesdoc/pickles-testresults. Fork it and send a pull request when you're ready.

@ghost assigned dirkrombauts Jul 29, 2013
@dirkrombauts
Member

Fixed by the pull request from @grzesiek-galezowski.
