Feature request: Should be able to assert an "Inconclusive" (Pending) state for a unit test #395
-Pending doesn't do what I want. When I use -Pending, it just skips the test completely. I have no way to go into a test, perform some checks, and then explicitly say "this isn't a pass or fail but pending".
Pending means that the test is a work in progress. Its only purpose is to let you mark one or more tests as unfinished, typically when the test is empty or when you need to postpone working on the current scenario to fix other issues.
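For reference, this is the -Pending behavior the comment describes — the test body is never executed, and the test is reported as Pending rather than Passed or Failed (a minimal sketch using Pester 3-era syntax):

```powershell
Describe 'Pending example' {
    # -Pending marks the test as unfinished: Pester skips the
    # body entirely and reports the test as Pending.
    It 'has an unfinished scenario' -Pending {
        # This never runs.
        $result | Should Be 'expected'
    }
}
```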
Could you post an example test where making it inconclusive would be useful?
Example: I have a test that requires a debug build of a cmdlet. If I'm using a retail build, a pass isn't correct, nor is a fail.
This seems like too much complexity. You will end up with test runs where some tests are inconclusive for one reason and some for another. A better approach would be to run the test only for the debug build and not run it for other builds. If you move your debug tests into a separate test file, you can do that pretty easily even in the current version of Pester.
I disagree. NUnit and MSTest both have this concept and it's perfectly understandable there. Not sure why Pester can't have parity.
It's not difficult to implement. What do you think the command name should be in Pester?
I was thinking over possible implementations, and something like this seems workable. Set-TestInconclusive or Set-TestResultInconclusive is probably a reasonable name.
As far as implementation goes, I'm thinking that the new command will just throw a particular exception / ErrorRecord, and the catch block that's already in the …
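A minimal sketch of that idea, assuming the approach described above (the function and error-id names here are illustrative, not Pester's actual internals): the command throws an ErrorRecord with a recognizable FullyQualifiedErrorId, and the runner's existing try/catch can inspect it to classify the result as Inconclusive instead of Failed.

```powershell
function Set-TestInconclusive {
    [CmdletBinding()]
    param([string]$Message = '')

    # Build an ErrorRecord with a recognizable FullyQualifiedErrorId
    # so the runner's catch block can tell it apart from real failures.
    $exception   = New-Object -TypeName System.Exception -ArgumentList $Message
    $errorRecord = New-Object -TypeName System.Management.Automation.ErrorRecord -ArgumentList $exception, 'PesterTestInconclusive', ([System.Management.Automation.ErrorCategory]::InvalidResult), $null
    throw $errorRecord
}

# Inside the runner's existing catch block, something like:
# catch {
#     if ($_.FullyQualifiedErrorId -eq 'PesterTestInconclusive') { $result = 'Inconclusive' }
#     else                                                       { $result = 'Failed' }
# }
```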
Just ran into this. My use case is testing some code that calls into the Windows Failover Clustering cmdlets. However, we are testing our code, not the Failover Clustering cmdlets, which may not be installed on all developers' computers. It would be nice to do:
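(The original inline snippet did not survive the page; presumably it was something along these lines, with a hypothetical inconclusive assertion and an illustrative wrapper-function name:)

```powershell
It 'configures the cluster resource' {
    if (-not (Get-Module -ListAvailable -Name FailoverClusters)) {
        # Hypothetical: report neither pass nor fail when the
        # dependency is absent on this developer machine.
        Set-TestInconclusive -Message 'FailoverClusters module not installed'
    }
    # Invoke-OurClusterWrapper is a stand-in for the code under test.
    Invoke-OurClusterWrapper | Should Not BeNullOrEmpty
}
```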
We have worked around it (for now) by just creating stub functions that we then mock.
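The stub workaround mentioned above typically looks something like this (the wrapper-function name and return values are illustrative): define an empty function with the external cmdlet's name so it exists even where the module is not installed, then replace it with a Mock in the test.

```powershell
# Stub the external cmdlet so the name resolves even where the
# FailoverClusters module is not installed.
function Get-ClusterResource { }

Describe 'Our clustering wrapper' {
    # Replace the stub with a Mock that returns canned data.
    Mock Get-ClusterResource { [pscustomobject]@{ Name = 'Disk1'; State = 'Online' } }

    It 'reports the resource state' {
        # Get-OurResourceState is a stand-in for the code under test.
        Get-OurResourceState -Name 'Disk1' | Should Be 'Online'
    }
}
```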
@matt-richardson What would that be for? Your code depends on the modules, so if a module is missing your code should fail, and so your tests should fail as well. Such a test hides the fact that your code would fail in production and, more importantly, introduces noise into the TDD cycle.

Personally, I think a better approach would be to split unit and integration tests, and simply run the integration tests only on systems with all the dependencies installed. On the rest of the systems you can still test your logic by mocking the dependencies, as you are doing now. If you are interested, I recently had a discussion about mocking missing SharePoint modules here and here, where you can find a small function to generate the module stubs for you.
Why not recognize that this is desired functionality for some developers? Microsoft and NUnit clearly do, as they have this "inconclusive" concept built into their test harnesses.
@nohwnd I subscribe to the "someone should be able to get the code from source control, and it will build" philosophy. I don't want people to have to install dependencies to make my script tests work. I also use Chef to do provisioning, so I know that a given box that I'm going to deploy onto is in a given state. However, I am not willing to make all developer boxes look like all production boxes in terms of dependencies (that are not related to anything most developers are actually working on). I also don't want to have separate sets of tests; the stuff I'm testing is relatively simple, and I don't want to complicate it. In my scenario, having an inconclusive option would work well for me. I don't expect that everyone is going to use it, nor should they have to. If other people want to handle it in other ways, that's cool. But as @theficus has said, some people would like this functionality. Another point in its favour is that other frameworks have obviously seen the need for this, and people are using it there (which makes people want it here). A final point is that it is a small and quick addition.
@matt-richardson @theficus Okay, I give up. :) Are you going to create a pull request, or should we add the functionality?
I believe that @dlwyatt is already on the case - I had a brief chat to him about it yesterday. |
Looks like in our NUnit exports, we're already using the "Inconclusive" status for our Pending tests. Not a huge deal, but there will be some overlap if you have both types of tests in a Pester suite. Should we just rename -Pending to -Inconclusive, or have them mean the same thing behind the scenes?
@dlwyatt Rename …
Ahh - looks like this can be closed now that we've got Set-TestInconclusive? Thanks @dlwyatt! |
Yep. Odd that it didn't auto-close; GitHub usually does that when you merge a PR that references an issue. |
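With the merged change, usage looks roughly like the following, based on the discussion above ($IsDebugBuild and Get-Instrumentation are illustrative names, not part of Pester):

```powershell
It 'verifies debug-only instrumentation' {
    if (-not $IsDebugBuild) {
        # Neither a pass nor a fail applies on a retail build,
        # so report the test as inconclusive instead.
        Set-TestInconclusive -Message 'Requires a debug build'
    }
    Get-Instrumentation | Should Not BeNullOrEmpty
}
```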
I've noticed that if you run a test without any body, it appears to return with a pending state. There doesn't appear to be any way to force a test with a body to exit with this state.
In my case I have some tests that depend on a specific configuration or binary type. If I run all the tests, returning a "pass" result for these types of tests isn't really correct, because the test didn't do anything. Returning a "fail" result isn't correct either, because the test didn't actually fail. Either binary pass/fail result is inaccurate and just confuses the test output.
MSTest has a concept for this with Assert.Inconclusive, which works around the problem nicely by letting you assert that your test ran but neither passed nor failed.
It would be great if Pester had a similar concept to allow you to exit a test in this inconclusive state.