skip suite and test without failing #332

Closed
ehartford opened this Issue · 59 comments
@ehartford

I would like some function like skipTest() and skipSuite() that would allow me to skip a test or suite without failing it.

If you are in suiteSetup() then you could skip the suite
if you are in test() then you could skip the rest of the suite or the test
if you are in setup() then you could skip the rest of the suite or the test
if you are in tearDown() then you could skip rest of the suite
if you are in suiteTearDown() then you could not skip either

@Almad

+1

@tj
Owner
tj commented

This issue has been inactive for over 1 month so I'm closing it. If you think it's still an issue re-open. - tjbot

@tj tj closed this
@Almad

AFAIK it is still an issue ;)

@darthdeus

This is definitely still an issue.

@ehartford

Still an issue, please reopen.

@tj
Owner
tj commented

hmm, I never even saw this issue. I'm not sure I get it, what would this be useful for? Is this fixed by xdescribe() and xit()? (We could add similar to the tdd interface.)

@ehartford

Many reasons to skip without failing:
1) wrong platform for the test
2) the test is disabled
3) a service the test depends on is not responding
4) a required environment variable or parameter is not set

@tj
Owner
tj commented

well:

1) should use --grep
2) should not supply a callback (making it pending), or use xdescribe / xit
3) should mock these services if they're not readily available
4) not sure

@ehartford
@tj
Owner
tj commented

I'm open to the idea but I'm not sold on needing to conditionally skip tests. Even if that's the case, --grep is great for that, or just separate those tests into other files and don't load them for some envs.

@ehartford
@tj
Owner
tj commented

sure, but I would argue that it's not a very good test if you have to conditionally skip it. If some external service is not available then it should be mocked, or use an environment variable to toggle whether it's hitting the service or the mock.

@ehartford
@ehartford
@ehartford

Anyway, apparently I'm not expert enough, so I'll stop talking and let the other +1's pipe in with their perspectives.

@tj
Owner
tj commented

nah man, I don't mean any offense. I just nitpick because I'm trying to understand the use cases better before committing to an API that I need to maintain, that's all.

@ehartford

I understand :-)

@tj
Owner
tj commented

plus there are several +1's in here so it definitely seems like something people want, I just need use-cases so I can figure out what the best way to approach this is

@Almad

@visionmedia My use case is this: I have integration tests that verify that the code snippets displayed on a web page do the right thing when executed.

This is done by executing code in a subprocess. However, we have a fair number of such snippets, and for some of them the environment might not be installed (e.g. the F# binary). I would like to mark those as skipped...

@tj
Owner
tj commented

@Almad but why not tag them with "f#" or similar, it('should do something f#', callback), and --grep conditionally? Or why not split these into other files and load them conditionally? That's effectively the same thing, just organized better.

@Almad

@visionmedia I would prefer a unified starting interface so that the executor doesn't have to know which tests to select.

@tj
Owner
tj commented

I'll think about it, this just seems like a very weird way to handle it

@pedrosnk

We have a simple use case for it. We have a system which can act in two different modes, and many functions are disabled in one of these modes. The mode is toggled by an environment variable, and we'd like to skip all the tests for the affected functions based on the same environment variable.

@tj
Owner
tj commented

@pedrosnk that's mostly an organizational problem, you could for example use https://github.com/visionmedia/mocha/wiki/Tagging

@ehartford
@tj
Owner
tj commented

because every feature I add includes additional baggage that I have to maintain, no feature should be added without discussion unless it's clearly needed. This is definitely one I've never needed, and it can easily be handled using other existing features IMO.

@geek

I have also bumped into this limitation. Here is another use case: a set of integration tests you want to skip when a service isn't running. Wrapping the describe in an if works, but it doesn't notify the dev running the tests that some things were skipped. Failing isn't an option because certain integration environments can't have these dependent services running. The ability to skip from within a before() is the best solution to this problem.

@tj
Owner
tj commented

grep is still a better solution for that IMO. We're not currently displaying unmatched tests/suites as pending, though we could, or add a flag for that since it would be noisy. I still view this as a non-problem personally: it's like running make test-simple in node's source, you implicitly know you're not running everything. I don't think it's a huge loss to require that someone understands that.

@beaugunderson

If you don't want to use --grep you can solve this with simple if statements (in the example below, config is just a module specific to my code that handles loading configuration from config files). This method only works if the checks you need can be done synchronously, at least in my testing (if you're reading config files using readFile instead of readFileSync and try this method, you won't have a fun time, for example).

if (config.database.name === 'integration') {
  describe('integration tests', function () {
    it('should run if setup properly');
  });
}

You may be tempted to invert the if statement and instead return if you're not setup for the tests; it's actually a SyntaxError to return outside of a function though.

@geek

That is the solution I am currently using. However, I think there is value in letting the test runner know that tests are being skipped.

@seangarner

I'm taking a similar approach to @beaugunderson but inspired by @starsquare in #591 which does the same thing but maintains readability.

var describeIntegration = (process.env.INTEGRATION ? describe : describe.skip);

Now use describeIntegration as you would describe. I've only used it in one small library; it's one where I have mocks, but some tests simply don't have much value when mocked, so I prefer to turn them off entirely when not running against the real service.

I care about the skips because I'm using xunit-file with Jenkins and it results in cleaner reporting. I have one test job which can be triggered from either the client or the server build job. If I trigger from the client build I run mocks, and if it's the server build then I run the integration tests. Saves me creating a bunch of build jobs in Jenkins for each style of testing.

I reserve the right to change my mind, but for now this workaround is fine for me and I don't feel the need for it to be in the API. (just my 2c).
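To make the selection logic above concrete, here is a self-contained sketch runnable in plain Node: describe and describe.skip are stubs standing in for mocha's real functions, and registered is just a recording array added for illustration.

```javascript
// Sketch of the conditional-alias pattern, runnable outside mocha.
// `describe` / `describe.skip` are stubs; `registered` records what
// would have been registered with the real runner.
const registered = [];

function describe(name, fn) {
  registered.push({ name, skipped: false });
  fn(); // mocha would collect the suite's tests here
}

describe.skip = function (name, fn) {
  // The stub only records the skip; mocha would mark the suite pending.
  registered.push({ name, skipped: true });
};

// The pattern from the comment above: pick the variant once, up front.
const describeIntegration = process.env.INTEGRATION ? describe : describe.skip;

describeIntegration('talks to the real service', function () {
  // it('fetches a record', ...) would go here
});

console.log(registered);
```

With INTEGRATION unset, the suite is recorded as skipped; exporting the alias from a shared helper keeps every spec file identical regardless of environment.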

@oliamb

var describeIntegration = (process.env.INTEGRATION ? describe : describe.skip)

I think it won't work easily if the condition detection is asynchronous.

Another way of solving this issue would be to have a SkipError class that can be passed to the done callback. It should then be easy to consider a test as skipped in this case.

it('might be skipped', function(done) { done(new SkipError()); });

@ehartford
@oliamb

I would be glad to develop and maintain this part if we can agree on an API.

@grampajoe

This is definitely a necessary feature with use cases that can't be easily handled with --grep.

For example, say I'm writing an app that supports multiple storage and caching backends. I want to be able to run the tests with my CI server using NPM's scripts, i.e. npm test.

With --grep, if I want to test in many combinations of environments, I need a command for every one of them, e.g. npm run-script test-mysql-memcached, npm run-script test-postgres-rediscache, npm run-script test-sqlite-memcached, and so on.

It could even get more granular than that. What if I want to test with an old version of Postgres that doesn't support certain features? You can see examples of things like that in Django's tests: https://github.com/django/django/blob/master/tests/introspection/tests.py#L87

If I'm able to skip tests dynamically, I can have a single npm test command for all environments and only the relevant tests will run.

The API for this could be super simple. How about a second optional parameter to the it callback, i.e.

it('should frobulate', function(done, skip) {
  if (frobulation_not_supported()) {
    return skip('No frobulation support for this environment.');
  }
  // ...
});

The advantage over just using done is giving feedback about why the test was skipped.

@park9140

At first I was like, "Whoa, this sounds like it could be totally handled by tagging", as @visionmedia said earlier: https://github.com/visionmedia/mocha/wiki/Tagging. A lot of the use cases I could come up with for skip could be handled by proper use of tagging and regular-expression-based greps.

However, recently I was using https://github.com/visionmedia/mocha/wiki/Shared-Behaviour, for a similar situation to @grampajoe where I had multiple data backends. I needed to run the same battery of tests for each backend and they just had slightly different setup steps. Out of the large battery of tests there was one test which could not be run against one of the database backends. I ended up using an if statement to not define that test in the shared battery.

Eventually the second backend added the support for that test to succeed and we added the test back. However, it would have been nice to get a skip warning while we were skipping that particular test, given the shared nature of the test code.

Looking back, we could have just declared the test without a function in the skip case, giving it pending status.

Which gave me an idea.

function unless(skip, test) {
  if (skip) { return; }
  return test;
}

it('should frobulate', unless(frobulation_not_supported(), function() { 
  // ...
}));

Which has a couple benefits.

  • semantics: it reads nicely with should-style tests: 'it should frobulate unless frobulation not supported'
  • extraneous parameters removed: if you don't need an async test, the done parameter screws with the definition
  • utilizes existing functionality: because we're just using the pending-test functionality, it describes the test as pending until frobulation is supported

It does not solve the issue of outputting additional messaging for why it was skipped however.

In order to solve that we could return a special object which the test function would interpret as skipped.

function unless(skip, test) {
  if (skip) { return skip; }
  return test;
}

function frobulation_not_supported() {
  if (!can_frobulate) { return new Skip("however frobulation is not supported by this framework"); }
}

it('should frobulate', unless(frobulation_not_supported(), function() { 
  // ...
}));

Assuming correct handling of the Skip object, this would output something like 'it should frobulate however frobulation is not supported by this framework'.

If enough people are interested I can whip up a pull request for this as a solution.
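A self-contained sketch of the unless idea above, runnable in plain Node: the it here is a stub that mimics mocha's convention of treating a test registered without a function as pending, and frobulationNotSupported is a made-up flag standing in for a real environment check.

```javascript
// Stub `it`: mirrors mocha's behaviour of marking a test pending
// when no callback function is supplied.
const results = [];

function it(name, fn) {
  if (typeof fn !== 'function') {
    results.push({ name, status: 'pending' });
    return;
  }
  fn();
  results.push({ name, status: 'passed' });
}

function unless(skip, test) {
  if (skip) { return; } // returning undefined makes the test pending
  return test;
}

// Hypothetical condition; real code would inspect the environment.
const frobulationNotSupported = true;

it('should frobulate', unless(frobulationNotSupported, function () {
  // real assertions would go here
}));

console.log(results); // [ { name: 'should frobulate', status: 'pending' } ]
```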

@Almad

@park9140 +1 for that.

@ehartford
@grampajoe

@park9140 Maybe the Skip object could just be passed to done?

Wrapping the it callback in something like unless might add some complexity if determining whether to skip the test is an async operation. You'd end up making unless's first parameter a callback, then passing a done-like callback to that, and the logic inside unless would start to mirror what's already happening with it.

Passing a Skip exception to done would handle the async case, and give done a way to determine whether the test has been skipped or passed. I'm not familiar enough with mocha's internals to say whether that's more or less complex to implement, though.
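A rough sketch of how a runner might classify a Skip passed to done; Skip and runTest here are hypothetical stand-ins for illustration, not part of mocha's API.

```javascript
// Hypothetical runner: the `done` callback classifies the outcome by
// inspecting what it was called with.
class Skip extends Error {}

function runTest(fn, report) {
  fn(function done(err) {
    if (err instanceof Skip) {
      report({ status: 'skipped', reason: err.message });
    } else if (err) {
      report({ status: 'failed', reason: err.message });
    } else {
      report({ status: 'passed' });
    }
  });
}

let outcome;
runTest(function (done) {
  // An async availability check could run here before deciding to skip.
  done(new Skip('backend not reachable'));
}, function (r) { outcome = r; });

console.log(outcome); // { status: 'skipped', reason: 'backend not reachable' }
```

Because the decision flows through done, the skip check can itself be asynchronous, which the unless wrapper cannot easily accommodate.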

@cmbuckley

I played with a couple of approaches, but I thought a tidy implementation would be to add to the test's context:

it('should skip if it cannot run', function() {
  if (skipCondition) {
    this.skip();
  }

  // the rest of the test
});

Because the skip function is on the test's context, it is available for BDD/TDD interfaces, and both tests and suites can be skipped. Consider:

describe('module that needs a dependency to test', function() {
  before(function() {
    if (dependencyMissing) {
      this.skip();
    }
  });

  // it... etc
});

One part that feels a little ugly at the moment is that calling this.skip() in afterEach will skip all subsequent tests. It's asked for as part of the original comment, but it feels too counter-intuitive to leave as-is for me.

I'm also concerned about just using a pending property on the error, but it works as proof-of-concept and could be improved upon.

Comments welcome! I've opened a PR to get some initial feedback.

@sukima

Easy, already implemented:

describe("conditional suite", function() {
  this.pending = shouldThisBePending(); // true or false
  it("is a pending test conditionally", function() {
    throw new Error("This test failed");
  });
});

@gabegorelick

Any movement on this?

@Almad

Good question!

Is there a new maintainer after TJ left node.js?

@Almad

Awesome, thanks! :)

@boneskull
Owner

There are several new maintainers now. @visionmedia, it would be helpful if we could get a list so we know who's who.

@jbnicolai
Collaborator

@Almad, @boneskull, I am indeed one of the new maintainers - as listed in the package.json maintainers field together with @markelog and @travisjeffery. You can also see the list of maintainers on npmjs.org/package/mocha.

I have, however, only just taken this on and promised not to lightly implement large API changes - so I hope you can understand why I'm hesitant about adding this.

On the other hand, it's clear that this is a feature many people have a clear want, if not need, for. As @travisjeffery said: "alright, i see. seems strange but lots of people want it."

I will have a closer look at the pull requests that were created before I was maintaining this project, and would like to hear the opinion of my fellow maintainers.

@jbnicolai jbnicolai reopened this
@boneskull
Owner

@jbnicolai Hmm, @visionmedia gave me push access, so I don't know what that means exactly for "maintainer" status. Anyway, I have push access, so Hi. I hope to help w/ cleaning up this backlog of issues.

@boneskull
Owner

You can see the list of collaborators to this project via:

curl -i https://api.github.com/repos/visionmedia/mocha/collaborators

I don't know any other way to do it.

@travisjeffery

ya, we have about a gazillion "maintainers" now. What matters is that mocha is going to continue being developed.

@jbnicolai
Collaborator

Haha, hi @boneskull, and welcome :smile:. Thanks for the cURL command to list people with push access, that's quite handy.

I suggest adding everyone with push access to the package.json for clarity, and as @travisjeffery said - the important thing is that we can now work through the backlog of issues :)

@boneskull boneskull referenced this issue from a commit in boneskull/mocha
@boneskull boneskull updating package.json to reflect current maintainers
as mentioned in #332
913964d
@okv

+1 for that, @starsquare's solution looks great

@boneskull boneskull added the Feature label
@weilu weilu referenced this issue in common-blockchain/common-blockchain
Closed

Option to skip "tooManyAddresses"? #3

@dasilvacontin
Collaborator

Closed by #946.

@ToshioMagic

I'm still confused about this. I'm doing an API get, getting a 404, and then calling this.skip(). But when I call this.skip() I get an uncaught reference error on this.

it looks like:

    beforeEach(function(done) {
        // blahblahblah
        this.skip();
        done();
    });

Also, I've never used github before so this may be in the wrong place....

@cmbuckley

@ToshioMagic This made it into the 2.2.0 release of mocha; what version are you using?

@cmbuckley

@ToshioMagic sounds like you're dealing with a separate problem, as the this.skip() functionality is available in your version. You may find more help on Gitter.
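For anyone hitting similar errors: one common cause is using an arrow function for the hook, since mocha sets the test context as this only when a regular function is used. A small stub (runHook and ctx are made up for illustration) shows the binding difference in plain Node:

```javascript
// Stub test context with a `skip` method, mimicking mocha's context.
const ctx = {
  skipped: false,
  skip() { this.skipped = true; },
};

function runHook(ctx, fn) {
  fn.call(ctx); // mocha invokes hooks with the test context as `this`
}

runHook(ctx, function () {
  this.skip(); // regular function: `this` is the context, skip works
});
console.log(ctx.skipped); // true

let threw = false;
try {
  runHook(ctx, () => {
    this.skip(); // arrow function: `this` is the enclosing scope, not ctx
  });
} catch (e) {
  threw = true; // e is a TypeError: this.skip is not a function
}
console.log(threw); // true
```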
