Add New Skip Feature #434

Closed
wants to merge 3 commits into from

Conversation

englercj

Why

I was missing a feature that marks tests as "skipped", so that I can see which tests aren't implemented yet, which features are lacking in a given browser, or which tests I temporarily skipped with a reason so I remember to come back to them.

How

The general implementation makes the final result a 3-state value instead of a 2-state one. Right now there is true for passing and false for failing; this adds a third state of null for skipped. Calling skip() or skipIf() will mark that test as "skipped", which colors it yellow and adds it to the "skipped" counter (note that skipIf() takes a test param and will only assert "skipped" if that param is truthy).
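
A minimal sketch of the three-state idea (illustrative only, not the actual code in this PR; the function name is made up):

// Each assertion result is assumed to be true (pass), false (fail) or null (skipped).
function tallyResults(results) {
    var counts = { passed: 0, failed: 0, skipped: 0 };
    for (var i = 0; i < results.length; i++) {
        if (results[i] === null) {
            counts.skipped++;   // the new third state set by skip()/skipIf()
        } else if (results[i]) {
            counts.passed++;
        } else {
            counts.failed++;
        }
    }
    return counts;
}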

Use cases

Mocked Test

The most basic use case, and the one I started with, was mocking out tests and implementing random ones out of order. I didn't want to have "failing" tests for the unimplemented ones, nor did I want to use expect(0) and have a passing test, because I would forget to go back to it. So the most basic case was something like:

module('Utils');

test('#utilFunction', function() {
    skip();
});

which gives me output like:

(screenshot: tests 1)

SkipIf based on Feature Detection

It was brought up in #189 that it would be nice if we could detect a browser feature and conditionally skip the assertions depending on whether the browser has it. Something like:

test("that the function will return the element passed by a selector", function() {
    skipIf(!document.querySelectorAll, "Browser doesn't support querySelectorAll. Implement test for alternative interface if we're going to support this browser.", function () {
        var result = document.querySelectorAll('#container a');
        ok(result);
    });
});

which on FF19 gives the result:

(screenshot: tests 2)

but in IE7 gives the result:

(screenshot: tests 3)

Final Notes

Obviously there are many reasons why skipping a test is preferable to having it fail. I may have implemented it in a way that is not ideal; if so, please let me know and I will try to fix it, because this is a feature I would really like to see land. The final thing I want to note is that if a test has failed assertions as well as a skip() call, the test is still marked as failed:

(screenshot: tests 4)

If you have tests you know will fail when some condition isn't met, that is what skipIf() is for. You can keep the test and only run the code inside when the condition allows it; otherwise you get a skip assertion.

@JamesMGreene
Member

As an occasional user of Mocha, I'm a big fan of this idea and have talked about implementing it before as well. IIRC, though, the other QUnit collaborators weren't big on the idea. 😕

Notes:

  • I'd like to see a different visual treatment than your images show.
  • If the other collaborators are opposed to this idea (as I recalled), then perhaps we can turn this into a plugin...?

Known Prior Art:

  • Mocha has "Pending Tests"
  • MSTest (.NET) has:
    • "Ignored/Skipped Tests" (tests that won't run at all, controlled by code attribute/annotation on the test method)
    • "Inconclusive Tests" (test ran but its status couldn't be determined; more similar to skipIf but are unfortunately still considered failures)

@jzaefferer
Member

I'm sure we had more discussion about this before, though the only reference I can currently find is #326.

The skipIf has some charm, but isn't really needed. You can just wrap the test call in the condition. We do that for QUnit's own tests and have never had a problem with it.
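
For reference, the wrapping approach looks roughly like this (the feature check is just an example):

// Sketch of wrapping the test registration in a condition, as described above.
// If the browser lacks the feature, the test is simply never registered.
if (document.querySelectorAll) {
    test('finds elements by selector', function() {
        var result = document.querySelectorAll('#container a');
        ok(result);
    });
}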

So while I appreciate the detailed pull request, a regular ticket to discuss the feature request would have saved a lot of time.

I'll leave this open for now, but I'm generally against adding this feature.

@englercj
Author

@jzaefferer Adding a condition around the test calls prevents them from running, but doesn't tell you anything. I really miss being able to see visually that a test was skipped, and why.

@jzaefferer
Member

So the feature condition is one thing. Within QUnit we only need that when running tests in non-browser environments, and I don't really care which tests are skipped there.

The other one is usually countered with: if you're going to implement the test in the next few minutes, make it fail; if not, don't write the test. What's the use case outside of that?

@englercj
Author

I often stub out tests that I can come back to later. By "later" I may not mean minutes later; I may mean months, by which time I will surely have forgotten about a test I just commented out. And if I leave it failing, my build is marked as broken for months when it really isn't broken; I just haven't finished the tests yet.

It is a system I got used to in Mocha, and I really enjoyed it because I could easily switch back and forth between my tests and code and not forget my place. Specifically with QUnit I have to have passing tests (which I will probably never look at again unless that feature changes later) or failing tests (which in this case mark a working build as failed). I want a third state of pending or skipped that isn't passed, but isn't failed either.

I don't feel that pass/fail fully describes all the states a test can be in; that is how I usually work with testing, which probably grew from using Mocha so much.

@JamesMGreene
Member

Fully agree with @englercj on the usefulness of stubbing out a test list with pending tests: it's enjoyable for me alone, but it's also very helpful in the OSS world to show test cases that are not proven yet (as a disclaimer of sorts), as well as to give aspiring contributors an easy place to jump in and help. I also much prefer how the test author indicates a pending test in Mocha over this PR's implementation: by simply not providing a test callback function, e.g.

test('normal test', function(assert) {
    assert.ok(true);
});
test('pending test');

@englercj
Author

I can definitely get on board with that. Having it simply mark a test as pending would be a better solution.

@FagnerMartinsBrack

TL;DR
JUnit (the most used unit test framework for Java) has a similar feature with the @Ignore annotation. This information adds to the examples @JamesMGreene listed above.

@jzaefferer
Member

I often stub out tests that I can come back to later. By later I may not mean minutes later, I may mean months.

Can you expand on that a little further? To me this sounds like you're using unit tests for issue/task tracking, which seems like a bad idea.

I appreciate the references to other frameworks, but feature parity is not a goal of QUnit, so that doesn't pull much weight.

@englercj
Author

englercj commented Apr 5, 2013

I think @JamesMGreene said it best: having pending tests is just a useful feature for stubbing out which tests you want to write without having to write them right then, while still having a reference to come back to and finish later.

@FagnerMartinsBrack

@jzaefferer

... To me this sounds like you're using unit tests for issue/task tracking ...

You will never create a task "create test for X feature when Y value is greater than Z".
You will create a task "create X feature":

  • When some tests are lacking, you will skip the ones you believe are needed because you could not do them right now.
  • Or (the most useful use case) you will create all the pending tests and add them as the task progresses, so you will not forget anything.

Let's say I need to create 5 tests for a task. I analyse the task and see which tests I should create.
I create 4 of them, but afterwards I do not recall which one is missing.
If I had a pending-tests feature I could mark all the tests I should create. Then I create the first, run everything, create the second, run everything... and so on.

Today we create a test without any assertion, or with something like ok(false, "create test for feature X when Y is greater than Z").

But that is not technically an error; it is a reminder.
The programmer's brain can come to treat such an error as a reminder, even when it is a real error.

A few times I have run into situations where a pending test would fit better than a red one. But the argument

To me this sounds like you're using unit tests for issue/task tracking

is quite valid. It all depends on how much QUnit is willing to help the programmer test and control the software's behavior.

@Famlam

Famlam commented Apr 5, 2013

Sorry for spamming, but I thought it wouldn't harm to write down my opinion too, if I understood the above correctly.
A use case for showing a UI when an item is skipped (instead of wrapping the whole test in an if (anything) {} block) is, for example, when you have tests that rely on XMLHttpRequest, which can't run when you run the tests in a file:/// environment*. You can currently either let them error all the time (which doesn't provide any value...) or skip them completely (without any UI). In the latter case, you might (incorrectly) think all tests passed if you're not paying enough attention. In the former case, you'd get errors even when you're testing parts of your code that don't use XMLHttpRequest. A simple UI that just tells you a test was skipped would be the perfect middle ground.

(*) Another example is if you're using web workers, which (in Chrome only) can't run in a file:/// environment.
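
Expressed with the skipIf() proposed in this PR, that scenario could look roughly like this (the protocol check and URL are only examples):

test('loads data via XMLHttpRequest', function() {
    skipIf(location.protocol === 'file:', "XMLHttpRequest is unavailable under file:///.", function() {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', 'data.json', false);   // synchronous, just to keep the sketch short
        xhr.send();
        ok(xhr.responseText, 'got a response');
    });
});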

@Krinkle
Member

Krinkle commented Apr 5, 2013

My opinion on some of the mentioned use cases:

If you have tests for a feature that doesn't exist yet (so an unconditional skip for tests that don't yet pass), the tests should probably be in a feature branch instead; once the feature is implemented in the same branch, you merge it. No need to indicate this within the test suite. As long as the tests are incomplete, or complete but failing because the feature isn't there yet, that's good. Because if someone were to merge it too early, it's supposed to fail (not silently pass with a "skipped" message buried somewhere in the results). It also helps if your continuous integration environment supports testing branches: you'll be able to see which branches have work to do and which are ready to merge (at least from a test point of view).

If you need a reminder to write tests for something (so an unconditional skip for tests that don't exist yet), then file an issue in your issue tracker. Whether the test is commented out or marked as skipped, I don't see how either is a workable solution as a reminder. Assuming the skipped tests should not affect the overall status of the suite as a whole, you'd only notice them if you look at the code or read through the results manually. Having an open issue (in a certain component, perhaps with a label like easy or test-todo) seems easier for a new contributor to find (or be given) than having to run the tests and look for any skipped tests, which (given there are at least 3 reasons so far that a test could be marked skipped) isn't a scalable workflow.

@Krinkle
Member

Krinkle commented Apr 5, 2013

@FagnerMartinsBrack:

@jzaefferer:

... To me this sounds like you're using unit tests for issue/task tracking ...
You will never create a task "create test for X feature when Y value is greater than Z".
You will create a task "create X feature":

I disagree. If there's an important part of feature X not tested, I don't see why you wouldn't file an issue that a test should be created for it. Actually, if it is a regression test I'd argue the commit fixing the bug shouldn't have been merged in the first place, but that's discipline.

Basically any test you know should be added should either be added when the bugfix/feature is merged or have an issue filed for it as a todo item.

@JamesMGreene
Member

Long-story-short, it sounds like this one isn't going to get merged.

On the other hand, I do think it is possible to implement as a [very hacky] plugin right now via duck-punching QUnit.test that could be further refined into a much cleaner plugin once #422 and/or #378 are completed.

For example, the following would give you the ability to skip tests and add them to the UI (but not its "counts"... you could implement that yourself with a little more time invested):

QUnit.test = (function() {
  var realTest = QUnit.test,
      slicer = [].slice;
  return function() {
    if (arguments.length && typeof arguments[arguments.length - 1] !== 'function') {
      // Add a "pending" test to the UI, check qunit.js core code for more details on
      //    what you need to mimic.
      // Also, add a "pending" count and/or add the pending test count to the total
      //    count... may want to wait until `QUnit.done` to change the counters, though.
    }
    else {
      realTest.apply(this, slicer.call(arguments, 0));
    }
  }
})();
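
With a patch along those lines in place, usage would presumably mirror the Mocha-style example above (hypothetical):

// Assumes the patched QUnit.test from the snippet above has been loaded first.
QUnit.test('implemented test', function(assert) {
  assert.ok(true);
});
QUnit.test('pending test');   // no callback, so it falls into the "pending" branch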

@FagnerMartinsBrack

@Krinkle

You will never create a task "create test for X feature when Y value is greater than Z".
You will create a task "create X feature":

I disagree. If there's an important part of feature X not tested, I don't see why you wouldn't file an issue that a test should be created for it. Actually, if it is a regression test I'd argue the commit fixing the bug shouldn't have been merged in the first place, but that's discipline.

The main idea, in my POV, is not to merge a topic branch with the pending tests, but to mark their need while developing. Let's say you have 5 tests and write 3. The next day you go and finish the remaining ones, which are already marked.

Of course, you could mimic that situation by failing (in red) the tests that are still needed. But the main view (the red stripe at the top) would tell you that tests are still failing, even though I haven't created them yet and the ones I did create are passing.

The main problem would be showing green success in the stripe at the top, which could lead the user to think that everything is OK when it actually is not. Relying on visual inspection should indeed be avoided.
Maybe when there are skipped tests the stripe could change to yellow, along with the skipped tests.

But the problem is that this goes against the OP's proposed behavior and usage, which is skipping tests for a browser lacking a certain feature (or maybe a yellow stripe in that case is not so bad either?).

@englercj
Author

englercj commented Apr 5, 2013

👍 to @JamesMGreene and @FagnerMartinsBrack

@jzaefferer
Member

Long-story-short, it sounds like this one isn't going to get merged.
On the other hand, I do think it is possible to implement as a [very hacky] plugin right now via duck-punching QUnit.test that could be further refined into a much cleaner plugin once #422 and/or #378 are completed.

Let's go with that. I don't want this in QUnit, but it makes sense to have a plugin that provides the feature.

@jzaefferer jzaefferer closed this Apr 8, 2013
@cyril-sf

Sad to see that this hasn't been merged. My use case is this: I have a single file containing my tests, and some of them can't pass on some browsers because of the lack of a specific feature (web workers).

I have to wrap them in a condition, but if I make a mistake and wrap another test in that condition, I have no way to know that.

@FagnerMartinsBrack

You have a clear point; this is really needed. But as @jzaefferer pointed out, it would be nice to have a plugin for that (I am not sure where that stands).

@JamesMGreene
Member

I still don't see the need for a conditional skipIf, personally. Use an if (or Modernizr/has.js/yepnope) to conditionally include your tests, or add a logic branch inside your test body. If you're worried about mistakenly including too many tests under an if block, move them into their own JS (test) file: e.g. jquery.myPlugin_tests.js and jquery.myPlugin_withWebWorkers_tests.js.
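
A sketch of the file-splitting variant, conditionally loading the feature-dependent test file (the file name and feature check are illustrative):

// Only pull in the worker-dependent tests when the browser actually has the feature.
yepnope({
    test: typeof window.Worker !== 'undefined',
    yep: 'jquery.myPlugin_withWebWorkers_tests.js'
});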

I do plan to eventually implement a JamesMGreene/qunit-pending plugin but only to achieve functional parity with Mocha, i.e. test calls without a test body (callback) will result in a "pending" status.

@FagnerMartinsBrack

test( "testing feature", function() {
    if ( unsupportedFeature() ) {
        ok( true, "Unsupported browser" );
        return;
    }
    expect(5);
    ...
});

You will never have more than one condition per test (otherwise your unit is doing too much work). A single condition makes the support requirement clear as the first statement in the test, and expect() always follows. At least this is how I do it.

If you forget the return it will complain.
If you wrap the expect() it will complain.
If you get the condition wrong it will complain.
If you forget the message it will complain (no assertion).

As @JamesMGreene pointed out it really makes no sense, unless I am missing something.

@englercj
Author

I think the difference is that your method will still mark it as pass/fail, but a skipped test shouldn't be passed or failed; it should be "skipped" or "pending" to mark the different state.

It isn't that there is no way to skip a test now, there just isn't a good way to know that it was skipped. Having the third state is what this is all about.

@cyril-sf

I agree with @englercj. This is not the same.

Imagine we're back in time and IE8 is the latest version. I have my test suite, and I create a separate file that I skip for IE, or use the trick to make the test pass.
IE9 is released, I run my test suite against it, everything is green, fine.
IE10 is released with support for web workers, and everything is green again. Oh, but wait, I have to know/remember that I excluded some tests. Or I could simply see that I have some skipped tests and make sure that they now pass.

It may seem a bit of an edge case, but a skipped test doesn't have the same semantics as a passed test (or a not-run-at-all test).

@jzaefferer
Member

Less time discussing, more time implementing a plugin, please!

@Potherca

Such a shame this did not go through.

Carrying the name "QUnit" and not supporting xUnit style Incomplete and Skipped Tests seems like a bit of a dud.

Any news on the state of that plugin?

@englercj
Author

@Potherca Not from me, I switched test runners instead, sorry!

@Potherca

@englercj No worries, my current setup using this: https://gist.github.com/potherca/8579269 works fine for me and my team, for the time being.

@JamesMGreene
Member

This is my interpretation (Mocha-style "pending" tests for QUnit): https://gist.github.com/JamesMGreene/8617691

I'll move it to JamesMGreene/qunit-pending when I get a chance.

leobalter added a commit to leobalter/qunit that referenced this pull request Sep 4, 2014
leobalter added a commit to leobalter/qunit that referenced this pull request Sep 5, 2014
leobalter added a commit to leobalter/qunit that referenced this pull request Sep 8, 2014
leobalter added a commit that referenced this pull request Sep 11, 2014

Also logs skipped tests on testDone callback that are handled on the html reporter

Fixes #637
Fixes #434
Closes #652