change "pending" behavior ("this.skip()"); closes #2286 [DO NOT MERGE] #2571

Closed

Conversation

@boneskull (Contributor) commented Nov 3, 2016

This is a proposal for how this.skip() should work. Related: #2286 and #2148.

When this.skip() is called in a:

  • "before all" hook: Skip all tests and all "each" hooks. Continue to run "after all" hooks.
  • "before each" hook: Skip subsequent tests. Continue to run all remaining hooks.
  • test: Skip just this test. Continue to run all remaining hooks.
  • "after each" hook: Skip subsequent tests. Continue to run all remaining hooks.
  • "after all" hook: Skip this just hook. Continue to run all remaining hooks.

I feel like the intent of this.skip() is more of a "permanent" directive; if you need to temporarily disable things, you can use it.skip(), describe.skip(), etc. This is why we should tear down if we've set up. It becomes the user's responsibility to be explicit about which hooks should be skipped, if there are multiple of the same type.
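To make that concrete, here's a minimal sketch of the proposed semantics (the suite and the SERVICE_URL check are made up purely for illustration):

describe('integration suite', function () {
  before(function () {
    // Proposed: skipping here skips every test and every "each" hook below...
    if (!process.env.SERVICE_URL) {
      this.skip();
    }
  });

  beforeEach(function () {
    // skipped along with the tests
  });

  it('talks to the service', function () {
    // skipped
  });

  after(function () {
    // ...but "after all" hooks still run, so anything set up gets torn down
  });
});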

My tests here are a bit crude (assertions in an "after all" hook?!), and possibly should live elsewhere. Also, we need assertions against "nested" suites, but I was about at my limit for abstracting these tests.

@@ -541,7 +536,7 @@ Runner.prototype.runTests = function (suite, fn) {
      if (test.isPending()) {
        self.emit('pending', test);
        self.emit('test end', test);
-       return next();
+       return self.hookUp('afterEach', next);
@boneskull (Contributor, Author), Nov 3, 2016:

because we may need to clean up after some "before each"

@@ -567,10 +562,6 @@ Runner.prototype.runTests = function (suite, fn) {
      }
      self.emit('test end', test);

-     if (err instanceof Pending) {
@boneskull (Contributor, Author):

likewise; we may need to clean up after a "before each"

@laughinghan

  • "before each" hook: Skip subsequent tests. Continue to run all remaining hooks.

...

  • "after each" hook: Skip subsequent tests. Continue to run all remaining hooks.

You sure about this? While I can't think of any obvious use cases either way, my gut reaction is I'd expect this.skip() during setup/before each to skip just this test, and this.skip() during teardown/after each to skip just this hook. After all, before all doesn't skip all subsequent suites after the current suite.

@ScottFreeCode (Contributor)

It seems right to me; speaking for myself here's what I'm thinking:

beforeEach should do (must do? is there even a way for it to find out which is the current test in order to behave otherwise?) the same thing for all tests, so if it calls skip once, it should always call it, right?

For afterEach that reasoning doesn't apply -- the tests would have already run before it -- but if all it skipped was that hook, it'd basically be equivalent to return. Then again, under the proposed logic, calling it unconditionally in afterEach basically skips all but the first test... though it might be appropriate to call it conditionally: if the cleanup process discovers something that renders the remaining tests moot.

@laughinghan commented Nov 17, 2016

@ScottFreeCode:

beforeEach should do ... the same thing for all tests

Well, I'm not sure what other people's use cases for this.skip() are, but consider the following hypothetical use case that is based on my real use case.

My real use case is a test for whether my code interacts correctly with browser behavior relating to focusing a textarea. Unfortunately, the browser behavior I'm testing against only happens when the browser window is focused, so if I start my tests and Cmd-Tab away, I'd like for that test to be marked neither passed nor failed, just skipped.

(This is not as corner-case-y as it sounds; opening Developer Tools also causes the page to lose focus, even though the browser tab is focused. If you're curious, this is the test in question; note that it has to tear down manually before calling this.skip() because of the bug we're trying to fix here: mathquill/mathquill@fe6a044?w=1#diff-8cf93f4c55dd44efc08701187c372909R163 )

Now, I currently only have the one test with this fairly corner-case requirement, but it's easy to imagine having it for multiple tests, say to test my code against different aspects of this browser behavior. It'd sure be nice, if I Cmd-Tabbed away and then back, to skip just the tests run while the browser window was unfocused. If this.skip() skips all subsequent tests, then to do so I would need to put if (!document.hasFocus()) this.skip(); in every single test; with my proposed behavior, that could be put in just one setup/beforeEach clause.
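Concretely, under my proposed behavior it would look something like this (the suite and test names are made up):

describe('focus-dependent behavior', function () {
  beforeEach(function () {
    // Under my proposal, this skips only the test that's about to run,
    // so tests that start after focus returns still execute.
    if (!document.hasFocus()) this.skip();
  });

  it('focuses the textarea', function () {
    // ...
  });

  it('restores the selection on refocus', function () {
    // ...
  });
});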

I'd also like to propose forbidding this.skip() in "after" hooks; I really don't see any possible use case, and we shouldn't add features we see no use case for.

I think this results in something easier to document and understand:

  • "before all" hook: Skip "each" hooks and tests in this suite. Run remaining non-"each" hooks.
  • "before each" hook or test: Skip this test. Run remaining hooks.
  • "after each" or "after all" hook: Forbidden. ☠️

@ScottFreeCode (Contributor)

There is a use case for afterEach that's distinct from return, though, if you don't start from the limitation that it can only affect the current test: skipping the rest of the tests if some problem is discovered after any of them complete. I'm not sure I have a real example of this, but I expect it would mostly come up with sanity checks... or something like that?
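Something like this, say (checkSharedFixture() is a made-up sanity check):

// made-up check; imagine it verifies some shared state after each test
function checkSharedFixture() { /* ... */ return true; }

afterEach(function () {
  // If the check finds the shared fixture is no longer trustworthy, the
  // remaining tests are moot -- under the proposed behavior this call would
  // skip all subsequent tests rather than just this hook.
  if (!checkSharedFixture()) {
    this.skip();
  }
});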

after, on the other hand, I can't think of any reason to use this.skip instead of return, except maybe as a cheat to break out of some kind of non-trivial, non-easily-short-circuitable callback structure that should probably be considered a serious code smell anyway (using exceptions specifically to evade actual structuring is rather goto-like), or perhaps blind consistency with tests and other hooks.

I suppose the latter might be a reason to be graceful, or at least helpful, when people use it anyway... On the other hand, if there's no "real" reason to use it, then it's possible the only reason most people would use it is a false expectation that it will do something it in fact does not, in which case we'd actually want to warn them about that.

Recommendation: this.skip() in an after hook is an error, but with a clear message that it doesn't do anything return wouldn't rather than simply leaving the skip property on the context object undefined and getting a cryptic JS "not a function" error.

I'm of mixed minds on beforeEach.

On the one hand, I can't help but suspect there's a cleaner way to handle the browser focus example. What good is it to run the tests if they may or may not be ignored based on unrelated actions being taken at the same time? If it's running on CI that will either never be an issue or always be an issue; but running locally and opening the console... shouldn't the tests be paused for debugging somehow, or something like that, instead of changing the scope of the current test run?

And if I had a more general case of needing to run something locally but needing to be able to Cmd+Tab away from the test window, personally I'd rather find a way to do it that would make those tests run anyway (open a dedicated browser in the background and ensure that tests work if the browser never has focus to begin with, or perhaps trigger the tests locally but run them on a dedicated server or service such as Sauce) instead of having those local runs not actually test stuff because to do so I have to actually sit and watch the browser while it happens.

(Maybe I'm just not fully understanding the broader user behavior context and how the test run fits into it? I also could be biased because in my day job we're currently having problems due to other developers choosing to build websites in a way that they stop functioning correctly if they don't retain the browser focus, and I worry that it's some kind of trend since we recently went from only running into it once in a blue moon to running into three instances in the same week.)

On the other hand, the only disadvantages to running subsequent tests (along with their "each" hooks) after a test is skipped in beforeEach are the inefficiency of running the hooks and throwing the Pending over and over even if beforeEach is going to this.skip() every time, and the inconsistency with afterEach if afterEach is allowed for the one use case of stopping subsequent tests.

I guess I'll let you both know if I have any new thoughts or change my mind; I do think this isn't necessarily as trivial as it seems one way or the other.

@boneskull (Contributor, Author)

Thanks for the input @laughinghan and @ScottFreeCode.

I guess I'll let you both know if I have any new thoughts or change my mind; I do think this isn't necessarily as trivial as it seems one way or the other.

"Skip" and "only" are basically the bane of my existence rn. Happily, I haven't heard too much grousing about "grep" for awhile since v3.0.

I'll agree it needs more thought and input from others.

@boneskull boneskull changed the title change "pending" behavior ("this.skip()"); closes #2286 change "pending" behavior ("this.skip()"); closes #2286 [DO NOT MERGE] Dec 2, 2016
@boneskull (Contributor, Author) commented Dec 2, 2016

@laughinghan I have a few comments.

I'd also like to propose forbidding this.skip() in "after" hooks, I really don't see any possible use case and we shouldn't add features we see no use case for.

The cat is already out of the bag.

I'd expect this.skip() during setup/before each to skip just this test

Given what you're using this.skip() for, this makes total sense. Yet, I can't help but think it's not the right tool for your job. Why not do something like:

describe('my finicky browser', function () {
  beforeEach(function (done) {
    if (!document.hasFocus()) {
      // disable the test timeout and poll until the window regains focus
      this.timeout(0);
      const t = setInterval(() => {
        if (document.hasFocus()) {
          clearInterval(t);
          done();
        }
      }, 500);
    } else {
      // window is already focused; run the test immediately
      done();
    }
  });

  it('should do something when the browser is ready', function () {
    // ...
  });
});

@laughinghan

@boneskull: Thanks for commenting.

The cat is already out of the bag.

Fair enough

Why not do something like: ...

If I'm understanding correctly, you're suggesting pausing my tests while my tab is backgrounded? That's not what I want. My whole test suite takes like 30s to run, which is fast enough to run locally in full (rather than only in pieces locally, and only in full on a server), but slow enough that I don't want to twiddle my thumbs staring at it while it runs. I want to background it and check my email or Slack or something, and have it still run to completion, skipping the focus tests that it can't run.

@laughinghan

@ScottFreeCode: Thanks for replying, sorry it took so long for me to get back to this.

TL;DR: just merge it, I want it fixed more than I care about these details

Recommendation: this.skip() in an after hook is an error, but with a clear message that it doesn't do anything return wouldn't rather than simply leaving the skip property on the context object undefined and getting a cryptic JS "not a function" error.

👍

What good is it to run the tests if they may or may not be ignored based on unrelated actions being taken at the same time? If it's running on CI that will either never be an issue or always be an issue
...
Maybe I'm just not fully understanding the broader user behavior context and how the test run fits into it?

No, you sound like you understand it.

personally I'd rather find a way to do it that would make those tests run anyway (open a dedicated browser in the background and ensure that tests work if the browser never has focus to begin with, ...)

Sure, that would be The Right Thing, but that's just so much more work than if (!document.hasFocus()) this.skip(); and leaving my browser window focused every once in a while, right? (Or, until this bug is fixed, if (!document.hasFocus()) { doTeardownStuff(); this.skip(); }.) I think I mentioned that this has only come up with this one test, not with multiple tests yet; it only showed up years into the project, it's been months since then, and I haven't had to add any more. So I think even if you were in my shoes, you wouldn't think this is worth investing in either.

or perhaps trigger the tests locally but run them on a dedicated server or service such as Sauce

I actually do run them on Sauce, and I could probably find some way to always disable this test locally, but isn't it better to do "feature testing" and run this test whenever possible, rather than having to manually enable it locally?

developers choosing to build websites in a way that they stop functioning correctly if they don't retain the browser focus, and I worry that it's some kind of trend since we recently went from only running into it once in a blue moon to running into three instances in the same week

Nah, this is my blue moon, I just want to make sure my blue moon has a test case.

Also to be clear, it's not weird for rare random things to cluster, 'cos like, what are the chances they're evenly spaced, right? Did y'all track down the culprit in each of those cases and find out what technique they're using is brittle in this way, or? I'm pretty curious now, it def didn't happen like that for me, I'm still pretty weirded out that it's a problem for me at all.

I do think this isn't necessarily as trivial as it seems one way or the other.

Really? I think this is pretty trivial. I would way rather this be merged as-is, or with your modifications and zero of my suggestions accepted, than spend weeks and weeks more debating this. As-is (or with your modifications), I wouldn't be able to do beforeEach(function () { if (!document.hasFocus()) this.skip(); }); like I suggested would be nice, but I could easily do:

// wrapper for tests that only make sense while the window is focused
function testThatRequiresFocus(name, fn) {
  test(name, function(done) {
    if (!document.hasFocus()) this.skip();
    return fn(done);
  });
}

instead, it's really not that inconvenient.
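e.g., hypothetically (the test name and body are made up):

testThatRequiresFocus('keeps the caret visible after refocus', function (done) {
  // assertions that only make sense while the window is focused
  done();
});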

@boneskull (Contributor, Author)

@laughinghan It's not in the project's best interests to move fast and break things.

It's trivial in the sense that the changes are few. It's non-trivial in that these changes will impact many users. While it's a "bug fix", it's also going to break some tests that rely on the buggy behavior. And we're going to get more issues complaining about the broken tests, challenging why it was changed, how it was changed, and when it was changed.

Unfortunately there's not much participation in PRs like this before they are published, for many reasons; the most obvious being that it'd require users to watch this entire repo.

For all the people that this change will impact, how many have I heard from? I can answer that, but I can't answer how many users will be affected by this change. Even if it's a "bug fix"--and some are sure to argue it isn't--we need to be crystal-clear with the answers to why, how, and when.

@ScottFreeCode seems to think the change is a good idea, and you are saying it's better than nothing. I'm inclined to think your proposal (or a portion thereof) is a better one--yet your use case is unusual at best, and a misuse of the feature at worst. Where's the story that we can all nod our heads in agreement on?

It's better to break tests once and be confident with the direction--instead of breaking them, realizing we made a mistake, then potentially breaking them again.

But we can pull the "beta" card. 😈 I'm going to modify this PR to exhibit the behavior you suggest--and re-target to a different branch we'll prerelease, tagged mocha@beta (this means closing the PR)--unless @mochajs/core is opposed.

@boneskull boneskull closed this Dec 3, 2016
@laughinghan

It's not in the project's best interests to move fast and break things. ... It's non-trivial in that these changes will impact many users.

I don't think we should move fast and break things either! But it's been almost 9 months since I first reported this issue. Since then there's been one related ticket and no comments by additional users on either ticket. I don't think this is particularly fast, nor likely to break things for many users at all :)

a "bug fix"--and some are sure to argue it isn't

Are they sure to? I think my bug report, at least, that this.skip() inside a test skips teardown() even though setup() was run, isn't likely to be controversial.

your use case is unusual at best, and a misuse of the feature at worst

👍

But we can pull the "beta" card.

🎉 🎊 🎈 💥 😈

@dasilvacontin (Contributor) commented Dec 3, 2016

I'm quite time-constrained this weekend, sorry. I created a task to check the discussion on Monday. 🙇🏻

@dasilvacontin (Contributor)

It'd sure be nice, if I Cmd-Tabbed away and then back, to skip just the tests run while the browser window was unfocused. If this.skip() skips all subsequent tests, then to do so I would need to put if (!document.hasFocus()) this.skip(); in every single test; with my proposed behavior, that could be put in just one setup/beforeEach clause

That's interesting. I hadn't imagined your case, since the condition is not constant. (If you assume the condition is constant, skipping all remaining tests makes sense.)

after, on the other hand, I can't think of any reason to use this.skip instead of return

If skip were able to log a "reason", that could be a use for it. Then again, you could just throw an Error, I guess? But I don't recall what the effect on test/hook execution would be.

What good is it to run the tests if they may or may not be ignored based on unrelated actions being taken at the same time?

Good point.

On the other hand, the only disadvantages to running subsequent tests (along with their "each" hooks) after a test is skipped in beforeEach are the inefficiency of running the hooks and throwing the Pending over and over even if beforeEach is going to this.skip() every time, and the inconsistency with afterEach if afterEach is allowed for the one use case of stopping subsequent tests.

Maybe you can stop subsequent tests by throwing an Error in the beforeEach? I can't recall the behavior.
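The pattern I mean would be something like this (isEnvironmentSane() is made up); if a failing "before each" hook aborts the suite's remaining tests -- which is what I'd need to double-check -- it would have that effect:

describe('guarded suite', function () {
  beforeEach(function () {
    // made-up sanity check; when it throws, the hook fails
    if (!isEnvironmentSane()) {
      throw new Error('environment broke; stop running the rest of this suite');
    }
  });

  it('first test', function () { /* ... */ });
  it('second test', function () { /* ... */ });
});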

@dasilvacontin (Contributor) commented Dec 7, 2016

Okay, and now my opinion on the behavior. I'm trying to think of what would be the simplest and most logical behavior.

  • before. Runs once before all the tests and their "each" hooks. I imagine the context (this) to be the suite; therefore, all the changes made through the context (this.timeout(), this.tag()) affect the suite. this.skip() skips the suite. ("after all" hooks are run.)

  • beforeEach. Runs once before each test. I imagine the context (this) to be the test; therefore, all the changes made through the context (this.timeout(), this.tag()) affect the test. this.skip() skips the test. (afterEach/after hooks are run.)

  • afterEach. Runs once after each test. I imagine the context (this) to be the test; therefore, all the changes made through the context (this.timeout(), this.tag()) affect the test. this.skip() skips the test. (Remaining afterEach/after hooks are run, since their before/beforeEach counterparts have run.)

  • after. Runs once after all the tests and their "each" hooks. I imagine the context (this) to be the suite; therefore, all the changes made through the context (this.timeout(), this.tag()) affect the suite. this.skip() skips the suite. (Remaining "after all" hooks are run, since their "before all" counterparts have run.)

You skip a test/suite to avoid running it AND to mark it as pending.

Skipping a test/suite after running it is a bit counterintuitive. The only purpose would be marking it as pending, which is not very clean because you've already emitted pass/fail events for that test. I'm not sure there's value in changing the test state/outcome post-mortem. (<- feedback pls)

Therefore I think afterEach/after hooks shouldn't be able to skip their corresponding test/suite, and mocha should somehow let the user know that skip is not meant to be used in afterEach/after hooks, and that whatever is being done should be done in before/beforeEach instead.

@boneskull (Contributor, Author)

Any opinions on deprecating skip in after/afterEach?

  1. Soft deprecation (docs only) until 4.x
  2. Hard deprecation (print msg to STDERR) until 5.x
  3. Removed in 6.x

I agree that there's probably no real use case for a skip in these hooks.

@dasilvacontin (Contributor)

I wonder how we could get more opinions on this point. It's a shame we don't have a Twitter account with followers. Do we even have a Twitter account? It feels like a good platform for getting opinions, making announcements, running polls...

Deprecating looks good to me; I just hope we don't miss any valid use cases.

@boneskull what do you think about my proposed skip behavior in beforeEach? (skipping only the current test)

@boneskull (Contributor, Author)

Yes, that's what this is doing now re beforeEach, right? The new PR, anyway.

@dasilvacontin (Contributor)

@boneskull Sorry, I didn't notice a new PR was created. 👌

@nullin commented May 2, 2017

Late to the party, but IMHO skip in beforeEach should only skip the current test and not all subsequent tests. A skip in beforeEach should either skip only the currentTest, or skip all subsequent [before|after]Each hooks and the currentTest. I do agree skip in after/afterEach doesn't make much sense.

I have a use case where we need an automated way of maintaining a list of failing tests and skipping them during test execution. Instead of updating the specs with it.skip() or xit(), it's easier to maintain this information in a single separate file. We can then implement a single beforeEach() in the root suite which checks the list of failing tests and, if this.currentTest.title is on that list, skips the test.
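A sketch of that root hook (the file name and its shape are assumptions on my part):

// e.g. known-failing-tests.json: ["suite A test 1", "suite B test 3"]
const failingTests = require('./known-failing-tests.json');

// declared outside any describe(), so it runs before every test in the run
beforeEach(function () {
  if (failingTests.indexOf(this.currentTest.title) !== -1) {
    this.skip();
  }
});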

@philipwalton

I don't think this is actually fixed for the it() case:

test: Skip just this test. Continue to run all remaining hooks.

When I skip tests in an it() block, I do not see the afterEach() hooks run. Here's a screenshot of my output:

[Screenshot: "Screen Shot 2019-03-27 at 6 22 06 PM", showing the test output]

As you can see, afterEach is not being logged after the skipped test, but it is being logged after the run test.
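A reduced version of what I'm running looks roughly like this (names are illustrative):

describe('skip inside it()', function () {
  afterEach(function () {
    console.log('afterEach');
  });

  it('skipped test', function () {
    this.skip(); // "afterEach" is not logged after this test
  });

  it('run test', function () {
    // "afterEach" is logged after this test, as expected
  });
});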

@philipwalton

To clarify, I'm skipping a test by calling this.skip() within the it() block. I've tracked down where it happens in the code, and if I comment out these lines everything works as expected:

mocha/lib/runner.js, lines 671 to 673 at ade8b90:

if (err instanceof Pending) {
  return next();
}
