change "pending" behavior ("this.skip()"); closes #2286 [DO NOT MERGE] #2571
Conversation
```diff
@@ -541,7 +536,7 @@ Runner.prototype.runTests = function (suite, fn) {
   if (test.isPending()) {
     self.emit('pending', test);
     self.emit('test end', test);
-    return next();
+    return self.hookUp('afterEach', next);
```
because we may need to clean up after some "before each"
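A minimal sketch of the user-facing situation that comment is guarding against (the resource and environment-variable names here are made up): a "before each" hook sets something up and then marks the test pending, so the matching "after each" still has to run to clean up.

```js
describe('skipping from a hook', function () {
  let resource;

  beforeEach(function () {
    resource = { acquired: true };        // hypothetical setup
    if (!process.env.RUN_FLAKY_TESTS) {   // hypothetical precondition
      this.skip();                        // mark the test pending
    }
  });

  afterEach(function () {
    // With the change above, this hook still runs for the skipped test,
    // so the resource acquired in "beforeEach" gets released.
    resource = null;
  });

  it('only runs when RUN_FLAKY_TESTS is set', function () {
    // ...
  });
});
```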
```diff
@@ -567,10 +562,6 @@ Runner.prototype.runTests = function (suite, fn) {
   }
   self.emit('test end', test);

   if (err instanceof Pending) {
```
likewise; we may need to clean up after a "before each"
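A companion sketch for this hunk, under the same assumptions: here the skip happens inside the test body (which Mocha signals internally by throwing a Pending error), and the "after each" cleanup still needs to run because the "before each" already did.

```js
describe('skipping from inside a test', function () {
  let tempState;

  beforeEach(function () {
    tempState = { created: true };   // hypothetical setup
  });

  afterEach(function () {
    tempState = null;                // cleanup that should run even for a skipped test
  });

  it('only asserts on supported platforms', function () {
    if (process.platform === 'win32') {   // arbitrary example condition
      this.skip();                        // thrown internally as Pending and caught by the runner
    }
    // ...assertions...
  });
});
```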
You sure about this? While I can't think of any obvious use cases either way, my gut reaction is I'd expect ...
It seems right to me; speaking for myself, here's what I'm thinking: ...
Well, I'm not sure what other people's use cases for `this.skip()` are. My real use case is a test for whether my code interacts correctly with browser behavior relating to focusing a textarea. Unfortunately, the browser behavior I'm testing against only happens when the browser window is focused, so if I start my tests and Cmd-Tab away, I'd like for that test to be marked neither passed nor failed, just skipped. (This is not as corner-case-y as it sounds; opening Developer Tools also causes the page to lose focus, even though the browser tab is focused.) If you're curious, this is the test in question; note that it has to manually tear down before calling `this.skip()`.

Now, I currently only have the one test with this fairly corner-case requirement, but it's easy to imagine having this requirement for multiple such tests, like testing my code against different aspects of this browser behavior. It'd sure be nice, if I Cmd-Tabbed away and then back, to only skip the tests run while the browser window was unfocused.

I'd also like to propose forbidding `this.skip()` in "after" and "after each" hooks. I think this results in something easier to document and understand.
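For context, a stripped-down sketch of the kind of test being described (the teardown helper and test title are placeholders; only the `document.hasFocus()` check and the manual teardown before `this.skip()` are the point):

```js
// Hypothetical teardown helper; in the real suite this undoes whatever setup was done.
function tearDownFixture() { /* ... */ }

it('interacts correctly with textarea focus behavior', function () {
  // The behavior under test only happens while the browser window is focused,
  // so mark the test as pending instead of failing it.
  if (!document.hasFocus()) {
    tearDownFixture();  // manual teardown before calling this.skip()
    this.skip();
  }
  // ...assertions that rely on real focus behavior...
});
```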
There is a use case for ...
I'm of mixed minds on this.

On the one hand, I can't help but suspect there's a cleaner way to handle the browser focus example. What good is it to run the tests if they may or may not be ignored based on unrelated actions being taken at the same time? If it's running on CI, that will either never be an issue or always be an issue; but running locally and opening the console... shouldn't the tests be paused for debugging somehow, or something like that, instead of changing the scope of the current test run? And if I had a more general case of needing to run something locally but needing to be able to ...

On the other hand, the only disadvantages to running subsequent tests (along with their hooks) ...

I guess I'll let you both know if I have any new thoughts or change my mind; I do think this isn't necessarily as trivial as it seems, one way or the other.
Thanks for the input @laughinghan and @ScottFreeCode.
"Skip" and "only" are basically the bane of my existence rn. Happily, I haven't heard too much grousing about "grep" for awhile since v3.0. I'll agree it needs more thought and input from others. |
@laughinghan I have a few comments.
The cat is already out of the bag.
Given what you're using `this.skip()` for, something like this might work instead:

```js
describe('my finicky browser', function () {
  beforeEach(function (done) {
    if (!document.hasFocus()) {
      this.timeout(0);
      const t = setInterval(() => {
        if (document.hasFocus()) {
          clearInterval(t);
          done();
        }
      }, 500);
    } else {
      done();
    }
  });

  it('should do something when the browser is ready', function () {
    // ...
  });
});
```
@boneskull: Thanks for commenting.
Fair enough
If I'm understanding correctly, you're suggesting pausing my tests while my tab is backgrounded? That's not what I want. My whole test suite takes about 30s to run, which is fast enough to run locally in full (rather than only in pieces locally, and only in full on a server), but slow enough that I don't want to twiddle my thumbs staring at it while it runs. I want to background it, check my email or Slack or something, and have it still run to completion, skipping the focus tests that it can't run.
@ScottFreeCode: Thanks for replying, sorry it took so long for me to get back to this. TL;DR: just merge it, I want it fixed more than I care about these details
👍
No, you sound like you understand it.
Sure, that would be The Right Thing, but that's just so much more work than just calling `this.skip()`.
I actually do run them on Sauce, and I could probably find some way to always disable this test locally, but, isn't it better to do "feature testing" and run this test whenever possible, rather than having to manually enable it locally?
Nah, this is my blue moon, I just want to make sure my blue moon has a test case. Also to be clear, it's not weird for rare random things to cluster, 'cos like, what are the chances they're evenly spaced, right? Did y'all track down the culprit in each of those cases and find out what technique they're using is brittle in this way, or? I'm pretty curious now, it def didn't happen like that for me, I'm still pretty weirded out that it's a problem for me at all.
Really? I think this is pretty trivial. I would way rather this be merged as-is, or with your modifications and zero of my suggestions accepted, than spend weeks and weeks more debating this. As-is (or with your modifications), I wouldn't be able to do exactly what I want, but I'd just write:

```js
function testThatRequiresFocus(name, fn) {
  test(name, function (done) {
    if (!document.hasFocus()) this.skip();
    return fn(done);
  });
}
```

instead; it's really not that inconvenient.
@laughinghan It's not in the project's best interests to move fast and break things.

It's trivial in the sense that the changes are few. It's non-trivial in that these changes will impact many users. While it's a "bug fix", it's also going to break some tests that rely on the buggy behavior. And we're going to get more issues complaining about the broken tests, challenging why it was changed, how it was changed, and when it was changed.

Unfortunately there's not much participation in PRs like this before they are published, for many reasons; the most obvious being that it'd require users to watch this entire repo. For all the people that this change will impact, how many have I heard from? I can answer that, but I can't answer how many users will be affected by this change. Even if it's a "bug fix"--and some are sure to argue it isn't--we need to be crystal-clear with the answers to why, how, and when.

@ScottFreeCode seems to think the change is a good idea, and you are saying it's better than nothing. I'm inclined to think your proposal (or a portion thereof) is a better one--yet your use case is unusual at best, and a misuse of the feature at worst. Where's the story that we can all nod our heads in agreement on?

It's better to break tests once and be confident with the direction--instead of breaking them, realizing we made a mistake, then potentially breaking them again. But we can pull the "beta" card. 😈 I'm going to modify this PR to exhibit the behavior you suggest--and re-target it to a different branch we'll prerelease under a prerelease tag.
I don't think we should move fast and break things either! But it's been almost 9 months since I first reported this issue. Since then there's been one related ticket and no comments by additional users on either ticket. I don't think this is particularly fast, nor likely to break things for many users at all :)
Are they sure to? I think my bug report, at least, makes the case that it is.
👍
🎉 🎊 🎈 💥 😈
I'm quite time-constrained this weekend, sorry. I created a task to check the discussion on Monday. 🙇🏻
That's interesting. I didn't imagine your case, since the condition is not constant. (If you assume the condition is constant, skipping all remaining tests makes sense.)
If ...
Good point.
Maybe you can stop subsequent tests by throwing an Error in the beforeEach? I can't recall the behavior.
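If it helps, a minimal sketch for checking that behavior (the environment variable is just a stand-in); as far as I know, a "before each" hook that throws is reported as a hook failure and the remaining tests in that suite are not run:

```js
describe('aborting a suite from beforeEach', function () {
  beforeEach(function () {
    if (!process.env.PRECONDITION_MET) {   // hypothetical precondition
      throw new Error('precondition not met; aborting this suite');
    }
  });

  it('first test', function () {});
  it('second test', function () {});   // should not run if the hook throws
});
```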
Okay, and now my opinion on the behavior. Trying to think what would be the most simple and logical behavior.

You skip a test/suite to avoid running it AND to mark it as pending. Skipping a test/suite after running it is a bit counterintuitive; the only purpose would be marking it as pending, which is not very clean because you've already emitted the events for it. Therefore I think skipping from an "after" or "after each" hook shouldn't be supported.
Any opinions on deprecating `this.skip()` in "after" and "after each" hooks?
I agree that there's probably no real use case for a skip in an "after" or "after each" hook.
I wonder how we could get more opinions on this point. It's a shame we don't have a Twitter account with followers. Do we even have a Twitter account? It feels like a good platform to get opinions, make announcements, run polls...

Deprecating looks good to me; just hoping we don't miss any valid use cases. @boneskull, what do you think about my proposed behavior?
Yes, that's what this is doing now re: beforeEach, right? The new PR, anyway.
@boneskull Sorry, I didn't notice a new PR was created. 👌
Late to the party, but IMHO a skip in beforeEach should only skip the current test and not all subsequent tests. A skip in beforeEach should either skip only the current test, or skip all subsequent [before|after]Each hooks and the current test. I do agree a skip in after/afterEach doesn't make much sense.

I have a use case where we need an automated way of maintaining a list of failing tests and skipping them during test execution. Instead of updating the specs with it.skip() or xit(), it's easier to maintain this information in a single separate file. We can then implement a single beforeEach() in the root suite which checks the list of failing tests and, if this.currentTest.title is on that list, skips the test.
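A rough sketch of that approach (the file name and the shape of the list are assumptions): a single root-level "before each" that skips any test whose title appears in a separately maintained list of known failures.

```js
// Loaded as a root-level hook, e.g. from a file required before the specs.
const knownFailures = require('./known-failures.json'); // e.g. ["flaky test A", "flaky test B"]

beforeEach(function () {
  if (knownFailures.includes(this.currentTest.title)) {
    this.skip(); // skip just this test; the rest of the run continues
  }
});
```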
I don't think this is actually fixed for the ...

When I skip tests in an ..., ... As you can see, ...

To clarify, I'm skipping a test by calling `this.skip()`; see lines 671 to 673 in ade8b90.
This is a proposal for how `this.skip()` should work. Related: #2286 and #2148.

When `this.skip()` is called in a: ...

I feel like the intent of `this.skip()` is more of a "permanent" directive; if you need to temporarily disable things, you can use `it.skip()`, `describe.skip()`, etc. This is why we should tear down if we've set up. It becomes the user's responsibility to be explicit about which hooks should be skipped, if there are multiple of the same type.

My tests here are a bit crude (assertions in an "after all" hook?!), and possibly should live elsewhere. Also, we need assertions against "nested" suites, but I was about at my limit for abstracting these tests.
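For illustration (the test names are placeholders), the distinction drawn above might look like this: `it.skip()` / `describe.skip()` temporarily disable tests in the source, while `this.skip()` expresses a more "permanent" runtime condition, such as an environment that can never run the test.

```js
describe.skip('disabled while this feature is being reworked', function () {
  it('will be re-enabled later', function () {});
});

it.skip('also temporarily disabled in the source', function () {});

it('uses a browser-only API', function () {
  if (typeof document === 'undefined') {
    this.skip(); // this environment can never run the test; report it as pending
  }
  // ...assertions against the DOM...
});
```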