Add timeout functionality #654
Conversation
By analyzing the blame information on this pull request, we identified @sotojuan, @floatdrop and @ingro to be potential reviewers
```diff
@@ -679,6 +680,20 @@ AVA automatically removes unrelated lines in stack traces, allowing you to find

 <img src="media/stack-traces.png" width="300">

+### Global timeout
+
+A global timeout can be set via `--timeout` option.
```
Suggested wording:

> A global timeout can be set via the `--timeout` option.

Shouldn't this be implemented in the API?
Hm, probably, I'll push an update.
```js
function onTimeout() {
	logger.finish();
	console.log(' ' + colors.error(figures.cross) + ' Exited because no new tests completed within last ' + cli.flags.timeout + ' of inactivity.');
```
Can remove the `of inactivity` bit I think. Should add `ms`.
`ms` can't be added, because https://github.com/sindresorhus/ava/pull/654/files#diff-0730bb7c2e8f9ea2438b52e419dd86c9R689
Ah didn't read that closely enough 😄
The watcher should follow this behavior as well.

Clearing and restarting timers can be expensive. Would be good to investigate the performance overhead. One simple solution may be to never clear the timer but track when the last test completed. Then the next timeout can be scheduled so it fires once the full timeout has elapsed since that last completion. Timers may sometimes fire a little early. Presumably this is only an issue when a very short timeout value is used though.
Could you provide any links backing this info?
There's the infamous "It sounds crazy, but disabling npm's progress bar yields a 2x npm install speed improvement for me" issue from a little while ago. See the 3.7.0 release notes and iarna/gauge@a7ab9c9. Hard to say what the actual impact is on us of course.
```js
};

Api.prototype._onTimeout = function () {
	var message = 'Exited because no new tests completed within last ' + this.options.timeout + 'ms of inactivity';
```
completed within the last
@vdemedes bump 😄
@vdemedes Can you fix the merge conflict?
Also discovered a problem with the timeout handling and its test being faulty:

```js
import test from 'ava';

test.cb(t => {
	setTimeout(t.end, 10000);
});
```

This will output the error message after 10 seconds because of the
@sindresorhus Sure, will rebase tomorrow. Will try to fix it, good catch ;)

Does the

It should. Docs here. I would love a second set of eyeballs on our benchmark tooling! 😜
I ran the benchmarks with 8d47119 and https://github.com/sindresorhus/ava/tree/timeout-no-clear. For the timeout tests I added

This PR seems to be the slowest overall. The

@novemberborn Could you please explain the behavior and code behind that
@vdemedes timers are somewhat expensive to create. Rather than replacing an existing timer whenever a test completes I propose to let the existing timer run to completion. Then if tests have completed in the meantime we create a new timer. The delay for this new timer should be the configured timeout minus the time since the last test completed. This gets us the desired behavior without creating an unnecessary number of timers. This is what the `timeout-no-clear` branch implements.
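For illustration, the no-clear approach described above could be sketched like this (all names are hypothetical; this is not the actual `timeout-no-clear` code):

```javascript
// Sketch of the "no-clear" strategy: never clear a pending timer.
// When it fires, reschedule it for the remaining time if tests have
// completed in the meantime. All names here are illustrative.
function createInactivityTimer(timeout, onTimeout) {
	var lastActivity = Date.now();

	function check() {
		var idle = Date.now() - lastActivity;

		if (idle >= timeout) {
			// No test completed within the full timeout window.
			onTimeout();
		} else {
			// Tests completed while this timer was pending; schedule a
			// new timer for the time left in the window.
			setTimeout(check, timeout - idle);
		}
	}

	setTimeout(check, timeout);

	return {
		// Called on every completed test. Cheap: just a timestamp write.
		touch: function () {
			lastActivity = Date.now();
		}
	};
}
```

The per-test cost is a single timestamp write, and at most one new timer is created per timeout window.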
Does anyone have ideas on how to fix this - #654 (comment)? The only way I see is

@sindresorhus expect that it won't exit and

@vdemedes yea, I think we'd have to explicitly kill the forks and flag any remaining tests as having timed out. Of course they could be serial async tests, in which case they wouldn't even have started, but that nuance is probably not relevant.
@novemberborn in

So we would need to calculate those intervals and take them into account. Sounds a bit over-complex for such a simple thing. I pushed a new technique, ridiculously simple: debouncing. Instead of restarting the timer after every test, this function is debounced with a 200ms interval, so that it's not fired often. I believe it should improve things a lot. It is simple and understandable, no black magic involved. Let me know what you think ;)
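As an illustration of why debouncing helps here (a hand-rolled stand-in; the PR itself uses `lodash.debounce`):

```javascript
// Hand-rolled debounce, standing in for lodash.debounce, to show the
// effect: a burst of test results triggers a single restart of the
// (comparatively expensive) inactivity timer instead of one per result.
function debounce(fn, wait) {
	var pending = null;

	return function () {
		if (pending) {
			clearTimeout(pending);
		}

		pending = setTimeout(fn, wait);
	};
}

var restarts = 0;
var restartTimer = debounce(function () {
	restarts++;
}, 200);

// Simulate 1000 test results arriving almost at once: the inactivity
// timer is restarted only once, 200ms after the burst ends.
for (var i = 0; i < 1000; i++) {
	restartTimer();
}
```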
```diff
@@ -180,6 +186,25 @@ Api.prototype._prefixTitle = function (file) {
 	return prefix;
 };

+Api.prototype._restartTimer = debounce(function () {
```
There is no need for this function using `debounce`. Just do the following in the constructor:

```js
if (this.options.timeout) {
	this._onTimeout = debounce(this._onTimeout, ms(this.options.timeout));
} else {
	this._onTimeout = noop;
}
```

I think that will be easier to understand (and faster) than the current implementation or the `no-clear` branch.
Ha of course. It implements the approach I was advocating. Missed it due to context switching 😊
Also, you shouldn't use `debounce` on a prototype method, as it will debounce across every instance and only call on the most recent.
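A quick demonstration of that pitfall (`Thing` is a hypothetical class, not AVA code): the debounce state lives in one closure on the prototype, so all instances share it.

```javascript
// Minimal debounce, used only to demonstrate the shared-state pitfall.
function debounce(fn, wait) {
	var pending = null;

	return function () {
		var self = this;

		if (pending) {
			clearTimeout(pending);
		}

		// `pending` lives in this closure, so it is shared by everyone
		// calling the one debounced function -- including every instance
		// that inherits it from the prototype.
		pending = setTimeout(function () {
			fn.call(self);
		}, wait);
	};
}

// Hypothetical class, for illustration only.
function Thing(name) {
	this.name = name;
}

var calls = [];

// Debounced on the prototype: one shared timer for all instances.
Thing.prototype.ping = debounce(function () {
	calls.push(this.name);
}, 50);

var a = new Thing('a');
var b = new Thing('b');

a.ping();
b.ping(); // cancels a's pending call; only 'b' ever fires
```

Debouncing per instance (e.g. in the constructor, as suggested above) avoids this, since each instance then gets its own closure.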
The point of the latest update is to create fewer timers. That's why `debounce` should be on `_restartTimer()`.

> Also, you shouldn't use debounce on a prototype method, as it will debounce across every instance and only call on the most recent.

Agree.
Even with

I've raised an issue: sindresorhus/debounce#9
This is not a problem, because we explicitly call
@vdemedes I think you can use

It will be a problem with the watcher. If we can't get cancel support upstream we'd need to track some state in the debouncer callback so it doesn't abort tests from newer test runs.
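One way that state tracking could look (a sketch with illustrative names, not the actual watcher code): tag each test run with an id and have the pending timeout callback bail out if a newer run has started.

```javascript
// Sketch: guard a pending timeout callback against newer test runs.
// All names are illustrative.
var currentRun = 0;

function scheduleTimeout(onTimeout, timeout) {
	var runId = ++currentRun;

	setTimeout(function () {
		if (runId !== currentRun) {
			// A newer run started while this timeout was pending, so it
			// is stale; ignore it instead of aborting the new run.
			return;
		}

		onTimeout();
	}, timeout);
}

var timedOut = [];
scheduleTimeout(function () {
	timedOut.push('first');
}, 50);

// Starting a second run invalidates the first run's pending timeout.
scheduleTimeout(function () {
	timedOut.push('second');
}, 50);
```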
In that case we have 3 choices:

I'm for 3.
PR updated. I re-implemented stats calculation in Runner, because it returned zeroed stats when a worker exited early. I squashed all the timeout-related commits and left the runner-related commit "outside". I'd like to keep it that way to maintain a clear history.
```diff
@@ -13,8 +13,10 @@ var commonPathPrefix = require('common-path-prefix');
 var resolveCwd = require('resolve-cwd');
 var uniqueTempDir = require('unique-temp-dir');
 var findCacheDir = require('find-cache-dir');
+var debounce = require('lodash.debounce');
```
How does that compare to installing `lodash` and using the below?

```js
var debounce = require('lodash/fp/debounce');
```
`lodash.debounce` contains only the `debounce()` function, while the `lodash` module contains all of them.
LGTM

@vdemedes #654 (comment)

Updated ;)

Yes - ship

Why is ae45752 in this PR?

@jamestalmage Explained in #654 (comment)
Whoah that's a young DiCaprio 😲

I'm confused by ae45752. These counts are already computed in the
Yes, exactly! When the test worker needed to exit before tests completed, all stats were zero. So I implemented "on-demand" calculation of stats. @sindresorhus Thanks :)

Can we remove the "on-test-complete" stats calculation then?

@novemberborn Yes, definitely. I guess I forgot to remove it myself.
Follow-up to #654. No longer collecting stats for every test that completes. Explicitly return `null` when no exclusive tests could be run.
@vdemedes see #728 😄
Is there a way to see which tests exactly timed out?
This PR fixes #8 and #171.

Timeout can be set via the `--timeout` or `-T` option. Values can be either numbers or set in a human-readable form. The timer resets after each completed test, so if there were 2 seconds (for example) of inactivity (no new test results), AVA exits with the following error message:
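For illustration, converting such a value to milliseconds might look like this (a minimal hand-rolled sketch handling only a few units; the PR presumably relies on the `ms` module, which an earlier review comment references via `ms(this.options.timeout)`):

```javascript
// Minimal sketch of turning a --timeout value into milliseconds.
// Hand-rolled for illustration only; it supports plain numbers plus
// the ms/s/m suffixes, unlike a full duration parser.
function parseTimeout(value) {
	if (typeof value === 'number') {
		return value;
	}

	var match = /^(\d+(?:\.\d+)?)\s*(ms|s|m)?$/.exec(String(value).trim());

	if (!match) {
		throw new Error('Invalid timeout value: ' + value);
	}

	var amount = parseFloat(match[1]);
	var unit = match[2] || 'ms';
	var factors = {ms: 1, s: 1000, m: 60 * 1000};

	return amount * factors[unit];
}
```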
P.S. `test/fixture/long-running.js` was renamed to `test/fixture/slow-exit.js`, because the new name is more appropriate for the represented test case.