Optimize nextTick #195
Conversation
- Minimize the number of `postMessage`/`setTimeout` calls.
- Remove structural changes on list nodes.
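To illustrate the first goal, here is a minimal sketch (not Q's actual code) of the batching idea: queued tasks share a single scheduled flush, so N calls to `nextTick` cost one `setTimeout` round-trip instead of N. The names `queue`, `flushScheduled`, and `flush` are hypothetical.

```javascript
// Sketch: batch all queued tasks into one scheduled flush.
var queue = [];
var flushScheduled = false;

function nextTick(task) {
    queue.push(task);
    if (!flushScheduled) {
        flushScheduled = true;
        setTimeout(flush, 0); // one scheduler call for the whole batch
    }
}

function flush() {
    flushScheduled = false;
    while (queue.length) {
        var task = queue.shift();
        task(); // NOTE: if this throws, the remaining tasks are stranded
    }
}
```

The weakness discussed below follows directly from the `NOTE` line: a throwing task aborts the flush loop, and nothing is scheduled to drain the rest of the queue.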
The problem with this approach is that it breaks subsequent handlers if one throws.
It shouldn't. I use a preventive
@domenic I added a comment that should clarify the approach. Feel free to suggest a better one.
Here is a performance test: http://jsperf.com/wqgrecrereffrre/3
To put it in better perspective, in test http://jsperf.com/wqgrecrereffrre/4 I also added the list optimization alone (pull191).
Maybe test them against this version as well? I believe we were planning on replacing the current one with that, but your idea is intriguing also.
@domenic That one will fail on IE 6 and 7. It does not support
sigh. In that case, @kriskowal, thoughts?
I would like to point out that while this approach does minimize the latency of "ticks" when no exceptions are thrown, it can also increase the latency of tasks after a thrown exception. If we want to be less "optimistic", we can preemptively request n ticks, where n <= m and m is the number of tasks.
- No need for loop
- Comment grammar
With the last change, I think I resolved the problem of the latency introduced by multiple thrown exceptions. 🎱 (is this the proper use of 8ball?)
Ignore my last comment. Unfortunately, the only way to completely eliminate the increased latency after thrown exceptions is to make a tick request per task (n >= m). I created a new branch with such a "pessimistic" approach. The implementation is even simpler (https://github.com/rkatic/q/compare/more-nextTick), with no apparent drop in performance (http://jsperf.com/wqgrecrereffrre/7).
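The "pessimistic" variant described above can be sketched like this (hypothetical names, not the actual branch code): one scheduler call is made per queued task, and each scheduled flush runs at most one task, so a task that throws forfeits only its own tick while every other task already has a tick reserved.

```javascript
// Sketch: one tick request per task (n >= m).
var queue = [];

function nextTick(task) {
    queue.push(task);
    setTimeout(runOne, 0); // one scheduler call per task
}

function runOne() {
    var task = queue.shift();
    if (task) {
        task(); // a throw here cannot strand later tasks:
                // each of them has its own tick already scheduled
    }
}
```

The trade-off is the one debated in the thread: fault isolation is total, but every task pays the scheduler's round-trip cost even when nothing throws.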
After giving it some thought, I have come to the conclusion that the assumption that uncaught exceptions will be rare in production is too dangerous. It's not hard to imagine a progress listener starting to throw exceptions at high frequency, considerably delaying the resolution of many promises.
- `pending` -> `queuedTasks`
- `ticking` -> `pendingTicks`
- `maxTicking` -> `maxPendingTicks`
- `cycle` -> `usedTicks`
- `n` -> `expectedTicks`
@rkatic a very nice benchmark has been put together over in #206 by @francoisfrisch. But you've clearly put a lot of thought into the "reticking" process. Care to give us an explanation of where it currently stands, how much it optimizes for pessimism vs. optimism, etc.? Could it be made faster by being more optimistic, for example?
@domenic The problem with a naive "tick-reusage" solution is that, in cases of thrown exceptions, the subsequent tick is requested much later and sequentially. This is a problem because tasks are queued, and all tasks subsequent to a thrown exception will wait for the newly requested tick. My solution amortizes such costs. t: tick delay (~3ms on Firefox, 0-1ms on Chrome, ...)
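A rough sketch of the amortized middle ground, using the renamed variables listed earlier (`queuedTasks`, `pendingTicks`, `maxPendingTicks`), though this is a simplified illustration rather than the exact landed code: keep up to `maxPendingTicks` flushes outstanding, so a task that throws mid-flush wastes only one already-requested tick instead of forcing a fresh, sequential round-trip through the scheduler.

```javascript
// Sketch: amortized reticking with a bounded pool of outstanding ticks.
var queuedTasks = [];
var pendingTicks = 0;
var maxPendingTicks = 2;

function nextTick(task) {
    queuedTasks.push(task);
    if (pendingTicks < maxPendingTicks) {
        pendingTicks++;
        setTimeout(onTick, 0);
    }
}

function onTick() {
    pendingTicks--;
    while (queuedTasks.length) {
        // Re-arm a spare tick before running the next task, so a throw
        // below still leaves a scheduled flush to drain the remainder.
        if (pendingTicks < maxPendingTicks && queuedTasks.length > 1) {
            pendingTicks++;
            setTimeout(onTick, 0);
        }
        queuedTasks.shift()();
    }
}
```

In the happy path the first flush drains everything and the spare ticks find an empty queue; only after an exception does a spare tick do real work, which is what bounds the post-exception latency to roughly one tick delay t instead of one per failure.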
I think I’ll buy it. We can adjust as necessary, but it looks good to me.
Landed. Really wanted this for v0.9.
Thanks @rkatic !
I'm glad I could help. However, I would like to point out that the amortization algorithm is mostly relevant when the tick delay is significant, and that in the future, if the usage of
I am actually wondering if we could avoid the
OK, this is perhaps an "over-optimization", but it could still be relevant, especially for old browsers where `setTimeout` is used and promises are used during some animation. Currently, if you run `Q.when(1).then().then().then().then()`, `nextTick` will be called 13 times. With this change, that code will produce only one `postMessage`/`setTimeout` call (+1 to survive an eventual error), while still respecting the spec. Please, let me know if I am missing something.
NOTE: This pull would make pull-191 unnecessary.