Compile performance metrics on direct vs. queued execution #3
Here's some performance data I've gathered, based on the script located under "perf/subscription.js", executed on my rather old 2009 MacBook:
Obviously it's slower to go through the microtask queue than to perform a direct function call, but it's hard for me to tell what to make of this. Since we pay the queuing cost only once per sequence, the averaged cost per delivery is inversely proportional to the size of the sequence. Whether we view the cost as significant or not seems to depend on what we choose to be a typical sequence size. @jhusain what do you think?
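A minimal sketch of the amortization being described, assuming a hypothetical `deliverSequence` helper (not part of this repo): the microtask hop is paid once per sequence, while each delivery inside it is an ordinary function call.

```js
// Hypothetical sketch (not the repo's API): the microtask hop is paid once
// per sequence, so the averaged cost per delivery shrinks as the sequence grows.
function deliverSequence(values, observer) {
  Promise.resolve().then(() => {   // paid once per sequence
    for (const value of values) {
      observer.next(value);        // paid per item: a direct call
    }
    observer.complete();
  });
}

deliverSequence([1, 2, 3], {
  next(v) { console.log(v); },
  complete() { console.log("done"); },
});
```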
Here we pay the queuing cost only once per subscription because there is one observable. All it takes is one flatMap on this observable that retrieves data from a data store with a cache, and now we schedule once per notification.
That's a common misconception, but it's not true. Once you're in the microtask queue, you've already paid the cost, and it won't be paid again. In other words, the microtask queue is implemented roughly as:

```js
while (queue.length > 0) {
  const task = queue.shift(); // FIFO: take the oldest queued task
  task();
}
```

If
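A small sketch of the point above, assuming a browser- or Node-style event loop: a microtask scheduled while the queue is draining runs in the same drain pass, before control returns to the event loop.

```js
// Sketch only: "inner" is enqueued while the queue is already draining,
// so it runs in the same pass; the timeout runs only after the queue is empty.
const order = [];

Promise.resolve().then(() => {
  order.push("outer");
  Promise.resolve().then(() => order.push("inner"));
});

setTimeout(() => {
  order.push("timeout");
  console.log(order.join(", ")); // outer, inner, timeout
}, 0);
```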
The cost I'm referring to here is the cost of unwinding the stack and winding it up again vs. just building up the stack. If the overhead here is indeed small, I'm open to asynchrony all the time. The acid test would be whether the scheduling is visible to the naked eye in a mouse drag or other gesture, for example. If the scheduling introduces any latency, it would be a serious problem, as event composition is one of the key scenarios. Should be easy to test.
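A rough sketch of that test, assuming a browser environment (performance.now and mousemove are not part of this repo's code): measure how much later a microtask-queued delivery lands than the direct call inside the event handler.

```js
// Illustrative only: compares the timestamp of the direct call with the
// timestamp observed after one microtask hop, during a drag-like gesture.
document.addEventListener("mousemove", () => {
  const direct = performance.now();
  Promise.resolve().then(() => {
    const queued = performance.now();
    // Deltas well under a frame (~16 ms) should not be visible in a drag.
    console.log(`microtask delay: ${(queued - direct).toFixed(3)} ms`);
  });
});
```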
We can't really do asynchronous delivery (i.e., per-iteration) because the data flow is two-way. Data for the iteration flows from the observable to the observer, and a completion value flows back down to the observable. That may be a thrown exception ("throw") or it may be a normal return ("return").
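A sketch of the two-way flow being described, using a hypothetical generator-style observer shape (not necessarily this proposal's exact API): the producer finds out synchronously, from each delivery's return value or exception, whether to keep pushing.

```js
// Hypothetical shape: observer.next returns { done } and may throw.
// Neither signal can flow back if each delivery is queued asynchronously.
function push(observer, values) {
  for (const value of values) {
    const result = observer.next(value); // may throw back into the producer
    if (result && result.done) {
      return;                            // "stop" flows back synchronously
    }
  }
  if (observer.return) observer.return();
}
```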
Use d8 to test performance differences between delivering data to consumers using the microtask queue versus direct function calls. Ideally, we can get an average measurement for the time difference per delivery.
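A hedged sketch of such a benchmark, runnable under d8 or Node (the iteration count and timing approach are assumptions, not taken from perf/subscription.js): it compares N direct deliveries against N deliveries that each hop through the microtask queue.

```js
// Sketch only: averages the per-delivery cost of direct calls vs. a
// Promise-based microtask hop per delivery.
const N = 1e5;
function noop(x) { return x; }

// Direct: N synchronous deliveries.
const t0 = Date.now();
for (let i = 0; i < N; i++) noop(i);
const directMs = Date.now() - t0;

// Queued: N deliveries, each through the microtask queue.
const t1 = Date.now();
let p = Promise.resolve();
for (let i = 0; i < N; i++) p = p.then(() => noop(i));
p.then(() => {
  const queuedMs = Date.now() - t1;
  console.log(`direct: ${(directMs * 1000) / N} us/delivery`);
  console.log(`queued: ${(queuedMs * 1000) / N} us/delivery`);
});
```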