
Compile performance metrics on direct vs. queued execution #3

Closed
zenparsing opened this issue May 15, 2015 · 5 comments

Comments

@zenparsing
Member

Use d8 to test performance differences between delivering data to consumers using the microtask queue versus direct function calls. Ideally, we can get an average measurement for the time difference per delivery.
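A minimal sketch of such a benchmark (not the actual perf/subscription.js script; the shape of the measurement loop is an assumption) that runs under d8 or Node, comparing direct calls against Promise-based microtask delivery:

```javascript
// Direct delivery: the consumer callback is invoked synchronously.
function benchDirect(deliveries) {
  let received = 0;
  const start = Date.now();
  for (let i = 0; i < deliveries; i++) {
    ((value) => { received += value; })(1); // plain function call
  }
  return { received, ms: Date.now() - start };
}

// Queued delivery: each value reaches the consumer via the microtask queue.
async function benchQueued(deliveries) {
  let received = 0;
  const start = Date.now();
  for (let i = 0; i < deliveries; i++) {
    await Promise.resolve(1).then((value) => { received += value; });
  }
  return { received, ms: Date.now() - start };
}

benchQueued(100000).then((queued) => {
  const direct = benchDirect(100000);
  console.log(`direct: ${direct.ms}ms, queued: ${queued.ms}ms`);
});
```

Dividing the time difference by the delivery count would give the average per-delivery overhead.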

@zenparsing
Member Author

Here's some performance data I've gathered, based on the script located under "perf/subscription.js", executed on my rather old 2009 MacBook:

[Microtask subscription]
Subscriptions: 100000
Sequence Size: 10
Deliveries: 1000000
Time: 8423ms

[Synchronous subscription]
Subscriptions: 100000
Sequence Size: 10
Deliveries: 1000000
Time: 2774ms

Delta: 5649ms
Delta/subscription: 0.05649ms
Delta/delivery: 0.005649ms

Obviously it's slower to go through the microtask queue than to perform a direct function call, but it's hard for me to tell what to make of this.

Since we pay the queuing cost only once per sequence, the averaged cost per delivery is inversely proportional to the size of the sequence. Whether we view the cost as significant or not seems to depend on what we choose to be a typical sequence size.
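The amortization arithmetic can be made explicit with the measured numbers above, assuming one queue hop per subscription:

```javascript
// 5649ms of total queuing overhead spread over 100000 subscriptions
// gives ~0.0565ms of overhead per subscription; per delivery, that
// overhead shrinks in proportion to the sequence size.
const deltaPerSubscription = 5649 / 100000; // ms, from the data above
for (const sequenceSize of [1, 10, 100]) {
  const perDelivery = deltaPerSubscription / sequenceSize;
  console.log(`size ${sequenceSize}: ${perDelivery.toFixed(5)}ms/delivery`);
}
```

At the measured sequence size of 10 this reproduces the 0.005649ms/delivery figure; at size 100 it would drop another order of magnitude.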

@jhusain what do you think?

@jhusain
Collaborator

jhusain commented May 20, 2015

Here we pay the queuing cost only once per subscription because there is one observable. All it takes is one flatMap on this observable that retrieves data from a data store with a cache, and now we have a schedule per notification.
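The pattern described can be sketched with promises standing in for the inner observables; `fetchCached` is a hypothetical store lookup, and even a cache hit delivers its value one microtask later:

```javascript
// Each notification is mapped through a cached async fetch, so each
// one gets its own trip through the microtask queue.
const cache = new Map([[1, 'a'], [2, 'b'], [3, 'c']]);
function fetchCached(key) {
  // Hypothetical: cached or not, the value arrives a microtask later.
  return Promise.resolve(cache.get(key));
}

const order = [];
for (const key of [1, 2, 3]) {
  fetchCached(key).then((value) => order.push(value)); // one schedule each
}
order.push('sync code runs first');
Promise.resolve().then(() => console.log(order.join(',')));
// logs: sync code runs first,a,b,c
```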

@domenic
Member

domenic commented May 26, 2015

That's a common misconception. Once you're in the microtask queue, you've already paid the cost, and it won't be paid again. In other words, the microtask queue is implemented as

while (queue.length > 0) {
  const task = queue.shift(); // FIFO: run the oldest queued task first
  task();
}

If a task throws an exception, you might pay a hit, as you have to back out and go back through the queue again. But the cost for a single microtask subscription is the same as for multiple: it's the cost of letting the rest of the synchronous code scheduled for this turn run first.
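The point about draining can be demonstrated directly; a small sketch showing that however many microtasks are queued, they all run in the same turn, after the synchronous code finishes once:

```javascript
// Three queued microtasks drain back-to-back in one checkpoint;
// synchronous code only yields once, not once per task.
const log = [];
for (let i = 0; i < 3; i++) {
  queueMicrotask(() => log.push(`task ${i}`));
}
log.push('sync done');
queueMicrotask(() => console.log(log.join(' | ')));
// logs: sync done | task 0 | task 1 | task 2
```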

@jhusain
Collaborator

jhusain commented May 26, 2015

The cost I'm referring to here is the cost of unwinding the stack and winding it up again vs. just building up the stack. If the overhead here is indeed small, I'm open to asynchrony all the time. The acid test would be whether the scheduling is visible to the naked eye in a mouse drag or another gesture, for example. If the scheduling introduces any latency, it would be a serious problem, as event composition is one of the key scenarios. Should be easy to test.

@zenparsing
Member Author

We can't really do asynchronous delivery (i.e. per-iteration) because the data flow is two-way. Data for the iteration flows from the observable to the observer, and a completion value flows back down to the observable. That may be a thrown exception ("throw"), or it may be { done: true } ("return"), or it may just carry some value (a backpressure signal, for instance). But we need to maintain the current stack in order to receive that data from the observer.
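The two-way flow can be sketched as follows; the API shapes here are illustrative assumptions, not the actual proposal:

```javascript
// The producer pushes a value down and synchronously receives a
// completion signal back on the same stack. With per-iteration
// asynchronous delivery, that return path would be gone.
function produce(values, observer) {
  for (const value of values) {
    const result = observer.next(value);       // flows down: data
    if (result && result.done) return result;  // flows back: completion
  }
  return { done: true };
}

const seen = [];
const result = produce([1, 2, 3, 4], {
  next(v) {
    seen.push(v);
    if (v >= 2) return { done: true }; // observer stops the producer
  }
});
console.log(seen, result); // [ 1, 2 ] { done: true }
```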

@zenparsing zenparsing changed the title Compile performance metrics on direct vs. microtask execution Compile performance metrics on direct vs. queued execution May 29, 2015