This repository has been archived by the owner on Jan 26, 2022. It is now read-only.

Change value to values to improve performance #112

Closed
mariusGundersen opened this issue Aug 25, 2017 · 13 comments

Comments

@mariusGundersen

Based on this article.

This seems like a good idea for consumers, not sure how it would be for producers? Would it require yielding an iterator in an async generator for example?
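A rough sketch of what the proposed protocol could look like (this shape is my reading of the article, not anything specified): `next()` resolves to a `{ values, done }` pair carrying a batch, so the consumer pays one `await` per batch rather than per element. The `batchedRange` iterator here is purely illustrative.

```javascript
// Hypothetical sketch of the article's proposal: next() resolves to
// { values, done } instead of { value, done }. Not part of any spec.
function batchedRange(from, to, batchSize = 1000) {
  let i = from;
  return {
    async next() {
      if (i >= to) return { values: [], done: true };
      const values = [];
      while (i < to && values.length < batchSize) values.push(i++);
      return { values, done: false };
    },
  };
}

async function sumValues(iter) {
  let total = 0;
  // One await per batch; the inner loop is fully synchronous.
  for (let r = await iter.next(); !r.done; r = await iter.next()) {
    for (const v of r.values) total += v;
  }
  return total;
}
```

Note that `for await (const v of …)` could not consume this shape directly; either the loop or the iteration protocol would have to change, which is part of what this issue is asking.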

@Jamesernator

Jamesernator commented Aug 25, 2017

I'd like to point out that TypeScript's compiled version, which is what the article measures, will not necessarily reflect an actual implementation's performance on this matter.

With that said I decided to test it on actual implementations with the code from the blog post:

async function* asyncRange(from, to) {
  for (let i = from; i < to; i++) {
    yield i;
  }
}

async function main() {
  const start = Date.now()
  // Prevent dead code elimination although I doubt
  // there's any yet
  let total = 0
  for await (const i of asyncRange(0, 550000)) {
    total += i
  }
  const end = Date.now()
  console.log(`Total: ${ total }`)
  console.log(`Time Taken: ${ end - start }`)
}

main()

I ran it on the current implementations in Chrome (Dev Channel) and Firefox (Nightly), and the results are pretty bad: Chrome took 20 s to complete it, and Firefox was similarly slow at 15 s.

However, this sort of result was suspicious to me, so I tried running it in V8's test shell d8 and unsurprisingly got around 500 ms, which is tremendously faster than in the browser. Following this suspicion, I checked in Chrome again after refreshing the page, and it then took 500 ms just like d8 did, so clearly Chrome's optimizer hadn't run the first time around.


So what's the lesson here? Basically, the performance of async iteration is not fundamentally bad (the engines haven't even gotten around to the really good optimizations yet!). In Chrome's case, first runs of code tend to take a slow interpreted path, which is really bad for performance. I'd expect similar performance to be possible in the other engines as well.
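The warm-up effect described above is easy to surface by timing several consecutive runs of the same workload. This harness is a minimal sketch (the run count and range size are arbitrary choices, not from the thread):

```javascript
// Minimal warm-up benchmark sketch: time the same async-iteration
// workload several times so a slow, unoptimized first run stands out.
async function* asyncRange(from, to) {
  for (let i = from; i < to; i++) {
    yield i;
  }
}

async function bench(runs = 3) {
  const times = [];
  for (let r = 0; r < runs; r++) {
    const start = Date.now();
    let total = 0;
    for await (const i of asyncRange(0, 100000)) {
      total += i;
    }
    times.push(Date.now() - start);
  }
  // Later runs are typically faster once the optimizing tiers kick in.
  return times;
}
```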

Ultimately if implementors can't get the performance to anything reasonable then changes might need to be made, but I'd imagine this should be possible.

@caitp , @arai-a , @gskachkov Any thoughts on this performance issue?

@gskachkov

gskachkov commented Aug 25, 2017

@Jamesernator
I've tested in WebKit Nightly, with a slightly modified script because we have not yet implemented for await:

async function* asyncRange(from, to) {
  for (let i = from; i < to; i++) {
    yield i;
  }
}

async function main() {
  const start = Date.now()
  // Prevent dead code elimination although I doubt
  // there's any yet
  let total = 0;
  const iter = asyncRange(0, 550000); 
  while (true) {
    const result = await iter.next();
    if (result.done) break;
    total += result.value;
  }
  const end = Date.now()
  console.log(`Total: ${ total }`)
  console.log(`Time Taken: ${ end - start }`)
}

main()

The result is 2.6 seconds:

[Log] Total: 151249725000
[Log] Time Taken: 2639

There is a big difference compared to Google Chrome:

Total: 151249725000
Time Taken: 32188

We landed async iterators recently (the day before yesterday), so could you double-check my result and my slightly modified script on the latest WebKit nightly in your environment?

@arai-a

arai-a commented Aug 25, 2017

Indeed, not much optimization has been applied to async generators and for-await-of in Firefox yet.

filed https://bugzilla.mozilla.org/show_bug.cgi?id=1393712
I'll take a look shortly.

@littledan

It's hard for me to see how an async generator would bunch things up in the values property (unless you are expected to pass an iterable to yield, or it's only populated by yield*). Explicit batching, as the author suggests, seems like a reasonable solution for some cases and is already possible.
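Explicit batching in the sense described here needs no protocol change: the generator simply yields arrays, so only one `await` is paid per batch. A minimal sketch (the generator and batch size are illustrative):

```javascript
// Explicit batching with today's protocol: yield arrays of values
// instead of individual values. One microtask turn per batch.
async function* batches(from, to, size = 1000) {
  for (let i = from; i < to; i += size) {
    yield Array.from({ length: Math.min(size, to - i) }, (_, k) => i + k);
  }
}

async function sumBatched() {
  let total = 0;
  for await (const batch of batches(0, 550000)) {
    for (const v of batch) total += v; // synchronous inner loop
  }
  return total;
}
```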

Throughout the various design iterations of this proposal, a for-await-of loop on an async generator that yields non-Promises would always take a microtask queue turn. This is for the same reason that awaiting a non-Promise takes a microtask queue turn: if we sometimes made it synchronous, programmers could depend on it in unstable ways. The values proposal purposely breaks that guarantee, on the hypothesis that Promises are not optimizable.
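The microtask-turn guarantee described above is observable: awaiting a plain number still suspends the function until synchronous code finishes, so ordering stays deterministic. A small illustration (the function names are made up):

```javascript
// Awaiting a non-Promise still defers resumption to the microtask
// queue, so the observed ordering below is guaranteed by spec.
const order = [];

async function f() {
  order.push('before await');
  await 42; // not a Promise, but still takes a microtask turn
  order.push('after await');
}

f();
order.push('sync code after f()');
// Resumption happens only after all synchronous code completes, so
// 'after await' is always recorded last.
```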

I think the right thing to do is for implementations to work further on optimizations. For one, in V8, @caitp is currently working on improving async iteration performance.

@mariusGundersen

I'm guessing (and I mean guessing, I don't have much to base this on) that the limiting factor is how fast the js engine can complete one async task, so a test would be for example:

async function iterate(v){
  console.time('iterate');
  for(let i=0; i<v; i++){
    await Promise.resolve(); //comment out this line for sync iterator
  }
  console.timeEnd('iterate');
}
iterate(550000);
// in Node v8.1.2
// 216.198ms (with await)
//  11.389ms (without await)

So there is a difference (obviously), but maybe it's not as bad as the article implies

@mcollina

I would like to note that the article starts from a wrong assumption: that streams are slow.
https://github.com/mafintosh/csv-parser is as fast as the python example.

I still have to check whether our implementation of async iterators suffers from the same problems. If any of you have time, please go ahead. I'll be blocked until the 4th of September.


@caitp

caitp commented Aug 26, 2017

I put together a prototype of some optimizations we've been looking at.

In the very simple microbenchmark above, with 550000 iterations it typically gets a 10-20 ms speedup. With 5500000 iterations, it is significantly faster (many seconds faster).

This approach still needs a lot of work/tuning, but it seems like a good start towards improving life for async/await users.

@mcollina

@caitp do you have a patch that I can apply to Node to test this? I would like to do some testing myself.

IMHO the best value for the Node community is if we get this extremely fast with streams.

I'm traveling next week (so there is no hurry).

@caitp

caitp commented Aug 26, 2017

It's not ready for showtime yet.

@caitp

caitp commented Aug 26, 2017

Also, currently it only affects await in async functions, not async generators.

@js-choi

js-choi commented Aug 27, 2017

I know that just about everyone speaking in this thread already knows, but for everyone else reading in the future: despite referring to Node streams, the article does not address the future WHATWG Streams standard, which is inspired by Node streams and which would eventually use async iterators. WHATWG streams read and write chunks, and those chunks would themselves be collections of values. This appears to fulfill the explicit batching the article asks for.

WHATWG streams would serve as a standard performant specialization of async iterators, in the same way that arrays serve as a standard performant specialization of sync iterators.
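For readers unfamiliar with the API: a WHATWG `ReadableStream` whose chunks are arrays already gives the chunk-level batching described here. A hedged sketch, assuming a runtime with a global `ReadableStream` (modern browsers, or Node 18+); the stream contents are made up:

```javascript
// Consuming a WHATWG ReadableStream whose chunks are arrays of values.
// Each read() resolves one chunk; iterating the chunk is synchronous.
function arrayChunkStream() {
  return new ReadableStream({
    start(controller) {
      controller.enqueue([1, 2, 3]);
      controller.enqueue([4, 5]);
      controller.close();
    },
  });
}

async function totalOf(stream) {
  const reader = stream.getReader();
  let sum = 0;
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    for (const v of value) sum += v; // one await amortized over the chunk
  }
  return sum;
}
```

The explicit `getReader()` loop is used here because async iteration of `ReadableStream` itself (`for await (const chunk of stream)`) is not yet available in every engine.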

@danvk

danvk commented Aug 28, 2017

Article author here. Glad my post prompted a discussion on the proposal, that was really the goal all along :)

Based on all the responses, I no longer think my proposal (values instead of value, or alternatively yielding an iterable from an async generator instead of a value) makes sense. My use case was reading a big file at server startup, in which case sync is fine.

If for-await-of can be made nearly as fast as the synchronous version, that would be amazing!

@domenic domenic closed this as completed Oct 24, 2017