Change value to values to improve performance #112
Comments
I'd like to point out that the TypeScript compiler output used in the article will not necessarily reflect the performance of actual implementations. With that said, I decided to test actual implementations with the code from the blog post:

async function* asyncRange(from, to) {
for (let i = from; i < to; i++) {
yield i;
}
}
async function main() {
const start = Date.now()
// Prevent dead code elimination although I doubt
// there's any yet
let total = 0
for await (const i of asyncRange(0, 550000)) {
total += i
}
const end = Date.now()
console.log(`Total: ${ total }`)
console.log(`Time Taken: ${ end - start }`)
}
main()

On the current implementations in Chrome (Dev Channel) and Firefox (Nightly), the results are, well, pretty bad: Chrome took 20s to complete and Firefox was similarly bad at 15s. However, this sort of thing is suspicious to me, so I also tried running it in V8's test shell.

So what's the lesson here? Basically, the performance of async iteration is not fundamentally bad (the engines haven't even gotten around to the really good optimizations yet!). In Chrome's case, first runs of code tend to take a slow interpreted path, which is really bad for performance, and I'd expect similar performance to be achievable in the other engines as well. Ultimately, if implementors can't get the performance to anything reasonable then changes might need to be made, but I'd imagine this should be possible.

@caitp, @arai-a, @gskachkov Any thoughts on this performance issue? |
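For context on where the fixed cost comes from, here is a rough sketch (a hypothetical `forAwaitOf` helper, simplified from the actual spec steps) of what `for await (const i of it) { ... }` does per iteration: every value requires awaiting at least one promise, so per-value async iteration carries a minimum of one microtask tick of overhead per value.

```javascript
// Simplified sketch of the for-await-of protocol (hypothetical helper,
// not the exact spec algorithm): each value costs at least one awaited
// promise, i.e. at least one microtask tick.
async function forAwaitOf(asyncIterable, body) {
  const it = asyncIterable[Symbol.asyncIterator]();
  while (true) {
    const result = await it.next(); // one await per value
    if (result.done) break;
    body(result.value);
  }
}

// Usage: sum 0..4 with the hand-rolled loop.
async function* range(from, to) {
  for (let i = from; i < to; i++) yield i;
}

let total = 0;
forAwaitOf(range(0, 5), (v) => { total += v; })
  .then(() => console.log(total)); // logs 10
```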
@Jamesernator
The result is 2.6 seconds, which is a big difference in comparison to Google Chrome.
We landed async iterators recently (the day before yesterday), so is it possible for you to double-check my result, and my slightly modified script, on the latest WebKit nightly in your environment? |
Indeed, not much optimization has been applied to async generators and for-await-of on Firefox yet. Filed https://bugzilla.mozilla.org/show_bug.cgi?id=1393712 |
It's hard for me to see how an async generator would bunch things up into batches on its own. Throughout the various design iterations of this proposal, alternatives along these lines have been considered. I think the right thing to do is for implementations to work further on optimizations. For one, in V8, @caitp is currently working on improving async iteration performance. |
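As a userland approximation of the `values` idea, an async generator can already yield arrays (batches) and let the consumer iterate each batch synchronously. This is only a hedged sketch, not anything from the proposal itself; the batch size of 1000 and the names `asyncRangeBatched`/`sum` are arbitrary choices.

```javascript
// Sketch: batching in userland. One awaited promise per batch instead
// of one per value; the inner loop over each batch is synchronous.
async function* asyncRangeBatched(from, to, batchSize = 1000) {
  let batch = [];
  for (let i = from; i < to; i++) {
    batch.push(i);
    if (batch.length === batchSize) {
      yield batch; // one await per 1000 values on the consumer side
      batch = [];
    }
  }
  if (batch.length > 0) yield batch;
}

async function sum(from, to) {
  let total = 0;
  for await (const batch of asyncRangeBatched(from, to)) {
    for (const i of batch) total += i; // synchronous inner loop
  }
  return total;
}

sum(0, 550000).then((t) => console.log(t)); // logs 151249725000
```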
I'm guessing (and I mean guessing, I don't have much to base this on) that the limiting factor is how fast the JS engine can complete one async task, so a test would be, for example:

async function iterate(v) {
  console.time('iterate');
  for (let i = 0; i < v; i++) {
    await Promise.resolve(); // comment out this line for the sync version
  }
  console.timeEnd('iterate');
}
iterate(550000);
// in Node v8.1.2:
// 216.198ms (with await)
// 11.389ms (without await)

So there is a difference (obviously), but maybe it's not as bad as the article implies. |
I would like to note that the article actually starts from a wrong assumption: that streams are slow. I still have to check whether our implementation of async iterators suffers from the same problems. If any of you have time, please go ahead; I'll be blocked until the 4th of September. |
I put together a prototype of some optimizations we've been looking at. On the very simple microbenchmark above, with 550000 iterations it typically gets a 10-20ms speedup. With 5500000 iterations, it is significantly faster (many seconds faster). This approach still needs a lot of work and tuning, but it seems like a good start towards improving life for async/await users. |
@caitp do you have a patch that I can apply to Node to test this? I would like to do some testing myself. IMHO the greatest value for the Node community would be getting this extremely fast with streams. I'm traveling next week, so there is no hurry. |
It's not ready for showtime yet. |
Also, it currently only affects await in async functions, not async generators. |
I know that just about everyone speaking in this thread already knows, but for everyone else reading in the future: Despite referring to Node streams, the article also does not address the future WHATWG Streams standard, which are inspired by Node streams and which eventually would use async iterators. WHATWG streams read or write chunks and await-yield chunks, and those chunks would themselves be collections of values. This appears to fulfill the explicit batching for which the article asks. WHATWG streams would serve as a standard performant specialization of async iterators, in the same way that arrays serve as a standard performant specialization of sync iterators. |
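To make the WHATWG side concrete, here is a hedged sketch of consuming a WHATWG `ReadableStream` whose chunks are batches of values. It assumes a runtime where `ReadableStream` is a global and is async iterable (e.g. Node 18+); that is the direction the standard points, but it had not shipped everywhere at the time of this thread.

```javascript
// Sketch (assumes a global, async-iterable ReadableStream, e.g. Node
// 18+): each awaited chunk is an array of values, so batching happens
// at the stream layer.
async function sumStream(stream) {
  let total = 0;
  for await (const chunk of stream) {   // one await per chunk
    for (const n of chunk) total += n;  // synchronous inner loop
  }
  return total;
}

const stream = new ReadableStream({
  start(controller) {
    controller.enqueue([1, 2, 3]);
    controller.enqueue([4, 5]);
    controller.close();
  },
});

sumStream(stream).then((t) => console.log(t)); // logs 15
```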
Article author here. Glad my post prompted a discussion on the proposal, that was really the goal all along :) Based on all the responses, I no longer think my proposal (values instead of value, or alternatively yielding an iterable from an async generator instead of a value) makes sense. My use case was reading a big file at server startup, in which case sync is fine. If for-await-of can be made nearly as fast as the synchronous version, that would be amazing! |
Based on this article.
This seems like a good idea for consumers, but I'm not sure how it would work for producers. Would it require yielding an iterable from an async generator, for example?