
Speeding up co by redesigning with OOP - thoughts? #107

Closed
hiddentao opened this issue Apr 10, 2014 · 15 comments

Comments

@hiddentao

I've been slowly working on a rewrite of co using OOP, see https://github.com/hiddentao/node-generator-perf-tests/tree/master/co-speedup.

I've been able to increase execution speed across nearly all the different types of yieldables, and particularly so for when yielding promises.

There is some redundancy in the new version - it was necessary for performance optimisation. In any case, I'd be interested to hear your thoughts on it.
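
The gist of the redesign can be sketched as follows (hypothetical names, not the actual co-speedup code): a closure-style runner allocates fresh inner functions on every call, while a prototype-based runner shares one method across all runs, giving the VM a single stable function to optimise. This toy version only echoes the yielded value straight back in, where real co would resolve thunks/promises first.

```javascript
// Hypothetical sketch (not the actual co-speedup code): a synchronous
// "runner" written both ways, driving a generator to completion.

// Closure style: each call allocates a fresh `next` function.
function runClosure(makeGen) {
  var it = makeGen();
  function next(value) {
    var step = it.next(value);
    return step.done ? step.value : next(step.value);
  }
  return next();
}

// OOP style: the method lives on the prototype and is shared across
// every run, so the VM sees one function to optimise, not many.
function Runner(makeGen) {
  this.it = makeGen();
}
Runner.prototype.run = function (value) {
  var step = this.it.next(value);
  return step.done ? step.value : this.run(step.value);
};

function* demo() {
  var a = yield 20;   // resumed with the yielded value itself (toy behaviour)
  var b = yield a + 1;
  return b + 21;
}
```

Both styles compute the same result (`runClosure(demo)` and `new Runner(demo).run()` both return 42); the difference is purely in allocation and optimisation behaviour.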

@yanickrochon

I'd be interested to see how your implementation does against co's own tests. (Unless you already tested and everything is green?)

I have quickly looked at the implementation and it seems promising.

@jonathanong
Contributor

cool. i think co is pretty stable now so it's time to talk about performance.

do you have a performance comparison with bluebird?

@hiddentao
Author

I did a comparison earlier. You can find the test at https://github.com/hiddentao/node-generator-perf-tests/tree/master/co-vs-bluebird

Note that bluebird's coroutine isn't as flexible as co - it can't handle thunks or objects. You can't even yield generators (one has to use generator delegation - yield*). Thus the test only compares the yielding of promises.
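
The delegation distinction matters here. In plain JavaScript (independent of either library), `yield*` runs an inner generator inline, so its yields surface through the outer generator - which is what bluebird requires, whereas co will also accept a plain `yield gen()`:

```javascript
// Plain-JS illustration of generator delegation (yield*), driven by hand.
function* inner() {
  var x = yield 1;   // this yield surfaces through the outer generator
  return x + 1;
}

function* outer() {
  var result = yield* inner(); // delegation: evaluates to inner's return value
  return result;
}

var it = outer();
var step = it.next();    // { value: 1, done: false } -- inner's yield surfaces
var end = it.next(41);   // { value: 42, done: true } -- inner returned 41 + 1
```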

The results of the above are:

Bluebird-Promise.coroutine (prepared) x 16.27 ops/sec ±1.80% (78 runs sampled)
co (prepared) x 5.79 ops/sec ±12.92% (33 runs sampled)

The refactored version I've done using OOP gets co up to around 11 ops/sec. But it cannot equal Bluebird's performance because we have to use a closure to bind the promise callback to ensure that we are returned to the right context, see https://github.com/hiddentao/node-generator-perf-tests/blob/master/co-speedup/co-classes.js#L81

Whereas bluebird is able to use its internal API to avoid the need to bind the function, see https://github.com/petkaantonov/bluebird/blob/master/src/promise_spawn.js#L80
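
The closure cost being described looks roughly like this (a simplified sketch, not co's actual source): every yielded promise needs a freshly allocated callback closed over the runner instance so the resolved value resumes the right generator.

```javascript
// Simplified sketch (not co's actual source) of why a closure is needed.
function Runner(gen) {
  this.gen = gen;
}

Runner.prototype.run = function (value) {
  var self = this;
  var step = this.gen.next(value);
  if (step.done) return;
  // This closure over `self` is the cost: it must be allocated for every
  // yielded promise purely to get back to this runner instance.
  step.value.then(function (result) {
    self.run(result);
  });
};

// Usage: yields a resolved promise, resumes with its value.
var seen = [];
new Runner((function* () {
  var v = yield Promise.resolve('hi');
  seen.push(v); // resumed here with 'hi'
})()).run();
```

Bluebird avoids the per-yield allocation by threading the context through its internal promise machinery instead.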

@jonathanong
Contributor

oh i was hoping the comparison was on the readme =O

i thought bluebird added those features. i guess not. bluebird doesn't even support yielding callbacks?

@hiddentao
Author

Ah, I'll add the results into the README. By callbacks do you mean thunks?

@hiddentao
Author

@yanickrochon The tests are the next thing on my list.

@tj
Owner

tj commented Apr 11, 2014

most of the promise implementations I've tried benchmarking with Co are very slow in comparison with the alternatives, but as far as Co itself goes I'd be happy to have improvements! I haven't bothered to profile yet since we still get good numbers with Koa as-is

@hiddentao
Author

Got it passing the current co tests. Working version in my branch: https://github.com/hiddentao/co/tree/oop_refactor

Performance is still holding up. I've also realised that at least some of the performance gain I've obtained is due to optimising the initial closure at https://github.com/hiddentao/co/blob/oop_refactor/index.js#L433. Array slice (and possibly pop too) is expensive, so I try to minimise such invocations.

@AutoSponge
Contributor

The last time I tried to optimize this (http://jsperf.com/arguments-slice1), I got a modest gain over slice.call(arguments, 1) by switching to Array.call.apply(Array, arguments).

Also, based on another of my old jsperfs, you can speed up the other slice call: http://jsperf.com/cost-of-bind

Try:

function toArgsArray() { // illustrative wrapper so the fragment is runnable
  var args = [];
  Array.prototype.push.apply(args, arguments);
  return args;
}

But this makes me wonder if it wouldn't be better to create a performance-based preprocessor which could use the AST to replace bits as needed and change when the environment's VM decides to optimize for particular patterns.
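
The two arguments-copying approaches above can be put side by side (a self-contained sketch; relative timings vary by VM, so the jsperf links are the real evidence). Note that push.apply copies *all* arguments, so a shift is needed to match slice.call(arguments, 1):

```javascript
var slice = Array.prototype.slice;

// Approach 1: the common idiom. Leaking `arguments` to another function
// has historically blocked optimisation in some VMs.
function viaSlice() {
  return slice.call(arguments, 1);
}

// Approach 2: copy into a fresh array with push.apply, then drop the
// head element to match slice.call(arguments, 1).
function viaPush() {
  var args = [];
  Array.prototype.push.apply(args, arguments);
  args.shift();
  return args;
}
```

Both return the same result for the same inputs; any difference is purely in how the VM optimises the call.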

@tj
Owner

tj commented Apr 12, 2014

hrmmm yeah we shouldn't worry about micro-opts

@AutoSponge
Contributor

Sorry, I wasn't suggesting it for co master, but for hiddentao's performance fork.

@petkaantonov

i thought bluebird added those features. i guess not. bluebird doesn't even support yielding callbacks?

This is incorrect. It supports yielding literally anything. The docs contain boring examples like callbacks, but the feature can be used to build transaction, context, and resource managers, for instance.

To be fair, it didn't always have this feature, but by the time this info was posted it was already pretty old news :P
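
The extension point being referred to is bluebird's Promise.coroutine.addYieldHandler. Its idea can be sketched in plain JavaScript without bluebird (illustrative names, not bluebird's source): registered handlers each get a chance to convert an arbitrary yielded value into a promise.

```javascript
// Illustrative sketch of the yield-handler idea (not bluebird's source).
var yieldHandlers = [];

function addYieldHandler(fn) {
  yieldHandlers.push(fn);
}

function toPromise(value) {
  // Already a thenable: pass it through untouched.
  if (value && typeof value.then === 'function') return value;
  // Otherwise, let each registered handler try to convert it.
  for (var i = 0; i < yieldHandlers.length; i++) {
    var converted = yieldHandlers[i](value);
    if (converted !== undefined) return converted;
  }
  throw new TypeError('cannot convert yielded value to a promise');
}

// Example handler: treat a yielded function as a node-style thunk.
addYieldHandler(function (value) {
  if (typeof value !== 'function') return undefined;
  return new Promise(function (resolve, reject) {
    value(function (err, result) {
      if (err) reject(err);
      else resolve(result);
    });
  });
});
```

With a handler like that registered, yielding a thunk works even though the core coroutine only understands promises - which is how bluebird can "support yielding literally anything".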

@hiddentao
Author

@petkaantonov Point taken. At the time I was comparing the use of Promise.coroutine() with co() for the same generator routine and it wasn't flexible enough to be a drop-in replacement. I didn't realise you could add support for yielding other types of items.

@tj
Owner

tj commented Jun 12, 2014

The thing is this will not be a bottleneck in any real application either, and if it is in your case I'd be all down for micro-optimizations, but JavaScript people have a tendency to micro-optimize the hell out of pointless things that are already several million ops/s. So unless we can maintain elegance (I'm sure we can) and keep optimizations pretty simple, or prove that it is a real bottleneck, it doesn't make much sense to me. It's like having a Lamborghini and driving it in town at 50mph: flashy but pointless

@jonathanong
Contributor

going to avoid micro opts for better readability.
