
Run the benchmarks #16

Open
benjamingr opened this issue Oct 2, 2015 · 18 comments

@benjamingr

Hey, can you run @spion's benchmarks that simulate a real server async workload and report the results?

Here it is: https://github.com/petkaantonov/bluebird/tree/master/benchmark
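(A rough sketch of how to run them locally, assuming the repo's bench wrapper script works the way the "./bench doxbee" invocation later in this thread suggests; exact paths and install steps may differ.)

git clone https://github.com/petkaantonov/bluebird.git
cd bluebird
npm install      # install the benchmark dependencies
./bench doxbee   # run the doxbee-sequential suite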

Cheers.

@ysmood
Owner

ysmood commented Oct 10, 2015

I'm working on it; I will make a PR to add Yaku to the benchmark list later.

@benjamingr
Author

Thanks, please keep me updated.

@ysmood
Owner

ysmood commented Oct 12, 2015

@benjamingr The benchmark of bluebird is strange:

file                   time(ms)  memory(MB)
promises-bluebird.js        393      111.57
callbacks-baseline.js       519       37.93

Darwin 15.0.0 x64
Node.JS 4.0.0
V8 4.5.103.30
Intel(R) Core(TM) i7-4850HQ CPU @ 2.30GHz × 8

It's faster than the raw callbacks, which seems impossible! I suspect the benchmark is not properly designed; I can't trust it until I find an explanation.

@benjamingr
Author

That makes sense: allocating a promise is cheaper than allocating a closure. You're very welcome to read the tests yourself and see. For example:

doOne(function (err, data) {
  if (err) return cb(err);
  doTwo(data, function (err, data2) {
    if (err) return cb(err);
    doThree(data2, function (err, data3) {
      if (err) return cb(err);
      cb(null, data3.foo);
    });
  });
});

Versus doOne().then(doTwo).then(doThree), with regard to the closures and allocations required.
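For comparison, a minimal sketch of the promise-based version (doOne, doTwo and doThree are assumed here to return promises; this is not the actual benchmark code):

doOne()
  .then(doTwo)
  .then(doThree)
  .then(function (data3) {
    cb(null, data3.foo);
  }, cb);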

What is possible is that the callback code can be sped up. Some people have already put work into that, but by all means, a PR would be appreciated.

@ysmood
Owner

ysmood commented Oct 12, 2015

Of course the callback closures should be pre-allocated and cached, or it's not a fair comparison. I don't like the benchmark code; it's obscure and hard for people to reason about what it really does.
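(A minimal sketch of what pre-allocating and caching the callbacks could look like, assuming the async functions can pass an explicit context object through to their callbacks; the real benchmark functions do not work this way, and onOne/onTwo/onThree/run are illustrative names only.)

// Handlers are created once at module load instead of once per request;
// per-request state travels in a small context object rather than a closure.
function onThree(err, data3, ctx) {
  if (err) return ctx.cb(err);
  ctx.cb(null, data3.foo);
}
function onTwo(err, data2, ctx) {
  if (err) return ctx.cb(err);
  doThree(data2, onThree, ctx);
}
function onOne(err, data, ctx) {
  if (err) return ctx.cb(err);
  doTwo(data, onTwo, ctx);
}
function run(cb) {
  doOne(onOne, { cb: cb }); // only one small object allocated per request
}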

@ysmood
Owner

ysmood commented Oct 12, 2015

@benjamingr I haven't done much optimization yet, but Yaku is doing pretty well so far:

file                                     time(ms)  memory(MB)
callbacks-baseline.js                         167       32.50
promises-bluebird-generator.js                226       33.71
promises-bluebird.js                          310       44.93
promises-tildeio-rsvp.js                      348       70.52
promises-cujojs-when.js                       380       67.51
callbacks-caolan-async-waterfall.js           497       77.46
promises-yaku.js                              530       94.87
promises-lvivski-davy.js                      600      125.66
promises-dfilatov-vow.js                      698      141.27
promises-calvinmetcalf-lie.js                 832      150.71
generators-tj-co.js                           889      143.92
promises-ecmascript6-native.js                918      194.81
promises-obvious-kew.js                      1131      237.62
promises-then-promise.js                     1529      214.70
promises-medikoo-deferred.js                 1857      157.93
observables-Reactive-Extensions-RxJS.js      2831      283.23
observables-pozadi-kefir.js                  3287      164.02
promises-kriskowal-q.js                     12506      829.64
observables-baconjs-bacon.js.js             18895      828.93
observables-caolan-highland.js              30631      530.52

Platform info:
Darwin 15.0.0 x64
Node.JS 4.0.0
V8 4.5.103.30
Intel(R) Core(TM) i7-4850HQ CPU @ 2.30GHz × 8

@benjamingr
Author

Ping @spion

@ysmood
Owner

ysmood commented Oct 12, 2015

@spion

spion commented Oct 12, 2015

@benjamingr yes?

@benjamingr
Author

Well, the benchmarks were being criticized; I thought you might want to address that.


@spion

spion commented Oct 12, 2015

Ah.

To be honest I've never seen callbacks run slower than (generatorless) promises in that benchmark. Sounds like V8 might've decided to bail out on optimising some function on that particular run for some reason.

Regarding the way the benchmark is written, it could indeed use some refactoring. As for the way the callback code is written, that's only the baseline in bluebird's fork; there are many more variants in the original.
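One way to test the bail-out theory on a given run is V8's tracing flags; these are standard V8/Node flags, but the entry file below is just a placeholder for whatever the harness actually executes:

node --trace-opt --trace-deopt path/to/benchmark-entry.js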

@ysmood
Owner

ysmood commented Oct 22, 2015

@benjamingr I did some optimization; Yaku is much better now.

$ ./bench doxbee 

results for 10000 parallel executions, 1 ms per I/O op

file                                     time(ms)  memory(MB)
callbacks-baseline.js                         176       32.39
promises-bluebird-generator.js                192       34.05
promises-bluebird.js                          268       45.48
promises-cujojs-when.js                       373       67.90
*promises-yaku.js                             403       84.05
promises-tildeio-rsvp.js                      405       69.21
callbacks-caolan-async-waterfall.js           562       77.51
promises-lvivski-davy.js                      568      125.35
promises-dfilatov-vow.js                      640      140.65
promises-calvinmetcalf-lie.js                 693      150.50
promises-ecmascript6-native.js                825      188.38
generators-tj-co.js                          1077      140.99
promises-obvious-kew.js                      1298      266.98
promises-then-promise.js                     1439      201.00
promises-medikoo-deferred.js                 2142      264.46
observables-pozadi-kefir.js                  2624      164.00
observables-Reactive-Extensions-RxJS.js      3268      283.15
promises-kriskowal-q.js                     15118      871.54
observables-baconjs-bacon.js.js             20351      854.80
observables-caolan-highland.js              27834      528.66

Platform info:
Darwin 15.0.0 x64
Node.JS 4.2.1
V8 4.5.103.35
Intel(R) Core(TM) i7-4850HQ CPU @ 2.30GHz × 8

@benjamingr
Author

This is getting there! 1/2 the speed and 1/3 the memory usage of bluebird is very nice :) I still think there is room to improve, though. It's definitely better than what core-js currently uses; if the API is compatible, I'm for it :)

@winterland1989

Can you give me some credit on b336a4d, please? :)
I thought this was basically the idea I showed you in my Action.js. By the way, I give part of the original credit to bluebird in the code comments, since it was bluebird's hilarious new Function based solution that inspired me. Thank you.

@ysmood
Owner

ysmood commented Oct 24, 2015

@winterland1989 Very sorry about that; I meant to credit you once I had finished all the optimizations, including Promise.all. This post is just a beginning.

This trick is used by Action.js and many other libs like ramda:
https://github.com/ramda/ramda/blob/c6138eed1c57dca717c73bc656c7dcabc220ed92/src/internal/_curry3.js#L16

curry: https://github.com/dominictarr/curry/blob/11b4be8f092123ae7fc2df526ae9cac52f6feae7/curry.js#L12

But I don't think we should mention things like that every time, because ramda and many other libs were already using this trick before; everybody knows it. But if you insist, then I am very sorry about it.

It's like when I told you about the try/catch trick: you don't have to tell everybody you were inspired by bluebird; everybody is using it, and they don't have to credit bluebird.
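For readers who haven't followed the links, here is a minimal sketch of the two flavours of the trick being discussed. This is my reading of the thread, not the actual Yaku, Action.js or bluebird source; curry3 and makeCaller are illustrative names.

// 1) Hand-written arity dispatch, in the spirit of ramda's _curry3
//    (simplified: the returned partial functions are not themselves curried):
function curry3(fn) {
  return function (a, b, c) {
    switch (arguments.length) {
      case 1:
        return function (b2, c2) { return fn(a, b2, c2); };
      case 2:
        return function (c2) { return fn(a, b, c2); };
      default:
        return fn(a, b, c);
    }
  };
}

// 2) Code generation with the Function constructor, the approach attributed
//    to bluebird in this thread: build a specialized wrapper once, avoiding
//    the arguments-object juggling a generic wrapper would need on every call.
function makeCaller(arity) {
  var args = [];
  for (var i = 0; i < arity; i++) args.push("a" + i);
  var list = args.join(", ");
  // e.g. makeCaller(2)(fn) produces: function (a0, a1) { return fn(a0, a1); }
  return new Function(
    "fn",
    "return function (" + list + ") { return fn(" + list + "); };"
  );
}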

@winterland1989

OK, thanks for the explanation. I'd like to keep communicating with you on the Promise.all stuff; I'm wrapping my head around it now :)

@ysmood
Owner

ysmood commented Oct 24, 2015

@winterland1989 Looking forward to your breakthrough on Promise.all :D

@winterland1989

One thing to note: this trick, which tries to cover the cases where fewer arguments are supplied than expected, causes way too much memory allocation. You can measure it by printing the system memory usage before Promise.all and after the promise array is built in the benchmark; at that point, bluebird still keeps memory usage as low as possible, so I believe our trick is somehow inferior to the new Function based solution. I'll keep you notified when I test it more.
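(A minimal sketch of that measurement using Node's built-in process.memoryUsage(); the promise count and the delay workload are made up for illustration.)

function delay(ms) {
  return new Promise(function (resolve) { setTimeout(resolve, ms); });
}

function usedMB() {
  return (process.memoryUsage().heapUsed / 1024 / 1024).toFixed(1);
}

console.log("before building the array: %s MB", usedMB());

var promises = [];
for (var i = 0; i < 100000; i++) {
  promises.push(delay(1));
}

console.log("after building the array:  %s MB", usedMB());

Promise.all(promises).then(function () {
  console.log("after Promise.all settles: %s MB", usedMB());
});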
