We need a performance suite #312
Comments
\o/
Yes, and once we’ve got something that generates good numbers, we should regress it over our commit log and see how we’ve been doing.
I have tests in a branch https://github.com/kriskowal/q/tree/perf but am not sure they're any good. This matcha thing was way easier to use than BenchmarkJS, but doesn't run in browsers. And I think we'll need to write a custom reporter anyway to get relevant data (e.g., we care about the quotient of Q to setImmediate rather than how many ops/s each takes).
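A quotient-style reporter along those lines might look roughly like the sketch below. This is only an illustration, not the branch's actual code: the `bench` helper and iteration counts are made up, it assumes a Node-like environment with `setImmediate` and `process.hrtime`, and it uses the built-in `Promise` as a stand-in for Q.

```javascript
// Hypothetical sketch: measure ops/s of two async tasks and report the
// quotient rather than the raw numbers. Not Q's actual benchmark code.

function bench(task, iterations, done) {
    var start = process.hrtime();
    var remaining = iterations;
    function step() {
        if (remaining-- > 0) {
            task(step); // run the task once more, then continue via `step`
        } else {
            var elapsed = process.hrtime(start);
            done(iterations / (elapsed[0] + elapsed[1] / 1e9)); // ops/s
        }
    }
    step();
}

// Baseline: bare setImmediate.
bench(function (next) { setImmediate(next); }, 10000, function (baseline) {
    // Candidate: a resolved promise's then-callback (stand-in for Q here).
    bench(function (next) { Promise.resolve().then(next); }, 10000, function (candidate) {
        // The quotient is the number we care about, per the comment above.
        console.log("promise/setImmediate:", (candidate / baseline).toFixed(2));
    });
});
```

The point of dividing the two rates is that the ratio is roughly stable across machines, while raw ops/s is not, so it can be compared from commit to commit.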
My perf tests are rudimentary and should eventually be replaced by something much better, but after breakfast I think I'll merge them in just to give us something.
Sounds good to me.
OK, merged into master. Let's please use this thread for ideas on additional performance tests. It also needs to be rewritten with BenchmarkJS, I think; Matcha is super easy to use, but not very maintained or rigorous or browser-compatible.
Perhaps now is a good time to compare against http://thanpol.as/javascript/promises-a-performance-hits-you-should-be-aware-of/. Are we seeing the same relative numbers on our end? By all accounts this makes Q look slower than the competition by over an order of magnitude (which I find surprising in light of the changes in 0.9.7, but I re-ran the first test and the results still hold).
That first blog post has major methodology issues that have been discussed at length. I would not be surprised if the third does as well, given that it's from someone who believes that synchronous resolution is The One True Way.
Hi Domenic,

That's fair. I think it would be useful to provide a concise summary of the aforementioned problems alongside our own benchmarks. I'm hoping that will get the ball rolling towards more objective benchmarks, as one vendor corrects the other's problems. I'm glad to see that Q is instituting its own benchmark, as it'll push us in the right direction. You can't improve what you can't measure, right? :)
Hi, I was wondering what the status of this is. We were debating which promise library to use at Spotify, and this benchmark came up: https://github.com/petkaantonov/bluebird/tree/master/benchmark. It made bluebird look (what I think is) unnecessarily good.
@mpj I believe that @stefanpenner also has a benchmark he uses to tune RSVP; I have used it to evaluate Q. By that benchmark, last I checked, when.js was really good, but most of the libraries designed for speed did well. Q is designed at a higher level of abstraction and does more to protect private state than the rest of the pack, which comes at a significant performance cost. I have been re-evaluating that trade-off in the experimental v2 branch. But if speed is your primary concern, you’re willing to opt out of the Q ecosystem, and you have no desire to use promises as proxies for remote objects, you should for sure go with one of the others.
Inspired by @jdalton's talk at JSConf, and all the goings-on around our nextTick implementation, I think we need a performance suite that produces numbers we can compare from commit to commit, or release to release.

There's been some work done in this area already for generic promise performance tests:
But I think we need something rather Q-specific, with real-world-ish scenarios. We can add to it as we go along.

This would be nice because there are lots of minor inefficiencies in how we do things, but without a way of quantifying their impact it's not worth messing with the code.