Parser performance boost #573
Conversation
Including loop optimizations: http://cl.ly/0Y2f2J3Y1E311t3Z3C1C
@@ -141,14 +144,15 @@ exports.encodePayload = function (packets) {
 var regexp = /([^:]+):([0-9]+)?(\+)?:([^:]+)?:?([\s\S]*)?/;

 exports.decodePacket = function (data) {
-  var pieces = data.match(regexp);
+  var pieces = data.match(regexp)
+    , parse = JSON.parse;
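For context, the memoization in the diff can be sketched in isolation. This is a minimal sketch, not the real socket.io parser: the regexp is simplified and the packet shape is invented for illustration.

```javascript
// Cache JSON.parse in a local variable so the hot path does one
// property lookup up front instead of resolving JSON.parse on every
// use. Regexp and packet fields are simplified placeholders.
var regexp = /([^:]+):([0-9]+)?:([\s\S]*)?/;

function decodePacket(data) {
  var pieces = data.match(regexp)
    , parse = JSON.parse; // cached lookup

  if (!pieces) return {};

  var packet = { type: pieces[1], id: pieces[2] || '' };

  if (pieces[3]) {
    try {
      packet.data = parse(pieces[3]);
    } catch (e) {
      packet.data = pieces[3]; // fall back to the raw payload
    }
  }
  return packet;
}
```

For example, `decodePacket('json:1:{"a":1}')` yields a packet whose `data` is the parsed object, while a non-JSON payload falls back to the raw string.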
Surely a micro-optimisation?
I saw a small increase in decodes per sec because of it. For all the cases where you have defaults that are
I just wanted to play with vbench haha. We should find out if it's even remotely a bottleneck first, but the changes aren't ugly or anything, so it doesn't hurt.
@miksago changing the switch statement to an integer switch and caching the empty string both degraded performance. It cost about 1K ops/s. But thanks a lot for the tips! @visionmedia :p The protocol parser is a hot code path, so if we can optimize it, we should. For some applications these changes might not be noticeable because they aren't processing a lot of messages per sec, but for busy chat boxes or whatever it could certainly improve performance and response times.
For sure, that's why I wanted to make sure there were no glaring issues with those first, but we should still profile the system as a whole.
Not only that, but we should also run individual parts against the v8 --profile option to see if we can optimize against Crankshaft, just to be sure that we're not causing it to bail out. I'm sure we do that in the parser, because we wrap JSON.parse in a try/catch block.
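The workaround for the try/catch bailout mentioned here was usually to isolate the try/catch in a tiny helper so the large hot function itself contains none and stays eligible for optimization. A sketch of that pattern (helper and function names are made up for illustration):

```javascript
// Isolate try/catch in a small helper. In the V8 of this era,
// Crankshaft refused to optimize any function containing try/catch,
// so keeping it out of the big hot function was the usual fix.
function tryParse(parse, data) {
  try {
    return parse(data);
  } catch (e) {
    return data; // hand back the raw string on bad JSON
  }
}

function decode(data) {
  var parse = JSON.parse;
  // ...the rest of the hot parsing work would live here, free of
  // try/catch and therefore still optimizable...
  return tryParse(parse, data);
}
```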
@3rd-Eden Aha! That'd be an explanation as to why memoizing JSON.parse speeds things up: it doesn't need to look up that value every time, due to try {} catch (e) {} breaking Crankshaft optimisations.
@3rd-Eden switch on integers is slower, but if/else on integers may be faster. I'm surprised caching '' is slower.
niiiice
They're not really integers in JavaScript, but anyway. That switches are marginally faster than if/else is no surprise, but the fact that string switches are quicker than number switches -- if true -- surely must be a JavaScript artifact.
I did a few simulations on pregenerated sequences of strings / numbers, with switch / if-else / object-function-lookup for each possible value.

If this holds true, even without doing more testing with numeric switches, rewriting the switch cases in the code affected by this pull to objects with functions should yield even quicker runtimes.
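The two dispatch styles being compared can be sketched side by side. The handler names and return values here are invented for illustration; the real parser dispatches on the packet type from the wire format.

```javascript
// Style 1: a switch on the type string.
function handleViaSwitch(type, payload) {
  switch (type) {
    case 'message': return 'msg:' + payload;
    case 'json':    return 'json:' + payload;
    default:        return 'noop';
  }
}

// Style 2: an object whose keys map straight to handler functions.
// Dispatch becomes a single property lookup plus a call.
var handlers = {
    message: function (payload) { return 'msg:' + payload; }
  , json:    function (payload) { return 'json:' + payload; }
};

function handleViaLookup(type, payload) {
  var fn = handlers[type];
  return fn ? fn(payload) : 'noop';
}
```

Both produce identical results; the question in this thread is only which one the VM runs faster.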
I think you may be doing those benchmarks in a slightly wrong way, you should be

On 13 Oct 2011, at 09:26, Einar Otto Stangvik wrote:
I decided to port the current parser benchmark that TJ wrote in vbench over to benchmark.js (https://gist.github.com/1283881), so we can actually do some testing on node 0.5.9.
@miksago, uhm, and how do you figure that makes any difference at all? As long as the sequences are equally long, the data doesn't lie.
@3rd-Eden Nice! I spent about 6 hours trying to get npm to install on 0.5.10-pre :/
@miksago, well, benchmarking differently can paint a more realistic picture in cases where the processing speed increases / decreases over time. Here, however, that's not the case at all. All we really want to see is roughly how much time a single call through if / switch branching takes vs. e.g. object member lookup.
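A hand-rolled harness in the spirit described above is enough for that kind of rough comparison: time a fixed number of calls through each style and compare. This is a sketch, not benchmark.js; the example branches (`viaIf`, `viaLookup`) are invented stand-ins for the parser's dispatch.

```javascript
// Time `iterations` calls through fn and report elapsed milliseconds.
// Numbers are only indicative: no warmup, no statistical treatment.
function time(label, fn, iterations) {
  var start = Date.now();
  for (var i = 0; i < iterations; i++) fn(i);
  var ms = Date.now() - start;
  console.log(label + ': ' + ms + 'ms for ' + iterations + ' calls');
  return ms;
}

// Two toy dispatch styles to compare.
function viaIf(n) { if (n % 2 === 0) return 'even'; return 'odd'; }

var table = {
    0: function () { return 'even'; }
  , 1: function () { return 'odd'; }
};
function viaLookup(n) { return table[n % 2](); }

time('if/else', viaIf, 1e6);
time('object lookup', viaLookup, 1e6);
```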
@3rd-Eden nice! benchmark.js works for you? It failed miserably last time I tried it, so I went with uubench for vbench, but if it's working properly we should change vbench to use it.
@visionmedia benchmark.js is too common a name :D
Oh, lame. I thought they had that stuff running on node too, but last time I looked it was a huge mess.
@visionmedia I have used it a couple of times before to benchmark some of my other modules and it worked quite nicely. I have no doubt about the quality of the code, as it also powers jsPerf. There are some small issues in the code if you want to run multiple suites. You need to manually clear the
Doh, you guys type way faster than I do ;)
Lemme know if that's still an issue. The
Looked at it again, looks OK actually. I'd love to try it out for vbench.
https://gist.github.com/48bf9fbcdb6993605fee <= Did a quick and dirty test with object method lookup rather than switch. Seems to provide slightly better speeds. More tests will follow.
Using arrays of functions + indexes rather than string types => way slower than all prior approaches. I'm still guessing this is due to JavaScript numbers being floating point rather than plain ints.
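The slower variant being described can be sketched like this. The type codes and handlers are invented for illustration; one plausible cost, noted in the comments, is that wire data arrives as strings, so the numeric path pays for a conversion before the array index.

```javascript
// Variant that tested slower: dispatch through an array indexed by a
// numeric type code.
var byIndex = [
    function () { return 'disconnect'; }
  , function () { return 'connect'; }
  , function () { return 'message'; }
];

// Baseline: the string-keyed object lookup from earlier in the thread.
var byName = {
    disconnect: function () { return 'disconnect'; }
  , connect:    function () { return 'connect'; }
  , message:    function () { return 'message'; }
};

// '2' arrives as a string off the wire, so the numeric path converts
// string -> number (unary +) before it can index the array.
function decodeByIndex(code) { return byIndex[+code](); }
function decodeByName(name)  { return byName[name](); }
```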
The speed improvement isn't coming from changing the switch to an object method lookup, but from moving the JSON.parse lookup. A gist to illustrate: https://gist.github.com/1285245 =p
@jdalton Great to hear that, can't wait to see it land in npm :)
http://cl.ly/402t2C133B2a1P0g1K0h <= before
http://cl.ly/363W080c2j261l3A1m3y <= after ;o