Why bytes.push instead of Buffer #12

Open
manast opened this issue Aug 7, 2017 · 12 comments

manast commented Aug 7, 2017

I wonder why the encoder uses a standard JS array and calls to push. Wouldn't it be much faster to work with a Buffer?
I understand the challenges of growing the buffer, but a Buffer should still be much faster, considering the overhead of a function call for every element written to the array.

darrachequesne (Owner)

How would you know the size of the buffer to use beforehand? Currently, the object to encode is walked once to determine the needed size, and then the content of the array (bytes) is written to the buffer.
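
In code, roughly (an illustrative sketch, not the actual notepack source; ASCII-only strings shorter than 256 bytes are assumed to keep it short):

// Sketch of the array-based approach: bytes are pushed one at a time,
// and only at the end copied into a Buffer whose size is simply the
// array's length.
function encodeAsciiString(str) {
  const bytes = [];
  bytes.push(0xd9);              // msgpack "str 8" marker
  bytes.push(str.length);        // length fits in one byte for this sketch
  for (let i = 0; i < str.length; i++) {
    bytes.push(str.charCodeAt(i)); // one push() call per byte
  }
  return Buffer.from(bytes);     // exact-size Buffer, single copy
}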


manast commented Aug 8, 2017

I would create a temporary buffer, maybe 8 KB, and then grow a destination buffer (re-alloc & copy). For tiny objects it should be much faster than the current approach. For larger ones it depends on the overhead of growing the buffer, but V8 does something similar for arrays anyway, so it will probably still be faster.
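
Roughly (a minimal sketch with hypothetical names; the doubling growth policy is an assumption):

// Write into a preallocated Buffer and grow it (alloc a bigger one +
// copy) only when it runs out, amortizing the cost the same way V8
// grows plain arrays.
const INITIAL_SIZE = 8 * 1024; // ~8 KB scratch buffer, as suggested above

function createWriter() {
  let buf = Buffer.allocUnsafe(INITIAL_SIZE);
  let offset = 0;

  function ensure(extra) {
    if (offset + extra > buf.length) {
      // re-alloc & copy, doubling so growth stays infrequent
      const bigger = Buffer.allocUnsafe(Math.max(buf.length * 2, offset + extra));
      buf.copy(bigger, 0, 0, offset);
      buf = bigger;
    }
  }

  return {
    writeUInt8(value) {
      ensure(1);
      buf.writeUInt8(value, offset);
      offset += 1;
    },
    result() {
      // final exact-size copy handed to the caller
      return Buffer.from(buf.subarray(0, offset));
    },
  };
}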

darrachequesne (Owner)

That's interesting! Would you have time to implement it, so we can benchmark it against the current implementation?


manast commented Aug 8, 2017

I won't really have time in the short term. I saw that this project uses the buffer approach, and it is very fast according to their benchmarks: https://github.com/phretaddin/schemapack/blob/master/schemapack.js

darrachequesne (Owner)

I'll keep this open then, if anyone wants to submit a pull request.

baadc0de

There is a package that could be useful for that: https://github.com/rvagg/bl

davalapar

I tried the following changes:

  • preallocating a large buffer once, overwriting its contents at encoding time, and copying out just the used slice once the whole object has been encoded (see the sketch at the end of this comment);
  • changing hex values to integer values;
  • reducing recurring function calls.

Tiny/small/medium test cases got improvements, but on large, notepack still beats it. I am also wondering why I got different ops/sec for notepack on tiny; I was using Node 10 on Windows 7 x64.

https://github.com/davalapar/what-the-pack
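
The preallocation part, sketched (simplified and hypothetical, not the actual what-the-pack source; the fixed 4 MB pool size is an assumption):

// One big Buffer is allocated once, each encode overwrites it from
// offset 0, and only the used slice is copied out at the end. No
// per-encode allocation, no growth.
const pool = Buffer.allocUnsafe(4 * 1024 * 1024); // assumed fixed 4 MB limit
let offset = 0;

function writeByte(value) {
  pool[offset++] = value; // plain integer write, no helper-call overhead
}

function finish() {
  const out = Buffer.allocUnsafe(offset);
  pool.copy(out, 0, 0, offset); // single copy of the encoded bytes
  offset = 0;                   // reset the pool for the next encode
  return out;
}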

darrachequesne (Owner)

@davalapar impressive, great job! I think that it will be difficult to beat those numbers...


manast commented Dec 17, 2018

@davalapar I am wondering, any insights on why decode did not get any improvement?
Also, do you have benchmarks against JSON.stringify/parse? JSON is currently the fastest serializer for Node, so it would be great to see how far we are from beating it.

darrachequesne (Owner)

@davalapar it seems that replacing new Buffer() with Buffer.allocUnsafe() (9224f74), which uses an internal Buffer pool (ref), does improve the throughput! With the new version 2.2.0 I get the following results on my machine:

MessagePack: Setting buffer limit to 4.19 MB
what-the-pack encode tiny x 1,607,116 ops/sec ±1.56% (86 runs sampled)
notepack.encode tiny x 1,711,809 ops/sec ±2.78% (80 runs sampled)
what-the-pack encode small x 421,331 ops/sec ±2.72% (85 runs sampled)
notepack encode small x 373,340 ops/sec ±1.65% (85 runs sampled)
what-the-pack encode medium x 229,224 ops/sec ±2.80% (85 runs sampled)
notepack encode medium x 196,641 ops/sec ±1.57% (81 runs sampled)
what-the-pack encode large x 338 ops/sec ±0.97% (88 runs sampled)
notepack encode large x 310 ops/sec ±2.73% (79 runs sampled)

Still a way to go, but that's definitely better!
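
For reference, the gist of the change (illustrative only; on modern Node, new Buffer(size) is deprecated and zero-fills, while Buffer.allocUnsafe(size) returns uninitialized memory, served from a shared internal pool for small sizes):

const a = new Buffer(64);         // deprecated; allocates and zero-fills
const b = Buffer.allocUnsafe(64); // uninitialized, pooled for small sizes
// Uninitialized contents are fine for an encoder, since every byte is
// overwritten before the buffer is handed out.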

davalapar

@darrachequesne goddamn that's fricken awesome!

@manast

I am wondering, any insights on why decode did not get any improvement?

I think it's because the decoding implementation is similar to notepack's: the same way of reading the buffer values and creating objects and arrays. I had the crazy idea of inlining the equivalent of function calls such as readUInt16BE, thinking that it might speed up decoding, since the ones here and here have checks that can be skipped, but I still haven't tried it yet (maybe the numbers could check out, maybe not lol).
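
For illustration, the inlined read might look like this (hypothetical helper; it assumes the caller has already validated the offset, which is exactly the check being skipped):

// Reading two bytes directly instead of calling buf.readUInt16BE(),
// which validates the offset on every call.
function readUInt16BEInline(buf, offset) {
  return (buf[offset] << 8) | buf[offset + 1];
}

const buf = Buffer.from([0x12, 0x34]);
console.log(readUInt16BEInline(buf, 0).toString(16)); // '1234'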

Also, do you have benchmarks against JSON.stringify/parse? JSON is currently the fastest serializer for Node, so it would be great to see how far we are from beating it.

Results below; I'm on Windows 7 x64 with some apps open.


JSON stringify tiny x 1,364,321 ops/sec ±0.91% (90 runs sampled) <<
what-the-pack encode tiny x 1,203,150 ops/sec ±2.65% (84 runs sampled)
notepack.encode tiny x 237,295 ops/sec ±50.03% (83 runs sampled)

JSON stringify small x 219,199 ops/sec ±0.34% (92 runs sampled)
what-the-pack encode small x 352,946 ops/sec ±0.58% (92 runs sampled) <<
notepack encode small x 239,186 ops/sec ±0.27% (91 runs sampled)

JSON stringify medium x 110,615 ops/sec ±1.82% (90 runs sampled)
what-the-pack encode medium x 168,408 ops/sec ±1.35% (86 runs sampled) <<
notepack encode medium x 116,409 ops/sec ±0.26% (95 runs sampled)

JSON stringify large x 24.22 ops/sec ±0.52% (44 runs sampled)
what-the-pack encode large x 204 ops/sec ±2.23% (77 runs sampled)
notepack encode large x 226 ops/sec ±1.28% (80 runs sampled) <<

JSON parse tiny x 1,336,891 ops/sec ±0.40% (91 runs sampled) <<
what-the-pack decode tiny x 1,132,375 ops/sec ±0.56% (88 runs sampled)
notepack decode tiny x 1,174,634 ops/sec ±0.26% (94 runs sampled)

JSON parse small x 267,013 ops/sec ±0.22% (96 runs sampled) <<
what-the-pack decode small x 249,414 ops/sec ±0.96% (93 runs sampled)
notepack decode small x 252,664 ops/sec ±1.28% (93 runs sampled)

JSON parse medium x 134,864 ops/sec ±0.21% (95 runs sampled)
what-the-pack decode medium x 143,053 ops/sec ±0.19% (94 runs sampled)
notepack decode medium x 148,638 ops/sec ±0.91% (90 runs sampled) <<

JSON parse large x 31.81 ops/sec ±0.56% (54 runs sampled)
what-the-pack decode large x 215 ops/sec ±0.24% (88 runs sampled) <<
notepack decode large x 214 ops/sec ±0.57% (82 runs sampled)



manast commented Dec 27, 2018

@davalapar I think these are really interesting results. In some cases even faster than JSON, very encouraging.
