Bincode is 2x-4x slower than serde-bench format #123
Closed
Comments
I would expect that endianness and the current non-trait-based size limit are responsible for most of this.
Both are using big-endian. I think it's all the size limit.
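To illustrate why a runtime size limit can account for the gap, here is a minimal, hypothetical sketch (not bincode's actual internals; `LimitedWriter` and its fields are invented for illustration). The point is that the limit check is a branch taken on every write, so even with an infinite limit the check sits on the hot path unless it can be compiled away:

```rust
// Hypothetical sketch of a writer with an optional runtime size limit,
// mirroring the kind of per-write branch a non-trait-based limit incurs.
struct LimitedWriter {
    buf: Vec<u8>,
    limit: Option<u64>, // None plays the role of SizeLimit::Infinite
    written: u64,
}

impl LimitedWriter {
    fn write(&mut self, bytes: &[u8]) -> Result<(), &'static str> {
        // This branch executes on every write; with an infinite limit
        // it is pure overhead that a trait-based (compile-time) limit
        // could eliminate entirely.
        if let Some(max) = self.limit {
            if self.written + bytes.len() as u64 > max {
                return Err("size limit exceeded");
            }
        }
        self.written += bytes.len() as u64;
        self.buf.extend_from_slice(bytes);
        Ok(())
    }
}
```

Encoding the limit in the type system instead of a runtime `Option` would let the compiler remove the branch in the unlimited case.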
I made a benchmarking tool here: https://github.com/TyOverby/bincode-bench. I played around with a few optimizations, and the #127 PR is twice as fast as serde-bench due to the "size-find then pre-alloc" optimization. Somehow this is actually faster than writing into an already-pre-allocated array.
Decoding is practically the same.
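The "size-find then pre-alloc" idea can be sketched as follows. This is an illustrative toy encoder, not the #127 implementation: first compute the exact encoded size with a cheap pass, then allocate the output buffer once and fill it, avoiding `Vec` growth reallocations during serialization:

```rust
// Toy fixed-width encoding: an 8-byte big-endian length prefix
// followed by each u64 in big-endian (the thread notes both formats
// use big-endian). Names here are hypothetical.
fn encoded_size(values: &[u64]) -> usize {
    8 + values.len() * 8 // length prefix + fixed-width elements
}

fn encode(values: &[u64]) -> Vec<u8> {
    // Single up-front allocation sized by the "size-find" pass.
    let mut out = Vec::with_capacity(encoded_size(values));
    out.extend_from_slice(&(values.len() as u64).to_be_bytes());
    for v in values {
        out.extend_from_slice(&v.to_be_bytes());
    }
    out
}
```

For variable-width formats the sizing pass must walk the data, but for fixed-width encodings like this it is O(1), which is presumably why the extra pass pays for itself.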
@dtolnay: could you run your benchmark suite to confirm?
I get similar results. Nice work.
Check out cargo bench in https://github.com/serde-rs/bench. On my computer:

The format is byte-for-byte identical (cargo test to confirm), so there should be no reason for Bincode to take 75% longer to deserialize and 310% longer to serialize in the SizeLimit::Infinite / &[u8] case. In addition to all the tickets I filed today, see if you can identify other optimizations that would close this gap.
cc @maciejhirsz