Inflate: not faster than miniz_oxide on exr benchmark #46
Improved by #41

Half the time here is spent in various parts of RLE decoding, so closing in favor of #41
Looking into it
If you profile the benchmark with Samply (either on Linux or on Mac), you can double-click the items in the flame graph to view the source code and see where the time is spent within the function, line by line. I've found it helpful for understanding where time goes. This only works while Samply is running; it won't work in profiles shared on the web. That's how I got this image.
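As a rough sketch of that workflow (the benchmark binary path below is a placeholder; substitute your own):

```shell
# Install samply, a sampling profiler that opens results in the Firefox Profiler UI
cargo install samply

# Build the benchmark with optimizations
cargo build --release

# Record a profile; the flame graph opens in the browser, where double-clicking
# a frame shows per-line timings (only while samply is still running)
samply record ./target/release/your-benchmark
```

For line-level attribution in release builds you may also need debug info, e.g. `debug = true` under `[profile.release]` in Cargo.toml.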
I didn't know about samply. Thanks for that; I was wondering how you actually got those values.
I've updated the article about bounds checks with instructions for using Samply on Linux.
The Cargo.toml file is locked to 0.2.1. Edit it to point to 0.2.2 and you'll see a performance regression, to the point that `miniz_oxide` used on the `master` branch becomes slightly faster than `zune-inflate`. This is not reflected in the zune-inflate benchmark, so apparently it exercises significantly different data patterns.

Edit: actually 0.2.1 doesn't pass tests; you have to upgrade to 0.2.2 for correctness.
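For illustration, pinning a dependency to an exact version in Cargo.toml uses the `=` requirement operator; the crate name below is a placeholder for whichever dependency the benchmark locks:

```toml
[dependencies]
# Exact-version pin: "=0.2.1" forbids semver-compatible upgrades,
# whereas a bare "0.2.1" would allow any 0.2.x release.
some-crate = "=0.2.1"

# To reproduce the regression described above, change it to:
# some-crate = "=0.2.2"
```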