btlazy2 strategy is incredibly slow on highly repetitive data #100
Yes, I suspect repeating patterns to have devastating effects on the btlazy2 strategy. I'm not sure there is a simple, non-detrimental solution to it. That being said, I'm open to any suggestion / patch.
Sometimes JSON, XML/HTML/SVG, and simple array data (like some NoSQL blobs that are templated, maps in games, etc.) can be repetitive. Only suggestion: have a separate mode that is better at repeated patterns.
Couldn't repetitions in JSON/XML be addressed by good dictionary support, leaving only the core content to be handled by a more suitable algorithm?
I originally noticed this while trying to compress Windows\Logs\CBS\CBS.log, so it does occur with some real-world data. It makes the high compression levels DoS-able, which seems like a cause for concern. It's tricky for clients that need to avoid that to do so right now, because the maximum non-btlazy2 compression level depends on the input size hint and the library version. Is the problem that the hash chains grow linearly, so the total search time is quadratic? It's not obvious to me how to avoid that, but other libraries (such as the LZMA SDK) do somehow avoid it in their maximum compression modes with large windows.
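The quadratic-growth hypothesis above can be illustrated with a toy model. This sketch is not zstd's actual match finder (btlazy2 uses a binary tree, not the hash chains modeled here), but it shows the mechanism the commenter is describing: on a repeating pattern, every position with the same phase lands in the same chain, so each search walks a chain whose length grows linearly with the input, making total work quadratic.

```python
def chain_lengths(data, min_match=4):
    """Toy hash-chain match finder: for each position, count how many
    earlier positions share the same min_match-byte prefix, i.e. how
    long the chain is that a full search would have to walk."""
    chains = {}   # prefix -> list of earlier positions with that prefix
    walked = 0    # total chain entries visited across all searches
    for i in range(len(data) - min_match + 1):
        bucket = chains.setdefault(data[i:i + min_match], [])
        walked += len(bucket)   # an exhaustive search walks the whole chain
        bucket.append(i)
    return walked

# Doubling the number of repeats roughly quadruples the total work,
# the signature of quadratic behavior.
print(chain_lengths(b"abcd" * 100))
print(chain_lengths(b"abcd" * 200))
```

With only four distinct 4-byte prefixes in the whole input, each chain holds about a quarter of all positions, which is why real compressors cap chain length or prune searches on such data.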
The problem is limited to btlazy2, as it uses a binary tree. Let's keep this issue open. A solution will be needed to handle such cases gracefully, without impacting too much the more general situation where repetitive data is either absent or present only in limited proportion.
There's a new update in the "dev" branch (https://github.com/Cyan4973/zstd/tree/dev). In my tests, it dramatically improves speed in the presence of repetitive segments of any period. The cost is pretty small: normal data tends to compress slightly less, but I expect this difference to be negligible in most circumstances. The good news is that speed is not worsened for normal data. This is still experimental stuff, for your testing.
Merged into master |
For example, on a file containing 10,000 repetitions of "All work and no play makes Jack a dull boy.\n" (440,000 bytes total), zstd -b15 gives about 23 MB/s on my laptop while zstd -b16 and higher give about 0.02 MB/s. I had to add another digit to the speed output to see anything but 0.0. I assume the switch to the btlazy2 strategy is what makes the difference.
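The pathological input described above is easy to reproduce. A minimal sketch (the filename is arbitrary, and the benchmark commands in the comments assume the `zstd` CLI is on PATH):

```python
# Build the test file from the report: 10,000 copies of a 44-byte line,
# 440,000 bytes in total.
line = b"All work and no play makes Jack a dull boy.\n"
data = line * 10_000

with open("shining.txt", "wb") as f:  # hypothetical filename
    f.write(data)

print(len(data))  # 440000

# Then benchmark each level, e.g.:
#   zstd -b15 shining.txt   # fast on this input
#   zstd -b16 shining.txt   # throughput collapses; the reporter
#                           # attributes this to the btlazy2 strategy
```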