Test various compression algorithms for the bazel binary #6318
Not a perfect methodology, but I took a pre-compiled bazel 0.20 for 64-bit Linux, ran it, and then did some tests on an AMD 2950X with DDR4-3200 ECC, with all input and output data in memory-backed filesystems and caches. Linux 4.18.

Runtime is the wall-clock time to decompress and untar into a (memory-backed) filesystem. zopfli decompression was not tested, because its output is gzip-compatible and decompression speed will be similar (~1.6s). 7z and a few other compressors have native support for multiple files, but that was not tested; only tar + single-file compression. Compression with brotli (note that -Z aka -q 11 is the default) takes ages, and so does compression with zopfli (--i15 is the default): about 11-12 minutes each on my machine. All the other compressors finish in a few seconds, and below 1 minute for xz/7z/bz2 or zstd -9. 7z is also multithreaded, and multithreaded variants of bzip2 exist. dact is slow because it internally selects bzip2 for most blocks as giving the best compression.

In my opinion zstd provides excellent benefits: a good compression ratio, very fast decompression, and practical compression speeds. zstd -19 takes 72 seconds on my machine. I am using the standard precompiled binaries for all tools available in Debian testing.
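Since no script was shared, here is a minimal sketch of the methodology described above. The random payload is a stand-in for the bazel binary, the scratch directory is a stand-in for the memory-backed filesystem, and any of the tools discussed (e.g. `zstd -19`, `xz -9`) can be substituted for `gzip`:

```shell
#!/bin/sh
# Sketch of the benchmark: tar the payload, compress it, then time
# decompress + untar (the "runtime" figure quoted above).
set -e
WORK=$(mktemp -d)
head -c 1048576 /dev/urandom > "$WORK/payload"     # stand-in for the binary
tar -C "$WORK" -cf "$WORK/payload.tar" payload

# Compress; swap in another tool (zstd, xz, bzip2, ...) to compare.
gzip -9 -k "$WORK/payload.tar"
ls -l "$WORK/payload.tar" "$WORK/payload.tar.gz"

# Wall-clock decompression + extraction into a fresh directory.
mkdir "$WORK/out"
time sh -c "gzip -dc '$WORK/payload.tar.gz' | tar -x -C '$WORK/out' -f -"
cmp "$WORK/payload" "$WORK/out/payload" && echo round-trip OK
rm -rf "$WORK"
```

Note that random data barely compresses, so the ratio here is meaningless; the script only illustrates the timing setup, which is why a real binary was used for the numbers above.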
Plus: I can test repacking all the jars into other formats and then reassembling the original content on extraction.
Thanks for testing! Do you happen to have a script handy to do that? If so, please rerun on a bazel built from HEAD (with the minimal JDK, after commit a7f07cb).
Scratch that. We broke the build and the commit was rolled back. I'll update the bug once this is in a testable state again.
We reduced size at HEAD significantly (~70MB now), so we might want to rerun those numbers.
We have no plans to replace the compression algo at this point, so I am closing this.
Different compression algorithms may affect binary size, decompression and extraction speed.
Things to try: brotli, zopfli, zlib, gz