A wrapper to protect from decompression bombs #2339
I tested inputs against zlib and gzip decompression bombs. The best compression ratio an attacker can realistically achieve is about 1:1000, so 65 kB of compressed data typically expands to 65 MB. In my testing, a steady 150 kB/s stream of such messages maxed out one i7 core on decompression alone, and heavily increased memory allocations and GC activity. An attacker could probably also trigger an OutOfMemoryError with a surprisingly small amount of simultaneously sent data, unless the amount of available memory is very high.
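To illustrate the ratio claim above, here is a small standalone demonstration (not part of this PR) that deflates a highly repetitive buffer and prints the resulting compression ratio; deflate on uniform data approaches its theoretical maximum of roughly 1:1000.

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class BombRatio {
    public static void main(String[] args) {
        // 64 MB of zeros: the kind of payload a decompression bomb expands to
        byte[] input = new byte[64 * 1024 * 1024];

        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setInput(input);
        deflater.finish();

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();

        // A few tens of kB of compressed data expand back to 64 MB
        System.out.println("compressed: " + out.size()
                + " bytes, ratio 1:" + (input.length / out.size()));
    }
}
```

Run it once to see how little attacker-controlled input is needed to force tens of megabytes of allocation on the receiving side.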
I wrote a simple wrapper for the decompression tools. Since at least 65 kB of compressed data can reach that point, and the implementation is contained in Tools.decompress*, compression-related issues should probably be handled there. The wrapper currently bails out at 2 MB, a size that the next components in line would fail to handle anyway.
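A minimal sketch of the kind of size-limited decompression described above (this is not the PR's actual code; the class and constant names are hypothetical, and the 2 MB cap matches the limit mentioned):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class LimitedDecompress {
    // Hypothetical cap mirroring the 2 MB limit described in the PR
    static final int MAX_DECOMPRESSED = 2 * 1024 * 1024;

    static byte[] decompress(byte[] compressed) throws DataFormatException, IOException {
        Inflater inflater = new Inflater();
        inflater.setInput(compressed);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        while (!inflater.finished()) {
            int n = inflater.inflate(buf);
            if (n == 0 && inflater.needsInput()) break; // truncated input
            out.write(buf, 0, n);
            // Bail out as soon as the output exceeds the cap, instead of
            // inflating the whole bomb into memory first
            if (out.size() > MAX_DECOMPRESSED) {
                inflater.end();
                throw new IOException("decompressed size exceeds " + MAX_DECOMPRESSED + " bytes");
            }
        }
        inflater.end();
        return out.toByteArray();
    }

    // Helper for demonstration only: zlib-compress a buffer
    static byte[] compress(byte[] data) {
        Deflater d = new Deflater();
        d.setInput(data);
        d.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        while (!d.finished()) {
            out.write(buf, 0, d.deflate(buf));
        }
        d.end();
        return out.toByteArray();
    }
}
```

The key design point is checking the running output size inside the inflate loop, so a bomb is rejected after at most one buffer's worth of work past the limit.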
On ordinary message sizes there is no performance hit. On larger messages I saw a 40x+ speedup once the decompression size limit stopped the function from doing pointless work.
The implementation is simple, because there do not seem to be other uses for Tools.decompress* at the moment, and it did not seem worthwhile to build something more refined.
The tests are based on pre-made test files. I tried generating the bombs on the fly with DeflaterOutputStream and GZIPOutputStream, but that took ~50 seconds to run on my laptop:
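For reference, a one-off generator along these lines could produce such a test file ahead of time, so the test run itself only reads a small file (file name and sizes are illustrative, not from this PR):

```java
import java.io.FileOutputStream;
import java.util.zip.GZIPOutputStream;

public class MakeBombFile {
    public static void main(String[] args) throws Exception {
        byte[] chunk = new byte[1024 * 1024]; // 1 MB of zeros per write
        // Writes a gzip file that decompresses to 64 MB but stays tiny on disk
        try (GZIPOutputStream gz = new GZIPOutputStream(new FileOutputStream("bomb.gz"))) {
            for (int i = 0; i < 64; i++) {
                gz.write(chunk);
            }
        }
        System.out.println("bomb.gz: " + new java.io.File("bomb.gz").length() + " bytes");
    }
}
```

Generating the file once and committing it avoids paying the compression cost on every test run.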