Rather modest chunk size in gzip.GzipFile #65161
Comments
I've had the opportunity to use the seek() method of the gzip.GzipFile class for the first time in the past few days. Wondering why my processing times seemed so slow, I took a look at the code for seek() and read(). The chunk size for reading (1024 bytes) seems rather small. I created a simple subclass that overrode just seek() and read(), and defined a CHUNK_SIZE of 16 * 8192 bytes (the whole idea of compressing files is that they get large, right? It seems like most of the time we will want to seek pretty far through the file).

Over a small subset of my inputs, I measured about a 2x decrease in run times, from about 54s to 26s. I ran using both gzip.GzipFile and my subclass several times, measuring the last four runs (two using the stdlib implementation, two using my subclass). I measured the total time of the run, the time to process each input record, and the time to execute just the seek() call for each record. The bulk of the per-record time was in the call to seek(), so by reducing that time I sped up my run times significantly.

I'm still using 2.7, but other than the usual 2.x->3.x changes, the code looks pretty much the same between 2.7 and (at least) 3.3, and the logic involving the read size doesn't seem to have changed at all. I'll try to produce a patch if I have a few minutes, but in the meantime, I've attached my modified GzipFile class (produced against 2.7).
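Roughly, the subclass does something like this (a minimal sketch using only the public file API and Python 3 syntax; the attached class was written against 2.7 and works on the module internals instead, and the class name here is made up):

```python
import gzip

class BigChunkGzipFile(gzip.GzipFile):
    # 16 * 8192 = 128 KiB per read, versus the stdlib's 1024-byte steps.
    CHUNK_SIZE = 16 * 8192

    def seek(self, offset, whence=0):
        if whence == 2:
            raise ValueError('Seek from end not supported')
        if whence == 1:  # relative seek
            offset = self.tell() + offset
        if offset < self.tell():
            # gzip streams can't step backwards; rewind and re-read.
            super().seek(0)
        remaining = offset - self.tell()
        while remaining > 0:
            data = self.read(min(self.CHUNK_SIZE, remaining))
            if not data:  # hit EOF before reaching the target offset
                break
            remaining -= len(data)
        return self.tell()
```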
You should try with different chunk and file sizes and see what the best compromise is. Tagging as "easy" in case someone wants to put together a small script to benchmark this (maybe it could even be added to http://hg.python.org/benchmarks/), or even a patch.
Here's a straightforward patch. I didn't want to change the public API of the module, so I just defined the chunk size with a leading underscore. Gzip tests continue to pass.
I played around with different file and chunk sizes using the attached benchmark script. After several test runs I think 1024 * 16 would be the biggest win without losing too many μs on small seeks. You can find my benchmark output here: https://gist.github.com/tiwilliam/11273483

My test data was generated with the following commands:

dd if=/dev/random of=10K bs=1024 count=10
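The gist above has the full numbers; the script was roughly of this shape (a sketch rather than the attached script — the input file name matches the dd command, everything else is illustrative):

```python
import gzip
import shutil
import timeit

def compress(src, dst):
    # gzip the raw test file produced by dd
    with open(src, "rb") as f_in, gzip.open(dst, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)

def time_seek(path, offset, repeat=100):
    # average wall time of opening the archive and seeking to `offset`
    def one_seek():
        with gzip.open(path, "rb") as f:
            f.seek(offset)
    return min(timeit.repeat(one_seek, number=repeat, repeat=3)) / repeat

compress("10K", "10K.gz")
print("seek to 5 KiB: %.6f s" % time_seek("10K.gz", 5 * 1024))
```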
William, thanks for the benchmarks. Unfortunately this type of benchmark depends on the hardware (disk, SSD, memory bandwidth, etc.). So I'd suggest, instead of using a hardcoded value, to simply reuse io.DEFAULT_BUFFER_SIZE.
That makes sense. I proceeded and updated the patch to use io.DEFAULT_BUFFER_SIZE.

PS: This is my first patch, tell me if I'm missing something.
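In sketch form, the change amounts to the following (the constant name is an assumption based on the "leading underscore" note above):

```python
import io

# First patch: a module-private constant, so the public API is unchanged.
#   _CHUNK_SIZE = 1024 * 16
# Updated patch: reuse the io module's default (8 KiB on CPython) instead
# of hardcoding a value; GzipFile.seek()/read() then use _CHUNK_SIZE where
# 1024 was hardcoded before.
_CHUNK_SIZE = io.DEFAULT_BUFFER_SIZE
```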
I don't think io.DEFAULT_BUFFER_SIZE makes much sense as a heuristic for the gzip module (or compressed files in general). Perhaps gzip should get its own DEFAULT_BUFFER_SIZE?
Do you mean from a namespace point of view, or from a performance point of view? Sure, it might not be optimal for compressed files, but I guess that ...
Well, I think that compressed files in general would benefit from a ...
I agree. When writing my patch, my (perhaps specious) thinking went like this.
So, if I have a filesystem block size of 8192 bytes, while that would ...

Skip
That could make sense, dunno. Note that the bz2 module uses a hardcoded 8K value. Note that the buffer size should probably be passed to the open() call. Also, the allocation is quite peculiar: it uses an exponentially growing buffer (the code comments it with "# Starts small, scales exponentially"). In short, I think the overall buffering should be rewritten :-)
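The pattern in question looks roughly like this (a simplified sketch, not the actual gzip source):

```python
def read_all(fileobj, initial=1024, cap=1024 * 1024):
    # Starts small, scales exponentially: each pass requests twice as
    # much as the previous one, up to a cap, until EOF.
    chunks = []
    readsize = initial
    while True:
        data = fileobj.read(readsize)
        if not data:
            break
        chunks.append(data)
        readsize = min(readsize * 2, cap)
    return b"".join(chunks)
```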
Perhaps so, but I think we should open a separate ticket for that.

S
Agreed. The patch looks good to me, so feel free to commit!
See also the patch for bpo-23529, which changes over to using BufferedReader for GzipFile, BZ2File and LZMAFile. The current patch there also passes a buffer_size parameter through to BufferedReader, although it currently defaults to io.DEFAULT_BUFFER_SIZE.
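In rough form, the approach looks like this (a sketch of the pattern rather than the bpo-23529 patch itself; the file name is illustrative):

```python
import gzip
import io

# Let io.BufferedReader handle the chunked buffering on top of the
# decompressing stream; buffer_size is the knob the patch threads through.
with gzip.GzipFile("data.gz", "rb") as raw:
    buffered = io.BufferedReader(raw, buffer_size=io.DEFAULT_BUFFER_SIZE)
    buffered.seek(1024 * 1024)   # forward seek through decompressed data
    chunk = buffered.read(4096)
```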
Martin, do you think this is still an issue or has it been fixed by the compression refactor?
The gzip (as well as LZMA and bzip) modules should now use buffer and chunk sizes of 8 KiB (= io.DEFAULT_BUFFER_SIZE) for most read() and seek() type operations. I have a patch that adds a buffer_size parameter to the three compression modules if anyone is interested. It may need a bit of work, e.g. adding the parameter to open(), mimicking the built-in open() function when buffer_size=0, etc.

I did a quick test of seeking 100 MB into a gzip file, using the original Python 3.4.3 module, the current code that uses 8 KiB chunk sizes, and then my patched code with various chunk sizes. It looks like 8 KiB is significantly better than the previous code. My tests peak at about 64 KiB, but I guess that depends on the computer (cache, etc.). Anyway, 8 KiB seems like a good compromise without hogging all the fast memory cache, so I suggest we close this bug.

Command line for timing looked like:

python -m timeit -s 'import gzip' \ ...

Python 3.4.3: 10 loops, best of 3: 2.36 sec per loop
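An equivalent measurement can be made from Python directly (a sketch; the archive name and exact offset are assumptions, since the original command line is truncated above):

```python
import gzip
import timeit

def seek_100mb():
    # open the archive and seek 100 MB into the decompressed stream
    with gzip.open("test.gz", "rb") as f:
        f.seek(100 * 1024 * 1024)

print(min(timeit.repeat(seek_100mb, number=1, repeat=3)), "sec per seek")
```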
Since there doesn't seem to be much interest here any more, and the current code has changed and now uses 8 KiB buffering, I am closing this. Although in theory a buffer or chunk size parameter could still be added to the new code if there were a need.