bz2 module appears slower in Python 3.x versus Python 2.x #60238
Hi, I was writing a script to parse bz2 log files under Python 2.6, and I noticed that bz2file (http://pypi.python.org/pypi/bz2file) seemed to perform much slower than the native bz2 module: http://stackoverflow.com/questions/12575930/is-python-bz2file-slower-than-bz2

I wrote a dummy script that basically just reads through the file, one version for bz2 and one for bz2file (attached). The four timings below are, in order: Python 3.3 bz2, Python 3.3 bz2file, Python 2.6 bz2, Python 2.6 bz2file:

    [vichoo@dev_desktop_vm Desktop]$ time /opt/python3.3/bin/python3.3 testbz2.py > /dev/null
    real    0m5.170s    (3.3, bz2)
    real    0m5.245s    (3.3, bz2file)
    real    0m0.500s    (2.6, bz2)
    real    0m5.801s    (2.6, bz2file)

I also executed "echo 3 > /proc/sys/vm/drop_caches" between each run.

From this, it appears that Python 2.x's bz2 is fast, but bz2file is slow - and that Python 3.x's bz2 is slow. Obviously, there could be an issue with the methodology above - however, if not, do you know if there are any performance regressions in bz2 from Python 2.x to 3.x?

Thanks,
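For reference, a minimal sketch of what such a read-through script could look like (the attached testbz2.py is not reproduced in this thread, so the path and structure here are assumptions):

    import bz2

    # Read straight through a bzip2 file line by line; the explicit
    # try/finally (instead of a with-statement) lets the same script
    # run under both 2.6 and 3.3.
    f = bz2.BZ2File("logfile.bz2", "rb")  # placeholder path
    try:
        for line in f:
            pass  # parsing elided; only read throughput is measured
    finally:
        f.close()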
It looks as if bz2 in Python 3.3 has bad buffering. Reading in larger chunks shows the same speed on 2.7 and 3.3.
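A quick sketch of what "reading in larger chunks" means here (the 64 KiB block size and the path are arbitrary choices for illustration):

    import bz2

    f = bz2.BZ2File("logfile.bz2", "rb")
    try:
        while True:
            chunk = f.read(65536)  # one large read instead of many tiny ones
            if not chunk:
                break
    finally:
        f.close()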
Well, I was able to restore performance (using the same code as in zipfile). The patch will follow later.
Hi, I didn't have any buffering size set before, so I believe it defaults to 0 (no buffering), right? Wouldn't this be the behaviour on both 2.x and 3.x? I'm using a 1.5 MB bzip2 file - I just tried setting buffering to 1000 and 1000000, and it didn't seem to make any noticeable difference to the speed of reading in the file. E.g.:

    f = bz2.BZ2File(filename, 'rb', buffering=1000000)

What sort of values did you use, relative to your compressed file size, to get the improvements? Cheers,
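(As an aside, not something raised in this thread: the 3.x documentation describes BZ2File's buffering argument as ignored, which would explain why changing it makes no difference. A common workaround is to layer io.BufferedReader on top; a sketch with a placeholder path:)

    import bz2
    import io

    raw = bz2.BZ2File("logfile.bz2", "rb")
    buffered = io.BufferedReader(raw, buffer_size=65536)
    data = buffered.read(10)  # small reads are now served from the buffer
    buffered.close()          # also closes the underlying BZ2File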
Hi, Aha, whoops, sorry Serhiy, didn't see your second message - I think you and I posted our last messages at nearly the same time... Cool, looking forward to your patch =). Also, is there any chance you could provide a more detailed explanation of what's going on? This is just me being curious about how it all works under the hood... Cheers,
It will take some time to make a complete patch. I don't have much time *right* now. Wait a few hours.
When reading from the buffer, bz2 does:

    result = buffer[:size]
    buffer = buffer[size:]    # copies thousands of bytes on every read

whereas zipfile does:

    result = buffer[offset:offset+size]
    offset += size            # buffer untouched
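A self-contained sketch of why the first pattern is so much slower (illustrative only; this is not the benchmark script from the issue):

    import timeit

    def slice_consume(data, size=1):
        # bz2-style: reslicing copies the remaining tail each time -> O(n**2)
        buf = data
        while buf:
            result = buf[:size]
            buf = buf[size:]

    def offset_consume(data, size=1):
        # zipfile-style: advance an offset, leave the buffer untouched -> O(n)
        offset = 0
        while offset < len(data):
            result = data[offset:offset + size]
            offset += size

    data = b"x" * 100000
    print(timeit.timeit(lambda: slice_consume(data), number=1))
    print(timeit.timeit(lambda: offset_consume(data), number=1))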
Here is a patch and benchmark script. This required more time than I thought. Benchmark results:

    Unpatched:  5.3   read(1)
    Patched:    0.73  read(1)
Patch updated. Fixed one error. Now readline() is optimized too. Benchmark results (reading python.bz2):

                Py2.7   Py3.2   Py3.3 (unpatched)   Py3.3 (patched)
    readline()  1.7     1.7     11                  0.67
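For context, a readline() loop like the one being measured can be timed along these lines (a sketch only; the actual benchmark script is attached to the issue, not shown here):

    import bz2
    import time

    start = time.time()
    f = bz2.BZ2File("python.bz2", "rb")  # the test file named above
    try:
        while f.readline():
            pass
    finally:
        f.close()
    print("readline(): %.2f s" % (time.time() - start))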
New changeset 1a08f4887cff by Nadeem Vawda in branch '3.3':
New changeset cf50a352fe22 by Nadeem Vawda in branch 'default':
Thanks for the bug report, Victor, and thank you, Serhiy, for the patch! Serhiy, would you be OK with me also including this patch in the bz2file package?
Yes, of course. We can even speed up the reading of small chunks by a further 1.5x if we inline _check_can_read() and _read_block(). The same approach applies to LZMAFile.
Awesome. I plan to do a new release for this in the next couple of days.
Interesting idea, but I don't think it's worthwhile. It looks like this is only a noticeable improvement if size is 10 or 1, and I don't think these are common cases (especially not for users who care about performance). Also, I'm reluctant to have two copies of the code for _read_block(); it makes the code harder to read, and increases the chance of introducing a bug when changing the code.
Of course. I'll apply these optimizations to LZMAFile next weekend.
Recursively inlining _check_can_read() will be enough. Currently this check calls four Python functions (_check_can_read(), readable(), _check_not_closed(), closed). Inlining only readable() into _check_can_read() achieves a significant but smaller effect (about 30%).
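Schematically, the inlining under discussion looks like this (modeled on the structure of Lib/bz2.py; the mode constants and error messages here are illustrative, not the literal committed diff):

    # Before: each read goes through a chain of Python-level calls:
    # _check_can_read() -> readable() -> _check_not_closed() -> closed
    def _check_can_read(self):
        if not self.readable():
            raise io.UnsupportedOperation("File not open for reading")

    # After: readable()'s body is folded in, saving call overhead per read.
    def _check_can_read(self):
        if self.closed:
            raise ValueError("I/O operation on closed file")
        if self._mode not in (_MODE_READ, _MODE_READ_EOF):
            raise io.UnsupportedOperation("File not open for reading")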
I've inlined readable() into _check_can_read() [3.3: 4258248a44c7 | default: abb5c5bde872]. This seems like a good balance between maximizing our performance in edge cases and not turning the code into a mess in the process ;) Once again, thanks for your contributions!
In fact, I have tried other code that is a bit faster and more maintainable (see patch).
Ah, nice - I didn't think of that optimization. Neater and faster. I've committed this patch [e6d872b61c57], along with a minor bugfix [7252f9f95fe6], and another optimization for readline()/readlines() [6d7bf512e0c3]. [merge with default: a19f47d380d2] If you're wondering why the Roundup Robot didn't update the issue automatically, it's because I made a typo in each of the commit messages. Apparently 16304 isn't the same as 16034. Who would have thought it? :P
I've released v0.95 of bz2file, which incorporates all the optimizations discussed here. The performance should be similar to 2.x's bz2 in most cases. It is still a lot slower when calling read(10) or read(1), but I hope no one is doing that anywhere performance is important ;-) One other note: bz2file's readline() is faster when running on 3.x than on 2.x (and in some cases faster than the 2.x stdlib version). This is probably due to improvements made to io.BufferedIOBase.readline() since 2.7, but I haven't had a chance to investigate. Let me know if you have any issues with the new release.
New changeset cc02eca14526 by Nadeem Vawda in branch 'default':