Cannot use both read and readline method in same ZipExtFile object #51859
I opened a file inside a zip archive through the ZipFile.open method. I was trying to get a ZipExtFile and pass it to pickle.load(f), but invoking read() after readline() returns the wrong data. The reason is that readline() keeps a line buffer, while read() only uses the raw buffer, so it skips whatever readline() has already buffered.
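A minimal sketch of the scenario being reported, using an in-memory archive (the member name is made up). On a fixed Python the two calls partition the stream cleanly:

```python
import io
import zipfile

# Build a small in-memory archive with a known text member.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("data.txt", b"first line\nsecond line\nthird line\n")

with zipfile.ZipFile(buf) as zf:
    with zf.open("data.txt") as f:
        line = f.readline()  # consumes b"first line\n"
        rest = f.read()      # must continue exactly where readline() stopped

# With the reported bug, read() ignored readline()'s internal line buffer;
# after the fix the two calls together yield the member's full content.
assert line + rest == b"first line\nsecond line\nthird line\n"
```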
I uploaded a possible patch for Python 2.7. The patch converts ZipExtFile into a subclass of io.BufferedIOBase and drops most of the original implementation. However, the patch breaks the current newline behavior of ZipExtFile. I figured this was reasonable because of zipfile's newline behavior: reading zip content as text with newline handling can be done with io.TextIOWrapper. A bonus of subclassing io.BufferedIOBase is significantly better read performance.
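As a hedged illustration of that suggestion, wrapping the ZipExtFile in io.TextIOWrapper handles both decoding and universal newlines (member name and contents are made up):

```python
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("log.txt", b"alpha\r\nbeta\rgamma\n")

with zipfile.ZipFile(buf) as zf:
    # ZipExtFile yields bytes; the wrapper decodes and translates newlines.
    with io.TextIOWrapper(zf.open("log.txt"), encoding="utf-8") as text:
        lines = text.readlines()

# With the default newline=None, \r\n, \r and \n all become "\n".
assert lines == ["alpha\n", "beta\n", "gamma\n"]
```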
I don't think we can remove the "U" option from 2.6/2.7; it was certainly introduced for a reason and isn't inconsistent with the "U" option to the built-in open(). On 3.x, the behaviour is indeed inconsistent with the standard IO library, so maybe we can either wrap the object in a TextIOWrapper, or remove support for "U".
Right, I was reading the 3.1 docs by mistake. I updated the patch; this time universal newlines are supported. On my dataset (a 75 MB, 650K-line log file) the readline() speedup is 40x for 'r' mode and 8x for 'rU' mode, and you can get an extra bump by using the io module wrappers. I added tests for interleaved use of readline() and read() (which add ~2 seconds to the running time).
Thank you. Are you sure the "Shortcut common case" in readline() is …
Also, I'm not sure what happens in readline() in universal mode when the …
On my dataset the shortcut speeds up readline() by 400% on top of the default C implementation. I can take a look at why the C implementation is slow (although it is documented as "slowish").
The regular pattern returns either a line chunk or a newline (sequence) but not both. To read a line there is therefore a minimum of two peek() calls: one for the line content and one for the newline. Since peek() is called with a value of 2, the newline sequence \r\n should be retrieved as is. There is no test for that.
No, it doesn't follow. The \r can still appear at the end of a readahead, in which case your algorithm will not eliminate the following \n. That is, if the sequence of readaheads is ['a\r', '\nb\n'], readlines() will return ['a\n', '\n', 'b\n'] while it should return ['a\n', 'b\n']. It should be possible to construct a statistically valid test case for this, for example by creating an archived file containing 'a\r\n'*10000 and reading back from it.
If the sequence of readaheads is ['a\r', '\nb\n'], the first use of the pattern will consume 'a', then the peek(2) will trigger a read() and the next use of the pattern will consume '\r\n'. I updated the patch, enhanced the inline comments a little to clear this up, and added the suggested test. I also ventured into a rewrite of the read() function, since it seems to contain various latent bugs and problems, probably the result of ages of slow refactoring. For example, the decompressor flush() depends on the self.eof flag, which is never set, and the decrypter is fed by the raw buffer which is in turn set by the decompressor, thus theoretically feeding decrypted data back to the decrypter. I also made it completely buffered, so now small reads are significantly faster. If this patch is good, I'll prepare a patch for Python 3.x and 2.6.
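A sketch of the regression test suggested above. Python 3's ZipFile.open no longer accepts a 'U' mode, so this version gets universal-newline translation from io.TextIOWrapper instead; the member name is made up:

```python
import io
import zipfile

# Member whose \r\n pairs will inevitably straddle internal read chunks.
payload = b"a\r\n" * 10000

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:  # needs zlib
    zf.writestr("crlf.txt", payload)

with zipfile.ZipFile(buf) as zf:
    with io.TextIOWrapper(zf.open("crlf.txt"), encoding="ascii") as text:
        lines = text.readlines()

# A correct implementation never splits a \r\n pair into two lines.
assert len(lines) == 10000
assert all(line == "a\n" for line in lines)
```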
Some comments:
I uploaded an update for Python 2.7.
sys.maxsize is a 64-bit number on my system, but the maximum value accepted by zlib's decompress() seems to be INT_MAX, defined in pyport.h, which equals the number I used.
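A small illustration of the cap being discussed (the constant name INT_MAX_CAP is made up). It shows why a buffered reader clamps the length it passes to zlib's decompress() instead of forwarding sys.maxsize:

```python
import zlib

# zlib's decompress(max_length=...) is bounded by the C int range, so a
# reader caps each request instead of passing sys.maxsize straight through.
INT_MAX_CAP = 2 ** 31 - 1  # INT_MAX from pyport.h, as discussed above

comp = zlib.compressobj()
blob = comp.compress(b"x" * 100000) + comp.flush()

decomp = zlib.decompressobj()
n = 10  # whatever the caller asked for, clamped to the safe range
chunk = decomp.decompress(blob, min(n, INT_MAX_CAP))

assert 0 < len(chunk) <= 10
assert chunk == b"x" * len(chunk)
assert decomp.unconsumed_tail  # leftover compressed input for a later call
```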
Added
Changed (old habits die hard).
Yes, was moved to the constructor.
The documentation of io.BufferedIOBase.read() reads "multiple raw reads may be issued to satisfy the byte count". I understood this language to mean that satisfying the read size is optional. Isn't it?
It's the reverse, actually. It means that …
It may be a good idea to clear this up in the documentation. http://en.wiktionary.org/wiki/may#Verb
I do not find the existing phrasing in the IO docs ambiguous, but since it is obviously possible to misinterpret it, it would be good to clarify it. Can you suggest an alternate phrasing that would be clearer?
Replace 'may' with 'will' or 'shall' everywhere the context indicates a mandatory requirement. Since this possibly affects the entire Python documentation, does it make sense to discuss this on python-dev?
Either that, or open a separate tracker issue. I'm not a documentation …
Uploaded an updated patch with a read() which calls the underlying stream enough times to satisfy the required read size.
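A hedged sketch of that retry loop (read_exactly and Trickle are hypothetical names for illustration, not the patch itself). It mirrors the BufferedIOBase.read() semantics discussed above: keep issuing raw reads until the byte count is satisfied or EOF is reached:

```python
import io

def read_exactly(raw: io.RawIOBase, n: int) -> bytes:
    """Hypothetical helper: issue multiple raw reads until n bytes are
    collected or EOF is hit, like BufferedIOBase.read() is required to."""
    parts = []
    remaining = n
    while remaining > 0:
        chunk = raw.read(remaining)
        if not chunk:  # b"" signals EOF
            break
        parts.append(chunk)
        remaining -= len(chunk)
    return b"".join(parts)

class Trickle(io.RawIOBase):
    """Raw stream that returns at most 3 bytes per read() call."""
    def __init__(self, data):
        self._data = data
    def readable(self):
        return True
    def read(self, n=-1):
        if n < 0 or n > 3:
            n = 3
        chunk, self._data = self._data[:n], self._data[n:]
        return chunk

# Four short raw reads (3 + 3 + 3 + 1) still satisfy a 10-byte request.
assert read_exactly(Trickle(b"abcdefghij"), 10) == b"abcdefghij"
```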
The patch looks rather good. Is …
Right, removed MAX_N from read(); remains in read1(). |
The patch has been committed in r77798 (trunk) and r77800 (py3k). Thank you! |
Nir, could you also provide a test for the part that "handles unconsumed data" (line 601 in zipfile.py)? In r77809 (and r77810) I made a change to avoid using zlib when it's not necessary (zlib is not always available), and I was going to commit a typo that luckily I noticed in the diff. The tests however didn't notice anything because that part is untested. |
The related scenario is a system without zlib. How do you suggest simulating this in test? |
The test should just check that the part that handles unconsumed data works when zlib is available. AFAIU if zlib is not available this part (i.e. the content of the if) can be skipped so it doesn't need to be tested. (When zlib is not available 'zlib' is set to None, see the try/except at the beginning of zipfile.py) |
Unconsumed data is compressed data. If the part which handles unconsumed data did not work when zlib is available, then the existing tests would fail. In any case the unconsumed buffer is an implementation detail of zipfile. I see a point in adding a test to make sure zipfile behaves as expected when zlib is not available, but how? Also, on which systems is zlib missing? I don't see this mentioned in the zlib docs.
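For context, "unconsumed data" is compressed input the decompressor could not use yet because an output limit was hit. A minimal sketch of the feedback loop a reader has to perform, using zlib's decompressobj directly rather than zipfile's internals:

```python
import zlib

comp = zlib.compressobj()
blob = comp.compress(b"hello world " * 1000) + comp.flush()

decomp = zlib.decompressobj()
out = decomp.decompress(blob, 64)  # limit output; input is only partly used

# Unused compressed input lands in unconsumed_tail; a reader must feed it
# back to the decompressor on the next call, or bytes are silently lost.
while not decomp.eof:
    out += decomp.decompress(decomp.unconsumed_tail, 64)

assert out == b"hello world " * 1000
```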
If the existing tests end up testing that part of the code too, then it's probably fine. I tried to add a print inside the 'if' (at line 605) but it didn't print anything. However, some tests are skipped here, so maybe that part of the code is tested in some of those skipped tests.
Several tests are already skipped in test_zipfile.py when zlib is not available (all the ones with the skipUnless decorator).
Even if zlib is available and usually already installed on most systems, on some, like mine (Linux), it is not installed. If you can confirm that that part of the code is already tested in some of the tests that are skipped here, I will close the issue.
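For reference, the skip pattern mentioned above looks like this (DeflateTests is a made-up example class, not an actual test from test_zipfile.py):

```python
import unittest

try:
    import zlib
except ImportError:
    zlib = None  # mirrors the try/except at the top of zipfile.py

class DeflateTests(unittest.TestCase):
    # The decorator skips the test entirely on builds without zlib.
    @unittest.skipUnless(zlib, "requires zlib")
    def test_roundtrip(self):
        data = b"payload" * 10
        self.assertEqual(zlib.decompress(zlib.compress(data)), data)
```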
I actually meant how would you simulate zlib's absence on a system in which it is present? |
The easiest way is to set zlib to None, or not import it at all.
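A hedged sketch of that approach using unittest.mock to patch the module-level name. It assumes zipfile raises RuntimeError when deflate compression is requested while its zlib name is None, which is an implementation detail of current CPython:

```python
import io
import unittest.mock
import zipfile

# Simulate a build without zlib by patching the module-level name that
# zipfile sets to None when its own "import zlib" fails.
with unittest.mock.patch.object(zipfile, "zlib", None):
    try:
        zipfile.ZipFile(io.BytesIO(), "w", zipfile.ZIP_DEFLATED)
    except RuntimeError as exc:
        result = str(exc)
    else:
        result = "no error"

# With zlib "missing", requesting deflate compression fails up front.
assert "zlib" in result
```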
Apparently that part of code is already tested in other tests that use deflated mode, so I'll close this again. Thanks for the info. |