urllib.parse: Allow bytes in some APIs that use string literals internally #54082
As per python-dev discussion in June, many Py3k APIs currently gratuitously prevent the use of bytes and bytearray objects as arguments due to their use of string literals internally. Examples: While a strict reading of the relevant RFCs suggests that strings are the more appropriate type for these APIs, as a practical matter, protocol developers want to be able to operate on ASCII supersets as raw bytes. The proposal is to modify the implementation of these functions such that string literals are used only with string arguments, and bytes literals otherwise. If a function accepts multiple values and a mixture of strings and bytes is passed in then the operation will still fail (as it should).
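A minimal illustration of the failure mode (a hypothetical function, not stdlib code): a str literal used internally silently restricts the function to str input, so bytes arguments break.

```python
def get_scheme(url):
    # The str literal ':' silently restricts this function to str input.
    return url.partition(':')[0]

print(get_scheme('http://example.com'))    # http
try:
    get_scheme(b'http://example.com')
except TypeError as exc:
    print('bytes input fails:', exc)
```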
From the python-dev thread (http://mail.python.org/pipermail/python-dev/2010-September/103780.html):
Passing in byte sequences that are encoded using an ASCII incompatible ...
The design approach (at least for urllib.parse) is to add separate *b APIs that operate on bytes rather than implicitly allowing bytes in the ordinary versions of the function. Allowing programmers to manipulate correctly encoded (and hence ASCII compatible) bytes to avoid decode/encode overhead when manipulating URLs is fine (and the whole point of adding the new functions). Allowing them to *not know* whether they have encoded data or text suitable for display to the user isn't necessary (and is easy to add later if we decide we want it, while taking it away is far more difficult). More detail at http://mail.python.org/pipermail/python-dev/2010-September/103828.html
Attached patch is a very rough first cut at this. I've gone with the basic approach of simply assigning the literals to local variables in each function that uses them. My rationale for that is:
I've also gone with a philosophy that only str objects are treated as strings and everything else is implicitly assumed to be bytes. This is in keeping with the general interpreter behaviour, where we *really* don't support duck-typing when it comes to strings. The test updates aren't comprehensive yet, but they were enough to uncover quite a few things I had missed. Quoting is still a bit ugly, so I may still add a bytes->bytes/str->str variant of those APIs.
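A hypothetical sketch of the literals-as-locals approach the patch takes (not the actual patch code): the literal is chosen once based on an isinstance(x, str) check, and everything that is not str is assumed to be bytes.

```python
def split_params(url):
    # Pick the str or bytes form of the literal up front; only str is
    # treated as text, everything else is assumed to be bytes.
    sep = ';' if isinstance(url, str) else b';'
    head, _, params = url.partition(sep)
    return head, params

print(split_params('/dir;p'))    # ('/dir', 'p')
print(split_params(b'/dir;p'))   # (b'/dir', b'p')
```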
A possible duck-typing approach here would be to replace the "isinstance(x, str)" tests with "hasattr(x, 'encode')" checks instead. Thoughts?
Looks more ugly than useful to me. People wanting to emulate str had better subclass it anyway...
Agreed - I think there's a non-zero chance of triggering the str-path by mistake if we try to duck-type it (I just added a similar comment to bpo-9969 regarding a possible convenience API for tokenisation)
Added to Rietveld: http://codereview.appspot.com/2318041/
One of Antoine's review comments made me realise I hadn't explicitly noted the "why not decode with latin-1?" rationale for the bytes handling. (It did come up in one or more of the myriad python-dev discussions on the topic, I just hadn't noted it here.) The primary reason for supporting ASCII compatible bytes directly is specifically to avoid the encoding and decoding overhead associated with the translation to unicode text. Making that overhead implicit rather than avoiding it altogether would quite miss the point of the API change.
I think it's quite misguided. latin1 encoding and decoding is blindingly ...
Ah, I didn't know that (although it makes sense now I think about it). A general sketch of such a strategy would be to stick the following at the start of the function:

    encode_result = not isinstance(url, str)  # or whatever the main parameter is called
    if encode_result:
        url = url.decode('latin-1')
        # decode any other arguments that need it
        # Select the bytes versions of any relevant globals
    else:
        # Select the str versions of any relevant globals

Then, at the end, do an encoding step. However, the encoding step may ...
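Filled out into runnable form (with a placeholder processing step, since the real body is elided above), the latin-1 strategy might look like this sketch:

```python
def process_url(url):
    # Sketch only: decode any bytes input via latin-1 (lossless for
    # arbitrary bytes), do str-only processing, then re-encode.
    encode_result = not isinstance(url, str)
    if encode_result:
        url = url.decode('latin-1')
    result = url.lower()          # placeholder for the real processing
    if encode_result:
        result = result.encode('latin-1')
    return result

print(process_url('HTTP://X/'))    # http://x/
print(process_url(b'HTTP://X/'))   # b'http://x/'
```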
I don't understand why you would like to implicitly convert bytes to str (which was one of the worst design choices of Python 2). If you don't want to care about encodings, using bytes is fine. Decoding bytes using an arbitrary encoding is the fastest way to mojibake. So you have two choices: create new functions with bytes as input and output (like os.getcwd() and os.getcwdb()), or make the output type depend on the input type (the solution chosen by os.path). Example of the latter:

    >>> os.path.expanduser('~')
    '/home/haypo'
    >>> os.path.expanduser(b'~')
    b'/home/haypo'
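The os.path convention (output type follows input type) can be sketched with a hypothetical helper:

```python
def strip_trailing_sep(path):
    # Output type matches input type, like os.path.expanduser above.
    sep = '/' if isinstance(path, str) else b'/'
    while len(path) > 1 and path.endswith(sep):
        path = path[:-1]
    return path

print(strip_trailing_sep('/home/haypo/'))    # /home/haypo
print(strip_trailing_sep(b'/home/haypo/'))   # b'/home/haypo'
```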
From a function *user* perspective, the latter API (bytes->bytes, str->str) is exactly what I'm doing. Antoine's point is that there are two ways to achieve that: Option 1 (what my patch currently does):
Option 2 (the alternative Antoine suggested and I'm considering):
From outside the function, a user shouldn't be able to tell which approach we're using internally. The nice thing about option 2 is that, to make sure you're doing it correctly, you only need to check three kinds of location:
The effects of option 1 are scattered all over your algorithms, so it's hard to be sure you've caught everything. The downside of option 2 is that if you make a mistake and let your bytes-as-pseudo-str objects escape from the confines of your function, you're going to see some very strange behaviour.
In this case, you have to be very careful not to mix str and bytes decoded to str using a pseudo-encoding. Dummy example: urljoin('unicode', b'bytes') should raise an error. I don't care about the internals if you write tests to ensure that it is not possible to mix str and bytes with the public API.
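This is exactly what the committed behaviour ended up being; in any Python 3.2+ interpreter the mixed case is rejected with a TypeError:

```python
from urllib.parse import urljoin

# Pure str and pure bytes both work, each returning its own type.
print(urljoin('http://a/b', '/c'))    # http://a/c
print(urljoin(b'http://a/b', b'/c'))  # b'http://a/c'

# Mixing the two is rejected instead of silently coercing.
try:
    urljoin('http://a/b', b'/c')
except TypeError as exc:
    print('rejected:', exc)
```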
Yeah, the general implementation concept I'm thinking of going with for option 2 will use a few helper functions:

    url, coerced_to_str = _coerce_to_str(url)

The first helper function standardises the typecheck, the second one complains if it is given something that is already a string. The last one just standardises the check to see if the coercion needs to be undone, and actually undoes the coercion.
As per RDM's email to python-dev, a better way to create the pseudo_str values would be by decoding as ascii with a surrogate escape error handler rather than by decoding as latin-1. |
If you were worried about performance, then surrogateescape is certainly ...
Yeah, I'll have to time it to see how much difference latin-1 vs surrogateescape makes when the MSB is set in any bytes. |
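A hypothetical micro-benchmark for that timing question (not from the thread), comparing the two decoding strategies on data where the MSB is set in some bytes:

```python
import timeit

# Both candidate strategies, timed on bytes with the MSB set.
data = b'http://example.com/\xc3\xa9path' * 100
for codec, errors in (('latin-1', 'strict'), ('ascii', 'surrogateescape')):
    elapsed = timeit.timeit(lambda c=codec, e=errors: data.decode(c, e),
                            number=2000)
    print(codec, errors, round(elapsed, 4))

# Both strategies round-trip the original bytes losslessly:
assert data.decode('latin-1').encode('latin-1') == data
assert data.decode('ascii', 'surrogateescape').encode('ascii', 'surrogateescape') == data
```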
If you were really worried about performance, the bytes type is maybe faster ...
On Tue, Oct 5, 2010 at 5:32 PM, STINNER Victor <report@bugs.python.org> wrote:
I'm fairly resigned to the fact that I'm going to need some kind of ... The first step is to actually have a str-based patch to compare to the ...
I wonder if Option 2 (ascii+surrogateescape vs latin1) is only about ...
I've been pondering the idea of adopting a more conservative approach here, since there are actually two issues:
I'm wondering, since encoding (aside from quoting) isn't urllib.parse's problem, maybe what I should be looking at doing is just handling bytes input via an implicit ascii conversion in strict mode (and then conversion back when the processing is complete). Then bytes manipulation of properly quoted URLs will "just work", while improperly quoted URLs will fail noisily. This isn't like email or http where the protocol contains encoding information that the library should be trying to interpret - we're just being given raw bytes without any context information.

If any application wants to be more permissive than that, it can do its own conversion to a string and then use the text-based processing. I'll add "encode" methods to the result objects to make it easy to convert their contents from str to bytes and vice-versa.

I'll factor out the implicit encoding/decoding such that if we decide to change the model later (ASCII-strict, ASCII-escape, latin-1) it shouldn't be too difficult.
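The strict-ASCII model is easy to demonstrate in isolation (an illustrative sketch, not code from the patch): properly quoted URLs are pure ASCII, so strict decoding "just works", while improperly quoted ones fail noisily instead of silently producing mojibake.

```python
# A properly percent-quoted URL vs the same URL with raw UTF-8 bytes.
quoted = b'http://host/caf%C3%A9'
unquoted = b'http://host/caf\xc3\xa9'

print(quoted.decode('ascii'))        # fine: already ASCII-clean
try:
    unquoted.decode('ascii')
except UnicodeDecodeError as exc:
    print('rejected:', exc)
```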
Attached a second version of the patch. Notable features:
The actual coercion-to-str technique I used standardises the type consistency check for the attributes and also returns a callable that handles the necessary coercion of any results. The parsed/split result objects gain encode/decode methods to allow that all to be handled polymorphically (although I think the decode methods may actually be redundant in practice).

There's a deliberate loophole in the type consistency checking to allow the empty string to be type-neutral. Without that, the scheme='' default argument to urlsplit and urlparse becomes painful (as do the urljoin shortcuts for base or url being the empty string).

Implementing this was night and day compared to the initial attempt that tried to actually manipulate bytes input as bytes. With that patch, it took me multiple runs of the test suite to get everything working. This time, the only things I had to fix were typos and bugs in the additional test suite enhancements. The actual code logic for the type coercions worked the first time.
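These result-object methods became part of the committed API and can be exercised directly in Python 3.2+:

```python
from urllib.parse import urlsplit

# str results gain an encode() method, bytes results a decode() method.
parts = urlsplit('http://host/path?q=1')
bparts = parts.encode()              # the bytes flavour of the result
print(bparts.netloc)                 # b'host'
assert bparts.decode() == parts      # round-trips back to the str result

# The empty-string loophole: the scheme='' default stays usable even
# though the positional argument is bytes.
print(urlsplit(b'http://host/path').scheme)   # b'http'
```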
Unless I hear some reasonable objections within the next week or so, I'm going to document and commit the ascii-strict coercion approach for beta 1. The difference in code clarity is such that I'm not even going to try to benchmark the two approaches against each other. |
Just a note for myself when I next update the patch: the 2-tuple returned by defrag needs to be turned into a real result type of its own, and the decode/encode methods on result objects should be tested explicitly. |
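The change described in this note landed, and can be checked directly against the urllib.parse API as it exists in Python 3.2+:

```python
from urllib.parse import urldefrag

# urldefrag returns a named result type (DefragResult) rather than a
# plain 2-tuple, matching urlsplit and urlparse.
result = urldefrag('http://host/page#section')
print(result.url)        # http://host/page
print(result.fragment)   # section
assert result == ('http://host/page', 'section')  # still tuple-compatible

# Bytes input gives the bytes flavour of the same result type.
bresult = urldefrag(b'http://host/page#section')
print(bresult.fragment)  # b'section'
```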
Related issue in msg120647. |
urlunparse(url or params = bytes object) produces a result:

    urllib.parse.urlunparse(['http', 'host', '/dir', b'params', '', ''])
    --> "http://host/dir;b'params'"

That's confusing since urllib/parse.py goes to a lot of trouble to ...

    Index: Lib/urllib/parse.py
    @@ -219,5 +219,5 @@ def urlunparse(components):
         scheme, netloc, url, params, query, fragment = components
         if params:
    -        url = "%s;%s" % (url, params)
    +        url = ';'.join((url, params))
         return urlunsplit((scheme, netloc, url, query, fragment))

Some people at comp.lang.python tell me code shouldn't anyway do str() just in case it is needed like urllib does, not that I can make much sense of that discussion. (Subject: harmful str(bytes)). BTW, the str vs bytes code doesn't have to be quite as painful as in ...
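The str(bytes) pitfall behind the confusing output, and why the suggested join() form fails loudly instead, can be demonstrated on its own:

```python
# %-formatting a bytes object into a str calls str() on it, embedding
# the repr; that is what produced "/dir;b'params'" above.
url, params = '/dir', b'params'
print("%s;%s" % (url, params))       # /dir;b'params'

# The suggested join() form fails loudly on mixed types instead:
try:
    ';'.join((url, params))
except TypeError as exc:
    print('rejected:', exc)
```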
New patch which addresses my last two comments (i.e. some basic explicit tests of the encode/decode methods on result objects, and urldefrag returns a named tuple like urlsplit and urlparse already did). A natural consequence of this patch is that mixed arguments (as in the message above) will be rejected with TypeError. Once I figure out what the docs changes are going to look like, I'll wrap this all up and commit it. |
Committed in r86889. The docs changes should soon be live at: ...

If anyone would like to suggest changes to the wording of the docs for post beta1, or finds additional corner cases that the new bytes handling can't cope with, feel free to create a new issue.