Blacklist base64, hex, ... codecs from bytes.decode() and str.encode() #63818
Comments
I propose to add new input_type and output_type attributes to CodecInfo. These attributes would only be defined for the base64, hex, ... codecs, which are not classic encode: str=>bytes, decode: bytes=>str codecs. I also propose to modify str.encode() and bytes.decode() to only accept codecs using the right types; if the types don't match, the lookup raises a LookupError. This change should prevent the denial of service attack that is possible when a compression codec is used, see:
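To make the threat concrete, here is a minimal sketch of the attack scenario, using codecs.decode() to stand in for what bytes.decode() allowed before this change (the payload size is illustrative):

```python
import codecs
import zlib

# A small compressed payload that expands to ~100 MB. If a server passes an
# attacker-controlled "charset" name to bytes.decode(), a compression codec
# turns a tiny request into a huge allocation.
bomb = zlib.compress(b"\x00" * (100 * 1024 * 1024))
print(len(bomb))                                # roughly 100 KB on the wire
expanded = codecs.decode(bomb, "zlib_codec")    # allocates the full 100 MB
print(len(expanded))
```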
The full input/output type specifications can't be implemented sensibly without also defining at least a ByteSequence ABC. While I think it's a good idea in the long run, there's no feasible way to design such a system in the time remaining before the Python 3.4 feature freeze. However, we could do something much simpler as a blacklist API:

```python
def is_unicode_codec(name):
    """Returns true if this is the name of a known Unicode text encoding"""

def set_as_non_unicode(name):
    """Indicates that the named codec is not a Unicode codec"""
```

The codecs module would then just maintain a set internally of all the names explicitly flagged as non-unicode. Such an API remains useful even if the input/output type support is added in Python 3.5 (since "codecs.is_unicode_codec(name)" is a simpler thing to explain than the exact type restrictions). Alternatively, implementing just the "encodes_to" and "decodes_to" attributes would be enough for str.encode, bytes.decode and bytearray.decode to reject known bad encodings early, leaving the input type checks to the codecs for now (since correctly defining "encode_from" and "decode_from" for many stdlib codecs would need the ByteSequence ABC).
I think an internal blacklist for all standard non-Unicode codecs will be enough to prevent the denial of service attack in maintenance releases.
On 16.11.2013 10:16, Nick Coghlan wrote:

That doesn't look flexible enough to cover the various different codec type combinations. The original idea we discussed some time ago was to add a mapping of supported input/output types to CodecInfo. The codecs module could then make this information available through codecs.types_supported(encoding, input_type, output_type) -> boolean.

Usage:

```python
if not codecs.types_supported(encoding, str, bytes):
    # not a Unicode -> 8-bit codec
    ...
```
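A sketch of how such a helper could be implemented on top of codecs.lookup(); the "types" attribute on CodecInfo is an assumption for illustration, not a real attribute:

```python
import codecs

def types_supported(encoding, input_type, output_type):
    """Hypothetical helper matching the proposal above."""
    info = codecs.lookup(encoding)
    mapping = getattr(info, "types", None)   # assumed attribute
    if mapping is None:
        return True   # unspecified: fall back to "try it and see"
    return (input_type, output_type) in mapping
```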
Note that users can completely blacklist any codec that hasn't been imported yet by preventing imports of that codec definition:

```pycon
>>> import sys, encodings
>>> blocked_codecs = "bz2_codec", "zlib_codec"
>>> for name in blocked_codecs:
...     sys.modules["encodings." + name] = None
...     setattr(encodings, name, None)
...
>>> b"payload".decode("bz2_codec")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
LookupError: unknown encoding: bz2_codec
>>> b"payload".decode("zlib_codec")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
LookupError: unknown encoding: zlib_codec
```

Add in an "encodings._cache.clear()" and you can also block the use of previously used codecs. Regardless of what else we do, we should document this so that users know how to do it. This means the case we're handling in this issue is just the one where we want to block a codec from the builtin method APIs, while still allowing it in the codecs module APIs.
Now that I understand Victor's proposal better, I actually agree with it; I just think the attribute names need to be "encodes_to" and "decodes_to". With Victor's proposal, *input* validity checks (including type checks) would remain the responsibility of the codec itself. What the new attributes would enable is *output* type checks *without having to perform the encoding or decoding operation first*. Codecs will be free to leave these as None to retain the current behaviour of "try it and see".

The specific field names "input_type" and "output_type" aren't accurate, since the acceptable input types for encoding or decoding are likely to be more permissive than the specific output type of the other operation. Most of the binary codecs, for example, accept any bytes-like object as input, but produce bytes objects as output for both encoding and decoding. For Unicode encodings, encoding is strictly str->bytes, but decoding is generally the more permissive bytes-like object -> str.

I would still suggest providing the following helper function in the codecs module (the name has changed from my earlier suggestion, and I now suggest implementing it in terms of Victor's suggestion with more appropriate field names):

```python
def is_text_encoding(name):
    """Returns true if the named encoding is a Unicode text encoding"""
    info = codecs.lookup(name)
    return info.encodes_to is bytes and info.decodes_to is str
```

This approach covers all the current stdlib codecs. It also makes it possible for a type inference engine (like mypy) to potentially analyse codec use, and could be expanded in 3.5 to offer type checked binary and text transform APIs that filter codecs appropriately according to their output types.
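A sketch of the early rejection this would enable in bytes.decode(); the "decodes_to" attribute is the proposed (hypothetical) one, with None meaning "unspecified, try it and see":

```python
import codecs

def _check_text_decoding(encoding):
    """Hypothetical early check for bytes.decode() under this proposal."""
    info = codecs.lookup(encoding)
    decodes_to = getattr(info, "decodes_to", None)   # proposed attribute
    if decodes_to is not None and decodes_to is not str:
        raise LookupError("%r is not a text encoding" % encoding)
    return info
```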
On 16.11.2013 13:44, Nick Coghlan wrote:

Nick, you are missing an important point: codecs can have any combination of input and output types. For this reason the simplistic approach with just one declared type per operation is not flexible enough.
Such codecs can be represented (for 3.4) by simply not setting the attribute and leaving the output types unspecified. We don't need that complexity for the standard library, and the "not specified" escape hatch means complex codecs will be no worse off than they are now.

The elegance of Victor's proposal is that it doesn't lock us out of solving the more complex cases later (where the codec's output type depends on the input type) by permitting a tuple or dict mapping input types to output types for "encodes_to" and "decodes_to", while still solving all of our immediate problems. This is especially relevant since we can't effectively represent codec input types until we have a ByteSequence ABC to cover the "bytes-like object" case, so demanding that the general case be handled immediately is the same as requesting that the feature be postponed completely to Python 3.5.
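A sketch of what that future extension could look like; every name here is an illustrative assumption, not a real CodecInfo attribute:

```python
# A plain type (or None) covers the common cases; a dict covers
# type-preserving codecs like rot13, where output type tracks input type.
rot13_encodes_to = {str: str, bytes: bytes}   # hypothetical declaration

def output_type(declared, input_type):
    """Resolve a declared output type for a given input type."""
    if isinstance(declared, dict):
        return declared.get(input_type)   # None if the pairing is unknown
    return declared                       # a single type, or None

print(output_type(rot13_encodes_to, str))   # <class 'str'>
print(output_type(bytes, str))              # <class 'bytes'> (simple case)
```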
On 16.11.2013 14:37, Nick Coghlan wrote:

I don't agree. The mapping API is not much more complex than the two-attribute one. Rather than adding the ABC now, you could simply add all the relevant concrete types to the mapping. BTW: I don't see a need to rush any of this. If more discussion is needed, the feature can wait for 3.5.
The only reasonable way to accurately represent "anything that exposes a buffer memoryview can read" as a type check is to write an appropriately duck-typed ABC. You can't enumerate all the types that the binary codecs accept as input, because that list of types isn't finite (unlike the output types, which are far more tightly constrained).

I'd also be fine with Serhiy's suggestion of a private "non Unicode codec" set that is maintained by hand and checked *before* the codec operations in the codec methods - that then just becomes an internal implementation detail to improve the efficiency of the output type checks where we have the additional info needed to save the interpreter some work.
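A minimal sketch of the duck-typed probe such an ABC would have to wrap; there is no way to enumerate buffer-exporting types, only to try taking a view:

```python
def is_bytes_like(obj):
    """Return True if obj exposes a buffer that memoryview can read."""
    try:
        memoryview(obj)
    except TypeError:
        return False
    return True

print(is_bytes_like(b"data"), is_bytes_like(bytearray(2)), is_bytes_like("text"))
# True True False
```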
On 16.11.2013 15:52, Nick Coghlan wrote:

Theoretically, yes. However, in practice, you'd only be interested in a handful of common input types.

For 3.4 that would also do fine :-)
Given the time frame, how about we just go with Serhiy's suggestion of a "known non-Unicode codec" internal blacklist for both 3.3 and 3.4? I still like the idea of exposing codec type maps for introspection, but designing a decent API for that which also handles type-preserving codecs is going to take some work, and can't realistically be included in 3.4. By my count, if we delay the blacklisting until after we do the codec lookup, there are only seven names we need to block:

```pycon
>>> from codecs import lookup
>>> blacklist = "base64 uu quopri hex bz2 zlib rot13".split()
>>> for name in blacklist:
...     print(lookup(name).name)
...
base64
uu
quopri
hex
bz2
zlib
rot-13
```
On 19.11.2013 12:38, Nick Coghlan wrote:

+1
Attached is a proof of concept for the blacklist approach (for 3.4, but without the fixes needed for the transform codec handling tests in test_codecs). This does have the potential to add a reasonable amount of additional overhead to encoding and decoding for short strings. Since it isn't obvious where to store a set for faster checking against the blacklist, it may be worth benchmarking this naive approach before doing something more complicated. Regardless, I don't plan to take this further myself any time soon - I just wanted to give the issue a firm nudge in the direction of the blacklist approach by providing a proof of concept.
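One quick way such a benchmark could be run (numbers are machine- and build-dependent; the choice of encodings is an assumption: utf-8 takes the C fast path, cp437 goes through the full codec machinery where the blacklist check would live):

```python
import timeit

# Compare results with and without the proof-of-concept patch applied.
for stmt in ('"a".encode("utf-8")', '"a".encode("cp437")'):
    print(stmt, "->", timeit.timeit(stmt, number=10**6), "seconds")
```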
> + /* A set would be faster, but when to build it, where to store it? */

This is slow and not future proof. It would be faster and simpler to have two registries: one used only by bytes.decode()/str.encode(), and another for "custom codecs" used by codecs.encode/decode (or (bytes|str).transform()/untransform()). So "abc".encode("rot13") would simply fail with a LookupError.
Future proofing is irrelevant at this point - this is just about what we can reasonably do before the 3.4 feature freeze.

There is no "codec registry" - there is only the default codec search function and its cache of previous lookups.

It sounds to me like you still believe it is possible to stick the transform codecs in a completely separate namespace from the text encodings; it isn't.
Blacklisting by name is slow, and it prevents a user from defining a codec with a blacklisted name. What if we just add a private attribute ("_not_text"?) to unsafe codecs? If a codec has this attribute, then it should not be used in text encoding/decoding. Checking an attribute is much faster than comparing against a couple of strings. Another possibility is inheriting all unsafe codecs from a special class.
Yes, a private attribute on CodecInfo is probably better - the rest of the approach can stay the same.
New patch for 3.4 that uses a private attribute on CodecInfo and a private class method to set it appropriately (as I believe that is a better approach than changing the signature of CodecInfo.__init__ at this point, especially if we end up pursuing the codec type map idea in 3.5). This version also updates the tests to check for the appropriate error messages. The integration into the text model related methods is the same as in the proof of concept: a parallel private text-encoding-only C API that is used in preference to the general purpose codec machinery where appropriate. If there aren't any objections to this approach, I'll commit this one tomorrow.
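A Python-level sketch of the consumer-side check that private C API performs (the helper name and message are illustrative; "_is_text_encoding" is the attribute the patch actually uses):

```python
import codecs

def _lookup_text_encoding(encoding, alternate_command):
    """Look up a codec, rejecting known non-text codecs before any
    data is processed."""
    info = codecs.lookup(encoding)
    if not getattr(info, "_is_text_encoding", True):
        raise LookupError(
            "%r is not a text encoding; use %s to handle arbitrary codecs"
            % (encoding, alternate_command))
    return info

# e.g. str.encode would call _lookup_text_encoding(name, "codecs.encode()")
```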
Why

```python
return codecs.CodecInfo._declare_transform(
    name='base64',
    encode=base64_encode,
    decode=base64_decode,
    incrementalencoder=IncrementalEncoder,
    incrementaldecoder=IncrementalDecoder,
    streamwriter=StreamWriter,
    streamreader=StreamReader,
)
```

instead of

```python
codec = codecs.CodecInfo(
    name='base64',
    encode=base64_encode,
    decode=base64_decode,
    incrementalencoder=IncrementalEncoder,
    incrementaldecoder=IncrementalDecoder,
    streamwriter=StreamWriter,
    streamreader=StreamReader,
)
codec._is_text_encoding = False
return codec
```

? I have added other minor comments on Rietveld.
I used the private class method to minimise the per-codec impact (1 line per codec instead of 3). Your other suggestions look good, so I'll apply those before committing.
I also prefer the private attribute option.
Victor pointed out this should now raise LookupError rather than TypeError. However, I'm not going to duplicate the manipulation of the private attribute anywhere else.
interp->codec_search_cache can be seen as the "registry". If you store codecs in two different registries depending on a property, attribute, whatever, you keep O(1) complexity (no extra strcmp or attribute lookup on each call). The overhead only occurs when a codec is loaded for the first time.

It should not be so hard to add a second dictionary. You don't need to touch all parts of the codecs machinery, only interp->codec_search_cache. A name could not appear in both registries. So codecs.lookup() would still return any kind of codec; it would just look in two dictionaries instead of one. So codecs.encode/decode would be unchanged too (if you want to keep these functions ;-)). Only bytes.decode/str.encode would be modified to look only in the "text codecs" registry.

Yet another option: add a new dictionary, but leave interp->codec_search_cache unchanged. Text codecs would then be registered twice: once in interp->codec_search_cache, once in the second dictionary. So bytes.decode/str.encode would only look in the text codecs dictionary instead of interp->codec_search_cache. That's all ;-)

If you accept raising a LookupError, doesn't the "two registries" option become more attractive?
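A Python-level analogue of the second option, for illustration only (the real cache is the per-interpreter C-level dict interp->codec_search_cache):

```python
_all_codecs = {}    # consulted by codecs.encode()/codecs.decode()
_text_codecs = {}   # consulted only by str.encode()/bytes.decode()

def _cache_codec(info):
    _all_codecs[info.name] = info
    if getattr(info, "_is_text_encoding", True):
        _text_codecs[info.name] = info   # text codecs registered twice

def _lookup_for_text(name):
    try:
        return _text_codecs[name]
    except KeyError:
        raise LookupError("unknown text encoding: %s" % name) from None
```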
Victor, you can propose whatever you like for 3.5, but I'm not adding new interpreter state this close to the 3.4 feature freeze.
I'll be committing this version after a final local run of "make test" and a refleak check on test_codecs, test_charmapcodec and test_unicode (the latter two are the ones that found the backwards compatibility issue with the attribute lookup).
Nick: I've had a look at your second patch. A couple of notes:

Regarding Victor's suggestion to use a separate registry dict: the blacklisting is a very special case, only used for the convenience methods on str, bytes and bytearray, so a separate registry doesn't seem warranted.
With blacklisted transform codecs, I'm fine with the idea of restoring the codec aliases for transform codecs in Python 3.4. Go ahead Nick.

For Python 3.5, a better solution should be found to declare transform codecs, and I would also like to add transform()/untransform() methods on the bytes and str types, so you would have 4 APIs. But only a few developers (only me?) are interested in transform/untransform, so codecs.encode/codecs.decode might be enough.
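For reference, the proposed methods were never added; the module-level functions they would have wrapped already cover the same ground:

```python
import codecs

data = codecs.encode(b"payload", "zlib_codec")   # ~ bytes.transform("zlib")
back = codecs.decode(data, "zlib_codec")         # ~ bytes.untransform("zlib")
assert back == b"payload"
```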
Yay, thanks Victor! Regarding UTF-8 et al, the existing shortcuts in unicodeobject.c already bypass the full codec machinery for the most common encodings.
There are still platforms using locale encodings different than ASCII or UTF-8, so the shortcuts don't always apply. I proposed to remove the new code for exception handling to simplify the patch.
Just noting the exact list of codecs that currently bypass the full codec machinery and go direct to the C implementation by normalising the codec name (which includes forcing to lowercase) and then using strcmp to check against a specific set of known encodings.

In PyUnicode_Decode (and hence bytes.decode and bytearray.decode): utf-8, utf-16, utf-32, ascii and latin-1.

In PyUnicode_AsEncodedString (and hence str.encode), the list is mostly the same, but utf-16 and utf-32 are not accelerated (i.e. they're currently still looked up through the codec machinery). It may be worth opening a separate issue to restore the consistency between the lists by adding utf-16 and utf-32 to the fast path for encoding as well.

As far as the wrapping mechanism from issue bpo-17828 itself goes:
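A Python-level illustration of the name-normalisation fast path described above; the exact normalisation rules here are an assumption, not the actual C code:

```python
# Normalise the requested name, then compare against a small known set
# before touching the general codec machinery.
_FAST_DECODE = {"utf-8", "utf-16", "utf-32", "ascii", "latin-1"}

def _uses_fast_path_for_decode(encoding):
    normalized = encoding.strip().lower().replace("_", "-")
    return normalized in _FAST_DECODE

print(_uses_fast_path_for_decode("UTF_8"))   # True
```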
The examples from the 3.4 What's New may make it clearer why the exception wrapping in the codec machinery is much broader in scope than just detecting output type errors (even though handling that case was the original motivation for the idea):

```pycon
>>> codecs.decode(b"abcdefgh", "hex")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
binascii.Error: Non-hexadecimal digit found

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
binascii.Error: decoding with 'hex' codec failed (Error: Non-hexadecimal digit found)

>>> codecs.encode("hello", "bz2")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'str' does not support the buffer interface

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: encoding with 'bz2' codec failed (TypeError: 'str' does not support the buffer interface)
```
Unassigning this one - I don't think the solution we used for 3.4 is appropriate in a maintenance release, but I'm not sure how else to resolve it.
What about my comments in msg203841? See http://comments.gmane.org/gmane.comp.python.devel/143943 for an example of how hard it is to get rid of private arguments in public functions.
I was planning to fix pydoc to not show private keyword-only arguments.
If people don't pay attention to an explicit warning not to use certain parameters, that's their own responsibility.
New changeset 0e10367c88ce by Zachary Ware in branch 'default':

New changeset f3ec00d2b75e by Nick Coghlan in branch 'default':
Is there still something left to do? The initial issue looks to be fixed. You may open a new issue if you see more things to do.
It isn't fixed in 3.3 yet.
As Serhiy noted, 3.3 is still affected. On the other hand, the approach I used for 3.4 doesn't seem appropriate for a maintenance release.
I'm not sure that the issue should be fixed in 3.3, because the patch is quite large. Do you consider the bug important enough?
Is a bug which makes DoS attacks easy important enough? Every Python HTTP server, mail client or news client is affected.
I don't think that adding an underscored parameter to a public API is the best solution, but we need the fix for 3.3. So here is a patch backporting d68df99d7a57 to 3.3.
Sorry, I missed the patch.

That looks ok to me.

Backporting just the encode/decode changes sounds reasonable to me, since open() isn't likely to be a DoS vector in this particular case.
New changeset 95386bbf9471 by Serhiy Storchaka in branch '3.3':
I think that bpo-20404 should be backported too. It is common to wrap a socket stream with io.TextIOWrapper, and socket.makefile() returns a text stream by default.
New changeset 9975f827eefd by Serhiy Storchaka in branch '3.3':

This is fixed now, right?

Both this backport and bpo-20404 will make it into 3.3.5rc2.

New changeset cf6e782a7f94 by Serhiy Storchaka in branch '2.7':