Encoding a unicode with unicode() and ignoring errors #63063
I've come up with the following series of minimal examples to demonstrate my bug:

```python
>>> unicode("")
u''
>>> unicode("", errors="ignore")
u''
>>> unicode("abcü")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 3: ordinal not in range(128)
>>> unicode("abcü", errors="ignore")
u'abc'
>>> unicode(3)
u'3'
>>> unicode(3, errors="ignore")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: coercing to Unicode: need string or buffer, int found
>>> unicode(unicode(""))
u''
>>> unicode(unicode(""), errors="ignore")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: decoding Unicode is not supported
```

The first two pairs of mini-programs show reasonable behaviour: when the errors parameter is set to "ignore", no additional errors are raised, and characters that produce decoding errors are simply skipped in the output, as expected.

The third pair can be worked around by writing `unicode(str(3), errors="ignore")` instead. This conversion should arguably happen automatically, given that `unicode(3)` already converts between types correctly. The fact that the conversion happens automatically only when the errors parameter is absent leads me to believe there is a logic problem in the code: setting errors="ignore" changes the path of execution by more than just skipping characters that cause decoding errors.

The fourth pair is simply baffling. The first call clearly demonstrates that passing a Unicode object is supported; the fact that the second call claims it is not supported further demonstrates that the logic depends on the errors="ignore" parameter more than it should.
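For what it's worth, Python 3's `str()` constructor preserves the same asymmetry when given decoding arguments. A small sketch (my own observation, not part of the original report) showing the three cases in Python 3 terms:

```python
# Python 3: str(obj, encoding, errors) decodes a bytes-like object,
# mirroring Python 2's unicode(obj, encoding, errors).

# Decoding bytes with errors="ignore" skips undecodable bytes,
# like unicode("abcü", errors="ignore") did.
assert str(b"abc\xc3\xbc", "ascii", "ignore") == "abc"

# Passing a non-buffer object together with decoding arguments
# raises TypeError, just as unicode(3, errors="ignore") did.
try:
    str(3, "ascii", "ignore")
except TypeError as exc:
    print(exc)  # need a bytes-like object, int found

# Passing an already-decoded string with decoding arguments also
# raises, matching "decoding Unicode is not supported".
try:
    str("abc", "ascii", "ignore")
except TypeError as exc:
    print(exc)  # decoding str is not supported
```

So the two-mode dispatch the reporter finds surprising was carried over into Python 3 unchanged.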
See http://docs.python.org/2/library/functions.html#unicode. It appears to me that unicode() is behaving exactly as documented. In particular:

"If encoding and/or errors are given, unicode() will decode the object which can either be an 8-bit string or a character buffer using the codec for encoding."

"If no optional parameters are given, unicode() will mimic the behaviour of str() except that it returns Unicode strings instead of 8-bit strings. More precisely, if object is a Unicode string or subclass it will return that Unicode string without any additional decoding applied."

One can argue about whether this documented behavior makes the most sense, but since it is documented to behave that way, and since any significant change to that behavior at this late stage of the life of Python 2 could break existing programs, I think there will be little support for making such a change now. Sorry!
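A minimal sketch (written in Python 3 syntax, with a hypothetical helper name of my own) of the coercion pattern the documented behavior implies: decode only when the input is actually an encoded byte string, and fall back to plain stringification otherwise.

```python
def to_text(obj, encoding="utf-8", errors="ignore"):
    """Hypothetical helper: coerce any object to a text string.

    Only byte strings are decoded; everything else goes through
    plain str(), mirroring the two documented modes of unicode().
    """
    if isinstance(obj, bytes):
        return obj.decode(encoding, errors)
    return str(obj)

assert to_text(b"abc\xc3\xbc", encoding="ascii") == "abc"  # bytes: decoded
assert to_text(3) == "3"                                   # int: stringified
assert to_text("abc") == "abc"                             # text: passed through
```

Keeping the "decode" and "stringify" paths explicit like this avoids the TypeError cases from the report entirely, which is the usual recommendation rather than changing the builtin.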