UltraJson doesn't behave the same way as json.JSONEncoder for unicode chars #156
Right, and that's a problem...

We're looking into it.
I have this issue too:

```
In [1]: import json, ujson

In [2]: s = '"\ud8df\u4b61"'

In [3]: json.loads(s)
Out[3]: u'\ud8df\u4b61'

In [4]: ujson.loads(s)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-4-1b4d181e3a09> in <module>()
----> 1 ujson.loads(s)

ValueError: Unpaired high surrogate when decoding 'string'
```

Python 2.7.9
Still reproducible on Python 3.5.1 and ujson 1.3.5.
This is a situation where we have a Python unicode string which doesn't consist entirely of genuine Unicode characters: some of the codepoints in the string are surrogate codepoints, which occur in a UTF-16 encoding of a string and were also repurposed in PEP 383 for losslessly encoding arbitrary mostly-UTF-8 bytestrings (like Unix filenames) in Python strings.

Currently, on Python 3, we cause a `UnicodeEncodeError` if we try to encode such a string as JSON. It's not 100% obvious what the right thing to do here is; this situation seems like it must reflect a bug somewhere else in the program or its environment. But:

* one way we can get such a string is by loading a JSON document (perhaps an invalid JSON document? anyway, we load it without error):

  ```
  >>> ujson.dumps(ujson.loads('"\\udcff"'))
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  UnicodeEncodeError: 'utf-8' codec can't encode character '\udcff' in position 0: surrogates not allowed
  ```

* we already pass these strings through without complaint on Python 2;
* as the included test shows, passing these through matches the behavior of the stdlib's `json` module.

So it seems best to pass them through. Fixes ultrajson#156.
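For comparison, the stdlib behavior described above can be checked directly; this snippet uses only the `json` module (no ujson required), so it demonstrates the reference behavior that passing surrogates through would match:

```python
import json

# CPython's json module accepts a lone low surrogate in a JSON
# document and escapes it back on the way out, so the round trip
# succeeds instead of raising UnicodeEncodeError.
parsed = json.loads('"\\udcff"')
assert parsed == "\udcff"
assert json.dumps(parsed) == '"\\udcff"'
```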
Still happens on Python 2.7.12 and ujson==1.35
The examples presented in this ticket pass strings with invalid characters (isolated surrogates) to `ujson`. So my vote for this ticket: not a bug but a feature.
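Whether such strings are "invalid" is debatable, though: PEP 383's `surrogateescape` error handler deliberately produces lone surrogates so that arbitrary bytes (e.g. Unix filenames) survive a decode/encode round trip. A small sketch of how a program can end up holding one of these strings without any bug of its own:

```python
# PEP 383: undecodable bytes are smuggled through as lone
# low surrogates, then restored losslessly on re-encode.
raw = b"caf\xe9"  # Latin-1 bytes, not valid UTF-8
name = raw.decode("utf-8", "surrogateescape")
assert name == "caf\udce9"                # U+00E9's byte became U+DCE9
assert name.encode("utf-8", "surrogateescape") == raw  # lossless round trip
```

A program that serializes such a filename to JSON hits exactly the encoder behavior discussed in this ticket.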
This allows surrogates anywhere in the input, compatible with the json module from the standard library.

This also refactors two interfaces:

- The `PyUnicode` to `char*` conversion is moved into its own function, separated from the `JSONTypeContext` handling, so it can be reused for other things in the future (e.g. indentation and separators) which don't have a type context.
- Converting the `char*` output to a Python string with surrogates intact requires the string length for `PyUnicode_Decode` & Co. While `strlen` could be used, the length is already known inside the encoder, so the encoder function now also takes an extra `size_t` pointer argument to return that and no longer NUL-terminates the string. This also permits output that contains NUL bytes (even though that would be invalid JSON), e.g. if an object's `__json__` method return value were to contain them.

Fixes ultrajson#156
Fixes ultrajson#447
Fixes ultrajson#537
Supersedes ultrajson#284
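The compatibility target ("surrogates anywhere in the input") can be illustrated with the stdlib `json` module alone, since that is the behavior the patch aims to match; ujson itself is not assumed here, so the specific document below is just an illustrative example:

```python
import json

# Lone high and low surrogates in the middle of a string:
# the stdlib parser accepts them and the encoder escapes
# them back, so the document round-trips unchanged.
doc = '"a\\ud8dfb\\udcffc"'
s = json.loads(doc)
assert s == "a\ud8dfb\udcffc"
assert json.dumps(s) == doc
```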
As stated in issue #155:
When using ujson: