speed up marshal.loads() #63418
Comments
This patch contains assorted improvements for unmarshalling pyc files. It will also make them ~10% smaller.

$ ./python -m timeit -s "import marshal; d=marshal.dumps(tuple((i, str(i)) for i in range(1000)))" "marshal.loads(d)"
-> 3.4 unpatched: 232 usec per loop
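The quoted measurement can be reproduced with a short script (a sketch assuming any Python 3.x; timings vary by machine and build, so none are asserted):

```python
# Rough reproduction of the micro-benchmark quoted above: marshal a
# tuple of (int, str) pairs once, then time marshal.loads() on it.
import marshal
import timeit

payload = tuple((i, str(i)) for i in range(1000))
data = marshal.dumps(payload)

# Sanity check: the round trip preserves the value.
assert marshal.loads(data) == payload

elapsed = timeit.timeit(lambda: marshal.loads(data), number=1000)
print("1000 x marshal.loads(): %.3f s" % elapsed)
```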
Why add ASCII strings, when you could add Latin-1 (UCS1, U+0000-U+00FF) strings?
Reasons:
The aim here is to optimize the common cases. There is no reason to further complicate the code for rare cases.
Oh, I forgot this pain of PEP 393. Don't tell me more, it's enough :-)
I imagine that the test for ASCII is cheaper. It corresponds to the new compact internal unicode representation (one-byte characters). This looks fine.

Can you quantify where the speedup comes from? Reading the code, I see we now maintain a small internal buffer in the file object, rather than using stack allocation at the call sites. It is unclear to me how this helps, since the amount of memory copying should be the same. Could it be that the speedup is all due to the native 8-bit support for unicode?

Have you looked at providing a special opcode for a few other magic numbers? (We have that in our own custom marshal format.)
From all changes, but mainly the ASCII special-casing and the new
No, memory copying is suppressed in many cases.
It shouldn't be useful since marshal memoizes them anyway: only the
To clarify: the import logic uses marshal.loads(), not marshal.load().
Right, in this case, memory copying is avoided. Regarding the memoizing of 0, the empty tuple, etc.:
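The memoizing under discussion can be observed from Python: since format version 3, marshal writes a short back-reference instead of re-serializing an object it has already seen (a sketch, not part of the patch):

```python
import marshal

s = "a shared string that is long enough to matter"
obj = (s, s)  # the very same object appears twice

flat = marshal.dumps(obj, 2)    # version 2: the string is written out twice
shared = marshal.dumps(obj, 3)  # version 3: the second occurrence is a short reference

assert marshal.loads(flat) == marshal.loads(shared) == obj
assert len(shared) < len(flat)
print(len(flat), len(shared))
```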
You should ensure that loaded bytes are ASCII-only. Otherwise broken or malicious marshalled data will compromise your program. Decoding UTF-8 is as fast as decoding ASCII (with checks) and is almost as fast as memcpy. As for output, we could use the cached UTF-8 representation of a string (it always exists for ASCII-only strings) before calling PyUnicode_AsUTF8String(). I'm fine with the buffering and the codes for short strings and tuples (I have not examined the code closely yet), but special-casing ASCII does not look so good to me.
"You should ensure that loaded bytes are ASCII-only. Otherwise broken or malicious marshalled data will compromise your program."

This is not new; see the red warning in the marshal docs:

"""The marshal module is not intended to be secure against erroneous or maliciously constructed data. Never unmarshal data received from an untrusted or unauthenticated source."""
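To illustrate the distinction being drawn here: malformed input is expected to raise an ordinary Python exception, never crash the interpreter (a sketch; the exact exception type is not guaranteed by the docs):

```python
import marshal

good = marshal.dumps([1, 2, "three"])
truncated = good[:-3]  # simulate a damaged pyc file

try:
    marshal.loads(truncated)
    outcome = "no error"  # conceivable for some corruptions, but still no crash
except (EOFError, ValueError, TypeError) as exc:
    outcome = type(exc).__name__
print(outcome)
```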
We have to make two distinctions here:
So, will simply loading ASCII data that is, in fact, not ASCII destabilize your program in any way? Or even crash it? If that is true, then we have a problem.
"As for output, we could use the cached UTF-8 representation of a string (always exists for ASCII-only strings) before calling PyUnicode_AsUTF8String()."

PyUnicode_AsEncodedString(v, "utf8", "surrogatepass") is expensive. I proposed an optimization for the pickle module, and Antoine finished the work: see issue bpo-15596. It's exactly what you suggest: reuse PyUnicode_AsUTF8String().
It's an unsupported use case. The marshal docs are quite clear: """Therefore, the Python maintainers reserve the right to modify the marshal format in backward incompatible ways should the need arise.""" So, it's a "good idea" as long as you're willing to deal with the
marshal and pickle are unsafe, even without the patch attached to the issue. If you consider that this is an issue that should be fixed, please open a new issue. Antoine's patch doesn't make the module less secure, since it was already not secure :) Loading untrusted data and executing untrusted code is not supported by Python. Many things would have to be fixed to support such a use case, not only the marshal module. I'm interested in the topic (I wrote the pysandbox project, which was a first try), but please discuss it elsewhere.
Hum, I'm not sure that this word exists, I mean: somewhere else :-)
"Therefore, the Python maintainers reserve the right to modify ..."

Sure, don't expect such things to survive version changes. (Actually, they have hitherto, and my version 3 I actually changed to be so, the initial draft being unable to read version 2 data.)
"Loading untrusted data ... is not supported by Python."
Anyway, whether or not Python guarantees this and that with respect to "untrusted" data is beside the point and off-topic, as Victor points out. However: we are, and have always been, careful to fail gracefully if we detect data corruption. Never should the flipping of a bit in a file on disk cause our program to crash. It is fine when reading a corrupt file that an int(1) turns into int(2), or that we get an UnmarshalError when reading it, or that a "hello world" string turns into "hello $orld". What is not good is if reading the corrupt string causes the interpreter to crash.

My knowledge of the new unicode internals is limited at best. If you don't think, Antoine, that putting non-7-bit data into the supposedly 7-bit ASCII unicode data can cause an actual crash, but at worst a corrupt string, then I'm quite happy, personally :)
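The bit-flip scenario can be simulated directly: flipping one bit in a marshalled string should, at worst, yield a different string or a clean exception (a sketch illustrating the "fail gracefully" expectation, not a guarantee about every possible corruption):

```python
import marshal

original = marshal.dumps("hello world")
corrupt = bytearray(original)
corrupt[-1] ^= 0x04  # flip one bit, presumably inside the string payload

try:
    value = marshal.loads(bytes(corrupt))
except (EOFError, ValueError) as exc:
    value = exc  # a clean error is also an acceptable outcome
print(value)
```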
Then we can simplify the marshal module by dropping all error handling: f.read() returned not bytes, read() returned too much data, EOF read where not expected, recursion limit exceeded, long/string/unicode/tuple/list/set size out of range, unnormalized long data, digit out of range in long, index list too large, invalid reference, unknown type code, NULL object in marshal data for set, UTF-8 decoding errors, string-to-float conversion errors, etc., etc. Sorry for the sarcasm.
Actually _PyUnicode_UTF8(). PyUnicode_AsUTF8String() creates the UTF-8 cache if it does not exist, and this may not be desired. We could use this optimization in many other places, in particular in PyUnicode_AsUTF8String() itself.
Well, indeed, the sarcasm is undeserved here, if the interpreter cannot
I don't understand how _PyUnicode_UTF8() can be used for *unmarshalling*.
That said, I'll try out the patch with _PyUnicode_FromUCS1 instead of _PyUnicode_FromASCII, to see if it affects performance.
Just for the record, I want to say that this is great stuff, Antoine! It's great when this sort of stuff gets some attention.
I was talking about marshalling.
Could you try out the patch with PyUnicode_DecodeUTF8()? This will save you two opcodes and perhaps several lines of code.
"This will save you two opcodes and perhaps several lines of code."
That would be a change in behaviour, since currently "surrogatepass"
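For context on the "surrogatepass" remark: marshal must round-trip any str, including lone surrogates that strict UTF-8 rejects (a sketch of the difference):

```python
import marshal

lone = "\ud800"  # a lone surrogate: a valid str, but not valid strict UTF-8

try:
    lone.encode("utf-8")
    strict_ok = True
except UnicodeEncodeError:
    strict_ok = False

# With the "surrogatepass" error handler the round trip succeeds.
roundtrip = lone.encode("utf-8", "surrogatepass").decode("utf-8", "surrogatepass")

assert not strict_ok
assert roundtrip == lone
assert marshal.loads(marshal.dumps(lone)) == lone  # marshal preserves it too
print("ok")
```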
Exactly (I also tried this :-)). The problem is the version number
How about adding a version opcode? This is a backwards-compatible change, and allows us to reject unsupported versions in the future, as long as they are not very old unsupported versions.
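For comparison, the pyc container already carries a version stamp one layer up: the import system checks a magic number before handing the payload to marshal. It is exposed as importlib.util.MAGIC_NUMBER (available since Python 3.4):

```python
import importlib.util

magic = importlib.util.MAGIC_NUMBER  # the first 4 bytes of every pyc file
assert len(magic) == 4
assert magic.endswith(b"\r\n")  # the trailing \r\n helps catch text-mode corruption
print(magic)
```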
I meant the two new proposed opcodes: TYPE_ASCII and TYPE_ASCII_INTERNED.
I don't propose any change in behaviour.
You cannot change the meaning of TYPE_UNICODE (it uses "surrogatepass"). Besides, opcodes are cheap.
You don't need new semantics. Use the old semantics. The set of ASCII-encoded strings is a subset of valid UTF-8-encoded strings, which is a subset of strings encoded as UTF-8 with "surrogatepass".
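The subset relation claimed here is easy to check from Python: any ASCII byte sequence decodes to the same string under all three interpretations:

```python
data = b"plain ascii payload"

# ASCII bytes are valid under all three codecs, and decode identically.
assert (data.decode("ascii")
        == data.decode("utf-8")
        == data.decode("utf-8", "surrogatepass"))
print(data.decode("ascii"))
```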
Sorry, I don't understand you.
Updated patch using PyUnicode_FromKindAndData for ASCII strings, to ensure that corrupt marshal bytecode doesn't produce corrupt unicode objects. Performance is within 2% of the previous patch. (However, a quick test suggests that PyUnicode_DecodeUTF8 is quite a bit slower.)
Let the code speak instead of me.
I don't understand your patch. The macros you define aren't used
It's surprising that PyUnicode_DecodeUTF8() is quite a bit slower than _PyUnicode_FromUCS1(). _PyUnicode_FromUCS1() calls ucs1lib_find_max_char() and then memcpy(). PyUnicode_DecodeUTF8() first tries ascii_decode(), which is very similar to ucs1lib_find_max_char(). The difference is maybe that _PyUnicode_FromUCS1() copies all bytes at once (memcpy()), whereas ascii_decode() copies bytes while checking whether the string is ASCII or not.
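The C-level comparison can be roughly approximated from Python by timing bytes.decode() with different codecs on a pure-ASCII buffer (a sketch; absolute numbers are machine-dependent, so none are asserted):

```python
import timeit

payload = b"x" * 4096  # pure ASCII: the common case for marshalled code objects

for codec in ("latin-1", "utf-8", "ascii"):
    # latin-1 needs no validation, utf-8 and ascii must scan for non-ASCII bytes.
    t = timeit.timeit(lambda: payload.decode(codec), number=20000)
    print("%-8s %.3f s" % (codec, t))
```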
New changeset 4059e871e74e by Antoine Pitrou in branch 'default':
I've now committed the latest patch (marshal_opts5.patch).
New changeset 2a2b339b6b59 by Christian Heimes in branch 'default':
In this case, we can remove a bunch of 'retval = NULL' from the code.