Support object instancing and recursion in marshal #60679
The format used by the marshal module does not support instancing. This precludes certain data optimizations, such as sharing string constants, common tuples, or even common code objects. Since the marshal format is used to write compiled code, this makes it impossible to perform data optimization on code before writing it out. This patch adds proper instancing support for all the supported types and increases the default version to three. (A separate defect/regression is that interned strings are no longer preserved, as was implemented in version 1 on the original 2.x branch; this also negates any interning done at compile time.) |
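To illustrate what the instancing support buys (an added example, not part of the original report; it assumes Python 3.4 or later, where marshal format version 3 exists):

    import marshal

    s = "some shared constant " * 10
    data = (s, s)                  # the same string object referenced twice

    v2 = marshal.dumps(data, 2)    # version 2: no instancing, s is written out twice
    v3 = marshal.dumps(data, 3)    # version 3: the second occurrence becomes a back-reference
    print(len(v2), len(v3))        # the version-3 blob is roughly half the size here

    a, b = marshal.loads(v3)
    print(a is b)                  # True: the shared identity survives the round trip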
Second patch, which adds the missing interning support for strings, including unittests. |
marshal is only supposed to be used to serialize code objects, not arbitrary user data. Why don't you use pickle? |
This change is specifically aimed at code objects. Also, separately but related (see the second patch), the effort spent interning names during compilation is lost when code objects are loaded back from disk. This change is based on work done at CCP to reduce the size of compiled code in memory. Simple preprocessing of code objects prior to writing them to disk can result in significant memory savings. |
Shouldn't strings be interned when the code object is unmarshalled?
How important? |
Basically, reuse of strings (and preservation of their interned status) fell by the wayside somewhere in the 3.x transition. Strings have been reused, and interned strings re-interned, since protocol version 1 in 2.x. This patch adds that feature back, and uses that mechanism to reuse not only strings, but also any other multiply-referenced object. It is not desirable to simply intern all strings that are read from marshaled data: only selected strings are interned by Python during compilation, and we want to keep it that way. Also, 2.x reuses not only interned strings but other strings as well. Generalizing reuse of strings to other objects is trivial, and a logical step forward. This allows optimizations to be made on code objects where common data are identified and instanced, and those code objects to be saved and reloaded with that instancing intact. But even without such code-object optimization, the changes are significant: |
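To make the interning point concrete (an added illustration, not part of the original comment; it assumes Python 3.4+, where the restored behaviour is available):

    import marshal, sys

    s = sys.intern("runtime " + str(42))   # interned at run time, not by the compiler

    v2 = marshal.loads(marshal.dumps(s, 2))
    v3 = marshal.loads(marshal.dumps(s, 3))

    print(v2 is s)   # False: version 2 does not record the interned status
    print(v3 is s)   # True: version 3 records it and re-interns the string on load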
I agree that restoring the string interning behaviour would be a good thing. As for the size of pyc files, who cares? Memory footprint may be useful to shrink (especially for cache efficiency reasons), but I don't see why we should try to reduce the size of on-disk bytecode. And if we do, it would probably be simpler to zlib-compress them. |
When I added interning support to marshal, I specifically cared about the size of pyc. I find it sad that this support was thrown out, so I support restoring it. I'm also skeptical about general sharing, and would like to see some specific numbers pointing out the gain of such a mechanism (compared to a version that merely preserves interned strings). |
If you have string sharing, adding support for general sharing falls out automatically, without any effort. There is no reason _not_ to support it, in other words.

    case TYPE_CODE:
        if (PyEval_GetRestricted()) {
            PyErr_SetString(PyExc_RuntimeError,
                            "cannot unmarshal code objects in "
                            "restricted execution mode");

Obviously, this shows that marshal is still expected to work and be useful even if not for pickling code objects. It is good to know that you care about the size of the .pyc files, Martin. But we should bear in mind that this size difference is directly reflected in the memory use of the loaded data. A reduction of 25% in .pyc size is roughly equivalent to a 25% reduction in memory use by the loaded code objects. I haven't produced data about the savings of general object reuse because it relies on my "recode" code optimizer module, which is still work in progress. However, I will do some tests and let you know. Suffice to say that it is enormously frustrating to regenerate code objects with an optimization tool, sharing common or identical sub-objects and so on, and then find that the marshal module undoes all of that. I'll report back with additional figures. |
There is not much sense in using references for TYPE_INT, whose representation is no larger than a reference representation. I also have doubts about references to mutable objects. |
Ok, I did some tests with my recode module. The following are the sizes of the marshal data:

    test2To3 ... 24748 24748 212430 212430

The columns:
The lines:
As expected, there is no difference between versions 3 and 4 unless I employ the recode module to fold common subobjects. This brings an additional saving of some 3%, bringing the total reduction up to 28%.

Note that the transform is a simple recursive folding of objects; common argument lists, such as (self), are subject to this. No renaming of local variables or other stripping is performed.

Implementation note: the trick of using a bit flag on the type code to indicate a slot reservation in the instance list is one that has been in use in CCP's own "Marshal" format, a proprietary serialization format based on marshal, since back in 2002 (adding many more special opcodes and other stuff).

Serhiy: there is no reason _not_ to reuse INT objects if we are doing it for other immutables too. As you note, the size of the data is the same. This will ensure that integers that are not cached can be folded into the same object; e.g. the value 123, if used in two functions, can be the same int object.

I should also point out that the marshal protocol takes care to be able to serialize lists, sets and frozensets correctly, the latter being added in version 2.4, even though code objects don't make use of these. |
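The folding transform described here can be sketched in a few lines of Python (a hypothetical illustration only; the real recode module is not public, and CodeType.replace() used below exists only in newer Python versions):

    import types

    def _share(obj, seen):
        # Reuse a previously seen equal object of the same type, if any.
        # A real tool must special-case values like 0.0 vs -0.0, which
        # compare equal but are not interchangeable.
        try:
            return seen.setdefault((type(obj), obj), obj)
        except TypeError:              # unhashable constant: leave it alone
            return obj

    def fold_code(code, seen=None):
        # Recursively replace equal immutable sub-objects with one shared
        # instance, so that marshal version 3+ stores them only once.
        if seen is None:
            seen = {}
        consts = tuple(fold_code(c, seen) if isinstance(c, types.CodeType)
                       else _share(c, seen)
                       for c in code.co_consts)
        return code.replace(
            co_consts=_share(consts, seen),
            co_names=_share(tuple(_share(n, seen) for n in code.co_names), seen),
            co_varnames=_share(tuple(_share(v, seen) for v in code.co_varnames), seen),
        )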
Can you please measure the time of unmarshalling? It would be interesting. If you can count the statistics about marshalled types (what percent of shared and non shared integers, strings, etc), it would also be very interesting.
There is at least one reason: it increases the size of the refs table. |
Code objects do use frozensets:

    >>> def f(x):
    ...     return x in {1,2,3,4,5,6}
    ...
    >>> dis.dis(f)
      2           0 LOAD_FAST                0 (x)
                  3 LOAD_CONST               7 (frozenset({1, 2, 3, 4, 5, 6}))
                  6 COMPARE_OP               6 (in)
                  9 RETURN_VALUE

I don't think marshal supports any type that isn't (or hasn't been)
The module officially intended for general-purpose serialization is pickle.
3% doesn't sound like a worthwhile improvement at all.
Why don't you release your "proprietary marshal" on pypi? You would |
Antoine, I understand that _you_ may not see any need for object references in marshal streams. Also, I am not going to try to convince you it is a good idea, since I have long figured out that you are against any contributions from me on some sort of principle. However, even if you cannot agree that it is a good idea, can you explain to me how it is a BAD idea? How can expanding object references to strings to all objects, using the same mechanism, be bad? How can it be bad to make the marshal format more complete with minimal effort? Keep in mind that this change removes a number of warnings and caveats present both in the documentation and the in-line comments. |
I suspect that Antoine's principles have very little to do with *who* the contributions originate from, and much more to do with the content of those contributions. We've all got the same overarching goal of improving and maintaining the quality of Python. Please can we not make this personal? |
Serhiy, to answer your questions:
As you see, loading time is almost halved with versions 3 and 4 compared to 2. Version 3 is also slightly faster than 4.
This shows that adding instancing of all other types on top of the strings does not typically expand the instance list by more than 50%. |
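One way to reproduce this kind of measurement (an added sketch; the source path and iteration count are arbitrary and assume running from a CPython checkout):

    import marshal, timeit

    with open("Lib/decimal.py") as f:          # any sizeable module will do
        code = compile(f.read(), "decimal.py", "exec")

    for version in (2, 3, 4):
        blob = marshal.dumps(code, version)
        t = timeit.timeit(lambda: marshal.loads(blob), number=200)
        print(version, len(blob), "%.3f s" % t)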
New patch, incorporating suggested fixes from review. |
Thank you, Kristján, for the statistics. It makes your proposition more attractive. |
New patch with changes. |
Personally I don't like the use of macros in this code. I think that without them the code would be clearer.

If anyone is interested, here are the statistics for all the standard modules (Lib/__pycache__/*.pyc):

    UNICODE 105248 61%

Strings (unicode and bytes), tuples, short ints, code objects and None together make up 99% of all objects. Mutable collections, complex numbers, Ellipsis and StopIteration are not used at all. If the size of compiled modules is a problem, we can gain about 10% by using a more compact representation for sizes (1 or 2 bytes). This requires additional type codes for strings and collections (at least for unicode strings and tuples). |
Changes as suggested by Serhiy |
The size of the .pyc files is secondary. The size that is important is the memory footprint of loaded code objects, which can be reduced by stripping and folding them. |
It is a bad idea because features have to be supported in the long term. And, again, I think the string interning part is a good thing. |
By the way, please follow PEP-8 in test_marshal.py. |
Here are the statistics for all pyc files (not only in Lib/__pycache__). This includes encoding tables and tests. I also count memory usage for some types (for tuples the shared size is an estimated upper limit).

    type      count    %     size      shared    %
    UNICODE   622812   58%   26105085  14885090  57%
    Total     1081517  100%  44802671  19274280  ~43%

Therefore it makes sense to share unicode objects, tuples, and maybe bytes objects. Most integers (in the range -5..257) are already interned. Code objects cannot be shared (because a code object contains an almost unique first line number). Floats, complexes and frozensets are unlikely to save much memory. |
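A simplified way to gather statistics of this kind (an added sketch, not the actual script used above; it only tallies constants reachable from module code objects, and the 16-byte pyc header offset applies to Python 3.7+):

    import collections, glob, marshal, types

    def tally(code, counter):
        for const in code.co_consts:
            counter[type(const).__name__] += 1
            if isinstance(const, types.CodeType):
                tally(const, counter)

    counter = collections.Counter()
    for path in glob.glob("Lib/**/*.pyc", recursive=True):
        with open(path, "rb") as f:
            f.seek(16)                 # skip the pyc header (smaller on older versions)
            tally(marshal.load(f), counter)

    print(counter.most_common())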
Did you examine the sharing per file or among all files? |
Total size of all *.pyc files is 22 MB. |
On 20.11.12 17:32, Kristján Valur Jónsson wrote:
This really depends on whom you ask. When I did the string interning |
On 20.11.12 18:02, Antoine Pitrou wrote:
For marshal, this actually is of less concern - we are free to change it
Of course, there still must be a demonstrated gain, and that must be |
Code objects can indeed be shared. Martin, I agree the .pyc size matters. You are right, priorities vary. I am mainly focused on memory use, while others may be looking at disk use. Disk use can of course be reduced by using tools like zip. And code objects can be re-optimized at load time too, using special importers. But it is nice to be able to achieve both objectives by enabling the marshal format to preserve optimizations that are performed on code prior to saving it. I'm currently working on the recode module. When it's done, I'll report back and share it with you so that you can toy around with it. |
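For reference, the "special importer" approach mentioned above can be sketched with importlib (a hypothetical illustration; optimize() is a stand-in for something like the recode pass):

    import importlib.machinery, sys

    def optimize(code):
        return code          # placeholder for constant folding / sub-object sharing

    class OptimizingLoader(importlib.machinery.SourceFileLoader):
        def get_code(self, fullname):
            # Post-process every code object as it is loaded from source.
            return optimize(super().get_code(fullname))

    def install():
        details = (OptimizingLoader, importlib.machinery.SOURCE_SUFFIXES)
        sys.path_hooks.insert(0, importlib.machinery.FileFinder.path_hook(details))
        sys.path_importer_cache.clear()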
New changeset 01372117a5b4 by Kristján Valur Jónsson in branch 'default': |
I don't understand some of this code. Why does r_ref_reserve take a first parameter which it just returns on success without using? |
I thought I had explained this already, but can't find it, so here is the explanation: it is a convenience calling pattern, because these functions will either: |
I should add comments explaining this to the file. |
I'm getting two failures after this:

Traceback (most recent call last):
File "/home/wolf/dev/py/py3k/Lib/test/test_exceptions.py", line 51, in testRaising
marshal.loads(b'')
ValueError: bad marshal data (unknown type code)

and

======================================================================
Traceback (most recent call last):
File "/home/wolf/dev/py/py3k/Lib/test/test_importlib/source/util.py", line 23, in wrapper
to_return = fxn(*args, **kwargs)
File "/home/wolf/dev/py/py3k/Lib/test/test_importlib/source/test_file_loader.py", line 364, in test_no_marshal
self._test_no_marshal()
File "/home/wolf/dev/py/py3k/Lib/test/test_importlib/source/test_file_loader.py", line 265, in _test_no_marshal
self.import_(file_path, '_temp')
File "/home/wolf/dev/py/py3k/Lib/test/test_importlib/source/test_file_loader.py", line 194, in import_
module = loader.load_module(module_name)
File "<frozen importlib._bootstrap>", line 572, in _check_name_wrapper
File "<frozen importlib._bootstrap>", line 1032, in load_module
File "<frozen importlib._bootstrap>", line 1013, in load_module
File "<frozen importlib._bootstrap>", line 548, in module_for_loader_wrapper
File "<frozen importlib._bootstrap>", line 869, in _load_module
File "<frozen importlib._bootstrap>", line 990, in get_code
File "<frozen importlib._bootstrap>", line 668, in _compile_bytecode
ValueError: bad marshal data (unknown type code)

======================================================================
Traceback (most recent call last):
File "/home/wolf/dev/py/py3k/Lib/test/test_importlib/source/test_file_loader.py", line 470, in test_no_marshal
self._test_no_marshal(del_source=True)
File "/home/wolf/dev/py/py3k/Lib/test/test_importlib/source/test_file_loader.py", line 265, in _test_no_marshal
self.import_(file_path, '_temp')
File "/home/wolf/dev/py/py3k/Lib/test/test_importlib/source/test_file_loader.py", line 194, in import_
module = loader.load_module(module_name)
File "<frozen importlib._bootstrap>", line 1099, in load_module
File "<frozen importlib._bootstrap>", line 548, in module_for_loader_wrapper
File "<frozen importlib._bootstrap>", line 869, in _load_module
File "<frozen importlib._bootstrap>", line 1105, in get_code
File "<frozen importlib._bootstrap>", line 668, in _compile_bytecode
ValueError: bad marshal data (unknown type code) |
Indeed, it would be nice to fix the test failures. Besides, it would be extra nice if you could run the test suite *before* pushing your changes. Otherwise you're wasting everyone else's time. |
This should not have happened and it was indeed all tested. I'll investigate why these errors are happening. |
New changeset f4c21179690b by Kristján Valur Jónsson in branch 'default':
New changeset 42bf74b90626 by Kristján Valur Jónsson in branch 'default': |
FTR the two failures I saw earlier are now gone. |
Yes, they were fixed with #42bf74b90626 which also added unittests in test_marshal.py to make sure invalid EOFs are always caught. |
Sorry, what does "instancing" mean? |
He means "keeping track of instance identities", so that objects referenced more than once are only serialized once.
"interesting" to whom?
Definitely. It may now contain 't' (interned) codes again, and it may contain 'r' (reference) codes. The size of the pyc files may decrease. |
This is very good news! Indeed, I noticed decimal.cpython-34.pyc going from 212k to 178k. 17% less! |
Thanks, Martin, for clarifying this. I am unsure about how whatsnew is handled these days. Is it incrementally updated or managed by someone? |
It's better to add at least a stub to the whatsnew, even if someone will eventually go through it before the release. |
New changeset 84e73ace3d7e by Kristjan Valur Jonsson in branch 'default': |