Faster marshalling #67533
Currently, writing marshalled data to the buffer is not very efficient: data is written byte by byte, testing the conditions p->fp != NULL and p->ptr != p->end for every byte. The proposed patch makes writing to the buffer faster. Benchmark results:

$ ./python -m timeit -s "import marshal; d = compile(open('Lib/_pydecimal.py').read(), '_pydecimal.py', 'exec')" -- "marshal.dumps(d)"
Unpatched: 100 loops, best of 3: 4.64 msec per loop
Patched: 100 loops, best of 3: 3.39 msec per loop
$ ./python -m timeit -s "import marshal; a = ['%010x' % i for i in range(10**4)]" -- "marshal.dumps(a)"
Unpatched: 1000 loops, best of 3: 1.96 msec per loop
Patched: 1000 loops, best of 3: 1.32 msec per loop
$ ./python -m timeit -s "import marshal; a = ['%0100x' % i for i in range(10**4)]" -- "marshal.dumps(a)"
Unpatched: 100 loops, best of 3: 10.3 msec per loop
Patched: 100 loops, best of 3: 3.39 msec per loop
Looks good to me, although it has been pointed out that marshal _write_ speed is less critical than read speed :)
New changeset bb05f845e7dc by Serhiy Storchaka in branch 'default':
Thank you for your review, Kristján. Together with bpo-20416, this almost doubles dumping speed for typical module data, and speeds it up by as much as 5x for some data.