Argument Clinic: inline parsing code for functions with only positional parameters #79763
This is a continuation of bpo-23867. The proposed PR makes Argument Clinic inline the parsing code for functions with only positional parameters, i.e. functions that currently use PyArg_ParseTuple() and _PyArg_ParseStack(). This saves the time spent parsing format strings and calling through several levels of functions. It can also save C stack space, thanks to fewer nested (and potentially recursive) calls, fewer variables, and the removal of a stack-allocated array of "objects" that would need to be deallocated or cleaned up if overall parsing fails. PyArg_ParseTuple() and _PyArg_ParseStack() will still be used if some parameters use a converter that does not support inlining. Unsupported converters are the deprecated Py_UNICODE API ("u", "Z"), encoded strings ("es", "et"), obsolete string/bytes converters ("y", "s#", "z#"), and some custom converters (DWORD, HANDLE, pid_t, intptr_t).
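A rough Python analogy of what the change does (this is not CPython code, and the function names are invented for illustration): PyArg_ParseTuple() has to interpret a format string on every call, while the code Argument Clinic inlines performs only the checks one specific signature needs.

```python
# Rough Python analogy of the optimization (not actual CPython code;
# names are invented for illustration).

def parse_with_format(fmt, args):
    """Generic parser: decodes the format string at call time."""
    converters = {'U': str, 'i': int, 'd': float}
    if len(fmt) != len(args):
        raise TypeError("wrong number of arguments")
    result = []
    for code, arg in zip(fmt, args):
        expected = converters[code]
        if not isinstance(arg, expected):
            raise TypeError(f"expected {expected.__name__}, "
                            f"got {type(arg).__name__}")
        result.append(arg)
    return result

def parse_two_strings_inlined(args):
    """Specialized parser for a fixed (str, str) signature: no format
    string to decode and no generic dispatch on the hot path."""
    if len(args) != 2:
        raise TypeError("expected 2 arguments")
    old, new = args
    if not isinstance(old, str):
        raise TypeError("argument 1 must be str")
    if not isinstance(new, str):
        raise TypeError("argument 2 must be str")
    return old, new
```

The specialized version does strictly less work per call, which is where the saved nanoseconds come from.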
Some examples:

$ ./python -m timeit "format('abc')"
Unpatched: 5000000 loops, best of 5: 65 nsec per loop
Patched: 5000000 loops, best of 5: 42.4 nsec per loop
$ ./python -m timeit "'abc'.replace('x', 'y')"
Unpatched: 5000000 loops, best of 5: 101 nsec per loop
Patched: 5000000 loops, best of 5: 63.8 nsec per loop
$ ./python -m timeit "'abc'.ljust(5)"
Unpatched: 2000000 loops, best of 5: 120 nsec per loop
Patched: 5000000 loops, best of 5: 94.4 nsec per loop
$ ./python -m timeit "(1, 2, 3).index(2)"
Unpatched: 2000000 loops, best of 5: 100 nsec per loop
Patched: 5000000 loops, best of 5: 62.4 nsec per loop
$ ./python -m timeit -s "a = [1, 2, 3]" "a.index(2)"
Unpatched: 2000000 loops, best of 5: 93.8 nsec per loop
Patched: 5000000 loops, best of 5: 70.1 nsec per loop

$ ./python -m timeit -s "import math" "math.pow(0.5, 2.0)"
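The command-line runs above can also be reproduced from Python code with the stdlib timeit module; this sketch only shows the mechanics, and the actual numbers depend on the machine and build.

```python
# Reproduce a `python -m timeit "format('abc')"` run programmatically.
import timeit

number = 100_000
# timeit.repeat returns the total time of each run in seconds;
# dividing by `number` gives the per-call time the CLI reports.
runs = timeit.repeat("format('abc')", repeat=5, number=number)
per_call_ns = min(runs) / number * 1e9
print(f"best of 5: {per_call_ns:.1f} nsec per loop")
```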
$ ./python -m timeit "format('abc')"
Unpatched: 5000000 loops, best of 5: 65 nsec per loop
Patched: 5000000 loops, best of 5: 42.4 nsec per loop

-23 ns on 65 ns: this is very significant! I spent something like 6 months implementing "FASTCALL" to avoid allocating a single tuple for passing positional arguments, and that was only 20 ns faster per call. An additional 23 ns makes the code way faster compared to Python without FASTCALL! I estimate something like 80 ns => 42 ns: 2x faster!

$ ./python -m timeit "'abc'.replace('x', 'y')"
Unpatched: 5000000 loops, best of 5: 101 nsec per loop
Patched: 5000000 loops, best of 5: 63.8 nsec per loop

-38 ns on 101 ns: that's even more significant! Wow, that's very impressive! Please merge your PR, I want it now :-D Can you maybe add a vague sentence to the Optimizations section of What's New in Python 3.8? Something like: "Parsing positional arguments in builtin functions has been made more efficient." I'm not sure if "builtin" is the proper term here. Functions using Argument Clinic to parse their arguments?
I suppose that my computer is a bit faster than yours, so your 20 ns may be only 15 ns or 10 ns on my computer. Run the microbenchmarks on your computer to get a scale. It may be possible to save a few more nanoseconds by inlining a fast path for _PyArg_CheckPositional(), but I am going to try this later. This change is one step in a sequence. I will add a What's New note after finishing as many steps as possible. The next large step is to optimize argument parsing for functions with keyword parameters.
Is it possible to run custom builds or a benchmark of this on speed.python.org once it is merged? I hope this will show a noticeable dip in the benchmark graphs.
I can trigger a benchmark run on speed.python.org once the change is merged.
I added Stefan because the new C API could be used in Cython after it stabilizes. We should cooperate more with the Cython team and provide a (semi-)official stable API for use in Cython. I do not expect a large effect on most tests, since this optimization affects only a subset of functions and is noticeable only for very fast function calls.
It might be worth inlining a fast path of _PyArg_CheckPositional() that only tests "nargs < min || nargs > max" (even via a macro), and then branches to the full error checking and reporting code only if that fails. Determining the concrete exception to raise is not time critical, but the good case is. Also, that would immediately collapse into "nargs != minmax" for the cases where "min == max", i.e. when we expect an exact number of arguments. And yes, a function that raises the expected exception with the expected error message for a handful of common cases would be nice. :)
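The suggested fast path can be sketched in Python (the real thing would be a C macro or inline function; the names here are illustrative, not CPython's):

```python
def _check_positional_cold(name, nargs, min_args, max_args):
    # Cold path: only reached on failure, so choosing the exact
    # exception and message here is not time critical.
    if nargs < min_args:
        raise TypeError(f"{name} expected at least {min_args} "
                        f"arguments, got {nargs}")
    raise TypeError(f"{name} expected at most {max_args} "
                    f"arguments, got {nargs}")

def check_positional(name, nargs, min_args, max_args):
    # Hot path: a single range test. In C, when min_args == max_args
    # the compiler can collapse this to `nargs != min_args`.
    if min_args <= nargs <= max_args:
        return True
    _check_positional_cold(name, nargs, min_args, max_args)
```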
I converted msg333446 into the attached bench.py using perf. Results on my laptop:

vstinner@apu$ ./python -m perf compare_to ref.json inlined.json --table -G

The speedup on my laptop is between 30.7 and 38.0 ns per function call on these specific functions. 1.7x faster on format() is very welcome, well done Serhiy! Note: you need the just-released perf 1.6.0 to run this benchmark ;-)
PR 11520 additionally replaces PyArg_UnpackTuple() and _PyArg_UnpackStack() with _PyArg_CheckPositional() and inlined code in Argument Clinic. Some examples for PR 11520:

$ ./python -m timeit "'abc'.strip()"
Unpatched: 5000000 loops, best of 5: 51.2 nsec per loop
Patched: 5000000 loops, best of 5: 45.8 nsec per loop
$ ./python -m timeit -s "d = {'a': 1}" "d.get('a')"
Unpatched: 5000000 loops, best of 5: 55 nsec per loop
Patched: 5000000 loops, best of 5: 51.1 nsec per loop
$ ./python -m timeit "divmod(5, 2)"
Unpatched: 5000000 loops, best of 5: 87 nsec per loop
Patched: 5000000 loops, best of 5: 80.6 nsec per loop
$ ./python -m timeit "hasattr(1, 'numerator')"
Unpatched: 5000000 loops, best of 5: 62.4 nsec per loop
Patched: 5000000 loops, best of 5: 54.8 nsec per loop
$ ./python -m timeit "isinstance(1, int)"
Unpatched: 5000000 loops, best of 5: 62.7 nsec per loop
Patched: 5000000 loops, best of 5: 54.1 nsec per loop
$ ./python -m timeit -s "from math import gcd" "gcd(6, 10)"
Unpatched: 2000000 loops, best of 5: 99.6 nsec per loop
Patched: 5000000 loops, best of 5: 89.9 nsec per loop
$ ./python -m timeit -s "from operator import add" "add(1, 2)"
Unpatched: 5000000 loops, best of 5: 40.7 nsec per loop
Patched: 10000000 loops, best of 5: 32.6 nsec per loop
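A rough Python analogy of what replacing PyArg_UnpackTuple() with inlined code means for a min/max-arity function such as dict.get (illustrative only, not CPython code):

```python
def unpack_get(args):
    # PyArg_UnpackTuple("get", ...) walks a min/max spec and a va_list
    # at call time; the inlined version checks the count once and
    # assigns the arguments directly.
    nargs = len(args)
    if not (1 <= nargs <= 2):  # cf. _PyArg_CheckPositional("get", nargs, 1, 2)
        raise TypeError(f"get expected 1 to 2 arguments, got {nargs}")
    key = args[0]
    default = args[1] if nargs == 2 else None
    return key, default
```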
$ ./python -m timeit -s "from operator import add" "add(1, 2)"
Unpatched: 5000000 loops, best of 5: 40.7 nsec per loop
Patched: 10000000 loops, best of 5: 32.6 nsec per loop

We should stop you, or the timings will become negative if you continue!
PR 11524 performs the same kind of changes as PR 11520, but for handwritten code (only where this gives a noticeable speedup). Also, iter() now uses the fastcall convention.

$ ./python -m timeit "iter(())"
Unpatched: 5000000 loops, best of 5: 82.8 nsec per loop
Patched: 5000000 loops, best of 5: 56.3 nsec per loop
$ ./python -m timeit -s "it = iter([])" "next(it, None)"
Unpatched: 5000000 loops, best of 5: 54.1 nsec per loop
Patched: 5000000 loops, best of 5: 44.9 nsec per loop
$ ./python -m timeit "getattr(1, 'numerator')"
Unpatched: 5000000 loops, best of 5: 63.6 nsec per loop
Patched: 5000000 loops, best of 5: 57.5 nsec per loop
$ ./python -m timeit -s "from operator import attrgetter; f = attrgetter('numerator')" "f(1)"
Unpatched: 5000000 loops, best of 5: 64.1 nsec per loop
Patched: 5000000 loops, best of 5: 56.8 nsec per loop
$ ./python -m timeit -s "from operator import methodcaller; f = methodcaller('conjugate')" "f(1)"
Unpatched: 5000000 loops, best of 5: 79.5 nsec per loop
Patched: 5000000 loops, best of 5: 74.1 nsec per loop

It is also possible to speed up many math methods and maybe some contextvar and hamt methods, but that is for other issues.
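The METH_FASTCALL conversion only changes the internal calling convention; the documented behavior of iter() and next() is unchanged. A quick behavioral check of both forms:

```python
# One-argument form: iterate over a sequence.
it = iter((1, 2, 3))
assert next(it) == 1

# next() with a default, as in the benchmark above.
assert next(iter([]), None) is None

# Two-argument form: call data.pop until it returns the sentinel 0.
data = [0, 3, 2, 1]
assert list(iter(data.pop, 0)) == [1, 2, 3]
```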
Nice! Well done, Serhiy!
$ ./python -m timeit "iter(())"
Unpatched: 5000000 loops, best of 5: 82.8 nsec per loop
Patched: 5000000 loops, best of 5: 56.3 nsec per loop

That's quite significant. Oh, it's because you converted builtin_iter() from METH_VARARGS to METH_FASTCALL at the same time. Interesting.
Just inlining the argument tuple unpacking in iter() gives only a 10% speedup. I would not apply this optimization for such a small difference. But combined with converting it to fastcall, it looks more interesting.
Are there any numbers on higher-level benchmarks?
I do not expect significant changes in higher-level benchmarks. But if there are some, they will show up on speed.python.org. I think all the work at this stage is finished.
I ran the benchmarks on speed.python.org; it's the 5bb146a dot (Jan 13, 2019). I didn't look at the results.