Try Specializing the Abs[] function #33
Conversation
Fix bug in `session.get_settings()`.
It just makes things unnecessarily wordy
Small changes to improve documentation and layout. Also stricter signature checking on cylinder.
Graphics3D doc and format tweaks
…-cuboid3dbox Fix bug in `Cuboid3DBox`
…-cuboid3dbox Fix bug in `Cuboid`'s `TeXForm`
Cuboid 2nd example. Possible workaround to ensure the 2nd test gets into the doc.
Fix position of cuboids in Asymptote
Create cuboid3dbox
Add Mathics Core version to PDF title
Add colophon listing versions used to build doc
Map `Cuboid` with 2d coordinates to `Rectangle`
…ze-calculation Multiply pointSize by 0.5 to follow the new 3d graphics API
Update point size calculation
Segregate image internals
Add transformation functions...
Add section summaries for date & time chapter
Assignment refactoring - Stage 1
They don't work. Add -T/--trace-builtins to do tracing at the CLI level. For example, `mathics -e 1+1 -T` is a one-shot command to get traces; `mathics -T` also works. Some minor corrections and tweaks
Comment out PrintTrace and ClearTrace for now...
Add a traced version of rules.py ...
Force-pushed from 600fa03 to 7405aed.
Background: previous benchmarking shows that we are seeing a lot of time spent in …. I wanted to drill down on this to understand it better. The big picture is that for these functions we have some general-purpose routines, and if we customize these we might speed things up. I picked ….

Why we might not want to go down this route: this is the kind of thing Pyston and PyPy should also be good at, and they too seem to get the same level of speedup. It is a bit of effort to do manually, so we should decide if anyone is up for doing this. I don't think the code necessarily looks worse with the specialization, but it is more code.

It might be possible to do something like this automatically (that is, this is what a compiler does). If we go down this route, though, we should extend the `BuiltinTrace` to separate out which mpmath/sympy functions are used the most.
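As a sketch of that last suggestion, a trace could simply count backend calls. The stdlib `math` module stands in for mpmath here, and all names (`counted`, `call_counts`) are invented for illustration; this is not Mathics' actual tracing code:

```python
from collections import Counter
import math

# Hypothetical sketch: wrap backend functions so a trace records
# which mpmath/sympy functions are hit most often.  `math` stands in
# for mpmath; every name below is invented.

call_counts = Counter()

def counted(name, fn):
    """Return fn wrapped so each call increments a per-name counter."""
    def wrapper(*args):
        call_counts[name] += 1
        return fn(*args)
    return wrapper

sin = counted("mpmath.sin", math.sin)
fabs = counted("mpmath.fabs", math.fabs)

for x in (0.1, 0.2, 0.3):
    sin(x)
fabs(-1.0)

# Most-used backend functions first.
print(call_counts.most_common())
# [('mpmath.sin', 3), ('mpmath.fabs', 1)]
```

A real version would hook the wrapping into wherever the builtin resolves its sympy/mpmath backend, so the counts accumulate across a whole session.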
The question is where we use more time: calling the specific routine, or finding the built-in function that matches the expression pattern. I guess it is the second, since ….
Feel free to make experiments to find out. Right now we should be making benchmark experiments, trying out various things, and discussing the pros and cons.

On my list of things to investigate is the time spent in the evaluation-loop process as opposed to the functions. We've only started here. We have timings for builtins. The next thought was having a way to narrow the time in composite builtins (builtins which call other builtins) so as to exclude the nested builtin time. And along that train of thought is a way to separate evaluation-loop process times from builtin times.

That said, having a few of the most common builtins that we know are fast and efficient is good, so that these can be used exclusively in tests that are timing more complex things.

Notice that the above is about measuring more than it is about concrete code. This draft is more about getting a general idea and exploring feasibility than about expecting that the overall slowness is going to be fixed by this. It feels to me that we bounce around too much from "how about this" to "how about that" without following through on either.
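The "exclude nested builtin time" idea above can be sketched with a small, hypothetical tracer that charges each builtin only its own self time, subtracting time spent in child builtin calls. None of these names come from the Mathics code base:

```python
import time

# Hypothetical sketch: per-builtin timers where a composite builtin's
# report excludes time spent in the builtins it calls.  All names
# (traced, self_times, _stack) are invented for illustration.

_stack = []       # one child-time accumulator per active call
self_times = {}   # builtin name -> accumulated self time (seconds)

def traced(name, fn):
    def wrapper(*args):
        start = time.perf_counter()
        _stack.append(0.0)              # child time seen so far
        try:
            return fn(*args)
        finally:
            total = time.perf_counter() - start
            child = _stack.pop()        # time spent in nested calls
            self_times[name] = self_times.get(name, 0.0) + total - child
            if _stack:                  # charge our total to the parent
                _stack[-1] += total
    return wrapper

# Example: `outer` calls `inner`; `inner`'s time is excluded
# from `outer`'s self time.
inner = traced("inner", lambda: sum(range(100_000)))
outer = traced("outer", lambda: inner() + 1)
outer()
print(sorted(self_times))   # ['inner', 'outer']
```

This is the same self-time/cumulative-time split that profilers like cProfile make; the point of doing it by hand would be to attribute time to Mathics builtins rather than to Python functions.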
It looks like over 75% of the time is spent in …. Before diving in and changing code for performance there, please let's split up assignment.py into its own module with submodules, as suggested by the Mathematica guide.
We get perhaps a 10%-20% improvement:

```
In[1]:= Timing[AbsFast[I + 3]]
Out[1]= {0.00704927, Sqrt[10]}

In[2]:= Timing[AbsFast[I + 3]]
Out[2]= {0.00116204, Sqrt[10]}

In[3]:= Timing[Abs[I + 3]]
Out[3]= {0.00120341, Sqrt[10]}

In[4]:= Timing[Abs[I]]
Out[4]= {0.00029366, 1}

In[5]:= Timing[Abs[I]]
Out[5]= {0.00030883, 1}

In[6]:= Timing[AbsFast[I]]
Out[6]= {0.000244551, 1}
```

Note that due to caching, when comparing, we should run an expression twice and use the second (cached) value.

Specific specialization: we can fold in knowledge of the specific sympy and mpmath functions (`Abs` and `fabs`) that get called, rather than having to look them up. Because there is exactly one argument, we don't need argument-looping mechanisms. In general, things like this.
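The specialization described above can be sketched in plain Python. This is a toy illustration, not Mathics' actual dispatch code: `eval_generic`, `eval_abs_fast`, and `BACKENDS` are invented names, and Python's built-in `abs` stands in for the sympy/mpmath backend:

```python
# Illustrative sketch only: generic dispatch vs. a specialized Abs.

BACKENDS = {"Abs": abs}  # stand-in for the sympy/mpmath lookup table

def eval_generic(head, args):
    """General-purpose path: look the backend up by name on every
    call, and loop over the arguments to coerce them."""
    fn = BACKENDS[head]                    # per-call name lookup
    coerced = [complex(a) for a in args]   # generic argument loop
    return fn(*coerced)

def eval_abs_fast(arg):
    """Specialized path: the backend (abs) is folded in directly, and
    since Abs takes exactly one argument there is no argument loop."""
    return abs(complex(arg))

# Both compute |3 + i| = sqrt(10); the fast path skips the dictionary
# lookup and the per-argument loop.
print(eval_generic("Abs", [3 + 1j]))
print(eval_abs_fast(3 + 1j))
```

The saving per call is small (a dict lookup and a loop), which matches the modest 10%-20% improvement measured above.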
Force-pushed from 653f966 to 7084697.
I looked at …. One of the things that I think is good is using class variables to tag numpy, sympy, and mpmath equivalences. However, nowadays this kind of thing would be better accumulated in a YAML file, like we do for operators in the mathics-scanner project.
@rocky yes, what I meant is that as it is almost never used (3 functions), we could "inline" `_MPMathFunction` in their code. Anyway, if you want, I can keep `_MPMathMultiFunction` and just change `mpmath_name` to `mpmath_function`.
I think it's fine as is. It's good to keep some things close to the code: keeping the documentation close to the code means we don't forget to update it (or at least the part that is close to the code).
In general there are lots of functions that are almost never used, like main functions. That doesn't mean they should be inlined, though. The concept that a function can be covered by several different mpmath or sympy functions is a fact of life.
@rocky ok, so should I do `mpmath_name` -> `mpmath_function`?
I don't have a strong feeling about this - that ….
@rocky I don't know if I was clear; what I meant:

```
class Sin(Builtin):
    mpmath_name = "sin"
```

vs.

```
class Sin(Builtin):
    mpmath_function = mpmath.sin
```
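For illustration, here is a runnable sketch of the difference. These are not the actual Mathics classes, and the stdlib `math` module stands in for mpmath: the first form stores the backend's name and resolves it at call time, the second stores the function object itself:

```python
import math

class BuiltinSketch:
    """Toy stand-in for Mathics' Builtin base class (invented)."""

class SinByName(BuiltinSketch):
    mpmath_name = "sin"          # backend named by string
    def eval(self, x):
        # resolved with getattr on every call
        return getattr(math, self.mpmath_name)(x)

class SinByFunction(BuiltinSketch):
    mpmath_function = math.sin   # backend bound directly, once
    def eval(self, x):
        return self.mpmath_function(x)

print(SinByName().eval(0.0))      # 0.0
print(SinByFunction().eval(0.0))  # 0.0
```

The string form keeps class definitions import-free and declarative; the function form saves the per-call `getattr` and lets tools see the dependency directly.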
Ah - ok. Yes, that makes sense.
Force-pushed from edcaf4c to 6340c1b.
Way out of date.