new-disp has performance issues? #4525
For bigger iteration counts, 2021.07 vs the new-disp version:
new-disp still basically does not do any inlining. Until it does, performance is going to be sub-optimal.
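To see why inlining matters so much here, consider a tight loop like the following (a hypothetical micro-benchmark, not taken from the issue): every `+` is a call to the multi sub `&infix:<+>`, so without inlining each iteration pays full dispatch overhead rather than reducing to a bare machine-level add.

```raku
# Hypothetical micro-benchmark. Each `+` below is a call to the
# multi sub &infix:<+>; without inlining, every iteration pays the
# full dispatch cost instead of collapsing to a native add.
my $sum   = 0;
my $start = now;
$sum = $sum + $_ for ^1_000_000;
say "sum = $sum, elapsed = {now - $start}s";
```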
So you mean it's better to check by running Raku code directly, not as …?
No, I'm saying it is no news that new-disp has performance issues. There is basically little point in this issue until @jnthn has indicated that performance on the new-disp branch is comparable to the master branch.
OK. Anyway, when I run the last test as Raku code directly, the times are comparable, unlike when one runs it as …
THAT I find strange. Are you sure that when you run the Raku code "directly", as you say, you are in fact using the new-disp branch?
Yes. Here is what I get. However, my system Rakudo (2021.07) is faster: 0.0055874 vs 0.0112757. Maybe this is OK ...
It's going to be far more complex than that, alas. Soon we'll reach a point where the obvious missing optimizations that we depend on heavily have been reinstated and debugged. The key optimization still being pieced back together is inlining: when nearly every operator is at minimum a multiple dispatch (and many then do a further method call), Raku peak performance relies hugely on it. Thus, for now, any measurements of peak performance (large numbers of iterations, enough that the specializer/JIT get their hands on it) are liable to be underwhelming. I expect we'll look much better on that front by next week (maybe this week, if things go really well). However, even inlining rates in the profiler aren't going to be easily comparable. For example, take applying sink context. The majority of the time the …
More broadly, there will be significant wins for a wide range of language constructs immediately, and a bunch of new optimization opportunities opened up to us in the future, which is great. For the monomorphic majority of method and multi dispatches (that is, where a given program point always calls the same target at runtime), peak performance should be little different from … However, we're going to pay somewhat for the new flexibility in warm-up time; small numbers of iterations are liable to be a bit slower, primarily because the first iteration is likely to be a bit slower, since we're doing some more setup work the first time we hit each method call or multiple dispatch. (We're also liable to find some megamorphic points in programs a bit more painful at first, because the previous strategy was a bit better for them; however, there's already some work on coping with those better.)
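The warm-up vs. peak distinction above can be made concrete with a sketch like this (the `bench` helper and its iteration counts are my own invention, not part of Rakudo): timing the early iterations separately from the hot loop exposes the extra first-hit setup cost without hiding the comparable steady-state speed.

```raku
# Sketch only: `bench` is a hypothetical helper, not a Rakudo API.
# It times early iterations (dominated by per-call-site setup and
# specialization) separately from the hot, specialized loop.
sub bench(&code, :$warmup = 1_000, :$runs = 1_000_000) {
    my $t0 = now; code() for ^$warmup; my $cold = now - $t0;
    my $t1 = now; code() for ^$runs;   my $hot  = now - $t1;
    say "warm-up: {$cold / $warmup}s/iter, hot: {$hot / $runs}s/iter";
}
bench({ my $x = 1 + 2 });   # each call involves a multiple dispatch
```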
@melezhik Could you retest and see if this is still an issue?
@lizmat Sure, I will try to do it a bit later; sorry, quite busy at the moment.
@lizmat I would say so now, with v2022.02 … HTH
Another comparison between …
So for this case only …
And probably for this one …
Those are doing so little work inside them that they're mostly measuring startup, which is known to be slightly slower after new-disp. |
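One way to separate the two effects (loop cost vs. compiler startup) is to time inside the program as well as from the shell; the script below is a sketch, with an arbitrarily chosen iteration count.

```raku
# Time the loop from inside the program, so startup is excluded;
# comparing this number with the shell's wall-clock time for the
# whole `raku` invocation reveals how much is startup alone.
my $start = now;
my $total = [+] ^1_000_000;   # a reduction full of multi dispatches
say "loop: {now - $start}s, total = $total";
```

If the in-program times match across branches while the wall-clock times differ, the difference is startup, not dispatch performance.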
I believe this is now safe to close. Please open any other performance issues against main. |
I chose the previous r3 test, and it shows some time increase for the new-disp branch, especially in the last example: 0.0088719 VS 0.0102194.