Sse.IsSupported prevents auto-inlining? #11595
The IL for `Sqrt_2` shows the problem. The inline heuristics we use today don't resolve tokens, so the jit does not realize that the calls in this method are intrinsics and that the first will fully resolve to a constant at jit time. So the cost of the inline is over-estimated and the inline is rejected. Not much we can do about this right now. A more detailed examination of the inline candidates would be helpful (and there are plenty of examples where better heuristics would give widespread benefits), but it would also cost a fair amount of jit time. It is something I'd like to address someday, perhaps when we are more confident about increasing the jit time for Tier1 jitting or we introduce even higher tiers.

You will need to add `[MethodImpl(MethodImplOptions.AggressiveInlining)]` if you want a method like this inlined:

```csharp
public static float Sqrt_2(float x)
{
    if (Sse41.IsSupported)
        return MathF.Sqrt(x);
    return x;
}
```
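For reference, a minimal sketch of that workaround (the attribute placement and the surrounding class are assumptions, not code from this issue):

```csharp
using System;
using System.Runtime.CompilerServices;
using System.Runtime.Intrinsics.X86;

public static class SqrtHelpers
{
    // AggressiveInlining tells the jit to skip its size/profitability
    // heuristics for this method, so the IsSupported guard can be inlined
    // into the caller and then folded away as a jit-time constant.
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static float Sqrt_2(float x)
    {
        if (Sse41.IsSupported)
            return MathF.Sqrt(x);
        return x;
    }
}
```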
Thanks @AndyAyersMS, that's what I thought - I am just wondering if I could add some kind of special case for these `IsSupported` checks to the inline heuristics.
I don't believe so. The problem is that, at this point, the JIT hasn't resolved what is being called; it just sees that a call is being made and gives it a "static" weight. If we knew that the call was to an intrinsic that folds to a constant at jit time it could be weighted differently, but that would require resolving the token up front (feel free to correct me if I got something mixed up here Andy 😄).
Thank you for the clarification!
The jit does inlining top down, so by and large the inlineability of an inline's callees has no impact on whether the inline itself is viable (this is not strictly true in some generic cases, but it is true enough). As Tanner says, all the jit sees early on is that there is a call.

The state machine the jit uses to estimate code size is very crude and hasn't been updated in a long time. I made some attempts to improve on this early modelling via machine-learning-derived heuristics and managed to get some respectable size predictions, but modelling the code-quality improvements proved more elusive. And even on the code-size front there were challenges in predicting cases where an inline actually reduced code size, or increased it much less than one might expect given the amount of IL.

Part of the challenge in evaluating inline impact by just looking at the IL stream is that there is a wide variety of expansions for some IL constructs -- calls in particular. Thus the jit can't really know what an IL-level call means without mapping the token to the right runtime information, and this mapping is somewhat expensive (look at what the jit has to do to resolve a call token). There are enough inline candidates and enough constraints on jit time that historically it has been deemed impractical to do this level of detailed analysis for every inline. Instead, the jit uses cheap but sometimes overly conservative heuristics.
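As a concrete, hypothetical illustration of the guarded-intrinsic pattern being discussed: in the sketch below, `Sse.IsSupported` folds to a jit-time constant and `Sse.Sqrt` expands to a single instruction, yet the early inline heuristics only see two opaque call sites in the IL.

```csharp
using System;
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

public static class VectorMath
{
    public static Vector128<float> Sqrt(Vector128<float> value)
    {
        // At jit time Sse.IsSupported is a constant, so one of these
        // branches disappears entirely -- but the IL still contains
        // two calls, which is all the early size estimator looks at.
        if (Sse.IsSupported)
            return Sse.Sqrt(value);   // expands to a single sqrtps

        return SoftwareFallback(value);
    }

    private static Vector128<float> SoftwareFallback(Vector128<float> value)
    {
        // Scalar fallback for platforms without SSE (illustrative only).
        return Vector128.Create(
            MathF.Sqrt(value.GetElement(0)),
            MathF.Sqrt(value.GetElement(1)),
            MathF.Sqrt(value.GetElement(2)),
            MathF.Sqrt(value.GetElement(3)));
    }
}
```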
I have a small benchmark:
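Roughly along these lines (a sketch assuming BenchmarkDotNet; only the names `Sqrt_1`, `Sqrt_2`, and `Benchmark2` come from the description below, and everything else, including `Benchmark1`, is assumed):

```csharp
using System;
using System.Runtime.Intrinsics.X86;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class SqrtBenchmarks
{
    private float _value = 42f;

    public static float Sqrt_1(float x)
    {
        return MathF.Sqrt(x);
    }

    public static float Sqrt_2(float x)
    {
        // Identical to Sqrt_1 except for the IsSupported guard, which the
        // jit folds away but the inliner still charges as a call site.
        if (Sse41.IsSupported)
            return MathF.Sqrt(x);
        return x;
    }

    [Benchmark]
    public float Benchmark1() => Sqrt_1(_value);

    [Benchmark]
    public float Benchmark2() => Sqrt_2(_value);

    public static void Main(string[] args) => BenchmarkRunner.Run<SqrtBenchmarks>();
}
```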
The only difference between `Sqrt_1` and `Sqrt_2` is that `if (Sse41.IsSupported)` expression, which apparently prevents this method from being inlined (but is eliminated anyway) in `Benchmark2()`.
Asm output:
PS: Tiered JIT is disabled.