JIT Information Discussion #32
Comments
You can tell from the stacks of JS which monkey is working (interpreter, baseline, ion). It would generally also be interesting to see how much time is spent in jitting and invalidation.
As further discussed, you can't tell from the stack. But as it turns out, the original SPS profile already has this information, and the new cleopatra is no longer discarding it, so we actually have it in hand. (!) I've updated the summary view to separate out ion/baseline script execution, though that's for the whole profile. I took a stab at exposing that information in several more places, but it seems I will need to learn React first. (And maybe, y'know, modern web development, and all that.) But this issue is really mostly about the JIT Coach stuff, so I'll file issues for the stuff I wanted to add.
😂
This thread isn't actionable by itself, but I'm leaving it open for now as a discussion to reference in follow-up work. Once we have some action items this will probably be fine to close.
I'm closing this as it was more of a discussion thread. |
JIT information is gathered in the profiles and this should be displayed for analysis.
Overview
I'm working on understanding how all of this works, so this is a summary of my understanding at this point, which may not be 100% correct. JavaScript source goes through the parser and is turned into bytecode, which is run by the interpreter. From there the code moves to the Baseline compiler, where instrumentation is added so that the engine can observe how the polymorphic JavaScript code is actually being used. The IonMonkey compiler is much more strict and uses those observations to access data in a monomorphic manner, so the resulting code is much faster: it doesn't carry the overhead of extensive checks for data types. However, if this optimized code is called with differently shaped data, the optimization can fail. This is a bail-out, and the code has to drop from Ion back down to baseline, which is a performance cost.
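To make the "differently shaped data" idea above concrete, here is a minimal sketch (hypothetical function and variable names) of a call site that starts out monomorphic and then sees new object shapes, which is exactly the situation that can force a JIT to bail out of its specialized code:

```javascript
// A property access site that a JIT can specialize while every
// argument has the same "shape" (same properties, same order).
function getX(point) {
  return point.x;
}

// Monomorphic phase: every object has the shape {x, y}.
let total = 0;
for (let i = 0; i < 10000; i++) {
  total += getX({ x: i, y: i });
}

// Differently shaped objects ({x, y, z} and {y, x}) now reach the
// same site; code specialized on {x, y} must either bail out to
// baseline or fall back to a more generic, slower path.
total += getX({ x: 1, y: 2, z: 3 });
total += getX({ y: 2, x: 1 });

console.log(total); // 49995000 + 1 + 1 = 49995002
```

The program's result is identical either way; only the engine's internal specialization (and therefore performance) is affected by the shape change.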
JIT Information in Profiler
I probably need to hunt down more specifics about the data exposed about the JIT, but it takes the form of a list of optimization decisions associated with individual frames.
There is a huge amount of information from the JIT, and we don't want developers to waste their time optimizing code that doesn't warrant these micro-optimizations. The following metrics are probably useful to expose in views.
JIT De-Optimization Rank
We care about hot functions that are executed frequently, and we care about optimization failures that actually affect performance. Thus we have two different metrics.
The worst-offending code, and the code that should be targeted for optimization, is code that has both spent a lot of time executing and has bad JIT de-optimizations. Any code that doesn't fit both criteria should be ignored.
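The two-metric filter described above could be sketched roughly as follows. The data shape (`selfTimeMs`, `deopts`) and the thresholds are assumptions for illustration, not fields from an actual profile format:

```javascript
// Sketch: rank functions by combining self time with a
// de-optimization count, ignoring anything that is not
// both hot and badly de-optimized.
function deoptRank(frames, { minSelfTimeMs = 5, minDeopts = 1 } = {}) {
  return frames
    .filter(f => f.selfTimeMs >= minSelfTimeMs && f.deopts >= minDeopts)
    .map(f => ({ name: f.name, score: f.selfTimeMs * f.deopts }))
    .sort((a, b) => b.score - a.score);
}

const ranked = deoptRank([
  { name: "hotAndBad",  selfTimeMs: 120, deopts: 4 }, // hot + de-optimized
  { name: "hotButFine", selfTimeMs: 200, deopts: 0 }, // hot, no de-opts
  { name: "coldAndBad", selfTimeMs: 1,   deopts: 9 }, // de-opts, but cold
]);
console.log(ranked); // only "hotAndBad" survives the filter
```

The product of time and de-opt count is just one possible scoring choice; the essential part is that both filters must pass before a function is surfaced at all.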
Bail-Out Rate
Bail-outs are bad for Ion code. There is an expensive cost to moving from Ion to baseline, and it means that code that is probably hot has moved back to baseline, which is slower. Surfacing bail-outs for function calls would probably be a useful metric for finding code that is de-optimized and slowing things down. I'm not sure whether bail-out rates for individual functions would be useful, or maybe the total sum.
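Both views mentioned above (per-function rate and total sum) could come out of one aggregation pass. The counter names (`calls`, `bailouts`) here are assumed for illustration and do not correspond to a real profile schema:

```javascript
// Sketch: compute a per-function bail-out rate plus the total
// across the profile, so both views are available.
function bailoutRates(functions) {
  const perFunction = functions.map(f => ({
    name: f.name,
    rate: f.calls > 0 ? f.bailouts / f.calls : 0,
  }));
  const total = functions.reduce((sum, f) => sum + f.bailouts, 0);
  return { perFunction, total };
}

const { perFunction, total } = bailoutRates([
  { name: "parse",  calls: 1000, bailouts: 50 }, // 5% bail-out rate
  { name: "render", calls: 200,  bailouts: 0  },
]);
console.log(total); // 50
```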
Where does this information live? Would this only be available in tracelogger, or is it already exposed in the gecko profiler?
Baseline vs Ion
Finally, it would be helpful to see how much time code actually spends in baseline vs Ion. This information is probably only available through tracelogger.
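If tracelogger-style data tags each sample with the tier it ran in, the baseline-vs-Ion split is a simple sum. The sample shape below (`tier`, `durationMs`) is an assumption for illustration only:

```javascript
// Sketch: sum sample time per JIT tier, assuming each sample is
// tagged "interpreter", "baseline", or "ion".
function timePerTier(samples) {
  const totals = { interpreter: 0, baseline: 0, ion: 0 };
  for (const s of samples) {
    totals[s.tier] += s.durationMs;
  }
  return totals;
}

const totals = timePerTier([
  { tier: "interpreter", durationMs: 3 },
  { tier: "baseline",    durationMs: 10 },
  { tier: "ion",         durationMs: 40 },
  { tier: "baseline",    durationMs: 5 },
]);
console.log(totals); // { interpreter: 3, baseline: 15, ion: 40 }
```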
Raw JIT Information
Probably the most dangerous for over-analysis is the individual JIT information associated with each frame. This is currently exposed in the performance devtools behind flags, and should be accessible here as well.
Prior art