
[mono][interp] Add tiering within interpreter #68823

Merged 9 commits into dotnet:main on May 25, 2022

Conversation

BrzVlad
Member

@BrzVlad BrzVlad commented May 3, 2022

Compile code initially without optimizations, recompile methods with optimizations enabled once we reach a certain threshold.

@ghost

ghost commented May 3, 2022

Tagging subscribers to this area: @BrzVlad
See info in area-owners.md if you want to be subscribed.

Issue Details

Compile code initially without optimizations, recompile methods with optimizations enabled once we reach a certain threshold.

Author: BrzVlad
Assignees: BrzVlad
Labels:

area-Codegen-Interpreter-mono

Milestone: -

@BrzVlad
Member Author

BrzVlad commented May 3, 2022

just checking CI for now

@SamMonoRT
Member

Contributes to #63809

For a single MonoMethod*, we can have two InterpMethod* instances: one with the optimized flag set to false and the other with it set to true. When tiering is enabled, the first time we obtain an InterpMethod* for a MonoMethod* we set the optimized flag to false. When generating code for such a method, if optimized is false we emit a special MINT_TIER_ENTER_METHOD opcode at the start and, later in the codegen process, skip applying optimizations to the method code.

The MINT_TIER_ENTER_METHOD opcode is executed on every method entry and bumps a counter. Once we hit the limit, the method is tiered up. This process consists of creating a new InterpMethod* instance which has optimized set, storing it in interp_code_hash, and changing the mapping from the old MonoMethod*. The optimized and unoptimized methods use the same argument space, so tiering the method up only requires setting the ip to the start of the tiered-up method code.

An additional problem with tiering is that we have to replace all references to the untiered method in generated code. InterpMethod* instances are stored inside the data_items of other methods and also inside vtables. When generating code for any method, we store in a hash table mappings from an untiered InterpMethod* instance to the addresses where this instance was stored. When we tier up the unoptimized method, we traverse the list of addresses where these references are stored and update each one to the optimized version.
Some optimizations might not be enabled by default, so add an option to enable them.
In unoptimized code, we add a patchpoint instruction when entering basic blocks that are targets of backward branches, provided the stack state is empty. This means that when tiering up a frame we just need to jump to the equivalent basic block in the tiered-up method and execution can continue. Since the arguments and IL locals reside in the same space in both versions of the method (their offsets are computed in interp_method_compute_offsets), no frame state needs to be copied.
We always take the jit_mm lock when finishing compilation of a method, so use it also for publishing InterpMethod* fields. This also prevents subtle races where the method could be tiered up before we take the jit_mm lock, resulting in publishing the seq_points for the untiered method.
Once we emit a tailcall, execution in the current bblock is finished.
We were doing unsigned conversion before
@lewing
Member

lewing commented May 13, 2022

/azp run runtime-wasm, runtime-extra-platforms

@azure-pipelines

Azure Pipelines successfully started running 2 pipeline(s).

@SamMonoRT
Member

@lewing @BrzVlad - my suggestion: we should merge this with tiering enabled by default. We have a couple of weeks until the Preview 5 cutoff, so we can flip that switch/flag/property to disabled if we see functional or perf issues by then. How do you both feel about this?

@BrzVlad BrzVlad force-pushed the feature-interp-tiering branch 3 times, most recently from f11e710 to 58ee025 on May 17, 2022
… enabled

When invoking these clauses we obtain the InterpMethod from the MonoMethod* and make use of the jit info stored during frame unwinding. However, the method might have been tiered up since the jit info was stored, so the native offsets stored there would no longer be relative to the optimized imethod. Fetch the MonoJitInfo* again from the imethod that we will actually be executing.
@BrzVlad
Member Author

BrzVlad commented May 18, 2022

This seems stable and ready for review.

I tried running some benchmarks from our own repo and didn't notice any difference. I'm not sure, however, whether there was supposed to be any difference there, given that they run code in a loop, where unoptimized compilation wouldn't have an impact. I also ran the default Blazor WASM sample, and tiering gave a great startup improvement (around 20%). I measured it by doing performance profiling in Chrome and checking the scripting time, which accounted for most of the time spent in startup.

I would agree with @SamMonoRT that it would be easier to merge with tiering enabled, have more iterations of test suites and benchmarking done automatically, and unset it as the default (maybe just for the release) if any problems arise. @lewing Any thoughts on how to proceed with this? Do you know if Blazor runs would pick up these changes relatively soon?

@SamMonoRT
Member

/azp run runtime-wasm, runtime-extra-platforms

@azure-pipelines

Azure Pipelines successfully started running 2 pipeline(s).

Member

@lewing lewing left a comment


I think we should merge it after the p5 branch and keep an eye out for regressions.

@lewing lewing merged commit 962a455 into dotnet:main May 25, 2022
radical added a commit to radical/runtime that referenced this pull request May 26, 2022
carlossanlop pushed a commit that referenced this pull request Jun 1, 2022
@ghost ghost locked as resolved and limited conversation to collaborators Jun 25, 2022

3 participants