Introduce a new SSA-based intermediate format for the compiler #1935
Conversation
The new code generator will use Y registers as a destination for binary construction and matching instructions. v3_codegen would always first store terms in an X register and it would be the responsibility of the optimization passes to optimize the extra moves.
Nicer to read and less confusing.
The new code generator will more aggressively reuse registers, so we must be more careful about updating the state for try/catch. In particular, an "empty" try/catch that can't throw an exception must not update the try/catch state.
Since the compiler will start optimizing more aggressively, beam_validator must keep up and improve its recognition of tuples and maps.
If we transfer state appropriately to labels that can't be reached, the state could taint other labels.
The func_info instruction does not expect a stack frame. There will be an assertion failure in the debug-compiled runtime system.
Smarter code generation means that beam_validator must be smarter too. In the following example, beam_validator must be able to infer that y0 refers to a map:

    move x0 y0
    test is_map L1 x0
    %% Here the type for y0 must be 'map'.
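The idea in the example above can be sketched as a tiny abstract interpretation. This is a hypothetical Python sketch, not the actual beam_validator code; the instruction tuples, register names, and the `run` helper are all illustrative. The point is that after a `move`, two registers hold the same value, so a type learned from a test on one register must also be recorded for its copies.

```python
# Hypothetical sketch: propagate a type learned by a test instruction
# to every register holding a copy of the same value.

def run(instrs):
    types = {}    # register -> inferred type
    copies = {}   # destination register -> source it was copied from
    for instr in instrs:
        op = instr[0]
        if op == 'move':
            _, src, dst = instr
            copies[dst] = src
            types[dst] = types.get(src, 'any')
        elif op == 'test_is_map':
            _, reg = instr
            # On the success path of the test, `reg` is a map; the same
            # value also lives in every register copied from it.
            types[reg] = 'map'
            for dst, src in copies.items():
                if src == reg:
                    types[dst] = 'map'
    return types

# Mirrors the example: move x0 y0, then test is_map on x0.
types = run([('move', 'x0', 'y0'), ('test_is_map', 'x0')])
```

With this bookkeeping, the validator knows that y0 is a map even though the test was performed on x0.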
Don't match exact BEAM instructions when trying to recognize function_clause exceptions that should be replaced with a jump to the func_info instruction. Instead, do a symbolic evaluation of the list building code and see if the result is a list of the argument registers. While at it, also teach fix_block_1/2 to completely remove a test_heap instruction before an exception generation instruction.
This will enable more optimizations.
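The symbolic-evaluation idea can be illustrated with a small sketch. This is hypothetical Python, not the compiler's code; the `put_list` tuples and the `builds_arg_list` helper are made up for illustration. Instead of matching exact instruction sequences, we evaluate the cons operations symbolically and check whether the resulting term is the list of argument registers that a function_clause error would build.

```python
# Hypothetical sketch: symbolically evaluate list-building code and
# check whether it constructs [x0, x1, ..., x(N-1)].

def builds_arg_list(instrs, result_reg, arity):
    env = {'nil': []}  # symbolic environment: register -> list of cells
    for op, head, tail, dst in instrs:
        if op != 'put_list':
            return False
        # Cons `head` onto the symbolic value of `tail`.
        env[dst] = [head] + env.get(tail, [tail])
    built = env.get(result_reg)
    return built == ['x%d' % i for i in range(arity)]

# Builds [x0, x1] back to front, the way BEAM-style code would:
code = [('put_list', 'x1', 'nil', 'x2'),
        ('put_list', 'x0', 'x2', 'x2')]
```

Because the check is on the symbolic result rather than the instruction sequence, it keeps working when the code generator emits the conses in a different shape.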
As a preparation for replacing v3_codegen with a new code generator, remove unsafe optimization passes. Especially the older compiler passes have implicit assumptions about how the code is generated.

Remove the optimizations in beam_block (keep the code that creates blocks) because they are unsafe. beam_block also calls beam_utils:live_opt/1, which is unsafe.

Remove beam_type because it calls beam_utils:live_opt/1, and also because it recalculates the number of heap words and the number of live registers in allocation instructions, thus potentially hiding bugs in other passes.

Remove beam_receive because it is unsafe.

Remove beam_record because it is the only remaining user of beam_utils:anno_defs/1.

Remove beam_reorder because it makes much more sense to run it as an early SSA-based optimization pass.

Remove the now unused functions in beam_utils:

    anno_def/1
    delete_annos/1
    is_killed_block/2
    live_opt/1
    usage/3

Note that the following test cases will fail because of the removed optimizations:

    compile_SUITE:optimized_guards/1
    compile_SUITE:bc_options/1
    receive_SUITE:ref_opt/1
The removal of redundant bs_restore2 instructions is easier to do in the SSA format. Keep the rest of the optimizations, because they are easier to do on the BEAM instructions.
This optimization can be better done in the SSA format before code generation.
If one of the destination registers for get_map_elements is the same as the map source, extract that element last.
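The reordering rule above can be sketched in a few lines. This is a hypothetical Python illustration, not the compiler's implementation; the pair representation and the `order_pairs` helper are made up. A pair whose destination register is the map source would overwrite the map, so every other extraction must happen first.

```python
# Hypothetical sketch: order (key, destination) pairs of a
# get_map_elements so that a pair whose destination is the map source
# register is extracted last (it clobbers the map).

def order_pairs(map_src, pairs):
    safe = [p for p in pairs if p[1] != map_src]
    clobbering = [p for p in pairs if p[1] == map_src]
    return safe + clobbering

# x0 holds the map, and key 'a' is also extracted into x0:
pairs = [('a', 'x0'), ('b', 'x1')]
ordered = order_pairs('x0', pairs)   # ('a', 'x0') moves to the end
```

After reordering, the value for key 'b' is read while the map is still intact, and only then is x0 overwritten with the value for 'a'.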
@@ -110,6 +110,8 @@ undo_rename({test,has_map_fields,Fail,[Src|List]}) ->
     {test,has_map_fields,Fail,Src,{list,List}};
 undo_rename({get_map_elements,Fail,Src,{list,List}}) ->
     {get_map_elements,Fail,Src,{list,List}};
+undo_rename({test,is_eq_exact,Fail,[Src,nil]}) ->
+    {test,is_nil,Fail,[Src]};
Is it possible the nil value will be the first argument? If so, I think this should cover both cases.

The previous code in bif_to_test would convert all of [] == X, [] =:= X, X == [], and X =:= [] to the is_nil instruction.
> Is it possible the nil value will be the first argument?

No, that should not be possible. There is this clause in beam_utils:bif_to_test/3 that places any literal argument in the second position:
bif_to_test('=:=', [C,A], Fail) when ?is_const(C) ->
{test,is_eq_exact,Fail,[A,C]};
Here is the definition of is_const:
-define(is_const(Val), (Val =:= nil orelse
element(1, Val) =:= integer orelse
element(1, Val) =:= float orelse
element(1, Val) =:= atom orelse
element(1, Val) =:= literal)).
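The interaction between the two rewrites can be sketched in Python. This is a hypothetical illustration, not the Erlang code; the tuple encodings and the `bif_to_test` / `is_const` helpers only mirror the clauses quoted above. Because the normalization always puts a constant argument second, the later is_eq_exact-to-is_nil rewrite only needs to check the second operand.

```python
# Hypothetical sketch of the normalization discussed above.

def is_const(v):
    # Mirrors the ?is_const macro: nil or a tagged constant term.
    return v == 'nil' or (isinstance(v, tuple) and
                          v[0] in ('integer', 'float', 'atom', 'literal'))

def bif_to_test(op, args, fail):
    a, b = args
    if op == '=:=' and is_const(a):
        a, b = b, a                    # place the constant second
    test = ('test', 'is_eq_exact', fail, [a, b])
    if test[3][1] == 'nil':            # nil can now only be operand 2
        return ('test', 'is_nil', fail, [test[3][0]])
    return test
```

Both argument orders end up as the same is_nil test, which is why the single-clause rewrite in the diff is sufficient.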
Awesome. Sorry for the confusion.
Force-pushed from e8ceda2 to 1380e63.
v3_codegen is replaced by three new passes:

* beam_kernel_to_ssa which translates the Kernel Erlang format to a new SSA-based intermediate format.
* beam_ssa_pre_codegen which prepares the SSA-based format for code generation, including register allocation. Registers are allocated using the linear scan algorithm.
* beam_ssa_codegen which generates BEAM assembly code from the SSA-based format.

It is easier and more effective to optimize the SSA-based format before X and Y registers have been assigned. The current optimization passes constantly have to make sure no "holes" in the X register assignments are created (that is, that no X register becomes undefined that an allocation instruction depends on).

This commit also introduces the following optimizations:

* Replacing of tuple matching of records with the is_tagged_tuple instruction. (Replacing beam_record.)
* Sinking of get_tuple_element instructions to just before the first use of the extracted values. As well as potentially avoiding extracting tuple elements when they are not actually used on all execution paths, this optimization could also reduce the number of values that will need to be stored in Y registers. (Similar to beam_reorder, but more effective.)
* Live optimizations, removing the definition of a variable that is not subsequently used (provided that the operation has no side effects), as well as strength reduction of binary matching by replacing the extraction of a value from a binary with a skip instruction. (Used to be done by beam_block, beam_utils, and v3_codegen.)
* Removal of redundant bs_restore2 instructions. (Formerly done by beam_bs.)
* Type-based optimizations across branches. More effective than the old beam_type pass that only did type-based optimizations in basic blocks.
* Optimization of floating point instructions. (Formerly done by beam_type.)
* Optimization of receive statements to introduce recv_mark and recv_set instructions. More effective with far fewer restrictions on what instructions are allowed between creating the reference and entering the receive statement.
* Common subexpression elimination. (Formerly done by beam_block.)
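The linear scan algorithm mentioned for beam_ssa_pre_codegen can be sketched compactly. This is a hypothetical Python illustration, not the actual register allocator; the interval format `(variable, start, end)`, the register names, and the absence of spilling are all simplifications. The algorithm walks live intervals in order of their start position, expiring intervals that have ended and reusing their registers.

```python
# Minimal linear scan sketch over live intervals (var, start, end).
# No spilling: we simply fail if registers run out.

def linear_scan(intervals, nregs):
    intervals = sorted(intervals, key=lambda iv: iv[1])  # by start
    free = ['x%d' % n for n in range(nregs)]
    active = []   # (end, register) for currently live assignments
    alloc = {}
    for var, start, end in intervals:
        # Expire intervals that ended before this one starts,
        # returning their registers to the free list.
        for e, r in [a for a in active if a[0] < start]:
            active.remove((e, r))
            free.append(r)
        if not free:
            raise RuntimeError('out of registers: would need to spill')
        reg = free.pop(0)
        active.append((end, reg))
        alloc[var] = reg
    return alloc

# 'a' and 'b' overlap, so they get distinct registers; 'c' starts after
# both have died and can reuse a freed register.
alloc = linear_scan([('a', 0, 3), ('b', 1, 2), ('c', 4, 5)], 2)
```

One attraction of linear scan over graph coloring is its speed: a single ordered pass over the intervals, which matters for a compiler pass that runs on every module.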
Force-pushed from 038af1c to b3af7a2.
Hi @bjorng! This looks very interesting, happy to see it merged! Are there any existing passes that you suspect could be cleaner (i.e. fewer lines of code) or more efficient (i.e. applies in more cases) if moved to SSA? If so, I would love to explore it, as a way to get more familiar with how SSA works. Thanks!
Here are some passes that I think could benefit from SSA:
* The sub pass …
* The minor optimizations in …
* Optimization of …
* Getting rid of the call to …

A pass I am not sure about is …

I am currently working on a replacement for …
Thanks! I will let you know when I get started with any of those (possibly the dsetel one) to make sure there is no work duplication.
This pull request introduces a Static Single Assignment (SSA) format as a new intermediate format in the compiler. The code generator, as well as several optimization passes, has been completely rewritten.
It is easier to write many kinds of optimizations for the new SSA-based intermediate format than for the BEAM assembly language. We plan to introduce significantly improved optimizations of binary matching later this year.
For more details about the new compiler passes, see the individual commit messages in this branch (especially "Introduce a new SSA-based intermediate format").
We hope to merge this pull request in one or two weeks. We'll try to fix any major bugs (compiler crashes or incorrectly generated code) before merging.