
Additional IR step between AST and LLVM IR


The current flow of computation is parse -> AST -> LLVM IR -> codegen. The shortcoming of this is that we don't have a good intermediate representation that reflects the control flow of the original program, and thus we can't easily do various analysis passes that would be desirable.

Motivation

More specifically, consider a program with a 'varying' if test. The LLVM IR that we generate will be a single basic block, with both the 'if' and 'else' code in there, just with mask modification instructions around them--we've lost clear visibility into what the program is actually doing. This means, for example, that just by looking at the LLVM IR we generate, we can't easily determine that any code after the 'if'/'else' in the following example will never be reached.

float foo(...) {
    if (/* varying test */)
        return ...;
    else
        return ...;
    // this will never be reached
}

(It would be nice to issue a warning about a missing return statement for code like the following, while not incorrectly issuing one for code like the above.)

float foo(...) {
    if (/* varying test */)
        return ...;
    // oops, missing return statement here
}
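
To make this concrete, the following is a rough sketch (C-like pseudocode, not the literal instructions ispc emits) of what the mask-based lowering of the first example above amounts to:

// Illustrative pseudocode only; 'mask' is the per-program-instance
// execution mask.
old_mask = mask;
test = /* varying test, evaluated for all program instances */;

// 'then' side, run under old_mask & test
mask = old_mask & test;
return_value = blend(return_value, /* 'then' result */, mask);
returned_mask = returned_mask | mask;

// 'else' side, run under old_mask & ~test
mask = old_mask & ~test;
return_value = blend(return_value, /* 'else' result */, mask);
returned_mask = returned_mask | mask;

// continue after the if/else
mask = old_mask & ~returned_mask;

All of this lands in a single basic block. The fact that 'mask' is necessarily all-off after the if/else (every program instance has returned) is buried in the mask arithmetic rather than visible in the control-flow structure, which is exactly the kind of information the analyses discussed here need.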

Another application is that we could hoist computations that are common to the 'if' and 'else' blocks of a predicated if statement out of the if statement entirely; these redundant computations are notably wasteful in the predicated case, where both blocks always execute. A specific case of this is masked loads where the same vector is loaded on both sides of an if/else, such that we could instead do a single unmasked load before the 'if'.
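
As an illustration (the names here are invented for this example), both sides of the following 'if' load the same element a[i], so with a CFG-level view the load could be done once, unmasked, before the test:

float bar(uniform float a[], int i, float b, bool test) {
    float result;
    if (test)              // varying test
        result = a[i] * b;
    else
        result = a[i] + b;
    return result;
}

Today each branch does its own masked gather of a[i]; seeing the original control flow would make it clear that the gather happens unconditionally.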

Yet another application is that we could safely emit masked if statements (i.e. run both sides regardless of whether any program instances want to run them), which is a win for simple if statements when it's safe to do so. We'd perform early optimizations like function inlining, constant propagation, etc., and could then determine whether what remained was reasonably simple and whether it would do anything potentially unsafe if no program instances were running (a function call, a load/store, ...).
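
For example (again with invented names), an 'if' like the following is a candidate: both sides are a single arithmetic operation with no loads, stores, or calls, so running both of them under a possibly all-off mask can't do any harm, and the 'any program instances active?' checks can be skipped.

float absdiff(float x, float y) {
    float result;
    if (x > y)             // varying test
        result = x - y;
    else
        result = y - x;
    return result;
}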

A more important application of a better IR in the middle is that it would let us do dataflow analysis for things like determining whether a 'varying' variable could actually be represented as a 'uniform'. We could issue a notice about this case, or do an optimization that made the transformation automatically. Even better, we could determine whether a variable could be uniform for part of the execution of a function and only promote it to varying once it really had to be; this could enable better code generation for the part of the function where it was in fact still uniform.
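
As a sketch of the kind of case such an analysis could catch (names invented here), 'i' below is varying by declaration but provably holds the same value in every program instance until the assignment from 'x' near the end:

float sum_n(uniform int count, float x) {
    float sum = 0;
    int i;                         // varying by declaration...
    for (i = 0; i < count; ++i)    // ...but uniform in practice here: 'count'
        sum += x;                  // is uniform and the loop isn't inside
                                   // varying control flow
    i = (int)x;                    // from this point on 'i' really is varying
    return sum + i;
}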

A lot of other analysis passes aren't currently possible since we don't have the pre-SIMD-ified CFG...

Possible approach

The root issue is that the LLVM IR only represents scalar control flow; it doesn't have the general notion of "branching" based on a per-vector-lane test. (This is not unreasonable!). It would be nice to not have to invent our own IR from scratch, especially since LLVM-isms have crept pretty far into the compiler.

One potential approach is to continue to generate LLVM basic blocks as we do now for basic blocks of ispc code, but then have a SPMD CFG representation on top of it--i.e. our own little wrapper that encodes 'varying' control flow. We could then do analysis based on this, before lowering it to regular LLVM IR.

One possible issue with that approach is whether we'd have lost useful higher-level information about types and the like by going to LLVM IR for the basic blocks. This could possibly be mitigated with custom LLVM IR metadata.

Note that we probably don't want a fully general CFG with arbitrary jumps, but rather a higher-level representation that still maintains higher-order notions of loops, 'if' tests, etc. This would probably lead to a nice simplification of the control-flow code in stmt.cpp, with all of the lowering of control flow to SPMD-on-SIMD localized elsewhere.
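
To make the shape of such a representation a bit more concrete, here is a purely hypothetical sketch in C-like pseudocode (none of these names exist in the ispc sources): a small structured node type that wraps already-emitted LLVM basic blocks while keeping the if/loop structure explicit.

// Hypothetical sketch only.
enum SpmdNodeKind { SpmdStraightLine, SpmdIf, SpmdLoop };

struct SpmdNode {
    enum SpmdNodeKind kind;
    bool testIsVarying;           // for SpmdIf/SpmdLoop: uniform or varying test?
    void *llvmBlocks;             // handle to the LLVM basic block(s) already
                                  // generated for this node's straight-line code
    struct SpmdNode *ifTrue;      // SpmdIf: 'then' subtree
    struct SpmdNode *ifFalse;     // SpmdIf: 'else' subtree (may be NULL)
    struct SpmdNode *loopBody;    // SpmdLoop: body subtree
    struct SpmdNode *next;        // next node in source order
};

Analysis passes (unreachable code after a varying return, uniform/varying dataflow, hoisting of common computations) would walk this tree, and only afterwards would the SpmdIf/SpmdLoop nodes with varying tests be flattened into the predicated single-basic-block form we generate today.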