
Switch to ORC jit infrastructure

This will simplify some aspects of funclet management, and also gets LLILC
onto the newer jitting framework on which active development is focused.

ORC jits are created by composing `layers`.  LLILC's functionality is
covered by two "off-the-shelf" layers: the IRCompileLayer (using the
SimpleCompiler utility) for compiling IR to machine code, and the
ObjectLinkingLayer for resolving fixups and relocations and interfacing
with the EEMemoryManager to load the machine code into appropriate memory.

Closes #610
JosephTremoulet committed Jun 2, 2015
1 parent 941c2f0 commit 47513add13980e7a32f9b0ec3d2da3db0911bca2
Showing with 68 additions and 48 deletions.
  1. +2 −6 Documentation/
  2. +23 −1 include/Jit/LLILCJit.h
  3. +33 −33 lib/Jit/LLILCJit.cpp
  4. +10 −8 lib/Reader/readerir.cpp
@@ -61,12 +61,8 @@ BitCode creation.

[LLVM]( is a great code generator that supports lots of platforms and CPU targets. It also has
facilities to be used as both a JIT and AOT compiler. This combination of features, lots of targets, and ability
-to compile across a spectrum of compile times, attracted us to LLVM. For our JIT we use the LLVM MCJIT. This
-infrastructure allows us to use all the different targets supported by the MC infrastructure as a JIT. This was our
-quickest path to running code. We're aware of the ORC JIT infrastructure but as the CoreCLR only notifies the JIT
-to compile a method one method at a time, we currently would not get any benefit from the particular features of ORC.
-(We already compile one method per module today and we don't have to do any of the inter module fixups as that is
-performed by the runtime.)
+to compile across a spectrum of compile times, attracted us to LLVM. For our JIT we use LLVM's ORC JIT

There is a further discussion of how we're modeling the managed code semantics within LLVM in a following
@@ -22,6 +22,8 @@
#include "llvm/IR/LLVMContext.h"
#include "llvm/Support/ManagedStatic.h"
#include "llvm/Support/ThreadLocal.h"
+#include "llvm/ExecutionEngine/Orc/ObjectLinkingLayer.h"
+#include "llvm/ExecutionEngine/Orc/IRCompileLayer.h"

class ABIInfo;
struct LLILCJitPerThreadState;
@@ -79,7 +81,7 @@ struct LLILCJitContext {
llvm::LLVMContext *LLVMContext; ///< LLVM context for types and similar.
llvm::Module *CurrentModule; ///< Module holding LLVM IR.
-llvm::ExecutionEngine *EE; ///< MCJIT execution engine.
+llvm::TargetMachine *TM; ///< Target characteristics.
bool HasLoadedBitCode; ///< Flag for side-loaded LLVM IR.

@@ -155,6 +157,23 @@ struct LLILCJitPerThreadState {
std::map<CORINFO_FIELD_HANDLE, uint32_t> FieldIndexMap;

+/// \brief Stub \p SymbolResolver expecting no resolution requests
+///
+/// The ObjectLinkingLayer takes a SymbolResolver ctor parameter.
+/// The CLR EE resolves tokens to addresses for the Jit during IL reading,
+/// so no symbol resolution is actually needed at ObjLinking time.
+class NullResolver : public llvm::RuntimeDyld::SymbolResolver {
+  llvm::RuntimeDyld::SymbolInfo findSymbol(const std::string &Name) final {
+    llvm_unreachable("Reader resolves tokens directly to addresses");
+  }
+  llvm::RuntimeDyld::SymbolInfo
+  findSymbolInLogicalDylib(const std::string &Name) final {
+    llvm_unreachable("Reader resolves tokens directly to addresses");
+  }
+};

/// \brief The Jit interface to the CoreCLR EE.
/// This class implements the Jit interface to the CoreCLR EE. The EE uses this
@@ -167,6 +186,9 @@ struct LLILCJitPerThreadState {
/// top-level invocations of the jit is held in thread local storage.
class LLILCJit : public ICorJitCompiler {
+  typedef llvm::orc::ObjectLinkingLayer<> LoadLayerT;
+  typedef llvm::orc::IRCompileLayer<LoadLayerT> CompileLayerT;

/// \brief Construct a new jit instance.
/// There is only one LLILC jit instance per process, so this
@@ -22,9 +22,6 @@
#include "abi.h"
#include "EEMemoryManager.h"
#include "llvm/CodeGen/GCs.h"
-#include "llvm/ExecutionEngine/ExecutionEngine.h"
-#include "llvm/ExecutionEngine/MCJIT.h"
-#include "llvm/ExecutionEngine/SectionMemoryManager.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/IRBuilder.h"
@@ -42,8 +39,10 @@
#include "llvm/Support/Format.h"
#include "llvm/Support/Signals.h"
#include "llvm/Support/CommandLine.h"
+#include "llvm/Support/TargetRegistry.h"
#include "llvm/Transforms/Scalar.h"
#include "llvm/Transforms/IPO/PassManagerBuilder.h"
+#include "llvm/ExecutionEngine/Orc/CompileUtils.h"
#include <string>

using namespace llvm;
@@ -172,33 +171,31 @@ CorJitResult LLILCJit::compileMethod(ICorJitInfo *JitInfo,

Context.Options = &JitOptions;

-EngineBuilder Builder(std::move(M));
+// Construct the TargetMachine that we will emit code for.
std::string ErrStr;
-std::unique_ptr<RTDyldMemoryManager> MM(new EEMemoryManager(&Context));
+const llvm::Target *TheTarget =
+    TargetRegistry::lookupTarget(LLILC_TARGET_TRIPLE, ErrStr);
+if (!TheTarget) {
+  errs() << "Could not create Target: " << ErrStr << "\n";
+}
+TargetOptions Options;
+CodeGenOpt::Level OptLevel;
+if (Context.Options->OptLevel != OptLevel::DEBUG_CODE) {
+  OptLevel = CodeGenOpt::Level::Default;
+} else {
+  OptLevel = CodeGenOpt::Level::None;
+  // Options.NoFramePointerElim = true;
+}
+TargetMachine *TM = TheTarget->createTargetMachine(
+    LLILC_TARGET_TRIPLE, "", "", Options, Reloc::Default, CodeModel::Default,
+    OptLevel);
+Context.TM = TM;
-ExecutionEngine *NewEngine = Builder.create();
-if (!NewEngine) {
-  errs() << "Could not create ExecutionEngine: " << ErrStr << "\n";
-}
-// Don't allow the EE to search for external symbols.
-Context.EE = NewEngine;
+// Construct the jitting layers.
+EEMemoryManager MM(&Context);
+LoadLayerT Loader;
+CompileLayerT Compiler(Loader, orc::SimpleCompiler(*TM));

// Now jit the method.
@@ -221,17 +218,17 @@ CorJitResult LLILCJit::compileMethod(ICorJitInfo *JitInfo,


+// Don't allow the LoadLayer to search for external symbols, by supplying
+// it a NullResolver.
+NullResolver Resolver;
+auto HandleSet =
+    Compiler.addModuleSet<ArrayRef<Module *>>(M.get(), &MM, &Resolver);

// You need to pick up the COFFDyld changes from the MS branch of LLVM
// or this will fail with an "Incompatible object format!" error
// from LLVM's dynamic loader.
-uint64_t FunctionAddress =
-*NativeEntry = (BYTE *)FunctionAddress;
+*NativeEntry =
+    (BYTE *)Compiler.findSymbol(Context.MethodName, false).getAddress();

// TODO: ColdCodeSize, or separated code, is not enabled or included.
*NativeSizeOfCode = Context.HotCodeSize + Context.ReadOnlyDataSize;
@@ -245,13 +242,16 @@ CorJitResult LLILCJit::compileMethod(ICorJitInfo *JitInfo,
// Dump out any enabled timing info.

// Give the jit layers a chance to free resources.

// Tell the CLR that we've successfully generated code for this method.
Result = CORJIT_OK;

// Clean up a bit
-delete Context.EE;
-Context.EE = nullptr;
+delete Context.TM;
+Context.TM = nullptr;
delete Context.TheABIInfo;
Context.TheABIInfo = nullptr;

@@ -526,7 +526,7 @@ void GenIR::insertIRToKeepGenericContextAlive() {

// This method now requires a frame pointer.
-TargetMachine *TM = JitContext->EE->getTargetMachine();
+TargetMachine *TM = JitContext->TM;
// TM->Options.NoFramePointerElim = true;

// TODO: we must convey the offset of this local to the runtime
@@ -764,7 +764,8 @@ void GenIR::zeroInitLocals() {
// points.
StructType *StructTy = dyn_cast<StructType>(LocalTy);
if (StructTy != nullptr) {
-const DataLayout *DataLayout = JitContext->EE->getDataLayout();
+const DataLayout *DataLayout =
+    &JitContext->CurrentModule->getDataLayout();
const StructLayout *TheStructLayout =
    DataLayout->getStructLayout(StructTy);
zeroInitBlock(LocalVar, TheStructLayout->getSizeInBytes());
@@ -799,7 +800,7 @@ void GenIR::copyStruct(Type *StructTy, Value *DestinationAddress,
ReaderAlignType Alignment) {
// TODO: For small structs we may want to generate an integer StoreInst
// instead of calling a helper.
-const DataLayout *DataLayout = JitContext->EE->getDataLayout();
+const DataLayout *DataLayout = &JitContext->CurrentModule->getDataLayout();
const StructLayout *TheStructLayout =
IRNode *StructSize =
@@ -1267,7 +1268,7 @@ Type *GenIR::getClassType(CORINFO_CLASS_HANDLE ClassHandle, bool IsRefClass,

// Cache the context and data layout.
LLVMContext &LLVMContext = *JitContext->LLVMContext;
-const DataLayout *DataLayout = JitContext->EE->getDataLayout();
+const DataLayout *DataLayout = &JitContext->CurrentModule->getDataLayout();

// We need to fill in or create a new type for this class.
if (StructTy == nullptr) {
@@ -1714,7 +1715,7 @@ void GenIR::addFieldsRecursively(
llvm::Type *Ty) {
StructType *StructTy = dyn_cast<StructType>(Ty);
if (StructTy != nullptr) {
-const DataLayout *DataLayout = JitContext->EE->getDataLayout();
+const DataLayout *DataLayout = &JitContext->CurrentModule->getDataLayout();
for (Type *SubTy : StructTy->subtypes()) {
addFieldsRecursively(Fields, Offset, SubTy);
Offset += DataLayout->getTypeSizeInBits(SubTy) / 8;
@@ -1730,7 +1731,7 @@ void GenIR::createOverlapFields(

// Prepare to create and measure types.
LLVMContext &LLVMContext = *JitContext->LLVMContext;
-const DataLayout *DataLayout = JitContext->EE->getDataLayout();
+const DataLayout *DataLayout = &JitContext->CurrentModule->getDataLayout();

// Order the OverlapFields by offset.
std::sort(OverlapFields.begin(), OverlapFields.end());
@@ -2184,7 +2185,7 @@ uint32_t GenIR::addArrayFields(std::vector<llvm::Type *> &Fields, bool IsVector,
uint32_t ArrayRank, CorInfoType ElementCorType,
LLVMContext &LLVMContext = *JitContext->LLVMContext;
-const DataLayout *DataLayout = JitContext->EE->getDataLayout();
+const DataLayout *DataLayout = &JitContext->CurrentModule->getDataLayout();
uint32_t FieldByteSize = 0;
// Array length is (u)int32 ....
Type *ArrayLengthTy = Type::getInt32Ty(LLVMContext);
@@ -3150,7 +3151,8 @@ IRNode *GenIR::simpleFieldAddress(IRNode *BaseAddress,
// in unverifiable IL we may not have proper referent types and
// so may see what appear to be unrelated field accesses.
if (BaseObjStructTy->getNumElements() > FieldIndex) {
-const DataLayout *DataLayout = JitContext->EE->getDataLayout();
+const DataLayout *DataLayout =
+    &JitContext->CurrentModule->getDataLayout();
const StructLayout *StructLayout =
    DataLayout->getStructLayout(BaseObjStructTy);
const uint32_t FieldOffset = StructLayout->getElementOffset(FieldIndex);
