@@ -227,34 +227,33 @@ These two operations, ``add`` and ``emit``, together constitute the layer
 concept: A layer is a way to wrap a part of a compiler pipeline (in this case
 the "opt" phase of an LLVM compiler) whose API is opaque to ORC with an
 interface that ORC can call as needed. The add method takes an
-module in some input program representation (in this case an LLVM IR module) and
-stores it in the target JITDylib, arranging for it to be passed back to the
-Layer's emit method when any symbol defined by that module is requested. Layers
-can compose neatly by calling the 'emit' method of a base layer to complete
-their work. For example, in this tutorial our IRTransformLayer calls through to
-our IRCompileLayer to compile the transformed IR, and our IRCompileLayer in turn
-calls our ObjectLayer to link the object file produced by our compiler.
-
-
-So far we have learned how to optimize and compile our LLVM IR, but we have not
-focused on when compilation happens. Our current REPL is eager: Each function
-definition is optimized and compiled as soon as it is referenced by any other
-code, regardless of whether it is ever called at runtime. In the next chapter we
-will introduce fully lazy compilation, in which functions are not compiled until
-they are first called at run-time. At this point the trade-offs get much more
-interesting: the lazier we are, the quicker we can start executing the first
-function, but the more often we will have to pause to compile newly encountered
-functions. If we only code-gen lazily, but optimize eagerly, we will have a
-longer startup time (as everything is optimized) but relatively short pauses as
-each function just passes through code-gen. If we both optimize and code-gen
-lazily we can start executing the first function more quickly, but we will have
-longer pauses as each function has to be both optimized and code-gen'd when it
-is first executed. Things become even more interesting if we consider
-interprocedural optimizations like inlining, which must be performed eagerly.
-These are complex trade-offs, and there is no one-size-fits all solution to
-them, but by providing composable layers we leave the decisions to the person
-implementing the JIT, and make it easy for them to experiment with different
-configurations.
+module in some input program representation (in this case an LLVM IR module)
+and stores it in the target ``JITDylib``, arranging for it to be passed back
+to the layer's emit method when any symbol defined by that module is requested.
+Each layer can complete its own work by calling the ``emit`` method of its base
+layer. For example, in this tutorial our IRTransformLayer calls through to
+our IRCompileLayer to compile the transformed IR, and our IRCompileLayer in
+turn calls our ObjectLayer to link the object file produced by our compiler.
+
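+To make the hand-off concrete, here is a sketch of a transform layer whose
+``emit`` completes its work by calling its base layer's ``emit``. The class
+name and members are illustrative rather than the tutorial's exact code, and
+the ORC API details vary between LLVM versions:
+
+.. code-block:: c++
+
+  #include "llvm/ExecutionEngine/Orc/Core.h"
+  #include "llvm/ExecutionEngine/Orc/Layer.h"
+  #include "llvm/ExecutionEngine/Orc/ThreadSafeModule.h"
+  #include "llvm/Support/Error.h"
+  #include "llvm/Support/raw_ostream.h"
+
+  #include <functional>
+  #include <memory>
+  #include <utility>
+
+  // Hypothetical transform layer: it owns a transform function and a
+  // reference to the layer below it, and delegates to that layer once its
+  // own work is done.
+  class MyTransformLayer {
+  public:
+    using TransformFn =
+        std::function<llvm::Expected<llvm::orc::ThreadSafeModule>(
+            llvm::orc::ThreadSafeModule)>;
+
+    MyTransformLayer(llvm::orc::IRLayer &BaseLayer, TransformFn Transform)
+        : BaseLayer(BaseLayer), Transform(std::move(Transform)) {}
+
+    void emit(std::unique_ptr<llvm::orc::MaterializationResponsibility> R,
+              llvm::orc::ThreadSafeModule TSM) {
+      if (auto Transformed = Transform(std::move(TSM)))
+        // Hand the (possibly rewritten) module down the stack.
+        BaseLayer.emit(std::move(R), std::move(*Transformed));
+      else {
+        // Report the transform error and fail the requested symbols.
+        llvm::logAllUnhandledErrors(Transformed.takeError(), llvm::errs());
+        R->failMaterialization();
+      }
+    }
+
+  private:
+    llvm::orc::IRLayer &BaseLayer;
+    TransformFn Transform;
+  };
+
+This mirrors the shape of the layers used in this tutorial: the transform
+function holds the layer-specific work (here, optimization), while everything
+below it is reached through a single ``emit`` call.
+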
+So far we have learned how to optimize and compile our LLVM IR, but we have
+not focused on when compilation happens. Our current REPL optimizes and
+compiles each function as soon as it is referenced by any other code,
+regardless of whether it is ever called at runtime. In the next chapter we
+will introduce fully lazy compilation, in which functions are not compiled
+until they are first called at run-time. At this point the trade-offs get much
+more interesting: the lazier we are, the quicker we can start executing the
+first function, but the more often we will have to pause to compile newly
+encountered functions. If we only code-gen lazily, but optimize eagerly, we
+will have a longer startup time (as everything is optimized at that time) but
+relatively short pauses as each function just passes through code-gen. If we
+both optimize and code-gen lazily we can start executing the first function
+more quickly, but we will have longer pauses as each function has to be both
+optimized and code-gen'd when it is first executed. Things become even more
+interesting if we consider interprocedural optimizations like inlining, which
+must be performed eagerly. These are complex trade-offs, and there is no
+one-size-fits-all solution to them, but by providing composable layers we
+leave the decisions to the person implementing the JIT, and make it easy for
+them to experiment with different configurations.
 
 `Next: Adding Per-function Lazy Compilation <BuildingAJIT3.html>`_
 