The jit will be a tracing jit, not a method jit.
dynasm is currently the easiest choice: it supports more archs, and moar already uses it. But you have to write the insns manually, without the higher-level abstraction of libjit or asmjit. On the other hand it supports other, more important abstractions, like types and slots, ...
See e.g. https://github.com/imasahiro/rujit/ for a memory-hungry tracing jit that is 2-3x faster.
We also need the jit for the ffi, so we can omit libffi and just go with the jit.
But first we will start with a very simple method jit in LLVM, to benchmark the cost/benefit ratio of the simple linearization, and do the simplest and easiest op optimizations first: esp. nextstate, which is currently the most costly op, mostly due to the unneeded stack reset on every single line. The jit knows the stack depth for most simple ops and can easily bypass that (#18). The jit also knows about locals and tainted vars.
Then we can start counting calls and loops, and switch between the jit and the bytecode runloop when beneficial. The open question is whether the LLVM optimizer can inline the ops, or whether it needs their IR: unladen_swallow e.g. had to compile a complete libpython.bc runtime, and still needed a huge and slow LLVM abstraction library to emit the IR.