Merge pull request snabbco#89 from lukego/readme-redux
README.md: New introduction
lukego committed Aug 21, 2017
2 parents 46f00a4 + 86f23d6 commit 53e93f1
Showing 1 changed file with 31 additions and 48 deletions.

[![Build Status](https://travis-ci.org/raptorjit/raptorjit.svg?branch=master)](https://travis-ci.org/raptorjit/raptorjit)

**RaptorJIT** is a fork of LuaJIT focused on _predictably high performance_.

Making performance predictable for application developers brings new requirements:

- Minimizing the performance impact of non-deterministic JIT decisions (see the sketch below).
- Providing an accurate mental model of how the JIT works and which programming techniques are effective.
- Providing diagnostic tools ([Studio](https://hydra.snabb.co/job/lukego/studio-manual/studio-manual-html/latest/download-by-type/file/Manual#view-hot-traces)) consistent with this mental model to make the actual operation transparent.
- Making profiling completely ubiquitous in development, testing, and production environments.
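
To make the first point concrete, here is a small Lua sketch. It is hypothetical, not taken from RaptorJIT's code or documentation, and only illustrates why trace-based JIT decisions can vary from run to run: the trace recorded for a hot loop follows whichever branch happens to be taken during recording, so identical source code can end up with different trace trees (and different performance) depending on the data it first sees.

```lua
-- Hypothetical illustration, not RaptorJIT code: the root trace recorded for
-- this loop follows whichever branch is taken while the loop is being
-- recorded, so differently ordered inputs can lead to different trace trees
-- (and timings) for the same source.
local function sum_abs(t)
  local acc = 0
  for i = 1, #t do
    local x = t[i]
    if x >= 0 then acc = acc + x else acc = acc - x end
  end
  return acc
end

-- Same multiset of values in two different orders: the branch that is hot
-- during trace recording differs, so the JIT's decisions may differ too.
local mostly_positive, mostly_negative = {}, {}
for i = 1, 1000000 do
  mostly_positive[i] = (i % 10 == 0) and -i or i
  mostly_negative[i] = (i % 10 == 0) and i or -i
end
print(sum_abs(mostly_positive), sum_abs(mostly_negative))
```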

The development process has to support moving quickly in these directions:

- Quality assurance based on repeatable standard benchmarks executed by CI.
- Streamlined codebase: x86-64 architecture, 64-bit heap (GC64), "no `#ifdef`."
- Distributed development ("Linux-style") with many maintainers, forks, and merges.

Once these requirements have been thoroughly satisfied, new requirements can be introduced. For example, ARM64 and other platforms can be supported as the project matures.

### Performance

RaptorJIT takes a quantitative approach to performance. The value of an
optimization must be demonstrated with a reproducible benchmark.
Optimizations that are not demonstrably beneficial on recent CPU
generations are removed.

This makes the following classes of pull requests very welcome:

[…]

The CI benchmark suite will evolve over time, starting from the [standard LuaJIT benchmarks](https://hydra.snabb.co/job/luajit/branchmarks/benchmarkResults/latest/download/2) (already covers RaptorJIT) and the [Snabb end-to-end benchmark suite](https://hydra.snabb.co/job/snabb-new-tests/benchmarks-murren-large/benchmark-reports.report-full-matrix/latest/download/2) (must be updated to cover RaptorJIT).
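
As a sketch of what a reproducible benchmark can look like, the essentials are a fixed workload and a fixed iteration count, so that timings from successive runs and successive VM builds are directly comparable. This is a hypothetical example, not one of the benchmarks in the suite:

```lua
-- Hypothetical micro-benchmark sketch, not part of the CI suite: the
-- workload and the iteration count are fixed so that the reported time is
-- comparable across runs and across VM builds.
local ITERATIONS = 200

local function workload()
  -- A fixed, deterministic piece of work.
  local acc = 0
  for i = 1, 1000000 do
    acc = acc + i % 7
  end
  return acc
end

local start = os.clock()
for _ = 1, ITERATIONS do
  workload()
end
print(string.format("%.3f CPU seconds for %d iterations",
                    os.clock() - start, ITERATIONS))
```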


### Compilation for users

Simple build:

[…]

… as [Hydra](https://nixos.org/hydra/) then the tests can be
automatically parallelized and distributed across a suitable build
farm.

### Optimization resources

These are the authoritative optimization resources for processors
supported by RaptorJIT. If you are confused by references to CPU
details in discussions, these are the places to look for answers.

- [Computer Architecture: A Quantitative Approach](https://www.amazon.com/Computer-Architecture-Fifth-Quantitative-Approach/dp/012383872X) by Hennessy and Patterson.
- [Intel Architectures Optimization Reference Manual](http://www.intel.com/content/www/us/en/architecture-and-technology/64-ia-32-architectures-optimization-manual.html).
- Agner Fog's [software optimization resources](http://www.agner.org/optimize/):
- [Instruction latency and throughput tables](http://www.agner.org/optimize/instruction_tables.pdf).
- [Microarchitecture of Intel, AMD, and VIA CPUs](http://www.agner.org/optimize/microarchitecture.pdf).
- [Optimizing subroutines in assembly language for x86](http://www.agner.org/optimize/optimizing_assembly.pdf).

The [AnandTech review of the Haswell microarchitecture](http://www.anandtech.com/show/6355/intels-haswell-architecture) is also excellent lighter reading.

### Quotes

Here are some borrowed words to put this branch into context:

[…]
