SoftDev Warmup Experiment

This is the main repository for the Software Development Team Warmup Experiment as detailed in the paper "Virtual Machine Warmup Blows Hot and Cold", by Edd Barrett, Carl Friedrich Bolz, Rebecca Killick, Sarah Mount and Laurence Tratt.

The paper is available here

Running the warmup experiment

The script build.sh fetches and builds the VMs and the Krun benchmarking system. Once the VMs are built, the Makefile target bench-with-reboots runs the experiment in full. However, you should first consult the Krun documentation (fetched into krun/ by build.sh): a great deal of manual intervention is needed to compile a tickless kernel, disable Intel P-states, set up rc.local, and so on.
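As a minimal sketch, the overall workflow looks roughly like this (it assumes you are in the root of the repository checkout and have already completed the manual machine setup described above):

```shell
# Sketch of the workflow, assuming the repository root as the
# working directory and a fully configured benchmarking machine.
if [ -f build.sh ]; then
    sh build.sh               # fetch and build the VMs, Krun and warmup_stats
    make bench-with-reboots   # run the full experiment, rebooting between runs
else
    echo "run this from the repository checkout" >&2
fi
```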

Note that the experiment is designed to run on amd64 machines running Debian 8 or OpenBSD. Newer versions of Debian do not currently work due to a C++ ABI bump which would require a newer C++ compiler (a newer GCC or perhaps clang).

Calling build.sh also installs our warmup_stats code, which includes a number of scripts to format benchmark results as plots or tables (similar to those in the paper) and to diff results files. warmup_stats has a number of dependencies, some of which are also needed by the code in this repository, in particular:

  • Python 2.7 - the code here is not Python 3.x ready
  • bzip2 / bunzip2 and the bzip2 library (including header files)
  • curl (including header files)
  • gcc and make
  • liblzma library (including header files)
  • Python modules: numpy, pip, libcap
  • openssl (including header files)
  • pkg-config
  • pcre library (including header files)
  • readline (including header files)
  • wget

The install instructions for warmup_stats contain more details.
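As a rough pre-flight check, you can at least confirm that the command-line tools from the list above are on your PATH before running build.sh (libraries and header files cannot be checked this way):

```shell
# Report which of the required command-line tools are installed.
# Tool names are taken from the dependency list above.
for tool in python2.7 gcc make curl wget pkg-config bzip2; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "found:   $tool"
    else
        echo "missing: $tool"
    fi
done
```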

Print-traced Benchmarks

The paper mentions that to ensure benchmarks are "AST deterministic", we instrumented them with print statements. These versions can be found alongside the "proper" benchmarks under the benchmarks/ directory.

For example under benchmarks/fasta/lua/:

  • bench.lua is the un-instrumented benchmark used in the proper experiment.
  • trace_bench.lua is the instrumented version.

Special notes:

  • Java benchmarks also have an additional trace_KrunEntry.java file.
  • Since we cannot distribute Java Richards, a patch (patches/trace_java_richards.diff) is required to derive the tracing version.
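For the Java Richards case, a hedged sketch of deriving the tracing version: once you have obtained a local copy of the Java Richards sources and placed them under the benchmark's directory, the supplied patch can be applied from the repository root.

```shell
# Apply the tracing patch for Java Richards (path from the notes above).
# This assumes the un-patched Richards sources are already in place.
if [ -f patches/trace_java_richards.diff ]; then
    patch -p1 < patches/trace_java_richards.diff
else
    echo "patch not found; run from the repository root" >&2
fi
```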