CHR Benchmark Suite
Compare the execution runtimes of different implementations of Constraint Handling Rules.
The following systems with CHR support are compared. By CHR(X) we denote a CHR system implemented as an extension of language X.
The test cases are based on the paper "CCHR: the fastest CHR Implementation, in C." by Wuille, Schrijvers and Demoen (2007). The example problems are:
- fib: The bottom-up calculation of the Fibonacci numbers.
- gcd: The calculation of the greatest common divisor of two integers, based on the subtraction-based Euclidean algorithm.
- leq: A constraint solver for less-equal constraints between variables.
- primes: An implementation of the Sieve of Eratosthenes to generate prime numbers.
- ram: A simulator of a Random Access Machine.
- tak: Implementation of the Takeuchi function.
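As a rough guide to what the benchmarks compute, here are plain-Python sketches of three of the problems (fib, gcd, tak). These only illustrate the reference semantics; the actual benchmarks are written as CHR rules, and the Fibonacci numbering convention used here (fib(0) = 0, fib(1) = 1) is an assumption, not taken from the benchmark sources.

```python
def fib(n):
    # Bottom-up Fibonacci, assuming fib(0) = 0, fib(1) = 1.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def gcd(a, b):
    # Subtraction-based Euclidean algorithm, as in the gcd benchmark:
    # repeatedly subtract the smaller operand from the larger one.
    while a != 0 and b != 0:
        if a >= b:
            a -= b
        else:
            b -= a
    return a + b  # exactly one of a, b is zero here

def tak(x, y, z):
    # Takeuchi function, the classic heavily recursive benchmark.
    if x <= y:
        return z
    return tak(tak(x - 1, y, z),
               tak(y - 1, z, x),
               tak(z - 1, x, y))
```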
Due to differing features of the examined CHR systems, not all tests have been implemented for all systems.
The Makefile contains a large number of targets. For every system (e.g. chrjs and native C) there are several sub-targets, in particular:
- system.preinstall: Installs dependencies and sets up the benchmark environment, for example by creating temporary directories.
- system.install: Installs the actual system.
- system.prepare: Performs preparation tasks for the benchmarks. Usually this includes compiling the test source files, for example compiling *.jchr files for JCHR.
- system.clean: Deletes temporary directories and files. This should be called before the benchmark is executed.
- system.test: Runs each test once to check that it is executable. This generally produces no output; the tests have passed if no error occurs.
- system.bench: Executes the benchmarks for this system.
Apart from these, there are further system-dependent sub-targets, for example to benchmark only a single system or a single test case.
To install all given systems and prepare the benchmarks, run these two commands:
$ sudo make install
$ sudo make prepare
To start the benchmarks, simply call

$ make bench
./bench.pl ## bench=leq ### sys=swi
swi/leq:1 (0,0.00186556577682495)*1696 exp
swi/leq:2 (0,0.00171436494731004)*1803 exp
swi/leq:3 (4.44119496322037e-05,0.00171436494731004)*1766 exp
...
The systems and problems are tested in no particular order, so swi/leq need not come first.
Because benchmarking all systems takes several hours, make bench.save should be preferred. It creates a bench.out file with the benchmark results.
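The meaning of the individual result fields is not documented here; assuming each line follows the shape seen in the sample output above (system/problem:run, a parenthesized pair of numbers, a *count, and a trailing label), a minimal sketch for splitting such a line into named fields could look like this. The field names are guesses for illustration only.

```python
import re

# Hypothetical pattern for one result line such as
#   swi/leq:1 (0,0.00186556577682495)*1696 exp
# Field names (run, count, label, ...) are assumptions, not taken
# from the benchmark suite's documentation.
LINE = re.compile(
    r"(?P<system>\w+)/(?P<problem>\w+):(?P<run>\d+)\s+"
    r"\((?P<a>[^,]+),(?P<b>[^)]+)\)\*(?P<count>\d+)\s+(?P<label>\w+)"
)

def parse_line(line):
    """Return a dict of fields, or None if the line does not match."""
    m = LINE.match(line.strip())
    return m.groupdict() if m else None

fields = parse_line("swi/leq:1 (0,0.00186556577682495)*1696 exp")
```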
Plot Benchmark Results
The bench.out file can be used to create a plot of the benchmark results. This creates the /plots directory, which contains a PDF for each problem.
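The per-problem PDFs are produced by the repository's own plotting script. Purely as an illustration of the aggregation step such a script needs (grouping bench.out lines by problem across systems), a hedged Python sketch might look like this; it is not the suite's actual plotting code.

```python
from collections import defaultdict

def group_by_problem(lines):
    # Group raw result lines by problem name, assuming each line
    # starts with "system/problem:run " as in the sample output.
    groups = defaultdict(list)
    for line in lines:
        head, _, rest = line.partition(" ")
        if "/" not in head:
            continue  # skip lines that are not result lines
        system, _, problem_run = head.partition("/")
        problem = problem_run.split(":")[0]
        groups[problem].append((system, rest.strip()))
    return groups

sample = [
    "swi/leq:1 (0,0.00186556577682495)*1696 exp",
    "swi/leq:2 (0,0.00171436494731004)*1803 exp",
]
groups = group_by_problem(sample)
```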