
Adding a new benchmark and implementation to an existing suite

HF edited this page Mar 23, 2016 · 23 revisions

This page explains how to add a new benchmark and implementation(s) to an existing benchmark suite, such as the Ostrich benchmark suite. It uses the recursive Fibonacci algorithm to illustrate the process.

Create a directory for the benchmark

Choose a short, descriptive name for the benchmark and create a directory with that name under the suite's 'benchmarks' directory:

    mkdir benchmarks/fib
    cd benchmarks/fib

Create a 'benchmark.json' description file for the benchmark in the benchmarks/[benchmark-name] folder

The 'benchmark.json' description file (in this case, in the 'fib' folder) should follow the benchmark description format. Here is a working example for Fibonacci:

    {
       "type": "benchmark",
       "name":"N-th Fibonacci Sequence Number Recursive Computation",
       "short-name":"fib",
       "description":"Computes the N-th number of the Fibonacci sequence using the recursive algorithm.",
       "version":"0.1.0",
       "input-size":{
          "small":10,
          "medium":40,
          "large":45
       }
    }

Create a C language implementation

Create the 'implementations' directory that will contain one (or more) language implementations:

    mkdir implementations
    cd implementations

Create a 'c' folder for the C implementation:

    mkdir c
    cd c

Create a 'fib.c' file for the core computation. The core computation is the compute-intensive section of the algorithm whose execution time we want to measure. Here is a sample C implementation:

    int fib(int x) {
        if (x < 2) return x;
        return fib(x-1) + fib(x-2);
    }

Create a 'runner.c' runner file for the benchmark in the implementations/c folder. The runner parses the command-line arguments and performs any input/output that falls outside the core computation we want to measure:

    #include <stdlib.h>
    #include <stdio.h>
    #include <sys/time.h>
    #include <string.h>

    typedef struct __stopwatch_t{
        struct timeval begin;
        struct timeval end;
    }stopwatch;

    void stopwatch_start(stopwatch *sw){
        if (sw == NULL)
            return;

        memset(&sw->begin, 0, sizeof(struct timeval));
        memset(&sw->end  , 0, sizeof(struct timeval));

        gettimeofday(&sw->begin, NULL);
    }

    void stopwatch_stop(stopwatch *sw){
        if (sw == NULL)
            return;

        gettimeofday(&sw->end, NULL);
    }

    double get_interval_by_sec(stopwatch *sw){
        if (sw == NULL)
            return 0;
        return ((double)(sw->end.tv_sec-sw->begin.tv_sec)+(double)(sw->end.tv_usec-sw->begin.tv_usec)/1000000);
    }

    extern int fib(int x);

    int main(int argc, char* argv[]) {
        stopwatch sw;

        if (argc < 2) {
            printf("usage: %s x\n", argv[0]);
            return 1;
        }

        int n = atoi(argv[1]);

        /* Time only the core computation. */
        stopwatch_start(&sw);
        int x = fib(n);
        stopwatch_stop(&sw);

        /* Report the timing and result as JSON on standard output. */
        printf("{ \"time\": %f, \"output\": %d }\n", get_interval_by_sec(&sw), x);
        return 0;
    }

Create an 'implementation.json' description file in the implementation's 'c' directory:

    {
        "type": "implementation",
        "short-name":"c",
        "description":"Reference C implementation",
        "language":"c",
        "core-source-files":[
           { "file": "./fib.c"}
        ],
        "runner-source-file": { "file": "./runner.c" },
        "runner-arguments": [
            { "expand": "/experiment/input-size" }
        ]
    }

Check that the benchmark and implementation JSON description files are valid:

    wu list
    -> 'fib' should be in the list of benchmarks
    -> 'c' should be in the list of implementations

Build the 'c' implementation of the benchmark:

    wu check fib c gcc --build-config
    -> should list a compilation command for gcc that has no '<unresolved ...>' element

    wu build -v fib c
    -> Note the builds/<id> value

Run the generated build:

    wu check fib c gcc --run-config
    -> should list a run command that uses the 'native' execution environment,
    -> then the builds/<id>/runner generated previously,
    -> and all subsequent elements should be resolved (no '<unresolved ...>' element in the command)
    
    wu run -v fib c gcc
    -> should output a running time of ~1s and an output of '102334155'

(Optional) Split core computation from I/O and input preparation

(Optional) Create an output checker script for correctness

TODO

Create a JavaScript implementation

TODO
