

Yardstick - Benchmarking Framework

Yardstick is a framework for writing benchmarks. Specifically it helps with writing benchmarks for clustered or otherwise distributed systems.

The framework comes with a default set of probes that collect various metrics during benchmark execution. Probes can be turned on or off in the configuration. You can use a probe for measuring throughput and latency, a probe that gathers vmstat statistics, etc. At the end of a benchmark run, Yardstick automatically produces files with probe points.

GridGain Benchmarks On Yardstick

See Yardstick GridGain Benchmarks as an example of Yardstick framework usage.


Generated Graph


Available Probes

  1. ThroughputLatencyProbe - measures throughput and latency
  2. DStatProbe - collects information provided by the Linux/Unix ‘dstat’ command, such as various network, CPU, or memory metrics
  3. VmStatProbe - collects information provided by the Linux/Unix ‘vmstat’ command (whose output is a subset of ‘dstat’), such as various network, CPU, or memory metrics
  4. PercentileProbe - tracks the latency of each individual request and aggregates it into time-frame buckets

Creating Yardstick Benchmarks

There are two main interfaces that need to be implemented, BenchmarkServer and BenchmarkDriver:

  • BenchmarkDriver is an instance of the benchmark that performs some operation that needs to be tested.
  • BenchmarkServer is the remote server that the BenchmarkDriver communicates with.

You can benchmark any distributed operation with Yardstick. For example, if you need to measure message processing time in your application, you can put the message sending logic into a BenchmarkDriver, and the message processing logic into one or more remote BenchmarkServers.

It is as simple as this. Yardstick will measure throughput, latency, and other metrics for you automatically and produce nice graphs at the end.
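The driver/server split can be illustrated with a minimal, self-contained echo sketch. This is a hedged sketch: the interfaces below are simplified stand-ins for Yardstick's real BenchmarkServer and BenchmarkDriver (the real ones take a BenchmarkConfiguration and have additional lifecycle methods), and the socket-based EchoServer/EchoBenchmark pair is only meant to show where server logic and driver logic live:

```java
import java.io.*;
import java.net.*;

// Simplified stand-ins for Yardstick's interfaces (hypothetical; the real
// interfaces carry more lifecycle methods and a BenchmarkConfiguration).
interface BenchmarkServer {
    void start() throws Exception;
    void stop() throws Exception;
}

interface BenchmarkDriver {
    void setUp() throws Exception;
    boolean test() throws Exception; // one benchmark operation; the framework times it
    void tearDown() throws Exception;
}

// Server side: echoes each line it receives back to the client.
class EchoServer implements BenchmarkServer {
    private ServerSocket srv;

    public int port() { return srv.getLocalPort(); }

    @Override public void start() throws Exception {
        srv = new ServerSocket(0); // bind to any free port
        Thread loop = new Thread(() -> {
            try (Socket s = srv.accept();
                 BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                String line;
                while ((line = in.readLine()) != null)
                    out.println(line); // echo back
            } catch (IOException ignored) { }
        });
        loop.start();
    }

    @Override public void stop() throws Exception { srv.close(); }
}

// Driver side: each test() call sends a message and waits for the echo.
class EchoBenchmark implements BenchmarkDriver {
    private final int port;
    private Socket sock;
    private BufferedReader in;
    private PrintWriter out;

    EchoBenchmark(int port) { this.port = port; }

    @Override public void setUp() throws Exception {
        sock = new Socket("localhost", port);
        in = new BufferedReader(new InputStreamReader(sock.getInputStream()));
        out = new PrintWriter(sock.getOutputStream(), true);
    }

    @Override public boolean test() throws Exception {
        out.println("ping");
        return "ping".equals(in.readLine()); // true = successful operation
    }

    @Override public void tearDown() throws Exception { sock.close(); }
}

class EchoExample {
    public static void main(String[] args) throws Exception {
        EchoServer server = new EchoServer();
        server.start();
        EchoBenchmark driver = new EchoBenchmark(server.port());
        driver.setUp();
        System.out.println("echo ok: " + driver.test()); // prints "echo ok: true"
        driver.tearDown();
        server.stop();
    }
}
```

In a real Yardstick benchmark, test() is invoked repeatedly by the framework's driver threads, which time each call to produce the throughput and latency metrics.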


You can find Yardstick benchmark examples in the Yardstick GridGain Benchmarks project mentioned above.

Running Yardstick Benchmarks

The easiest way to run benchmarks is by executing the bin/benchmark-run-all.sh script, which will automatically start the benchmark driver and remote servers based on the properties file passed in (config/benchmark.properties by default):

$ bin/benchmark-run-all.sh config/benchmark.properties

This script will automatically restart the benchmark driver and remote servers for every benchmark configuration provided in the config/benchmark.properties file.

At the end of the run, you can generate graphs by executing the bin/jfreechart-graph-gen.sh script, passing the folders that contain benchmark results.

$ bin/jfreechart-graph-gen.sh -i results_2014-05-16_00-28-01 results_2014-05-15_18-38-14

Starting Remote Servers

If you do not wish to run the bin/benchmark-run-all.sh script and prefer to have more control over starting and stopping remote servers, you can use the benchmark-servers-start.sh script directly.

$ bin/benchmark-servers-start.sh config/benchmark.properties

Server log files are stored in the logs-<current time>/logs_servers folder.

Starting Benchmark Driver

Again, if you do not wish to run the bin/benchmark-run-all.sh script, you can start the benchmark driver directly by executing the benchmark-drivers-start.sh script.

$ bin/benchmark-drivers-start.sh config/benchmark.properties

Driver log files are stored in the logs-<current time>/logs_drivers folder.

Stopping Remote Servers

To stop remote servers after the benchmark is finished, you can execute the benchmark-servers-stop.sh script.

$ bin/benchmark-servers-stop.sh config/benchmark.properties

Manual Running

There are scripts that allow running servers and drivers on the local machine only; no SSH commands are used to start remote servers or drivers.

To start servers, you can execute the benchmark-manual-servers-start.sh script. Optionally, the number of servers can be passed to this script; if it is not specified, the number of servers will be equal to the number of server hosts (the SERVER_HOSTS property in the properties file).

$ bin/benchmark-manual-servers-start.sh config/benchmark.properties

To start drivers, you can execute the benchmark-manual-drivers-start.sh script.

$ bin/benchmark-manual-drivers-start.sh config/benchmark.properties

Servers and drivers are stopped manually, for example with the kill command.

Windows versions of the manual scripts are shipped as well: the benchmark-manual-servers-start.bat and benchmark-manual-drivers-start.bat batch scripts.

Properties And Command Line Arguments

The following properties can be defined in benchmark properties file:

  • BENCHMARK_DEFAULT_PROBES - list of default probes
  • BENCHMARK_PACKAGES - packages in which the specified benchmark is searched for via reflection
  • BENCHMARK_WRITER - probe point writer class name (by default CSV writer is used)
  • SERVER_HOSTS - comma-separated list of IP addresses where servers should be started, one server per host
  • DRIVER_HOSTS - comma-separated list of IP addresses where drivers should be started, one driver per host; if the property is not defined, the driver will be run on localhost
  • REMOTE_USER - SSH user for logging in to remote hosts
  • JVM_OPTS - list of general JVM options used to start both server and driver node
  • SERVER_JVM_OPTS - list of JVM options used to start server node (appended to JVM_OPTS)
  • DRIVER_JVM_OPTS - list of JVM options used to start driver node (appended to JVM_OPTS)
  • RESTART_SERVERS - can be used in two modes. (1) RESTART_SERVERS=true - Yardstick starts new servers for each benchmark. (2) RESTART_SERVERS=<hostname_1>:<id_1>:<delay_1>:<pause_1>:<period_1>,<hostname_2>:<id_2>:<delay_2>:<pause_2>:<period_2> - a comma-separated list of colon-separated tuples, each consisting of a hostname, a server host id (the zero-based index of that hostname in SERVER_HOSTS), a delay before the first restart (stop, then pause, then start), the pause between killing and restarting the server, and the period between subsequent restarts. The delay, pause, and period are given in seconds (fractional values such as 0.1 are allowed). Note that the restarter's delay is absolute time measured from benchmark start; it does not depend on the benchmark's warmup delay. In this mode, Yardstick also starts new servers for each benchmark (the same as with RESTART_SERVERS=true), and after the driver starts, it kills and restarts the server on the given hostname according to the configured delay, pause, and period.
  • CONFIGS - comma-separated list of benchmark run configurations which are passed to the servers and to the benchmarks
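To make the RESTART_SERVERS tuple format concrete, here is a hedged sketch (the hostname and timings are invented for illustration):

```
# Restart the server with host id 0 (the first entry of SERVER_HOSTS) on 192.168.1.5:
# first restart 30 seconds after benchmark start, with a 5-second pause between stop
# and start, then a restart every 60 seconds.
RESTART_SERVERS=192.168.1.5:0:30:5:60
```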

Example of a benchmark.properties file that runs 2 instances of EchoServer:

# List of default probes.
BENCHMARK_DEFAULT_PROBES=ThroughputLatencyProbe,PercentileProbe

# Packages where the specified benchmark is searched by reflection mechanism.
BENCHMARK_PACKAGES=org.yardstickframework

# Probe point writer class name.
# BENCHMARK_WRITER=

# General JVM options.
# JVM_OPTS=

# Server JVM options.
# SERVER_JVM_OPTS=

# Driver JVM options.
# DRIVER_JVM_OPTS=

# Comma-separated list of remote hosts to run BenchmarkServers on.
# If same host is specified multiple times, then benchmark server will be started on that host multiple times.
SERVER_HOSTS=localhost,localhost

# Comma-separated list of remote hosts to run BenchmarkDrivers on.
# If same host is specified multiple times, then benchmark driver will be started on that host multiple times.
DRIVER_HOSTS=localhost

# Remote username.
# REMOTE_USER=

# Comma-separated list of benchmark driver and server configuration parameters.
CONFIGS="--localBind localhost --duration 30 -t 2 -sn EchoServer -dn EchoBenchmark,\
--localBind localhost --duration 30 -t 4 -sn EchoServer -dn EchoBenchmark"

The following properties can be defined in the benchmark configuration:

  • -cfg <path> or --config <path> - framework configuration file path
  • -dn <list> or --driverNames <list> - space-separated list of driver names (required for the driver). The specified drivers will be run in one JVM. Optionally, a weight can be added to a driver name, for example EchoBenchmark:3 NewEchoBenchmark:7, in which case EchoBenchmark runs for 30% of the benchmark time and NewEchoBenchmark for 70%
  • -sn <name> or --serverName <name> - server name (required for the server)
  • -p <list> or --packages <list> - comma-separated list of packages for benchmarks
  • -pr <list> or --probes <list> - comma-separated list of probes for benchmarks
  • -wr <name> or --writer <name> - probe point writer class name
  • -t <num> or --threads <num> - thread count (set to 'cpus * 2')
  • -d <time> or --duration <time> - test duration, in seconds
  • -w <time> or --warmup <time> - warmup time, in seconds
  • -sh or --shutdown - flag indicating whether to invoke shutdown hook or not
  • -of <path> or --outputFolder <path> - output folder for benchmark results, current folder is used by default
  • -ds <list> or --descriptions <list> - space-separated list of benchmark run descriptions, the description with index 1 corresponds to the driver with index 1 and so on
  • -hn <name> or --hostName <name> - host name where a benchmark driver is run, this property is set automatically by the benchmark scripts

For example, if we need to run the EchoServer server and the EchoServerBenchmark benchmark on localhost, with a test duration of 20 seconds, then the following configuration should be specified in the run properties file:

  • SERVER_HOSTS=localhost
  • CONFIGS="--duration 20 -sn EchoServer -dn EchoServerBenchmark"
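Put together, a hedged minimal run properties file for this example (comments added for illustration) would be:

```
# Run EchoServer on localhost and benchmark it with EchoServerBenchmark for 20 seconds.
SERVER_HOSTS=localhost
CONFIGS="--duration 20 -sn EchoServer -dn EchoServerBenchmark"
```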

JFreeChart Graphs

Yardstick ships with the bin/jfreechart-graph-gen.sh script that builds JFreeChart graphs from probe points. The script accepts the following arguments:

  • -i <list> or --inputFolders <list> - space-separated list of input folders that contain folders with probe results files (required)
  • -cc <num> or --chartColumns <num> - number of columns that the charts are displayed in on the resulted page
  • -gm <mode> or --generationMode <mode> - mode that defines the way how different benchmark runs are compared with each other
  • -sm <mode> or --summaryMode <mode> - mode that defines whether a summary plot is added to a graph. Adding summary plots is useful when two or more drivers are run in one JVM (driver names defined via the --driverNames configuration property) or when two or more drivers are run on multiple hosts (the DRIVER_HOSTS property in the properties file). In these cases, the plots of the ThroughputLatencyProbe or PercentileProbe probes from multiple drivers can be replaced with one summary plot.

Generation modes:

  • STANDARD - All benchmark results are displayed on separate graphs. Graphs are generated in the benchmark run folder.
   bin/jfreechart-graph-gen.sh -gm STANDARD -i results_2014-05-20_03-19-21
  • COMPARISON - Benchmarks from multiple folders are paired together. In this mode, two or more results folders are compared in such a way that benchmark 1 from result 1 is compared with benchmark 1 from result 2, and likewise for benchmarks 2, 3, and so on.
   bin/jfreechart-graph-gen.sh -gm COMPARISON -i results_2014-05-20_03-19-21 results_2014-05-20_03-20-35
  • COMPOUND - Benchmarks from multiple (space-separated) folders are shown together on the same graph. This is the default mode.
   bin/jfreechart-graph-gen.sh -gm COMPOUND -i results_2014-05-20_03-19-21 results_2014-05-20_03-20-35

Summary modes:

  • SUM_ONLY - A summary plot is added to the graph. This is the default mode.
   bin/jfreechart-graph-gen.sh -sm SUM_ONLY -i results_2014-05-20_03-19-21
  • INDIVIDUAL_ONLY - Only individual plots are displayed.
   bin/jfreechart-graph-gen.sh -sm INDIVIDUAL_ONLY -i results_2014-05-20_03-19-21 results_2014-05-20_03-20-35
  • INDIVIDUAL_AND_SUM - Individual and summary plots are displayed.
   bin/jfreechart-graph-gen.sh -sm INDIVIDUAL_AND_SUM -i results_2014-05-20_03-19-21 results_2014-05-20_03-20-35

Maven Install

The easiest way to get started with Yardstick in your project is to use Maven dependency management:
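As a hedged sketch, a dependency declaration would look roughly like the following (the coordinates follow the project's org.yardstickframework naming; substitute the actual latest release for the placeholder version):

```xml
<dependency>
    <groupId>org.yardstickframework</groupId>
    <artifactId>yardstick</artifactId>
    <!-- placeholder: substitute the latest released version -->
    <version>LATEST_VERSION</version>
</dependency>
```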


Yardstick is shipped with scripts that run servers and drivers; these scripts can be used for your own benchmarks. To get them, just unzip the Maven artifact. This can also be done from your benchmark project's POM file (see how it's done in Yardstick GridGain).



Use GitHub issues to file bugs.


Yardstick is available under Apache 2.0 Open Source license.
