benchmark-containers

Repository Structure: This repository contains a benchmark suite for nodes in Kubernetes clusters. Its cornerstones are the benchmark containers, which include the tooling that the Kubernetes deployments need to conduct the automated benchmarking. The cluster-script deploys the Kubernetes apps to the configured clusters. Since multiple clusters can be benchmarked, the script runs on a local machine or an external server and is not deployed on any of the benchmarked clusters.

Benchmark container build files

The top-level build script and the container subfolders explained below form the benchmark container build files for a variety of cloud-native system-level benchmarks. All containers are published as images for arm64 and amd64 on quay.io/kinvolk/CONTAINERNAME.
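
For example, assuming the image names follow the container folder names one-to-one, the fio image could be pulled like this:

* docker pull quay.io/kinvolk/fio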

All containers listed below also ship version 0.8.0 of dstat. dstat's CSV output in particular is a convenient way of streaming system metrics.
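
As an illustration, a dstat invocation such as the following streams CPU, memory, and network metrics to a CSV file every second (column selection and output path are example values):

* dstat --time --cpu --mem --net --output /tmp/metrics.csv 1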

The containers are based on Alpine and thus use musl as libc instead of glibc.

Building

All benchmark containers ship dstat, which is built as a dependency. We provide a convenience build script for creating container images with this dependency: build.sh in the project root.

Call ./build.sh without arguments to build all containers in the project, or ./build.sh <container> [<container>] ... to build individual containers (e.g. ./build.sh fio to just build the Fio container).

Please note that the perf container currently needs to be built separately from ./build.sh, as the perf build contains a number of workarounds.

memtier / memcached / redis container

The memtier container ships release 1.2.17 of Redis Labs' memtier_benchmark. The container is fully self-sufficient and also ships release 1.5.19 of memcached as well as release 5.0.6 of Redis. Before running the benchmark, start Redis and memcached:

* memcached -u benchmark -t 32
* su benchmark -c 'redis-server --port 7777'

Then run the benchmark, e.g.

* memtier_benchmark -s 127.0.0.1 -p 11211 -P memcache_binary -t 32 -n 10000 --ratio 1:1 -c 25 -x 10 -d 100 --key-pattern S:S

fio container

The fio container ships release 3.15 of the Flexible I/O Tester.
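
As an illustration, a minimal random-read run could look like this (block size, file size, job count, and runtime are example values, not the settings used by the cluster script):

* fio --name=randread --rw=randread --bs=4k --size=1G --numjobs=4 --runtime=60 --time_based --group_reporting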

stress-ng container

The container ships release 0.10.07 of stress-ng.
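
As an illustration, the following runs the cpu stressor on all CPUs for one minute and prints a brief metrics summary (values are examples only):

* stress-ng --cpu 0 --metrics-brief --timeout 60s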

sysbench container

The container ships release 1.0.17 of sysbench. The sysbench build includes support for MariaDB / MySQL benchmarking as well as Lua scripting.
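
Illustrative cpu and memory runs (thread count and duration are example values, not the settings used by the cluster script):

* sysbench cpu --threads=32 --time=60 run
* sysbench memory --threads=32 run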

iperf3 container

The container ships release 3.7 of iperf3.
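
An illustrative server/client pair, with example values for duration and parallel streams (<server-ip> is a placeholder for the server pod or node address):

* iperf3 -s
* iperf3 -c <server-ip> -t 60 -P 4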

perf container

The container ships the version of perf that belongs to the Flatcar kernel release the container is built on. Note that the build is platform- and release-specific and contains a number of workarounds compensating for transient shortcomings in the Flatcar SDK. The build currently needs to be executed on a Flatcar instance; we are working on improving it.

nginx container

The container ships the NGINX web server in a configuration that serves 404 responses on port 8000. It works both as a root container and as a container running with user ID 1000.
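
A quick check from a client that the server is up could look like this, expecting a 404 status line (<node-ip> is a placeholder for the NGINX pod or node address):

* curl -sI http://<node-ip>:8000/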

ab container

The container ships ApacheBench from apache2-utils, which measures the maximum requests/s a web server can answer.
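
An illustrative run against the NGINX container above (request count and concurrency are example values, <server-ip> is a placeholder):

* ab -n 100000 -c 100 http://<server-ip>:8000/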

fortio container

The container ships fortio, a load testing tool that can measure latencies for a fixed number of requests/s for both HTTP and gRPC. It can also run as a server.
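
Illustrative usage, with example values for request rate, connections, and duration (<server-ip> is a placeholder):

* fortio server
* fortio load -qps 1000 -c 32 -t 60s http://<server-ip>:8000/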

wrk2-benchmark container

The container ships a build of wrk2, a modification of wrk that reports correct tail latencies for HTTP requests by taking Coordinated Omission into account.

The /usr/local/share/wrk2/body-100-report.lua Lua script can be used to enhance the log with additional information and set the request body length to 100 bytes.
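
An illustrative invocation using that script, assuming the wrk2 binary is installed as wrk (thread, connection, duration, and request-rate values are examples only, <server-ip> is a placeholder):

* wrk -t8 -c64 -d60s -R2000 -s /usr/local/share/wrk2/body-100-report.lua http://<server-ip>:8000/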

The container is named wrk2-benchmark to avoid a conflict with another wrk2 container build on quay.io.

Kubernetes cluster benchmark script

The cluster-script folder contains scripts to run the benchmark containers on Kubernetes clusters and compare the results.

The run-for-list.sh script runs the benchmark for all clusters listed in a configuration file. Each configuration line contains the kubeconfig path, the CPU architecture of the benchmarked node, the cost per hour, metadata (e.g., the cloud region), the number of benchmark iterations, the name of the benchmarked node, and finally the name of a fixed x86 node. The fixed x86 node is used as the network client and should be the same hardware type across all tested clusters so that the results are comparable from the perspective of an external network client.

The benchmark.sh script works on a single cluster per invocation. It covers the automatic creation of Kubernetes Jobs for the sysbench cpu and memory benchmarks, gathering the results as CSV files, and plotting them together with previous results as graphs for comparison.

The benchmark results are stored in the cluster as long as the Jobs are not cleaned up. The gather process exports them to local files and combines the results with any existing local files. The intended usage is therefore to gather the results for various clusters into one directory. Because the cleanup of Jobs in the cluster is optional, multiple clients can access the results without sharing the CSV files. Old results can be cleaned up to speed up the gathering; they are still included in the plotted graphs as long as their CSV files remain in the current folder.

Note: It would be possible to deploy the benchmark script itself to the cluster if you use one large hybrid cluster that contains all the nodes you want to benchmark. However, the script does not assume this right now and would need some minor tweaks for that to work (to control the parallelism), plus a web UI to start the benchmark and show the results.

Example Usage

We provision three Lokomotive Kubernetes clusters on Packet. The xeon cluster gets a worker pool intel with an Intel Xeon node, the epyc cluster gets a worker pool amd with an AMD EPYC worker node, and the emag cluster gets a worker pool arm with an Ampere eMAG worker node. In addition, all three clusters get another worker pool x86 with an Intel Xeon worker node as the unified network client.

After provisioning the clusters and noting the worker node names, we save this information to a list file with lines as follows. The format is KUBECONFIG,ARCH,COST,META,ITERATIONS,BENCHMARKNODE,FIXEDX86NODE. META here refers to the Packet region, and COST is $1/hour for the Ampere eMAG and AMD EPYC benchmark worker nodes but $2/hour for the Intel Xeon one.

…/packet-lokomotive-intel/assets/auth/kubeconfig,amd64,2.0,ewr1,10,k-xeon-intel-worker-0,k-xeon-x86-worker-0
…/packet-lokomotive-amd/assets/auth/kubeconfig,amd64,1.0,ewr1,10,k-epyc-amd-worker-0,k-epyc-x86-worker-0
…/packet-lokomotive-arm/assets/auth/kubeconfig,arm64,1.0,dfw2,10,k-emag-arm-worker-0,k-emag-x86-worker-0

That list file serves as input for the run-for-list.sh script that runs the benchmark.sh script for each cluster with the environment variables set up according to the cluster line.

We run the script with the arguments benchmark to run a benchmark, and gather to store the output as local CSV files and create SVG and PNG graphs in the current directory:

…/kinvolk/benchmark-containers/cluster-script/run-for-list.sh input benchmark+gather
[… prints which benchmarks are run in parallel for all clusters]

After roughly 3 hours all runs are done. You can check the status via KUBECONFIG=the/path/to/kubeconfig kubectl -n benchmark get pods -o wide (or get jobs). You can observe the output of a run via KUBECONFIG=… kubectl -n benchmark logs PODNAME.

To run only certain benchmarks of a category instead of all of them, specify them via environment variables:

NETWORK="iperf3 ab" MEMTIER="memcached" STRESSNG=" " /kinvolk/benchmark-containers/cluster-script/run-for-list.sh input benchmark+gather

In this example the first variable means that only the iperf3 and ab benchmarks are run (but not wrk2 and fortio) from the NETWORK category. The second variable means that only the memcached benchmark is run (but not redis) from the MEMTIER category. The third variable means that no benchmark of the STRESSNG category is run (available are spawn hsearch crypt atomic tsearch qsort shm sem lsearch bsearch vecmath matrix memcpy). A fourth variable, SYSBENCH, could be used to specify which benchmarks are run from that category; since it is not specified, the default fileio mem cpu is used.

When a second user runs the same command with only the gather parameter, that user can download the benchmark results as well.

A final cleanup parameter removes all jobs and the whole benchmark namespace and leaves no trace on the clusters. This can be combined in one execution via the benchmark+gather+cleanup parameter.
