Add microbenchmarking infrastructure

With this commit we add a benchmarks project that contains the necessary build
infrastructure and an example benchmark. It is added as a separate project to avoid
interfering with the regular build too much (especially sanity checks) and to keep
the microbenchmarks isolated.

Microbenchmarks are generated with `gradle :benchmarks:jmhJar` and can be run with
`gradle :benchmarks:jmh`.

We intentionally do not use the
[jmh-gradle-plugin](https://github.com/melix/jmh-gradle-plugin) as it causes all
sorts of problems (dependencies are not properly excluded, not all JMH parameters
can be set) and it adds another abstraction layer that is not needed.

Closes elastic#18242

danielmitterdorfer committed Jun 15, 2016
1 parent a2ad5c0 commit 5bd2f55
Showing 9 changed files with 545 additions and 0 deletions.
63 changes: 63 additions & 0 deletions benchmarks/README.md
@@ -0,0 +1,63 @@
# Elasticsearch Microbenchmark Suite

This directory contains the microbenchmark suite of Elasticsearch. It relies on [JMH](http://openjdk.java.net/projects/code-tools/jmh/).

## Purpose

We do not want to microbenchmark everything under the sun; for that we typically rely on our
[macrobenchmarks](https://elasticsearch-benchmarks.elastic.co/app/kibana#/dashboard/Nightly-Benchmark-Overview) with
[Rally](http://github.com/elastic/rally). Microbenchmarks are intended to spot performance regressions in performance-critical components.
The microbenchmark suite is also handy for ad-hoc experiments, but please remove such benchmarks again before merging your PR.

## Getting Started

Just run `gradle :benchmarks:jmh` from the project root directory. It will build all microbenchmarks, execute them and print the results.

## Running Microbenchmarks

Benchmarks are always run via Gradle with `gradle :benchmarks:jmh`.

Running via an IDE is not supported as the results are meaningless (we have no control over the JVM running the benchmarks).

If you want to run a specific benchmark class, e.g. `org.elasticsearch.benchmark.MySampleBenchmark`, or have any other special requirements,
generate the uberjar with `gradle :benchmarks:jmhJar` and run it directly with:
```
java -jar benchmarks/build/distributions/elasticsearch-benchmarks-*.jar
```
JMH supports lots of command line parameters; add `-h` to the command above to see all available options.
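
For example, to run only a single benchmark class with custom iteration counts, pass a class-name regex followed by standard JMH options
(`-wi` sets the warmup iterations, `-i` the measurement iterations and `-f` the number of forks; the invocation below is just an
illustration):

```
java -jar benchmarks/build/distributions/elasticsearch-benchmarks-*.jar DateBenchmark -wi 10 -i 20 -f 3
```
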
## Adding Microbenchmarks
Before adding a new microbenchmark, make yourself familiar with the JMH API. You can check our existing microbenchmarks and also the
[JMH samples](http://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/).
In contrast to tests, the actual name of the benchmark class is not relevant to JMH. However, stick to the naming convention and
end the class name of a benchmark with `Benchmark`. To have JMH execute a benchmark, annotate the respective methods with `@Benchmark`.
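
As an illustration, a complete minimal benchmark needs nothing more than a `@Benchmark` method; state that is shared across invocations
goes into a `@State` class. The class below is a hypothetical sketch, not part of this commit:

```
package org.elasticsearch.benchmark;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

import java.util.Locale;

@State(Scope.Benchmark)
public class StringFormatBenchmark {
    private long counter;

    @Benchmark
    public String formatLong() {
        // returning the result prevents the JIT from eliminating the computation as dead code
        return String.format(Locale.ROOT, "%d", counter++);
    }
}
```
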
## Tips and Best Practices
To get realistic results, you should exercise care when running your benchmarks. Here are a few tips:

### Do

* Ensure that the system executing your microbenchmarks has as little load as possible. Shut down every process that can cause unnecessary
runtime jitter. Watch the `Error` column in the benchmark results to see the run-to-run variance.
* Run enough warmup iterations to get into a stable state. If you are unsure, don't change the defaults.
* Avoid CPU migrations by pinning your benchmarks to specific CPU cores. On Linux you can use `taskset`.
* Fix the CPU frequency to prevent Turbo Boost from kicking in and skewing your results. On Linux you can use `cpufreq-set` and the
`performance` CPU governor.
* Vary the problem input size with `@Param` (see the sketch after this list).
* Use the integrated profilers in JMH to dig deeper if benchmark results do not match your hypotheses:
    * Run the generated uberjar directly and use `-prof gc` to check whether the garbage collector runs during a microbenchmark and skews
your results. If so, try to force a GC between runs (`-gc true`). See also the example at the end of this README.
    * Use `-prof perf` or `-prof perfasm` (both only available on Linux) to see hotspots.
* Have your benchmarks peer-reviewed.
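
To illustrate the `@Param` tip above, here is a sketch (hypothetical, not part of this commit) that runs the same benchmark for several
input sizes; JMH reports a separate result per `size`, which makes scaling behavior visible:

```
package org.elasticsearch.benchmark;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

import java.util.Arrays;

@State(Scope.Benchmark)
public class ArraySortBenchmark {
    // JMH executes the whole benchmark once per parameter value
    @Param({"100", "10000", "1000000"})
    public int size;

    private int[] data;

    @Setup
    public void setUp() {
        // descending values so there is actual sorting work to do
        data = new int[size];
        for (int i = 0; i < size; i++) {
            data[i] = size - i;
        }
    }

    @Benchmark
    public int[] sortCopy() {
        // clone so every invocation sorts unsorted input (the clone cost is part of the measurement)
        int[] copy = data.clone();
        Arrays.sort(copy);
        return copy;
    }
}
```
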
### Don't

* Blindly believe the numbers that your microbenchmark produces; verify them, e.g. by measuring with `-prof perfasm`.
* Run more threads than your machine has CPU cores (in case you run multi-threaded microbenchmarks).
* Look only at the `Score` column and ignore `Error`. Instead, take countermeasures to keep `Error` low and the variance explainable.
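
As an example of the profiler tips in the list above, profilers are enabled with the standard JMH `-prof` option on the generated uberjar
(the benchmark name below is just an illustration):

```
java -jar benchmarks/build/distributions/elasticsearch-benchmarks-*.jar DateBenchmark -prof gc
```
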
96 changes: 96 additions & 0 deletions benchmarks/build.gradle
@@ -0,0 +1,96 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/

buildscript {
  repositories {
    maven {
      url 'https://plugins.gradle.org/m2/'
    }
  }
  dependencies {
    classpath 'com.github.jengelman.gradle.plugins:shadow:1.2.3'
  }
}

apply plugin: 'elasticsearch.build'
// build an uberjar with all benchmarks
apply plugin: 'com.github.johnrengelman.shadow'
// have the shadow plugin provide the runShadow task
apply plugin: 'application'

archivesBaseName = 'elasticsearch-benchmarks'
mainClassName = 'org.openjdk.jmh.Main'

// never try to invoke tests on the benchmark project - there aren't any
check.dependsOn.remove(test)
// explicitly override the test task too in case somebody invokes 'gradle test' so it won't trip
task test(type: Test, overwrite: true)

dependencies {
  compile("org.elasticsearch:elasticsearch:${version}") {
    // JMH ships with the conflicting version 4.6 (JMH will not update this dependency as it has to stay Java 6 compatible and jopt-simple
    // 4.6 is one of the most recent compatible versions). This prevents us from using jopt-simple in benchmarks (which should be ok) but
    // allows us to invoke the JMH uberjar as usual.
    exclude group: 'net.sf.jopt-simple', module: 'jopt-simple'
  }
  compile "org.openjdk.jmh:jmh-core:$versions.jmh"
  compile "org.openjdk.jmh:jmh-generator-annprocess:$versions.jmh"
  // TODO: Transitive dependencies of JMH. Add them here explicitly for the time being and find out why they are not included implicitly.
  runtime 'net.sf.jopt-simple:jopt-simple:4.6'
  runtime 'org.apache.commons:commons-math3:3.2'
}

compileJava.options.compilerArgs << "-Xlint:-cast,-deprecation,-rawtypes,-try,-unchecked"
compileTestJava.options.compilerArgs << "-Xlint:-cast,-deprecation,-rawtypes,-try,-unchecked"

forbiddenApis {
  // classes generated by JMH can use all sorts of forbidden APIs but we have no influence at all and cannot exclude these classes
  ignoreFailures = true
}

// No licenses for our benchmark deps (we don't ship benchmarks)
dependencyLicenses.enabled = false

thirdPartyAudit.excludes = [
  // these classes intentionally use JDK internal API (and this is ok since the project is maintained by Oracle employees)
  'org.openjdk.jmh.profile.AbstractHotspotProfiler',
  'org.openjdk.jmh.profile.HotspotThreadProfiler',
  'org.openjdk.jmh.profile.HotspotClassloadingProfiler',
  'org.openjdk.jmh.profile.HotspotCompilationProfiler',
  'org.openjdk.jmh.profile.HotspotMemoryProfiler',
  'org.openjdk.jmh.profile.HotspotRuntimeProfiler',
  'org.openjdk.jmh.util.Utils'
]

shadowJar {
  classifier = 'benchmarks'
}

// alias the shadowJar and runShadow tasks to abstract from the concrete plugin that we are using and provide a more consistent interface
task jmhJar(
  dependsOn: shadowJar,
  description: 'Generates an uberjar with the microbenchmarks and all dependencies',
  group: 'Benchmark'
)

task jmh(
  dependsOn: runShadow,
  description: 'Runs all microbenchmarks',
  group: 'Benchmark'
)
67 changes: 67 additions & 0 deletions benchmarks/src/main/java/org/elasticsearch/benchmark/DateBenchmark.java
@@ -0,0 +1,67 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.benchmark;

import org.joda.time.DateTimeZone;
import org.joda.time.MutableDateTime;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

import java.time.Duration;
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.temporal.TemporalAmount;
import java.util.Calendar;
import java.util.Locale;
import java.util.TimeZone;

@SuppressWarnings("unused") // invoked by benchmarking framework
@State(Scope.Benchmark)
public class DateBenchmark {
    // incremented on every invocation so each measurement uses a fresh input value
    private long instant;

    private MutableDateTime jodaDate = new MutableDateTime(0, DateTimeZone.UTC);

    private Calendar calendar = Calendar.getInstance(TimeZone.getTimeZone("UTC"), Locale.ROOT);

    private ZonedDateTime javaDateTime = ZonedDateTime.ofInstant(Instant.ofEpochMilli(0L), ZoneOffset.UTC);

    private TemporalAmount diff = Duration.ofMillis(1L);

    @Benchmark
    public int mutableDateTimeSetMillisGetDayOfMonth() {
        jodaDate.setMillis(instant++);
        return jodaDate.getDayOfMonth();
    }

    @Benchmark
    public int calendarSetMillisGetDayOfMonth() {
        calendar.setTimeInMillis(instant++);
        return calendar.get(Calendar.DAY_OF_MONTH);
    }

    @Benchmark
    public int javaDateTimeSetMillisGetDayOfMonth() {
        // all classes in java.time are immutable, so we have to create a new instance
        javaDateTime = javaDateTime.plus(diff);
        return javaDateTime.getDayOfMonth();
    }
}
29 changes: 29 additions & 0 deletions benchmarks/src/main/java/org/elasticsearch/benchmark/HelloBenchmark.java
@@ -0,0 +1,29 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.benchmark;

import org.openjdk.jmh.annotations.Benchmark;

@SuppressWarnings("unused") // invoked by benchmarking framework
public class HelloBenchmark {
    // an empty benchmark method measures the fixed per-invocation overhead of the JMH harness itself
    @Benchmark
    public void benchmarkRuntimeOverhead() {
        // intentionally left blank
    }
}
