
Commit

Merge remote-tracking branch 'elastic/master' into allowed-index-store-types

* elastic/master: (199 commits)
  Watcher: Remove unused hipchat render method (elastic#32211)
  Watcher: Remove extraneous auth classes (elastic#32300)
  Watcher: migrate PagerDuty v1 events API to v2 API (elastic#32285)
  [TEST] Select free port for Minio (elastic#32837)
  MINOR: Remove `IndexTemplateFilter` (elastic#32841)
  Core: Add java time version of rounding classes (elastic#32641)
  Aggregations/HL Rest client fix: missing scores (elastic#32774)
  HLRC: Add Delete License API (elastic#32586)
  INGEST: Create Index Before Pipeline Execute (elastic#32786)
  Fix NOOP bulk updates (elastic#32819)
  Remove client connections from TcpTransport (elastic#31886)
  Increase logging testRetentionPolicyChangeDuringRecovery
  AwaitsFix case-functions.sql-spec
  Mute security-cli tests in FIPS JVM (elastic#32812)
  SCRIPTING: Support BucketAggScript return null (elastic#32811)
  Unmute WildFly tests in FIPS JVM (elastic#32814)
  [TEST] Force a stop to save rollup state before continuing (elastic#32787)
  [test] disable packaging tests for suse boxes
  Mute IndicesRequestIT#testBulk
  [ML][DOCS] Refer to rules feature as custom rules (elastic#32785)
  ...
jasontedor committed Aug 14, 2018
2 parents 38e64c2 + 6fde9e5 commit ec4ab87
Showing 1,135 changed files with 36,774 additions and 11,423 deletions.
6 changes: 6 additions & 0 deletions CONTRIBUTING.md
@@ -100,6 +100,12 @@ JDK 10 and testing on a JDK 8 runtime; to do this, set `RUNTIME_JAVA_HOME`
pointing to the Java home of a JDK 8 installation. Note that this mechanism can
be used to test against other JDKs as well; it is not limited to JDK 8.

> Note: `JAVA7_HOME`, `JAVA8_HOME` and `JAVA10_HOME` must also be available so
that the tests can pass.

> Warning: do not use `sdkman` Java installations whose JDK distributions do
not provide a proper `jrunscript`.

Elasticsearch uses the Gradle wrapper for its build. You can execute Gradle
using the wrapper via the `gradlew` script in the root of the repository.
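
As a rough sketch of such an environment (the install locations below are
placeholders, not prescribed paths), compiling with JDK 10 while testing on a
JDK 8 runtime could look like this:

```
# hypothetical JDK locations; point these at your own installations
export JAVA_HOME=/opt/jdk-10
export RUNTIME_JAVA_HOME=/opt/jdk-8
export JAVA7_HOME=/opt/jdk-7
export JAVA8_HOME=/opt/jdk-8
export JAVA10_HOME=/opt/jdk-10
# run any Gradle task via the wrapper; `check` is just an example
./gradlew check
```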

8 changes: 2 additions & 6 deletions README.textile
@@ -209,10 +209,6 @@ The distribution for each project will be created under the @build/distributions

See the "TESTING":TESTING.asciidoc file for more information about running the Elasticsearch test suite.

h3. Upgrading from Elasticsearch 1.x?
h3. Upgrading from older Elasticsearch versions

In order to ensure a smooth upgrade process from earlier versions of
Elasticsearch (1.x), it is required to perform a full cluster restart. Please
see the "setup reference":
https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html
for more details on the upgrade process.
In order to ensure a smooth upgrade process from earlier versions of Elasticsearch, please see our "upgrade documentation":https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html for more details on the upgrade process.
5 changes: 0 additions & 5 deletions Vagrantfile
@@ -115,11 +115,6 @@ Vagrant.configure(2) do |config|
'opensuse-42'.tap do |box|
config.vm.define box, define_opts do |config|
config.vm.box = 'elastic/opensuse-42-x86_64'

# https://github.com/elastic/elasticsearch/issues/30295
config.vm.provider 'virtualbox' do |vbox|
vbox.customize ['storagectl', :id, '--name', 'SATA Controller', '--hostiocache', 'on']
end
suse_common config, box
end
end
37 changes: 20 additions & 17 deletions benchmarks/README.md
@@ -4,36 +4,39 @@ This directory contains the microbenchmark suite of Elasticsearch. It relies on

## Purpose

We do not want to microbenchmark everything but the kitchen sink and should typically rely on our
[macrobenchmarks](https://elasticsearch-benchmarks.elastic.co/app/kibana#/dashboard/Nightly-Benchmark-Overview) with
[Rally](http://github.com/elastic/rally). Microbenchmarks are intended to spot performance regressions in performance-critical components.
The microbenchmark suite is also handy for ad-hoc microbenchmarks but please remove them again before merging your PR.

## Getting Started

Just run `gradle :benchmarks:jmh` from the project root directory. It will build all microbenchmarks, execute them and print the result.
Just run `gradlew -p benchmarks run` from the project root
directory. It will build all microbenchmarks, execute them and print
the result.

## Running Microbenchmarks

Benchmarks are always run via Gradle with `gradle :benchmarks:jmh`.

Running via an IDE is not supported as the results are meaningless (we have no control over the JVM running the benchmarks).
Running via an IDE is not supported as the results are meaningless
because we have no control over the JVM running the benchmarks.

If you want to run a specific benchmark class, e.g. `org.elasticsearch.benchmark.MySampleBenchmark` or have special requirements
generate the uberjar with `gradle :benchmarks:jmhJar` and run it directly with:
If you want to run a specific benchmark class like, say,
`MemoryStatsBenchmark`, you can use `--args`:

```
java -jar benchmarks/build/distributions/elasticsearch-benchmarks-*.jar
gradlew -p benchmarks run --args ' MemoryStatsBenchmark'
```

JMH supports lots of command line parameters. Add `-h` to the command above to see the available command line options.
Everything in the `'` gets sent on the command line to JMH. The leading ` `
inside the `'`s is important. Without it parameters are sometimes sent to
gradle.
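
Since everything inside the quotes is handed to JMH, other JMH options can be
passed the same way. For example (a sketch; note the leading space, as
described above):

```
gradlew -p benchmarks run --args ' -h'
gradlew -p benchmarks run --args ' MemoryStatsBenchmark -prof gc'
```

The first command prints JMH's own help; the second runs the selected
benchmark with the GC profiler enabled.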

## Adding Microbenchmarks

Before adding a new microbenchmark, make yourself familiar with the JMH API. You can check our existing microbenchmarks and also the
[JMH samples](http://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/).

In contrast to tests, the actual name of the benchmark class is not relevant to JMH. However, stick to the naming convention and
end the class name of a benchmark with `Benchmark`. To have JMH execute a benchmark, annotate the respective methods with `@Benchmark`.
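
To make the convention concrete, a minimal benchmark class could look like the
following sketch (the class name, the measured operation and the parameter
values are made up for illustration, not taken from the suite):

```
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
public class StringJoinBenchmark { // class name ends with "Benchmark" by convention

    @Param({ "10", "100" }) // vary the problem input size, as recommended in the tips below
    public int size;

    @Benchmark // JMH only picks up methods carrying this annotation
    public String join() {
        StringBuilder builder = new StringBuilder();
        for (int i = 0; i < size; i++) {
            builder.append('x');
        }
        // return the result so the JIT cannot eliminate the work as dead code
        return builder.toString();
    }
}
```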

## Tips and Best Practices
@@ -42,15 +45,15 @@ To get realistic results, you should exercise care when running benchmarks. Here

### Do

* Ensure that the system executing your microbenchmarks has as little load as possible. Shut down every process that can cause unnecessary
runtime jitter. Watch the `Error` column in the benchmark results to see the run-to-run variance.
* Make sure to run enough warmup iterations to get the benchmark into a stable state. If you are unsure, don't change the defaults.
* Avoid CPU migrations by pinning your benchmarks to specific CPU cores. On Linux you can use `taskset` (see the sketch after this list).
* Fix the CPU frequency to keep Turbo Boost from kicking in and skewing your results. On Linux you can use `cpufreq-set` and the
`performance` CPU governor.
* Vary the problem input size with `@Param`.
* Use the integrated profilers in JMH to dig deeper if benchmark results do not match your hypotheses:
* Run the generated uberjar directly and use `-prof gc` to check whether the garbage collector runs during a microbenchmark and skews
your results. If so, try to force a GC between runs (`-gc true`) but watch out for the caveats.
* Use `-prof perf` or `-prof perfasm` (both only available on Linux) to see hotspots.
* Have your benchmarks peer-reviewed.
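
As a sketch of how the pinning and frequency advice above can be combined on
Linux (core `0`, the governor and the benchmark selection are illustrative
choices; `--no-daemon` is used so the forked benchmark JVM inherits the CPU
affinity):

```
# fix CPU core 0 to the performance governor, then pin the whole run to that core
sudo cpufreq-set -c 0 -g performance
taskset -c 0 ./gradlew --no-daemon -p benchmarks run --args ' MemoryStatsBenchmark -prof gc'
```
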
@@ -59,4 +62,4 @@ To get realistic results, you should exercise care when running benchmarks. Here

* Blindly believe the numbers that your microbenchmark produces but verify them by measuring e.g. with `-prof perfasm`.
* Run more threads than your number of CPU cores (in case you run multi-threaded microbenchmarks).
* Look only at the `Score` column and ignore `Error`. Instead take countermeasures to keep `Error` low / variance explainable.
28 changes: 3 additions & 25 deletions benchmarks/build.gradle
@@ -18,11 +18,8 @@
*/

apply plugin: 'elasticsearch.build'

// order of this section matters, see: https://github.com/johnrengelman/shadow/issues/336
apply plugin: 'application' // have the shadow plugin provide the runShadow task
apply plugin: 'application'
mainClassName = 'org.openjdk.jmh.Main'
apply plugin: 'com.github.johnrengelman.shadow' // build an uberjar with all benchmarks

// Not published so no need to assemble
tasks.remove(assemble)
@@ -50,10 +47,8 @@ compileJava.options.compilerArgs << "-Xlint:-cast,-deprecation,-rawtypes,-try,-u
// needs to be added separately otherwise Gradle will quote it and javac will fail
compileJava.options.compilerArgs.addAll(["-processor", "org.openjdk.jmh.generators.BenchmarkProcessor"])

forbiddenApis {
// classes generated by JMH can use all sorts of forbidden APIs but we have no influence at all and cannot exclude these classes
ignoreFailures = true
}
// classes generated by JMH can use all sorts of forbidden APIs but we have no influence at all and cannot exclude these classes
forbiddenApisMain.enabled = false

// No licenses for our benchmark deps (we don't ship benchmarks)
dependencyLicenses.enabled = false
@@ -69,20 +64,3 @@ thirdPartyAudit.excludes = [
'org.openjdk.jmh.profile.HotspotRuntimeProfiler',
'org.openjdk.jmh.util.Utils'
]

runShadow {
executable = new File(project.runtimeJavaHome, 'bin/java')
}

// alias the shadowJar and runShadow tasks to abstract from the concrete plugin that we are using and provide a more consistent interface
task jmhJar(
dependsOn: shadowJar,
description: 'Generates an uberjar with the microbenchmarks and all dependencies',
group: 'Benchmark'
)

task jmh(
dependsOn: runShadow,
description: 'Runs all microbenchmarks',
group: 'Benchmark'
)
2 changes: 1 addition & 1 deletion benchmarks/src/main/resources/log4j2.properties
@@ -1,7 +1,7 @@
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%m%n
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c] [%node_name]%marker %m%n

# Do not log at all if it is not really critical - we're in a benchmark
rootLogger.level = error
