Merge branch 'master' into feature/seq_no

* master: (416 commits)
  * docs: removed obsolete information, percolator queries are not longer loaded into jvm heap memory.
  * Upgrade JNA to 4.2.2 and remove optionality
  * [TEST] Increase timeouts for Rest test client (#19042)
  * Update migrate_5_0.asciidoc
  * Add ThreadLeakLingering option to Rest client tests
  * Add a MultiTermAwareComponent marker interface to analysis factories. #19028
  * Attempt at fixing IndexStatsIT.testFilterCacheStats.
  * Fix docs build.
  * Move templates out of the Search API, into lang-mustache module
  * revert - Inline reroute with process of node join/master election (#18938)
  * Build valid slices in SearchSourceBuilderTests
  * Docs: Convert aggs/misc to CONSOLE
  * Docs: migration notes for _timestamp and _ttl
  * Group client projects under :client
  * [TEST] Add client-test module and make client tests use randomized runner directly
  * Move upgrade test to upgrade from version 2.3.3
  * Tasks: Add completed to the mapping
  * Fail to start if plugin tries broken onModule
  * Remove duplicated read byte array methods
  * Rename `fields` to `stored_fields` and add `docvalue_fields`
  * ...
Showing 1,427 changed files with 43,034 additions and 17,892 deletions.
# Elasticsearch Microbenchmark Suite

This directory contains the microbenchmark suite of Elasticsearch. It relies on [JMH](http://openjdk.java.net/projects/code-tools/jmh/).

## Purpose

We do not want to microbenchmark everything but the kitchen sink and should typically rely on our
[macrobenchmarks](https://elasticsearch-benchmarks.elastic.co/app/kibana#/dashboard/Nightly-Benchmark-Overview) with
[Rally](http://github.com/elastic/rally). Microbenchmarks are intended to spot performance regressions in performance-critical components.
The microbenchmark suite is also handy for ad-hoc microbenchmarks but please remove them again before merging your PR.

## Getting Started

Just run `gradle :benchmarks:jmh` from the project root directory. It will build all microbenchmarks, execute them and print the results.
## Running Microbenchmarks

Benchmarks are always run via Gradle with `gradle :benchmarks:jmh`.

Running via an IDE is not supported as the results are meaningless (we have no control over the JVM running the benchmarks).

If you want to run a specific benchmark class, e.g. `org.elasticsearch.benchmark.MySampleBenchmark`, or have special requirements,
generate the uberjar with `gradle :benchmarks:jmhJar` and run it directly with:

```
java -jar benchmarks/build/distributions/elasticsearch-benchmarks-*.jar
```

JMH supports lots of command line parameters. Add `-h` to the command above to see the available command line options.
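For instance, a filtered run with shorter iteration settings could look like this (the benchmark name filter and the iteration counts below are illustrative values, not project defaults):

```shell
# Run only benchmarks whose name matches the regex "AllocationBenchmark",
# with 5 warmup iterations, 5 measurement iterations and a single fork.
# The exact jar file name depends on the version that was built.
java -jar benchmarks/build/distributions/elasticsearch-benchmarks-*.jar \
  AllocationBenchmark -wi 5 -i 5 -f 1
```

Shorter settings like these are fine for a quick sanity check but not for results you intend to share.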
## Adding Microbenchmarks

Before adding a new microbenchmark, make yourself familiar with the JMH API. You can check our existing microbenchmarks and also the
[JMH samples](http://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/).

In contrast to tests, the actual name of the benchmark class is not relevant to JMH. However, stick to the naming convention and
end the class name of a benchmark with `Benchmark`. To have JMH execute a benchmark, annotate the respective methods with `@Benchmark`.
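As a sketch of these conventions, a minimal benchmark class could look like the following (the package, class and method names are illustrative, not part of the suite):

```java
package org.elasticsearch.benchmark.sample; // hypothetical package

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

// class name ends with "Benchmark" per the naming convention above
@State(Scope.Benchmark)
public class MySampleBenchmark {
    private long[] values;

    @Setup
    public void setUp() {
        // prepare benchmark state outside of the measured code path
        values = new long[1024];
        for (int i = 0; i < values.length; i++) {
            values[i] = i;
        }
    }

    @Benchmark
    public long sum() {
        long sum = 0;
        for (long v : values) {
            sum += v;
        }
        // return the result so JMH consumes it and dead-code elimination
        // cannot remove the loop
        return sum;
    }
}
```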
## Tips and Best Practices

To get realistic results, you should exercise care when running benchmarks. Here are a few tips:

### Do

* Ensure that the system executing your microbenchmarks has as little load as possible. Shut down every process that can cause unnecessary
  runtime jitter. Watch the `Error` column in the benchmark results to see the run-to-run variance.
* Run enough warmup iterations to get the benchmark into a stable state. If you are unsure, don't change the defaults.
* Avoid CPU migrations by pinning your benchmarks to specific CPU cores. On Linux you can use `taskset`.
* Fix the CPU frequency to prevent Turbo Boost from kicking in and skewing your results. On Linux you can use `cpufreq-set` and the
  `performance` CPU governor.
* Vary the problem input size with `@Param`.
* Use the integrated profilers in JMH to dig deeper if benchmark results do not match your hypotheses:
    * Run the generated uberjar directly and use `-prof gc` to check whether the garbage collector runs during a microbenchmark and skews
      your results. If so, try to force a GC between runs (`-gc true`) but watch out for the caveats.
    * Use `-prof perf` or `-prof perfasm` (both only available on Linux) to see hotspots.
* Have your benchmarks peer-reviewed.
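Assuming a Linux machine with the `cpufrequtils` package installed, pinning and fixing the frequency could look like this (the core number is an illustrative choice):

```shell
# Switch CPU core 0 to the performance governor so the frequency stays fixed
# (requires root and the cpufrequtils package).
sudo cpufreq-set -c 0 -g performance

# Pin the benchmark JVM to core 0 so the scheduler cannot migrate it.
taskset -c 0 java -jar benchmarks/build/distributions/elasticsearch-benchmarks-*.jar
```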
### Don't

* Blindly believe the numbers that your microbenchmark produces; verify them instead, e.g. by measuring with `-prof perfasm`.
* Run more threads than your number of CPU cores (in case you run multi-threaded microbenchmarks).
* Look only at the `Score` column and ignore `Error`. Instead, take countermeasures to keep `Error` low / variance explainable.
```gradle
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

buildscript {
    repositories {
        maven {
            url 'https://plugins.gradle.org/m2/'
        }
    }
    dependencies {
        classpath 'com.github.jengelman.gradle.plugins:shadow:1.2.3'
    }
}

apply plugin: 'elasticsearch.build'
// build an uberjar with all benchmarks
apply plugin: 'com.github.johnrengelman.shadow'
// have the shadow plugin provide the runShadow task
apply plugin: 'application'

archivesBaseName = 'elasticsearch-benchmarks'
mainClassName = 'org.openjdk.jmh.Main'

// never try to invoke tests on the benchmark project - there aren't any
check.dependsOn.remove(test)
// explicitly override the test task too in case somebody invokes 'gradle test' so it won't trip
task test(type: Test, overwrite: true)

dependencies {
    compile("org.elasticsearch:elasticsearch:${version}") {
        // JMH ships with the conflicting version 4.6 (JMH will not update this dependency as it is Java 6 compatible and joptsimple is one
        // of the most recent compatible versions). This prevents us from using jopt-simple in benchmarks (which should be ok) but allows us
        // to invoke the JMH uberjar as usual.
        exclude group: 'net.sf.jopt-simple', module: 'jopt-simple'
    }
    compile "org.openjdk.jmh:jmh-core:$versions.jmh"
    compile "org.openjdk.jmh:jmh-generator-annprocess:$versions.jmh"
    // Dependencies of JMH
    runtime 'net.sf.jopt-simple:jopt-simple:4.6'
    runtime 'org.apache.commons:commons-math3:3.2'
}

compileJava.options.compilerArgs << "-Xlint:-cast,-deprecation,-rawtypes,-try,-unchecked"
compileTestJava.options.compilerArgs << "-Xlint:-cast,-deprecation,-rawtypes,-try,-unchecked"

forbiddenApis {
    // classes generated by JMH can use all sorts of forbidden APIs but we have no influence at all and cannot exclude these classes
    ignoreFailures = true
}

// No licenses for our benchmark deps (we don't ship benchmarks)
dependencyLicenses.enabled = false

thirdPartyAudit.excludes = [
    // these classes intentionally use JDK internal API (and this is ok since the project is maintained by Oracle employees)
    'org.openjdk.jmh.profile.AbstractHotspotProfiler',
    'org.openjdk.jmh.profile.HotspotThreadProfiler',
    'org.openjdk.jmh.profile.HotspotClassloadingProfiler',
    'org.openjdk.jmh.profile.HotspotCompilationProfiler',
    'org.openjdk.jmh.profile.HotspotMemoryProfiler',
    'org.openjdk.jmh.profile.HotspotRuntimeProfiler',
    'org.openjdk.jmh.util.Utils'
]

shadowJar {
    classifier = 'benchmarks'
}

// alias the shadowJar and runShadow tasks to abstract from the concrete plugin that we are using and provide a more consistent interface
task jmhJar(
    dependsOn: shadowJar,
    description: 'Generates an uberjar with the microbenchmarks and all dependencies',
    group: 'Benchmark'
)

task jmh(
    dependsOn: runShadow,
    description: 'Runs all microbenchmarks',
    group: 'Benchmark'
)
```
`...rks/src/main/java/org/elasticsearch/benchmark/routing/allocation/AllocationBenchmark.java` (171 additions, 0 deletions)
```java
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */
package org.elasticsearch.benchmark.routing.allocation;

import org.elasticsearch.Version;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.node.DiscoveryNodes;
import org.elasticsearch.cluster.routing.RoutingTable;
import org.elasticsearch.cluster.routing.ShardRoutingState;
import org.elasticsearch.cluster.routing.allocation.AllocationService;
import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;
import org.elasticsearch.common.settings.Settings;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

import java.util.Collections;
import java.util.concurrent.TimeUnit;

@Fork(3)
@Warmup(iterations = 10)
@Measurement(iterations = 10)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@State(Scope.Benchmark)
@SuppressWarnings("unused") // invoked by benchmarking framework
public class AllocationBenchmark {
    // Do NOT make any field final (even if it is not annotated with @Param)! See also
    // http://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/JMHSample_10_ConstantFold.java

    // we cannot use individual @Params as some will lead to invalid combinations which do not let the benchmark terminate. JMH offers no
    // support to constrain the combinations of benchmark parameters and we do not want to rely on OptionsBuilder as each benchmark would
    // need its own main method and we cannot execute more than one class with a main method per JAR.
    @Param({
        // indices, shards, replicas, nodes
        " 10,  1, 0,  1",
        " 10,  3, 0,  1",
        " 10, 10, 0,  1",
        "100,  1, 0,  1",
        "100,  3, 0,  1",
        "100, 10, 0,  1",

        " 10,  1, 0, 10",
        " 10,  3, 0, 10",
        " 10, 10, 0, 10",
        "100,  1, 0, 10",
        "100,  3, 0, 10",
        "100, 10, 0, 10",

        " 10,  1, 1, 10",
        " 10,  3, 1, 10",
        " 10, 10, 1, 10",
        "100,  1, 1, 10",
        "100,  3, 1, 10",
        "100, 10, 1, 10",

        " 10,  1, 2, 10",
        " 10,  3, 2, 10",
        " 10, 10, 2, 10",
        "100,  1, 2, 10",
        "100,  3, 2, 10",
        "100, 10, 2, 10",

        " 10,  1, 0, 50",
        " 10,  3, 0, 50",
        " 10, 10, 0, 50",
        "100,  1, 0, 50",
        "100,  3, 0, 50",
        "100, 10, 0, 50",

        " 10,  1, 1, 50",
        " 10,  3, 1, 50",
        " 10, 10, 1, 50",
        "100,  1, 1, 50",
        "100,  3, 1, 50",
        "100, 10, 1, 50",

        " 10,  1, 2, 50",
        " 10,  3, 2, 50",
        " 10, 10, 2, 50",
        "100,  1, 2, 50",
        "100,  3, 2, 50",
        "100, 10, 2, 50"
    })
    public String indicesShardsReplicasNodes = "10,1,0,1";

    public int numTags = 2;

    private AllocationService strategy;
    private ClusterState initialClusterState;

    @Setup
    public void setUp() throws Exception {
        final String[] params = indicesShardsReplicasNodes.split(",");

        int numIndices = toInt(params[0]);
        int numShards = toInt(params[1]);
        int numReplicas = toInt(params[2]);
        int numNodes = toInt(params[3]);

        strategy = Allocators.createAllocationService(Settings.builder()
            .put("cluster.routing.allocation.awareness.attributes", "tag")
            .build());

        MetaData.Builder mb = MetaData.builder();
        for (int i = 1; i <= numIndices; i++) {
            mb.put(IndexMetaData.builder("test_" + i)
                .settings(Settings.builder().put("index.version.created", Version.CURRENT))
                .numberOfShards(numShards)
                .numberOfReplicas(numReplicas)
            );
        }
        MetaData metaData = mb.build();
        RoutingTable.Builder rb = RoutingTable.builder();
        for (int i = 1; i <= numIndices; i++) {
            rb.addAsNew(metaData.index("test_" + i));
        }
        RoutingTable routingTable = rb.build();
        DiscoveryNodes.Builder nb = DiscoveryNodes.builder();
        for (int i = 1; i <= numNodes; i++) {
            nb.put(Allocators.newNode("node" + i, Collections.singletonMap("tag", "tag_" + (i % numTags))));
        }
        initialClusterState = ClusterState.builder(ClusterName.CLUSTER_NAME_SETTING.getDefault(Settings.EMPTY))
            .metaData(metaData).routingTable(routingTable).nodes(nb).build();
    }

    private int toInt(String v) {
        return Integer.valueOf(v.trim());
    }

    @Benchmark
    public ClusterState measureAllocation() {
        ClusterState clusterState = initialClusterState;
        while (clusterState.getRoutingNodes().hasUnassignedShards()) {
            RoutingAllocation.Result result = strategy.applyStartedShards(clusterState, clusterState.getRoutingNodes()
                .shardsWithState(ShardRoutingState.INITIALIZING));
            clusterState = ClusterState.builder(clusterState).routingResult(result).build();
            result = strategy.reroute(clusterState, "reroute");
            clusterState = ClusterState.builder(clusterState).routingResult(result).build();
        }
        return clusterState;
    }
}
```
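The packed `@Param` string above works around JMH's lack of support for constraining parameter combinations: all four dimensions travel in one comma-separated string and are split apart in `setUp()`. The parsing step can be sketched in isolation like this (the class name is illustrative):

```java
// Illustrative helper mirroring the parsing in setUp() above: split the
// packed "indices, shards, replicas, nodes" string on commas and trim
// the padding spaces before converting each field to an int.
public class ParamParser {
    public static int[] parse(String packed) {
        String[] parts = packed.split(",");
        int[] result = new int[parts.length];
        for (int i = 0; i < parts.length; i++) {
            result[i] = Integer.parseInt(parts[i].trim());
        }
        return result;
    }

    public static void main(String[] args) {
        int[] p = parse("100, 10,  2, 50");
        System.out.println(p[0] + "," + p[1] + "," + p[2] + "," + p[3]); // prints 100,10,2,50
    }
}
```

The padding spaces exist only to keep the parameter table readable in the benchmark results; `trim()` makes them harmless.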