Merge branch 'master' into feature/seq_no
* master: (416 commits)
  docs: removed obsolete information, percolator queries are no longer loaded into JVM heap memory.
  Upgrade JNA to 4.2.2 and remove optionality
  [TEST] Increase timeouts for Rest test client (#19042)
  Update migrate_5_0.asciidoc
  Add ThreadLeakLingering option to Rest client tests
  Add a MultiTermAwareComponent marker interface to analysis factories. #19028
  Attempt at fixing IndexStatsIT.testFilterCacheStats.
  Fix docs build.
  Move templates out of the Search API, into lang-mustache module
  revert - Inline reroute with process of node join/master election (#18938)
  Build valid slices in SearchSourceBuilderTests
  Docs: Convert aggs/misc to CONSOLE
  Docs: migration notes for _timestamp and _ttl
  Group client projects under :client
  [TEST] Add client-test module and make client tests use randomized runner directly
  Move upgrade test to upgrade from version 2.3.3
  Tasks: Add completed to the mapping
  Fail to start if plugin tries broken onModule
  Remove duplicated read byte array methods
  Rename `fields` to `stored_fields` and add `docvalue_fields`
  ...
jasontedor committed Jun 23, 2016
2 parents 275ea68 + 0cae9ad commit 112669d
Showing 1,427 changed files with 43,034 additions and 17,892 deletions.
2 changes: 1 addition & 1 deletion .github/PULL_REQUEST_TEMPLATE.md
@@ -7,7 +7,7 @@ attention.
-->

- Have you signed the [contributor license agreement](https://www.elastic.co/contributor-agreement)?
- Have you followed the [contributor guidelines](https://github.com/elastic/elasticsearch/blob/master/.github/CONTRIBUTING.md)?
- Have you followed the [contributor guidelines](https://github.com/elastic/elasticsearch/blob/master/CONTRIBUTING.md)?
- If submitting code, have you built your changes locally prior to submission with `gradle check`?
- If submitting code, is your pull request against master? Unless there is a good reason otherwise, we prefer pull requests against master and will backport as needed.
- If submitting code, have you checked that your submission is for an [OS that we support](https://www.elastic.co/support/matrix#show_os)?
8 changes: 4 additions & 4 deletions README.textile
@@ -196,15 +196,15 @@ In order to play with the distributed nature of Elasticsearch, simply bring more

h3. Where to go from here?

We have just covered a very small portion of what Elasticsearch is all about. For more information, please refer to the "elastic.co":http://www.elastic.co/products/elasticsearch website.
We have just covered a very small portion of what Elasticsearch is all about. For more information, please refer to the "elastic.co":http://www.elastic.co/products/elasticsearch website. General questions can be asked on the "Elastic Discourse forum":https://discuss.elastic.co or on IRC on Freenode at "#elasticsearch":https://webchat.freenode.net/#elasticsearch. The Elasticsearch GitHub repository is reserved for bug reports and feature requests only.

h3. Building from Source

Elasticsearch uses "Gradle":http://gradle.org for its build system. You'll need to have a modern version of Gradle installed - 2.8 should do.
Elasticsearch uses "Gradle":https://gradle.org for its build system. You'll need to have a modern version of Gradle installed - 2.13 should do.

In order to create a distribution, simply run the @gradle build@ command in the cloned directory.
In order to create a distribution, simply run the @gradle assemble@ command in the cloned directory.

The distribution for each project will be created under the @target/releases@ directory in that project.
The distribution for each project will be created under the @build/distributions@ directory in that project.

See the "TESTING":TESTING.asciidoc file for more information about
running the Elasticsearch test suite.
2 changes: 1 addition & 1 deletion TESTING.asciidoc
@@ -296,7 +296,7 @@ gradle :distribution:integ-test-zip:integTest \
-Dtests.method="test {p0=cat.shards/10_basic/Help}"
---------------------------------------------------------------------------

`RestNIT` are the executable test classes that run all the
`RestIT` are the executable test classes that run all the
yaml suites available within the `rest-api-spec` folder.

The REST tests support all the options provided by the randomized runner, plus the following:
62 changes: 62 additions & 0 deletions benchmarks/README.md
@@ -0,0 +1,62 @@
# Elasticsearch Microbenchmark Suite

This directory contains the microbenchmark suite of Elasticsearch. It relies on [JMH](http://openjdk.java.net/projects/code-tools/jmh/).

## Purpose

We do not want to microbenchmark everything but the kitchen sink; instead, we typically rely on our
[macrobenchmarks](https://elasticsearch-benchmarks.elastic.co/app/kibana#/dashboard/Nightly-Benchmark-Overview) with
[Rally](http://github.com/elastic/rally). Microbenchmarks are intended to spot performance regressions in performance-critical components.
The microbenchmark suite is also handy for ad-hoc microbenchmarks, but please remove them again before merging your PR.

## Getting Started

Just run `gradle :benchmarks:jmh` from the project root directory. It will build all microbenchmarks, execute them and print the result.

## Running Microbenchmarks

Benchmarks are always run via Gradle with `gradle :benchmarks:jmh`.

Running via an IDE is not supported as the results are meaningless (we have no control over the JVM running the benchmarks).

If you want to run a specific benchmark class, e.g. `org.elasticsearch.benchmark.MySampleBenchmark`, or have special requirements,
generate the uberjar with `gradle :benchmarks:jmhJar` and run it directly with:

```
java -jar benchmarks/build/distributions/elasticsearch-benchmarks-*.jar
```

JMH supports lots of command line parameters. Add `-h` to the command above to see the available command line options.

## Adding Microbenchmarks

Before adding a new microbenchmark, make yourself familiar with the JMH API. You can check our existing microbenchmarks and also the
[JMH samples](http://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/).

In contrast to tests, the actual name of the benchmark class is not relevant to JMH. However, stick to the naming convention and
end the class name of a benchmark with `Benchmark`. To have JMH execute a benchmark, annotate the respective methods with `@Benchmark`.
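
For illustration, a minimal benchmark following these conventions might look like the sketch below (it reuses the hypothetical `MySampleBenchmark` name from above and is only an example, not part of this suite):

```java
package org.elasticsearch.benchmark;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

import java.util.concurrent.TimeUnit;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
public class MySampleBenchmark {

    private long counter = 1;

    // JMH invokes this method repeatedly; returning the result keeps the JIT from eliminating the work as dead code.
    @Benchmark
    public long measureIncrement() {
        return counter++;
    }
}
```

Running `gradle :benchmarks:jmh` would then build and execute this class along with the other microbenchmarks.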

## Tips and Best Practices

To get realistic results, you should exercise care when running benchmarks. Here are a few tips:

### Do

* Ensure that the system executing your microbenchmarks has as little load as possible. Shut down every process that can cause unnecessary
  runtime jitter. Watch the `Error` column in the benchmark results to see the run-to-run variance.
* Run enough warmup iterations to get the benchmark into a stable state. If you are unsure, don't change the defaults.
* Avoid CPU migrations by pinning your benchmarks to specific CPU cores. On Linux you can use `taskset`.
* Fix the CPU frequency to keep Turbo Boost from kicking in and skewing your results. On Linux you can use `cpufreq-set` and the
  `performance` CPU governor.
* Vary the problem input size with `@Param` (see the sketch after this list).
* Use the integrated profilers in JMH to dig deeper if benchmark results do not match your hypotheses:
    * Run the generated uberjar directly and use `-prof gc` to check whether the garbage collector runs during a microbenchmark and skews
      your results. If so, try to force a GC between runs (`-gc true`) but watch out for the caveats.
    * Use `-prof perf` or `-prof perfasm` (both only available on Linux) to see hotspots.
* Have your benchmarks peer-reviewed.
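
As an illustration of the `@Param` tip above, a small sketch (a hypothetical `ArraySortBenchmark`, not part of this suite) that varies the problem size:

```java
package org.elasticsearch.benchmark;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

import java.util.Arrays;
import java.util.Random;

@State(Scope.Benchmark)
public class ArraySortBenchmark {

    // JMH runs the benchmark once per listed value, so the results show how cost scales with input size.
    @Param({"100", "10000", "1000000"})
    public int size;

    private int[] data;

    @Setup
    public void setUp() {
        data = new Random(42).ints(size).toArray();
    }

    @Benchmark
    public int[] measureSort() {
        // Copy first so every invocation sorts unsorted data; the copy adds a small constant overhead.
        int[] copy = data.clone();
        Arrays.sort(copy);
        return copy;
    }
}
```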

### Don't

* Blindly believe the numbers that your microbenchmark produces. Verify them instead, e.g. by measuring with `-prof perfasm`.
* Run more threads than your number of CPU cores (in case you run multi-threaded microbenchmarks).
* Look only at the `Score` column and ignore `Error`. Instead, take countermeasures to keep `Error` low and the variance explainable.
96 changes: 96 additions & 0 deletions benchmarks/build.gradle
@@ -0,0 +1,96 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/

buildscript {
repositories {
maven {
url 'https://plugins.gradle.org/m2/'
}
}
dependencies {
classpath 'com.github.jengelman.gradle.plugins:shadow:1.2.3'
}
}

apply plugin: 'elasticsearch.build'
// build an uberjar with all benchmarks
apply plugin: 'com.github.johnrengelman.shadow'
// have the shadow plugin provide the runShadow task
apply plugin: 'application'

archivesBaseName = 'elasticsearch-benchmarks'
mainClassName = 'org.openjdk.jmh.Main'

// never try to invoke tests on the benchmark project - there aren't any
check.dependsOn.remove(test)
// explicitly override the test task too in case somebody invokes 'gradle test' so it won't trip
task test(type: Test, overwrite: true)

dependencies {
compile("org.elasticsearch:elasticsearch:${version}") {
        // JMH ships with the conflicting version 4.6 (JMH will not update this dependency as it has to remain Java 6 compatible, and 4.6
        // is one of the most recent compatible joptsimple versions). This prevents us from using jopt-simple in benchmarks (which should
        // be ok) but allows us to invoke the JMH uberjar as usual.
exclude group: 'net.sf.jopt-simple', module: 'jopt-simple'
}
compile "org.openjdk.jmh:jmh-core:$versions.jmh"
compile "org.openjdk.jmh:jmh-generator-annprocess:$versions.jmh"
// Dependencies of JMH
runtime 'net.sf.jopt-simple:jopt-simple:4.6'
runtime 'org.apache.commons:commons-math3:3.2'
}

compileJava.options.compilerArgs << "-Xlint:-cast,-deprecation,-rawtypes,-try,-unchecked"
compileTestJava.options.compilerArgs << "-Xlint:-cast,-deprecation,-rawtypes,-try,-unchecked"

forbiddenApis {
// classes generated by JMH can use all sorts of forbidden APIs but we have no influence at all and cannot exclude these classes
ignoreFailures = true
}

// No licenses for our benchmark deps (we don't ship benchmarks)
dependencyLicenses.enabled = false

thirdPartyAudit.excludes = [
// these classes intentionally use JDK internal API (and this is ok since the project is maintained by Oracle employees)
'org.openjdk.jmh.profile.AbstractHotspotProfiler',
'org.openjdk.jmh.profile.HotspotThreadProfiler',
'org.openjdk.jmh.profile.HotspotClassloadingProfiler',
'org.openjdk.jmh.profile.HotspotCompilationProfiler',
'org.openjdk.jmh.profile.HotspotMemoryProfiler',
'org.openjdk.jmh.profile.HotspotRuntimeProfiler',
'org.openjdk.jmh.util.Utils'
]

shadowJar {
classifier = 'benchmarks'
}

// alias the shadowJar and runShadow tasks to abstract from the concrete plugin that we are using and provide a more consistent interface
task jmhJar(
dependsOn: shadowJar,
description: 'Generates an uberjar with the microbenchmarks and all dependencies',
group: 'Benchmark'
)

task jmh(
dependsOn: runShadow,
description: 'Runs all microbenchmarks',
group: 'Benchmark'
)
@@ -0,0 +1,171 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.benchmark.routing.allocation;

import org.elasticsearch.Version;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.node.DiscoveryNodes;
import org.elasticsearch.cluster.routing.RoutingTable;
import org.elasticsearch.cluster.routing.ShardRoutingState;
import org.elasticsearch.cluster.routing.allocation.AllocationService;
import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;
import org.elasticsearch.common.settings.Settings;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

import java.util.Collections;
import java.util.concurrent.TimeUnit;

@Fork(3)
@Warmup(iterations = 10)
@Measurement(iterations = 10)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@State(Scope.Benchmark)
@SuppressWarnings("unused") //invoked by benchmarking framework
public class AllocationBenchmark {
// Do NOT make any field final (even if it is not annotated with @Param)! See also
// http://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/JMHSample_10_ConstantFold.java

// we cannot use individual @Params as some will lead to invalid combinations which do not let the benchmark terminate. JMH offers no
// support to constrain the combinations of benchmark parameters and we do not want to rely on OptionsBuilder as each benchmark would
// need its own main method and we cannot execute more than one class with a main method per JAR.
@Param({
// indices, shards, replicas, nodes
" 10, 1, 0, 1",
" 10, 3, 0, 1",
" 10, 10, 0, 1",
" 100, 1, 0, 1",
" 100, 3, 0, 1",
" 100, 10, 0, 1",

" 10, 1, 0, 10",
" 10, 3, 0, 10",
" 10, 10, 0, 10",
" 100, 1, 0, 10",
" 100, 3, 0, 10",
" 100, 10, 0, 10",

" 10, 1, 1, 10",
" 10, 3, 1, 10",
" 10, 10, 1, 10",
" 100, 1, 1, 10",
" 100, 3, 1, 10",
" 100, 10, 1, 10",

" 10, 1, 2, 10",
" 10, 3, 2, 10",
" 10, 10, 2, 10",
" 100, 1, 2, 10",
" 100, 3, 2, 10",
" 100, 10, 2, 10",

" 10, 1, 0, 50",
" 10, 3, 0, 50",
" 10, 10, 0, 50",
" 100, 1, 0, 50",
" 100, 3, 0, 50",
" 100, 10, 0, 50",

" 10, 1, 1, 50",
" 10, 3, 1, 50",
" 10, 10, 1, 50",
" 100, 1, 1, 50",
" 100, 3, 1, 50",
" 100, 10, 1, 50",

" 10, 1, 2, 50",
" 10, 3, 2, 50",
" 10, 10, 2, 50",
" 100, 1, 2, 50",
" 100, 3, 2, 50",
" 100, 10, 2, 50"
})
public String indicesShardsReplicasNodes = "10,1,0,1";

public int numTags = 2;

private AllocationService strategy;
private ClusterState initialClusterState;

@Setup
public void setUp() throws Exception {
final String[] params = indicesShardsReplicasNodes.split(",");

int numIndices = toInt(params[0]);
int numShards = toInt(params[1]);
int numReplicas = toInt(params[2]);
int numNodes = toInt(params[3]);

strategy = Allocators.createAllocationService(Settings.builder()
.put("cluster.routing.allocation.awareness.attributes", "tag")
.build());

MetaData.Builder mb = MetaData.builder();
for (int i = 1; i <= numIndices; i++) {
mb.put(IndexMetaData.builder("test_" + i)
.settings(Settings.builder().put("index.version.created", Version.CURRENT))
.numberOfShards(numShards)
.numberOfReplicas(numReplicas)
);
}
MetaData metaData = mb.build();
RoutingTable.Builder rb = RoutingTable.builder();
for (int i = 1; i <= numIndices; i++) {
rb.addAsNew(metaData.index("test_" + i));
}
RoutingTable routingTable = rb.build();
DiscoveryNodes.Builder nb = DiscoveryNodes.builder();
for (int i = 1; i <= numNodes; i++) {
nb.put(Allocators.newNode("node" + i, Collections.singletonMap("tag", "tag_" + (i % numTags))));
}
        initialClusterState = ClusterState.builder(ClusterName.CLUSTER_NAME_SETTING.getDefault(Settings.EMPTY))
            .metaData(metaData).routingTable(routingTable).nodes(nb).build();
}

private int toInt(String v) {
return Integer.valueOf(v.trim());
}

@Benchmark
public ClusterState measureAllocation() {
ClusterState clusterState = initialClusterState;
while (clusterState.getRoutingNodes().hasUnassignedShards()) {
RoutingAllocation.Result result = strategy.applyStartedShards(clusterState, clusterState.getRoutingNodes()
.shardsWithState(ShardRoutingState.INITIALIZING));
clusterState = ClusterState.builder(clusterState).routingResult(result).build();
result = strategy.reroute(clusterState, "reroute");
clusterState = ClusterState.builder(clusterState).routingResult(result).build();
}
return clusterState;
}
}
