Fix a variety of typos and misspelled words (#32792)
seratch authored and DaveCTurner committed Oct 3, 2018
1 parent ee21067 commit d45fe43
Showing 101 changed files with 153 additions and 153 deletions.
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -320,7 +320,7 @@ have to test Elasticsearch.
#### Configurations

Gradle organizes dependencies and build artifacts into "configurations" and
-allows you to use these configurations arbitrarilly. Here are some of the most
+allows you to use these configurations arbitrarily. Here are some of the most
common configurations in our build and how we use them:

<dl>
2 changes: 1 addition & 1 deletion TESTING.asciidoc
@@ -250,7 +250,7 @@ Pass arbitrary jvm arguments.

Running backwards compatibility tests is disabled by default since it
requires a release version of elasticsearch to be present on the test system.
-To run backwards compatibilty tests untar or unzip a release and run the tests
+To run backwards compatibility tests untar or unzip a release and run the tests
with the following command:

---------------------------------------------------------------------------
@@ -122,7 +122,7 @@ class VersionCollection {
if (isReleased(version) == false) {
// caveat 1 - This should only ever contain 2 non released branches in flight. An example is 6.x is frozen,
// and 6.2 is cut but not yet released there is some simple logic to make sure that in the case of more than 2,
-// it will bail. The order is that the minor snapshot is fufilled first, and then the staged minor snapshot
+// it will bail. The order is that the minor snapshot is fulfilled first, and then the staged minor snapshot
if (nextMinorSnapshot == null) {
// it has not been set yet
nextMinorSnapshot = replaceAsSnapshot(version)
@@ -72,7 +72,7 @@ public class RestTestsFromSnippetsTask extends SnippetsTask {

/**
* Root directory containing all the files generated by this task. It is
-* contained withing testRoot.
+* contained within testRoot.
*/
File outputRoot() {
return new File(testRoot, '/rest-api-spec/test')
@@ -337,7 +337,7 @@ class NodeInfo {
case 'deb':
return new File(baseDir, "${distro}-extracted/etc/elasticsearch")
default:
throw new InvalidUserDataException("Unkown distribution: ${distro}")
throw new InvalidUserDataException("Unknown distribution: ${distro}")
}
}
}
@@ -22,7 +22,7 @@ task sample {
// dependsOn buildResources.outputDir
// for now it's just
dependsOn buildResources
-// we have to refference it at configuration time in order to be picked up
+// we have to reference it at configuration time in order to be picked up
ext.checkstyle_suppressions = buildResources.copy('checkstyle_suppressions.xml')
doLast {
println "This task is using ${file(checkstyle_suppressions)}"
@@ -35,4 +35,4 @@ task noConfigAfterExecution {
println "This should cause an error because we are refferencing " +
"${buildResources.copy('checkstyle_suppressions.xml')} after the `buildResources` task has ran."
}
-}
\ No newline at end of file
+}
@@ -215,7 +215,7 @@ public boolean isUseNull() {
}

/**
-* Excludes frequently-occuring metrics from the analysis;
+* Excludes frequently-occurring metrics from the analysis;
* can apply to 'by' field, 'over' field, or both
*
* @return the value that the user set
@@ -228,7 +228,7 @@ static void parse(
// no range is present, apply the JVM option to the specified major version only
upper = lower;
} else if (end == null) {
-// a range of the form \\d+- is present, apply the JVM option to all major versions larger than the specifed one
+// a range of the form \\d+- is present, apply the JVM option to all major versions larger than the specified one
upper = Integer.MAX_VALUE;
} else {
// a range of the form \\d+-\\d+ is present, apply the JVM option to the specified range of major versions
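
For context, the ranges these comments parse come from Elasticsearch's `jvm.options` file, where each option may carry a major-version selector. A sketch of the three forms described above (the specific flags are illustrative only, not recommendations):

[source,txt]
--------------------------------------------------
# no selector: applies on every JVM major version
-Xms2g
# "\d+" form: JVM major version 8 only
8:-Xmx2g
# "\d+-" form: version 9 and all later versions
9-:-Xlog:gc*
# "\d+-\d+" form: versions 8 through 9 inclusive
8-9:-XX:+UseConcMarkSweepGC
--------------------------------------------------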
2 changes: 1 addition & 1 deletion docs/java-rest/low-level/usage.asciidoc
@@ -307,7 +307,7 @@ You can also customize the response consumer used to buffer the asynchronous
responses. The default consumer will buffer up to 100MB of response on the
JVM heap. If the response is larger then the request will fail. You could,
for example, lower the maximum size which might be useful if you are running
-in a heap constrained environment like the exmaple above.
+in a heap constrained environment like the example above.
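
As a sketch of what that customization can look like with the low-level client's `RequestOptions` builder (the 30MB limit and the `ClientOptions`/`COMMON_OPTIONS` names here are assumptions for illustration):

["source","java"]
--------------------------------------------------
import org.elasticsearch.client.HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory;
import org.elasticsearch.client.RequestOptions;

public class ClientOptions {
    // Singleton options shared across requests; buffers at most 30MB of
    // response on the heap instead of the default 100MB.
    public static final RequestOptions COMMON_OPTIONS;
    static {
        RequestOptions.Builder builder = RequestOptions.DEFAULT.toBuilder();
        builder.setHttpAsyncResponseConsumerFactory(
                new HeapBufferedResponseConsumerFactory(30 * 1024 * 1024));
        COMMON_OPTIONS = builder.build();
    }
}
--------------------------------------------------

A request would then pass these options along via `request.setOptions(ClientOptions.COMMON_OPTIONS)`.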

Once you've created the singleton you can use it when making requests:

2 changes: 1 addition & 1 deletion docs/painless/painless-api-reference.asciidoc
@@ -3,7 +3,7 @@

Painless has a strict whitelist for methods and classes to ensure all
painless scripts are secure. Most of these methods are exposed directly
-from the Java Runtime Enviroment (JRE) while others are part of
+from the Java Runtime Environment (JRE) while others are part of
Elasticsearch or Painless itself. Below is a list of all available
classes grouped with their respected methods. Clicking on the method
name takes you to the documentation for that specific method. Methods
8 changes: 4 additions & 4 deletions docs/plugins/repository-hdfs.asciidoc
@@ -32,7 +32,7 @@ PUT _snapshot/my_hdfs_repository
"type": "hdfs",
"settings": {
"uri": "hdfs://namenode:8020/",
"path": "elasticsearch/respositories/my_hdfs_repository",
"path": "elasticsearch/repositories/my_hdfs_repository",
"conf.dfs.client.read.shortcircuit": "true"
}
}
@@ -149,7 +149,7 @@ PUT _snapshot/my_hdfs_repository
"type": "hdfs",
"settings": {
"uri": "hdfs://namenode:8020/",
"path": "/user/elasticsearch/respositories/my_hdfs_repository",
"path": "/user/elasticsearch/repositories/my_hdfs_repository",
"security.principal": "elasticsearch@REALM"
}
}
@@ -167,7 +167,7 @@ PUT _snapshot/my_hdfs_repository
"type": "hdfs",
"settings": {
"uri": "hdfs://namenode:8020/",
"path": "/user/elasticsearch/respositories/my_hdfs_repository",
"path": "/user/elasticsearch/repositories/my_hdfs_repository",
"security.principal": "elasticsearch/_HOST@REALM"
}
}
@@ -186,4 +186,4 @@ extracts for file access checks will be `elasticsearch`.

NOTE: The repository plugin makes no assumptions of what Elasticsearch's principal name is. The main fragment of the
Kerberos principal is not required to be `elasticsearch`. If you have a principal or service name that works better
-for you or your organization then feel free to use it instead!
\ No newline at end of file
+for you or your organization then feel free to use it instead!
2 changes: 1 addition & 1 deletion docs/reference/commands/certutil.asciidoc
@@ -72,7 +72,7 @@ parameter or in the `filename` field in an input YAML file.
You can optionally provide IP addresses or DNS names for each instance. If
neither IP addresses nor DNS names are specified, the Elastic stack products
cannot perform hostname verification and you might need to configure the
-`verfication_mode` security setting to `certificate` only. For more information
+`verification_mode` security setting to `certificate` only. For more information
about this setting, see <<security-settings>>.
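
As a hedged sketch (the full setting key is an assumption based on the 6.x security settings; adjust for the layer you are securing), that relaxation would look something like this in `elasticsearch.yml`:

[source,yaml]
--------------------------------------------------
# Trust the CA signature but skip hostname/IP verification, since the
# generated certificates carry neither DNS names nor IP addresses.
xpack.security.transport.ssl.verification_mode: certificate
--------------------------------------------------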

All certificates that are generated by this command are signed by a CA. You can
2 changes: 1 addition & 1 deletion docs/reference/ml/apis/jobcounts.asciidoc
@@ -207,7 +207,7 @@ The `forecasts_stats` object shows statistics about forecasts. It has the follow
(object) Counts per forecast status, for example: {"finished" : 2}.

NOTE: `memory_bytes`, `records`, `processing_time_ms` and `status` require at least 1 forecast, otherwise
-these fields are ommitted.
+these fields are omitted.

[float]
[[ml-stats-node]]
2 changes: 1 addition & 1 deletion docs/reference/rollup/apis/rollup-caps.asciidoc
@@ -45,7 +45,7 @@ For more information, see
==== Examples

Imagine we have an index named `sensor-1` full of raw data. We know that the data will grow over time, so there
-will be a `sensor-2`, `sensor-3`, etc. Let's create a Rollup job that targets the index pattern `sensor-*` to accomodate
+will be a `sensor-2`, `sensor-3`, etc. Let's create a Rollup job that targets the index pattern `sensor-*` to accommodate
this future scaling:

[source,js]
2 changes: 1 addition & 1 deletion docs/reference/sql/concepts.asciidoc
@@ -62,4 +62,4 @@ Multiple clusters, each with its own namespace, connected to each other in a fed

|===

-As one can see while the mapping between the concepts are not exactly one to one and the semantics somewhat different, there are more things in common than differences. In fact, thanks to SQL declarative nature, many concepts can move across {es} transparently and the terminology of the two likely to be used interchangeably through-out the rest of the material.
\ No newline at end of file
+As one can see while the mapping between the concepts are not exactly one to one and the semantics somewhat different, there are more things in common than differences. In fact, thanks to SQL declarative nature, many concepts can move across {es} transparently and the terminology of the two likely to be used interchangeably through-out the rest of the material.
4 changes: 2 additions & 2 deletions docs/reference/sql/endpoints/jdbc.asciidoc
@@ -44,7 +44,7 @@ from `artifacts.elastic.co/maven` by adding it to the repositories list:
=== Setup

The driver main class is `org.elasticsearch.xpack.sql.jdbc.jdbc.JdbcDriver`.
-Note the driver implements the JDBC 4.0 +Service Provider+ mechanism meaning it is registerd automatically
+Note the driver implements the JDBC 4.0 +Service Provider+ mechanism meaning it is registered automatically
as long as its available in the classpath.
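
As a sketch of that auto-registration in use (hedged: the `jdbc:es://host:port` URL shape is an assumption here, and the driver jar must already be on the classpath):

["source","java"]
--------------------------------------------------
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcSketch {
    public static void main(String[] args) throws Exception {
        // No Class.forName(...) call needed: the JDBC 4.0 Service Provider
        // mechanism discovers the driver on the classpath automatically.
        try (Connection con = DriverManager.getConnection("jdbc:es://localhost:9200");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getLong(1));
            }
        }
    }
}
--------------------------------------------------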

Once registered, the driver understands the following syntax as an URL:
@@ -182,4 +182,4 @@ connection. For example:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{jdbc-tests}/SimpleExampleTestCase.java[simple_example]
---------------------------------------------------
\ No newline at end of file
+--------------------------------------------------
@@ -143,7 +143,7 @@
* described by later documentation.
* <p>
* Storebable nodes have three methods for writing -- setup, load, and store. These methods
-* are used in conjuction with a parent node aware of the storeable node (lhs) that has a node
+* are used in conjunction with a parent node aware of the storeable node (lhs) that has a node
* representing a value to store (rhs). The setup method is always once called before a store
* to give storeable nodes a chance to write any prefixes they may have and any values such as
* array indices before the store happens. Load is called on a storeable node that must also
@@ -152,7 +152,7 @@
* Sub nodes are partial nodes that require a parent to work correctly. These nodes can really
* represent anything the parent node would like to split up into logical pieces and don't really
* have any distinct set of rules. The currently existing subnodes all have ANode as a super class
-* somewhere in their class heirachy so the parent node can defer some analysis and writing to
+* somewhere in their class hierarchy so the parent node can defer some analysis and writing to
* the sub node.
*/
package org.elasticsearch.painless.node;
@@ -434,7 +434,7 @@ private static String javadocRoot(Class<?> clazz) {
if (classPackage.startsWith("org.apache.lucene")) {
return "lucene-core";
}
throw new IllegalArgumentException("Unrecognized packge: " + classPackage);
throw new IllegalArgumentException("Unrecognized package: " + classPackage);
}

private static void emitGeneratedWarning(PrintStream stream) {
@@ -83,7 +83,7 @@ private static ContextSetup randomContextSetup() {
QueryBuilder query = randomBoolean() ? new MatchAllQueryBuilder() : null;
// TODO: pass down XContextType to createTestInstance() method.
// otherwise the document itself is different causing test failures.
-// This should be done in a seperate change as the test instance is created before xcontent type is randomly picked and
+// This should be done in a separate change as the test instance is created before xcontent type is randomly picked and
// all the createTestInstance() methods need to be changed, which will make this a big chnage
// BytesReference doc = randomBoolean() ? new BytesArray("{}") : null;
BytesReference doc = null;
@@ -42,8 +42,8 @@

/**
* Returns the results for a {@link RankEvalRequest}.<br>
-* The repsonse contains a detailed section for each evaluation query in the request and
-* possible failures that happened when executin individual queries.
+* The response contains a detailed section for each evaluation query in the request and
+* possible failures that happened when execution individual queries.
**/
public class RankEvalResponse extends ActionResponse implements ToXContentObject {

@@ -481,7 +481,7 @@ public ScheduledFuture<?> schedule(TimeValue delay, String name, Runnable comman

/**
* Execute a bulk retry test case. The total number of failures is random and the number of retries attempted is set to
-* testRequest.getMaxRetries and controled by the failWithRejection parameter.
+* testRequest.getMaxRetries and controlled by the failWithRejection parameter.
*/
private void bulkRetryTestCase(boolean failWithRejection) throws Exception {
int totalFailures = randomIntBetween(1, testRequest.getMaxRetries());
@@ -122,7 +122,7 @@ private void testCancel(String action, AbstractBulkByScrollRequestBuilder<?, ?>
logger.debug("waiting for updates to be blocked");
boolean blocked = awaitBusy(
() -> ALLOWED_OPERATIONS.hasQueuedThreads() && ALLOWED_OPERATIONS.availablePermits() == 0,
-1, TimeUnit.MINUTES); // 10 seconds is usually fine but on heavilly loaded machines this can wake a while
+1, TimeUnit.MINUTES); // 10 seconds is usually fine but on heavily loaded machines this can take a while
assertTrue("updates blocked", blocked);

// Status should show the task running