
Fix misspelled words in comments, error messages, and test code #32792

Merged
merged 10 commits on Oct 3, 2018
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -320,7 +320,7 @@ have to test Elasticsearch.
#### Configurations

Gradle organizes dependencies and build artifacts into "configurations" and
allows you to use these configurations arbitrarilly. Here are some of the most
allows you to use these configurations arbitrarily. Here are some of the most
common configurations in our build and how we use them:

<dl>
2 changes: 1 addition & 1 deletion TESTING.asciidoc
@@ -250,7 +250,7 @@ Pass arbitrary jvm arguments.

Running backwards compatibility tests is disabled by default since it
requires a release version of elasticsearch to be present on the test system.
To run backwards compatibilty tests untar or unzip a release and run the tests
To run backwards compatibility tests untar or unzip a release and run the tests
with the following command:

---------------------------------------------------------------------------
2 changes: 1 addition & 1 deletion build.gradle
@@ -588,7 +588,7 @@ static void assertLinesInFile(final Path path, final List<String> expectedLines)

/*
* Check that all generated JARs have our NOTICE.txt and an appropriate
* LICENSE.txt in them. We configurate this in gradle but we'd like to
* LICENSE.txt in them. We configure this in gradle but we'd like to
* be extra paranoid.
*/
subprojects { project ->
@@ -122,7 +122,7 @@ class VersionCollection
if (isReleased(version) == false) {
// caveat 1 - This should only ever contain 2 non released branches in flight. An example is 6.x is frozen,
// and 6.2 is cut but not yet released there is some simple logic to make sure that in the case of more than 2,
// it will bail. The order is that the minor snapshot is fufilled first, and then the staged minor snapshot
// it will bail. The order is that the minor snapshot is fulfilled first, and then the staged minor snapshot
if (nextMinorSnapshot == null) {
// it has not been set yet
nextMinorSnapshot = replaceAsSnapshot(version)
@@ -72,7 +72,7 @@ public class RestTestsFromSnippetsTask extends SnippetsTask {

/**
* Root directory containing all the files generated by this task. It is
* contained withing testRoot.
* contained within testRoot.
*/
File outputRoot() {
return new File(testRoot, '/rest-api-spec/test')
@@ -331,7 +331,7 @@ class NodeInfo {
case 'deb':
return new File(baseDir, "${distro}-extracted/etc/elasticsearch")
default:
throw new InvalidUserDataException("Unkown distribution: ${distro}")
throw new InvalidUserDataException("Unknown distribution: ${distro}")
}
}
}
@@ -22,7 +22,7 @@ task sample {
// dependsOn buildResources.outputDir
// for now it's just
dependsOn buildResources
// we have to refference it at configuration time in order to be picked up
// we have to reference it at configuration time in order to be picked up
ext.checkstyle_suppressions = buildResources.copy('checkstyle_suppressions.xml')
doLast {
println "This task is using ${file(checkstyle_suppressions)}"
@@ -35,4 +35,4 @@ task noConfigAfterExecution {
println "This should cause an error because we are refferencing " +
"${buildResources.copy('checkstyle_suppressions.xml')} after the `buildResources` task has ran."
}
}
}
@@ -228,7 +228,7 @@ static void parse(
// no range is present, apply the JVM option to the specified major version only
upper = lower;
} else if (end == null) {
// a range of the form \\d+- is present, apply the JVM option to all major versions larger than the specifed one
// a range of the form \\d+- is present, apply the JVM option to all major versions larger than the specified one
upper = Integer.MAX_VALUE;
} else {
// a range of the form \\d+-\\d+ is present, apply the JVM option to the specified range of major versions
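The three range forms named in the comments above (`\d+`, `\d+-`, and `\d+-\d+`) can be illustrated with a minimal standalone parser. This is a hedged sketch of the described grammar, not the actual Elasticsearch code; the class name `JvmOptionRange` and method `parse` are hypothetical:

```java
// Hypothetical sketch: parse a JVM-option major-version range into
// a {lower, upper} pair, mirroring the three cases in the comments:
//   "8"    -> applies to major version 8 only        -> {8, 8}
//   "8-"   -> applies to 8 and all larger majors     -> {8, Integer.MAX_VALUE}
//   "8-10" -> applies to the range 8 through 10      -> {8, 10}
public class JvmOptionRange {
    static int[] parse(String range) {
        int dash = range.indexOf('-');
        if (dash == -1) {
            // no range is present: apply to the specified major version only
            int v = Integer.parseInt(range);
            return new int[] { v, v };
        }
        int lower = Integer.parseInt(range.substring(0, dash));
        int upper = dash == range.length() - 1
                ? Integer.MAX_VALUE                          // open-ended "\d+-" form
                : Integer.parseInt(range.substring(dash + 1)); // bounded "\d+-\d+" form
        return new int[] { lower, upper };
    }
}
```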
2 changes: 1 addition & 1 deletion docs/java-rest/low-level/usage.asciidoc
@@ -307,7 +307,7 @@ You can also customize the response consumer used to buffer the asynchronous
responses. The default consumer will buffer up to 100MB of response on the
JVM heap. If the response is larger then the request will fail. You could,
for example, lower the maximum size which might be useful if you are running
in a heap constrained environment like the exmaple above.
in a heap constrained environment like the example above.

Once you've created the singleton you can use it when making requests:

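The buffering behavior this passage describes (accumulate the response up to a byte limit and fail if it is larger) can be sketched in isolation. This is not the client's actual consumer API; `BoundedBuffer` and `readBounded` are hypothetical names for illustration only:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

// Hypothetical sketch of a bounded response buffer: read the body into
// memory, but fail as soon as the accumulated size would exceed the limit.
public class BoundedBuffer {
    static byte[] readBounded(InputStream body, int limitBytes) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        try {
            int n;
            while ((n = body.read(chunk)) != -1) {
                if (out.size() + n > limitBytes) {
                    // mirrors "if the response is larger then the request will fail"
                    throw new IOException(
                        "response exceeded buffer limit of " + limitBytes + " bytes");
                }
                out.write(chunk, 0, n);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return out.toByteArray();
    }
}
```

Lowering `limitBytes` is the analogue of configuring a smaller buffer for a heap-constrained environment.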
2 changes: 1 addition & 1 deletion docs/painless/painless-api-reference.asciidoc
@@ -3,7 +3,7 @@

Painless has a strict whitelist for methods and classes to ensure all
painless scripts are secure. Most of these methods are exposed directly
from the Java Runtime Enviroment (JRE) while others are part of
from the Java Runtime Environment (JRE) while others are part of
Elasticsearch or Painless itself. Below is a list of all available
classes grouped with their respected methods. Clicking on the method
name takes you to the documentation for that specific method. Methods
2 changes: 1 addition & 1 deletion docs/plugins/repository-gcs.asciidoc
@@ -185,7 +185,7 @@ are marked as `Secure`.

`project_id`::

The Google Cloud project id. This will be automatically infered from the credentials file but
The Google Cloud project id. This will be automatically inferred from the credentials file but
can be specified explicitly. For example, it can be used to switch between projects when the
same credentials are usable for both the production and the development projects.

8 changes: 4 additions & 4 deletions docs/plugins/repository-hdfs.asciidoc
@@ -32,7 +32,7 @@ PUT _snapshot/my_hdfs_repository
"type": "hdfs",
"settings": {
"uri": "hdfs://namenode:8020/",
"path": "elasticsearch/respositories/my_hdfs_repository",
"path": "elasticsearch/repositories/my_hdfs_repository",
Contributor comment: @jbaiera I can't judge the impact of this change. Ok with you?

"conf.dfs.client.read.shortcircuit": "true"
}
}
@@ -149,7 +149,7 @@ PUT _snapshot/my_hdfs_repository
"type": "hdfs",
"settings": {
"uri": "hdfs://namenode:8020/",
"path": "/user/elasticsearch/respositories/my_hdfs_repository",
"path": "/user/elasticsearch/repositories/my_hdfs_repository",
"security.principal": "elasticsearch@REALM"
}
}
@@ -167,7 +167,7 @@ PUT _snapshot/my_hdfs_repository
"type": "hdfs",
"settings": {
"uri": "hdfs://namenode:8020/",
"path": "/user/elasticsearch/respositories/my_hdfs_repository",
"path": "/user/elasticsearch/repositories/my_hdfs_repository",
"security.principal": "elasticsearch/_HOST@REALM"
}
}
@@ -186,4 +186,4 @@ extracts for file access checks will be `elasticsearch`.

NOTE: The repository plugin makes no assumptions of what Elasticsearch's principal name is. The main fragment of the
Kerberos principal is not required to be `elasticsearch`. If you have a principal or service name that works better
for you or your organization then feel free to use it instead!
for you or your organization then feel free to use it instead!
2 changes: 1 addition & 1 deletion docs/reference/commands/certutil.asciidoc
@@ -72,7 +72,7 @@ parameter or in the `filename` field in an input YAML file.
You can optionally provide IP addresses or DNS names for each instance. If
neither IP addresses nor DNS names are specified, the Elastic stack products
cannot perform hostname verification and you might need to configure the
`verfication_mode` security setting to `certificate` only. For more information
`verification_mode` security setting to `certificate` only. For more information
about this setting, see <<security-settings>>.

All certificates that are generated by this command are signed by a CA. You can
2 changes: 1 addition & 1 deletion docs/reference/mapping/types/percolator.asciidoc
@@ -372,7 +372,7 @@ GET /test_index/_search
"percolate" : {
"field" : "query",
"document" : {
"body" : "Bycicles are missing"
"body" : "Bicycles are missing"
Contributor comment: @martijnvg I think this change is ok but just checking with you that it wasn't deliberately misspelled for the example.

}
}
}
6 changes: 3 additions & 3 deletions docs/reference/sql/concepts.asciidoc
@@ -9,7 +9,7 @@ NOTE: This documentation while trying to be complete, does assume the reader has

As a general rule, {es-sql} as the name indicates provides a SQL interface to {es}. As such, it follows the SQL terminology and conventions first, whenever possible. However the backing engine itself is {es} for which {es-sql} was purposely created hence why features or concepts that are not available, or cannot be mapped correctly, in SQL appear
in {es-sql}.
Last but not least, {es-sql} tries to obey the https://en.wikipedia.org/wiki/Principle_of_least_astonishment[principle of least suprise], though as all things in the world, everything is relative.
Last but not least, {es-sql} tries to obey the https://en.wikipedia.org/wiki/Principle_of_least_astonishment[principle of least surprise], though as all things in the world, everything is relative.

=== Mapping concepts across SQL and {es}

@@ -43,7 +43,7 @@ Notice that in {es} a field can contain _multiple_ values of the same type (esen

|`catalog` or `database`
|`cluster` instance
|In SQL, `catalog` or `database` are used interchangebly and represent a set of schemas that is, a number of tables.
|In SQL, `catalog` or `database` are used interchangeably and represent a set of schemas that is, a number of tables.
In {es} the set of indices available are grouped in a `cluster`. The semantics also differ a bit; a `database` is essentially yet another namespace (which can have some implications on the way data is stored) while an {es} `cluster` is a runtime instance, or rather a set of at least one {es} instance (typically running distributed).
In practice this means that while in SQL one can potentially have multiple catalogs inside an instance, in {es} one is restricted to only _one_.

@@ -62,4 +62,4 @@ Multiple clusters, each with its own namespace, connected to each other in a fed

|===

As one can see while the mapping between the concepts are not exactly one to one and the semantics somewhat different, there are more things in common than differences. In fact, thanks to SQL declarative nature, many concepts can move across {es} transparently and the terminology of the two likely to be used interchangebly through-out the rest of the material.
As one can see while the mapping between the concepts are not exactly one to one and the semantics somewhat different, there are more things in common than differences. In fact, thanks to SQL declarative nature, many concepts can move across {es} transparently and the terminology of the two likely to be used interchangeably through-out the rest of the material.
4 changes: 2 additions & 2 deletions docs/reference/sql/endpoints/jdbc.asciidoc
@@ -38,7 +38,7 @@ from `artifacts.elastic.co/maven` by adding it to the repositories list:
=== Setup

The driver main class is `org.elasticsearch.xpack.sql.jdbc.jdbc.JdbcDriver`.
Note the driver implements the JDBC 4.0 +Service Provider+ mechanism meaning it is registerd automatically
Note the driver implements the JDBC 4.0 +Service Provider+ mechanism meaning it is registered automatically
as long as its available in the classpath.

Once registered, the driver understands the following syntax as an URL:
@@ -176,4 +176,4 @@ connection. For example:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{jdbc-tests}/SimpleExampleTestCase.java[simple_example]
--------------------------------------------------
--------------------------------------------------
@@ -143,7 +143,7 @@
* described by later documentation.
* <p>
* Storebable nodes have three methods for writing -- setup, load, and store. These methods
* are used in conjuction with a parent node aware of the storeable node (lhs) that has a node
* are used in conjunction with a parent node aware of the storeable node (lhs) that has a node
* representing a value to store (rhs). The setup method is always once called before a store
* to give storeable nodes a chance to write any prefixes they may have and any values such as
* array indices before the store happens. Load is called on a storeable node that must also
@@ -152,7 +152,7 @@
* Sub nodes are partial nodes that require a parent to work correctly. These nodes can really
* represent anything the parent node would like to split up into logical pieces and don't really
* have any distinct set of rules. The currently existing subnodes all have ANode as a super class
* somewhere in their class heirachy so the parent node can defer some analysis and writing to
* somewhere in their class hierarchy so the parent node can defer some analysis and writing to
* the sub node.
*/
package org.elasticsearch.painless.node;
@@ -434,7 +434,7 @@ private static String javadocRoot(Class<?> clazz) {
if (classPackage.startsWith("org.apache.lucene")) {
return "lucene-core";
}
throw new IllegalArgumentException("Unrecognized packge: " + classPackage);
throw new IllegalArgumentException("Unrecognized package: " + classPackage);
}

private static void emitGeneratedWarning(PrintStream stream) {
@@ -83,7 +83,7 @@ private static ContextSetup randomContextSetup() {
QueryBuilder query = randomBoolean() ? new MatchAllQueryBuilder() : null;
// TODO: pass down XContextType to createTestInstance() method.
// otherwise the document itself is different causing test failures.
// This should be done in a seperate change as the test instance is created before xcontent type is randomly picked and
// This should be done in a separate change as the test instance is created before xcontent type is randomly picked and
// all the createTestInstance() methods need to be changed, which will make this a big chnage
// BytesReference doc = randomBoolean() ? new BytesArray("{}") : null;
BytesReference doc = null;
@@ -42,8 +42,8 @@

/**
* Returns the results for a {@link RankEvalRequest}.<br>
* The repsonse contains a detailed section for each evaluation query in the request and
* possible failures that happened when executin individual queries.
* The response contains a detailed section for each evaluation query in the request and
* possible failures that happened when execution individual queries.
**/
public class RankEvalResponse extends ActionResponse implements ToXContentObject {

@@ -481,7 +481,7 @@ public ScheduledFuture<?> schedule(TimeValue delay, String name, Runnable comman

/**
* Execute a bulk retry test case. The total number of failures is random and the number of retries attempted is set to
* testRequest.getMaxRetries and controled by the failWithRejection parameter.
* testRequest.getMaxRetries and controlled by the failWithRejection parameter.
*/
private void bulkRetryTestCase(boolean failWithRejection) throws Exception {
int totalFailures = randomIntBetween(1, testRequest.getMaxRetries());
@@ -122,7 +122,7 @@ private void testCancel(String action, AbstractBulkByScrollRequestBuilder<?, ?>
logger.debug("waiting for updates to be blocked");
boolean blocked = awaitBusy(
() -> ALLOWED_OPERATIONS.hasQueuedThreads() && ALLOWED_OPERATIONS.availablePermits() == 0,
1, TimeUnit.MINUTES); // 10 seconds is usually fine but on heavilly loaded machines this can wake a while
Contributor comment: s/wake/take/
Contributor Author reply: Also fixed.

1, TimeUnit.MINUTES); // 10 seconds is usually fine but on heavily loaded machines this can wake a while
assertTrue("updates blocked", blocked);

// Status should show the task running
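The `awaitBusy` call in this hunk follows a common polling pattern: repeatedly evaluate a condition until it holds or a timeout elapses. A minimal self-contained sketch of that pattern (this is hypothetical, not the actual Elasticsearch test-framework helper):

```java
import java.util.function.BooleanSupplier;

// Hypothetical sketch of an awaitBusy-style helper: poll the condition
// with a short sleep until it returns true or the timeout (in millis)
// elapses; return whether the condition ever became true.
public class AwaitBusy {
    static boolean awaitBusy(BooleanSupplier condition, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        do {
            if (condition.getAsBoolean()) {
                return true;
            }
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        } while (System.currentTimeMillis() < deadline);
        // one final check so a condition that became true right at the
        // deadline is still observed
        return condition.getAsBoolean();
    }
}
```

A generous timeout, as the quoted comment notes, matters because heavily loaded machines can take far longer than the typical case.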