Merge branch '6.7' into retention-lease-bwc
* 6.7: (39 commits)
  Remove beta label from CCR (elastic#39722)
  Rename retention lease setting (elastic#39719)
  Add Docker build type (elastic#39378)
  Use any index specified by .watches for Watcher (elastic#39541) (elastic#39706)
  Add documentation on remote recovery (elastic#39483)
  fix typo in synonym graph filter docs
  Removed incorrect ML YAML tests (elastic#39400)
  Improved Terms Aggregation documentation (elastic#38892)
  Fix Fuzziness#asDistance(String) (elastic#39643)
  Revert "unmute EvilLoggerTests#testDeprecatedSettings (elastic#38743)"
  Mute TokenAuthIntegTests.testExpiredTokensDeletedAfterExpiration (elastic#39690)
  Fix security index auto-create and state recovery race (elastic#39582)
  [DOCS] Sorts security APIs
  Check for .watches that wasn't upgraded properly (elastic#39609)
  Assert recovery done in testDoNotWaitForPendingSeqNo (elastic#39595)
  [DOCS] Updates API in Watcher transform context (elastic#39540)
  Fixing the custom object serialization bug in diffable utils. (elastic#39544)
  mute test
  SQL: Don't allow inexact fields for MIN/MAX (elastic#39563)
  Update release notes for 6.7.0
  ...
jasontedor committed Mar 6, 2019
2 parents c2d036f + 8acf59b commit 10a8554
Showing 141 changed files with 2,659 additions and 934 deletions.
@@ -110,7 +110,7 @@ class ClusterConfiguration {
}
if (ant.properties.containsKey("failed.${seedNode.transportPortsFile.path}".toString())) {
throw new GradleException("Failed to locate seed node transport file [${seedNode.transportPortsFile}]: " +
"timed out waiting for it to be created after ${waitSeconds} seconds")
"timed out waiting for it to be created after 40 seconds")
}
return seedNode.transportUri()
}
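The Gradle snippet above fails the build with a descriptive `GradleException` when the seed node's transport ports file never appears within the wait window. The same poll-then-fail pattern can be sketched in Python; `wait_for_transport_ports_file` is a hypothetical helper for illustration, not part of the build:

```python
import time
from pathlib import Path

def wait_for_transport_ports_file(path: Path, wait_seconds: int = 40) -> str:
    """Poll for the seed node's transport ports file and fail with a
    descriptive error if it is not created in time, mirroring the
    Gradle cluster-formation check. Illustrative sketch only."""
    deadline = time.monotonic() + wait_seconds
    while time.monotonic() < deadline:
        if path.exists():
            return path.read_text().strip()
        time.sleep(0.1)
    raise TimeoutError(
        f"Failed to locate seed node transport file [{path}]: "
        f"timed out waiting for it to be created after {wait_seconds} seconds"
    )
```

As in the Gradle code, the error message embeds both the file path and the timeout so a failed CI run is diagnosable from the log alone.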
2 changes: 2 additions & 0 deletions distribution/docker/src/docker/Dockerfile
@@ -33,6 +33,8 @@ WORKDIR /usr/share/elasticsearch
${source_elasticsearch}

RUN tar zxf /opt/${elasticsearch} --strip-components=1
RUN grep ES_DISTRIBUTION_TYPE=tar /usr/share/elasticsearch/bin/elasticsearch-env \
&& sed -ie 's/ES_DISTRIBUTION_TYPE=tar/ES_DISTRIBUTION_TYPE=docker/' /usr/share/elasticsearch/bin/elasticsearch-env
RUN mkdir -p config data logs
RUN chmod 0775 config data logs
COPY config/elasticsearch.yml config/log4j2.properties config/
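The `grep … && sed -ie …` pair added to the Dockerfile is a guard: the `grep` makes the build fail loudly if the expected `ES_DISTRIBUTION_TYPE=tar` marker ever disappears from `elasticsearch-env`, instead of letting the `sed` silently do nothing. The same guard-then-rewrite logic can be sketched as:

```python
def set_distribution_type(env_text: str, new_type: str = "docker") -> str:
    """Rewrite ES_DISTRIBUTION_TYPE in the elasticsearch-env script,
    failing loudly if the expected marker is absent (the role the
    `grep` plays in the Dockerfile). Illustrative sketch only."""
    marker = "ES_DISTRIBUTION_TYPE=tar"
    if marker not in env_text:  # the grep: guard against a silent no-op
        raise ValueError("expected ES_DISTRIBUTION_TYPE=tar in elasticsearch-env")
    # the sed: substitute the distribution type in place
    return env_text.replace(marker, f"ES_DISTRIBUTION_TYPE={new_type}")
```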
@@ -104,7 +104,7 @@ The following example shows the use of metadata and transforming dates into a re

[source,Painless]
----
POST _xpack/watcher/watch/_execute
POST _watcher/watch/_execute
{
"watch" : {
"metadata" : { "min_hits": 10000 },
@@ -61,13 +61,15 @@ GET /_search
{
"aggs" : {
"genres" : {
"terms" : { "field" : "genre" }
"terms" : { "field" : "genre" } <1>
}
}
}
--------------------------------------------------
// CONSOLE
// TEST[s/_search/_search\?filter_path=aggregations/]
<1> The `terms` aggregation should be run on a field of type `keyword` or any
other data type suitable for bucket aggregations. To use it with `text` fields
you will need to enable <<fielddata, fielddata>>.
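Conceptually, what the `terms` aggregation computes is one bucket per distinct value of an exact-valued (keyword-like) field, ordered by document count. A toy model in Python — purely an illustration of the output shape, not how Elasticsearch implements it internally:

```python
from collections import Counter

def terms_buckets(docs, field, size=10):
    """Toy model of a `terms` aggregation: count each distinct value of
    `field` across the documents and return the top `size` buckets,
    ordered by doc_count, in the shape the response uses."""
    counts = Counter(doc[field] for doc in docs if field in doc)
    return [
        {"key": key, "doc_count": count}
        for key, count in counts.most_common(size)
    ]
```

This is also why the field needs to hold exact values: bucketing a full-text `text` field only makes sense once per-term fielddata is available.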

Response:

@@ -174,7 +174,8 @@ PUT /test_index
Using `synonyms_path` to define WordNet synonyms in a file is supported
as well.

=== Parsing synonym files
[float]
==== Parsing synonym files

Elasticsearch will use the token filters preceding the synonym filter
in a tokenizer chain to parse the entries in a synonym file. So, for example, if a
@@ -186,7 +187,7 @@ parsing synonyms, e.g. `asciifolding` will only produce the folded version of the
token. Others, e.g. `multiplexer`, `word_delimiter_graph` or `ngram` will throw an
error.

WARNING:The synonym rules should not contain words that are removed by
WARNING: The synonym rules should not contain words that are removed by
a filter that appears later in the chain (a `stop` filter, for instance).
Removing a term from a synonym rule breaks the matching at query time.
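The parsing behaviour described above — each synonym entry is run through the token filters that precede the synonym filter — can be sketched as follows. The `fold` function is a rough stand-in for the `asciifolding` filter, and the whole thing is an illustration, not Elasticsearch's analyzer machinery:

```python
import unicodedata

def fold(token: str) -> str:
    """Rough stand-in for the `asciifolding` token filter: strip
    diacritics by decomposing and dropping non-ASCII marks."""
    return unicodedata.normalize("NFKD", token).encode("ascii", "ignore").decode()

def parse_synonym_rule(rule: str, filters=(str.lower, fold)):
    """Pass each entry of a comma-separated synonym rule through the
    preceding filter chain. This only works for filters that map one
    token to one token; filters that emit multiple tokens per input
    (e.g. `multiplexer`, `ngram`) cannot be applied this way, which is
    why they cause an error, as the docs note."""
    entries = [e.strip() for e in rule.split(",")]
    out = []
    for entry in entries:
        for f in filters:
            entry = f(entry)
        out.append(entry)
    return out
```

So a rule like `Résumé, resume` folds both entries to the same token, and the synonym matches regardless of accents.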

@@ -6,8 +6,6 @@
<titleabbrev>Delete auto-follow pattern</titleabbrev>
++++

beta[]

Delete auto-follow patterns.

==== Description
@@ -6,8 +6,6 @@
<titleabbrev>Get auto-follow pattern</titleabbrev>
++++

beta[]

Get auto-follow patterns.

==== Description
@@ -6,8 +6,6 @@
<titleabbrev>Create auto-follow pattern</titleabbrev>
++++

beta[]

Creates an auto-follow pattern.

==== Description
2 changes: 0 additions & 2 deletions docs/reference/ccr/apis/ccr-apis.asciidoc
@@ -3,8 +3,6 @@
[[ccr-apis]]
== Cross-cluster replication APIs

beta[]

You can use the following APIs to perform {ccr} operations.

[float]
2 changes: 0 additions & 2 deletions docs/reference/ccr/apis/follow/get-follow-info.asciidoc
@@ -6,8 +6,6 @@
<titleabbrev>Get follower info</titleabbrev>
++++

beta[]

Retrieves information about all follower indices.

==== Description
2 changes: 0 additions & 2 deletions docs/reference/ccr/apis/follow/get-follow-stats.asciidoc
@@ -6,8 +6,6 @@
<titleabbrev>Get follower stats</titleabbrev>
++++

beta[]

Get follower stats.

==== Description
2 changes: 0 additions & 2 deletions docs/reference/ccr/apis/follow/post-pause-follow.asciidoc
@@ -6,8 +6,6 @@
<titleabbrev>Pause follower</titleabbrev>
++++

beta[]

Pauses a follower index.

==== Description
2 changes: 0 additions & 2 deletions docs/reference/ccr/apis/follow/post-resume-follow.asciidoc
@@ -6,8 +6,6 @@
<titleabbrev>Resume follower</titleabbrev>
++++

beta[]

Resumes a follower index.

==== Description
2 changes: 0 additions & 2 deletions docs/reference/ccr/apis/follow/post-unfollow.asciidoc
@@ -6,8 +6,6 @@
<titleabbrev>Unfollow</titleabbrev>
++++

beta[]

Converts a follower index to a regular index.

==== Description
2 changes: 0 additions & 2 deletions docs/reference/ccr/apis/follow/put-follow.asciidoc
@@ -6,8 +6,6 @@
<titleabbrev>Create follower</titleabbrev>
++++

beta[]

Creates a follower index.

==== Description
2 changes: 0 additions & 2 deletions docs/reference/ccr/apis/get-ccr-stats.asciidoc
@@ -6,8 +6,6 @@
<titleabbrev>Get CCR stats</titleabbrev>
++++

beta[]

Get {ccr} stats.

==== Description
2 changes: 0 additions & 2 deletions docs/reference/ccr/auto-follow.asciidoc
@@ -3,8 +3,6 @@
[[ccr-auto-follow]]
=== Automatically following indices

beta[]

In time series use cases where you want to follow new indices that are
periodically created (such as daily Beats indices), manually configuring follower
indices for each new leader index can be an operational burden. The auto-follow
7 changes: 5 additions & 2 deletions docs/reference/ccr/getting-started.asciidoc
@@ -3,8 +3,6 @@
[[ccr-getting-started]]
== Getting started with {ccr}

beta[]

This getting-started guide for {ccr} shows you how to:

* <<ccr-getting-started-remote-cluster,Connect a local cluster to a remote
@@ -261,6 +259,11 @@ PUT /server-metrics-copy/_ccr/follow?wait_for_active_shards=1
//////////////////////////

The follower index is initialized using the <<remote-recovery, remote recovery>>
process. The remote recovery process transfers the existing Lucene segment files
from the leader to the follower. When the remote recovery process is complete,
the index following begins.

Now when you index documents into your leader index, you will see these
documents replicated in the follower index. You can
inspect the status of replication using the
8 changes: 2 additions & 6 deletions docs/reference/ccr/index.asciidoc
@@ -6,12 +6,7 @@
[partintro]
--

beta[]

WARNING: {ccr} is currently not supported by indices being managed by
{ref}/index-lifecycle-management.html[Index Lifecycle Management].

The {ccr} (CCR) feature enables replication of indices in remote clusters to a
The {ccr} (CCR) feature enables replication of indices in remote clusters to a
local cluster. This functionality can be used in some common production use
cases:

@@ -32,3 +27,4 @@ include::overview.asciidoc[]
include::requirements.asciidoc[]
include::auto-follow.asciidoc[]
include::getting-started.asciidoc[]
include::remote-recovery.asciidoc[]
2 changes: 0 additions & 2 deletions docs/reference/ccr/overview.asciidoc
@@ -3,8 +3,6 @@
[[ccr-overview]]
== Overview

beta[]

Cross-cluster replication is done on an index-by-index basis. Replication is
configured at the index level. For each configured replication there is a
replication source index called the _leader index_ and a replication target
29 changes: 29 additions & 0 deletions docs/reference/ccr/remote-recovery.asciidoc
@@ -0,0 +1,29 @@
[role="xpack"]
[testenv="platinum"]
[[remote-recovery]]
== Remote recovery

When you create a follower index, you cannot use it until it is fully initialized.
The _remote recovery_ process builds a new copy of a shard on a follower node by
copying data from the primary shard in the leader cluster. {es} uses this remote
recovery process to bootstrap a follower index using the data from the leader index.
This process provides the follower with a copy of the current state of the leader index,
even if a complete history of changes is not available on the leader due to Lucene
segment merging.

Remote recovery is a network-intensive process that transfers all of the Lucene
segment files from the leader cluster to the follower cluster. The follower
requests that a recovery session be initiated on the primary shard in the leader
cluster. The follower then requests file chunks concurrently from the leader. By
default, the process concurrently requests `5` large `1mb` file chunks. This default
behavior is designed to support leader and follower clusters with high network latency
between them.
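The chunked transfer described above — splitting each file into `1mb` chunks and keeping five chunk requests in flight — can be modelled schematically. `fetch_chunk(offset, length)` is a hypothetical stand-in for the follower's chunk request to the leader; the actual implementation is Elasticsearch's, not this sketch:

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 1 * 1024 * 1024   # the default `1mb` chunk size
MAX_CONCURRENT_CHUNKS = 5      # the default number of in-flight chunk requests

def remote_recover(fetch_chunk, file_size: int) -> bytes:
    """Schematic model of the follower's chunked file transfer: split
    the file into fixed-size chunks and request up to five of them in
    parallel, which keeps the pipe full despite high latency between
    the leader and follower clusters."""
    offsets = range(0, file_size, CHUNK_SIZE)
    with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_CHUNKS) as pool:
        # pool.map preserves offset order, so the chunks reassemble correctly
        chunks = pool.map(
            lambda off: fetch_chunk(off, min(CHUNK_SIZE, file_size - off)),
            offsets,
        )
        return b"".join(chunks)
```

With several requests outstanding at once, throughput is bounded by bandwidth rather than by round-trip latency, which is the stated rationale for the default.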

There are dynamic settings that you can use to rate-limit the transmitted data
and manage the resources consumed by remote recoveries. See
{ref}/ccr-settings.html[{ccr-cap} settings].

You can obtain information about an in-progress remote recovery by using the
{ref}/cat-recovery.html[recovery API] on the follower cluster. Remote recoveries
are implemented using the {ref}/modules-snapshots.html[snapshot and restore]
infrastructure. This means that ongoing remote recoveries are labelled as type
`snapshot` in the recovery API.
2 changes: 0 additions & 2 deletions docs/reference/ccr/requirements.asciidoc
@@ -3,8 +3,6 @@
[[ccr-requirements]]
=== Requirements for leader indices

beta[]

Cross-cluster replication works by replaying the history of individual write
operations that were performed on the shards of the leader index. This means that the
history of these operations needs to be retained on the leader shards so that
4 changes: 2 additions & 2 deletions docs/reference/commands/setup-passwords.asciidoc
@@ -3,8 +3,8 @@
[[setup-passwords]]
== elasticsearch-setup-passwords

The `elasticsearch-setup-passwords` command sets the passwords for the built-in
`elastic`, `kibana`, `logstash_system`, `beats_system`, and `apm_system` users.
The `elasticsearch-setup-passwords` command sets the passwords for the
{stack-ov}/built-in-users.html[built-in users].

[float]
=== Synopsis
13 changes: 13 additions & 0 deletions docs/reference/release-notes/6.7.asciidoc
@@ -101,6 +101,9 @@ Authorization::
CCR::
* Add ccr follow info api {pull}37408[#37408] (issue: {issue}37127[#37127])

CRUD::
* Make `_doc` work as an alias of the actual type of an index. {pull}39505[#39505] (issue: {issue}39469[#39469])

Features/ILM::
* [ILM] Add unfollow action {pull}36970[#36970] (issue: {issue}34648[#34648])

@@ -260,6 +263,7 @@ Infra/Scripting::
* Add getZone to JodaCompatibleZonedDateTime {pull}37084[#37084]

Infra/Settings::
* Provide a clearer error message on keystore add {pull}39327[#39327] (issue: {issue}39324[#39324])
* Separate out validation of groups of settings {pull}34184[#34184]

License::
@@ -298,6 +302,7 @@ Rollup::
* Replace the TreeMap in the composite aggregation {pull}36675[#36675]

SQL::
* SQL: Enhance checks for inexact fields {pull}39427[#39427] (issue: {issue}38501[#38501])
* SQL: change the default precision for CURRENT_TIMESTAMP function {pull}39391[#39391] (issue: {issue}39288[#39288])
* SQL: add "validate.properties" property to JDBC's allowed list of settings {pull}39050[#39050] (issue: {issue}38068[#38068])
* SQL: Allow look-ahead resolution of aliases for WHERE clause {pull}38450[#38450] (issue: {issue}29983[#29983])
@@ -456,6 +461,7 @@ Geo::
* Geo: Do not normalize the longitude with value -180 for Lucene shapes {pull}37299[#37299] (issue: {issue}37297[#37297])

Infra/Core::
* Correct name of basic_date_time_no_millis {pull}39367[#39367]
* Fix DateFormatters.parseMillis when no timezone is given {pull}39100[#39100] (issue: {issue}39067[#39067])
* Prefix java formatter patterns with '8' {pull}38712[#38712] (issue: {issue}38567[#38567])
* Bubble-up exceptions from scheduler {pull}38317[#38317] (issue: {issue}38014[#38014])
@@ -508,6 +514,10 @@ Recovery::
* RecoveryMonitor#lastSeenAccessTime should be volatile {pull}36781[#36781]

SQL::
* SQL: Fix merging of incompatible multi-fields {pull}39560[#39560] (issue: {issue}39547[#39547])
* SQL: fix COUNT DISTINCT column name {pull}39537[#39537] (issue: {issue}39511[#39511])
* SQL: ignore UNSUPPORTED fields for JDBC and ODBC modes in 'SYS COLUMNS' {pull}39518[#39518] (issue: {issue}39471[#39471])
* SQL: Use underlying exact field for LIKE/RLIKE {pull}39443[#39443] (issue: {issue}39442[#39442])
* SQL: enforce JDBC driver - ES server version parity {pull}38972[#38972] (issue: {issue}38775[#38775])
* SQL: fall back to using the field name for column label {pull}38842[#38842] (issue: {issue}38831[#38831])
* SQL: Prevent grouping over grouping functions {pull}38649[#38649] (issue: {issue}38308[#38308])
@@ -531,6 +541,7 @@ SQL::
* SQL: Fix issue with always false filter involving functions {pull}36830[#36830] (issue: {issue}35980[#35980])
* SQL: protocol returns ISO 8601 String formatted dates instead of Long for JDBC/ODBC requests {pull}36800[#36800] (issue: {issue}36756[#36756])
* SQL: Enhance Verifier to prevent aggregate or grouping functions from {pull}36799[#36799] (issue: {issue}36798[#36798])
* SQL: normalized keywords shouldn't be allowed for groupings and sorting [ISSUE] {pull}35203[#35203]

Search::
* Fix simple query string serialization conditional {pull}38960[#38960] (issues: {issue}21504[#21504], {issue}38889[#38889])
@@ -546,7 +557,9 @@ Security::
* Fix potential NPE in UsersTool {pull}37660[#37660]

Snapshot/Restore::
* Fix Concurrent Snapshot Ending And Stabilize Snapshot Finalization {pull}38368[#38368] (issue: {issue}38226[#38226])
* Fix Two Races that Lead to Stuck Snapshots {pull}37686[#37686] (issues: {issue}32265[#32265], {issue}32348[#32348])
* Fix Race in Concurrent Snapshot Delete and Create {pull}37612[#37612] (issue: {issue}37581[#37581])
* Streamline S3 Repository- and Client-Settings {pull}37393[#37393]
* SNAPSHOTS: Upgrade GCS Dependencies to 1.55.0 {pull}36634[#36634] (issues: {issue}35229[#35229], {issue}35459[#35459])

52 changes: 52 additions & 0 deletions docs/reference/settings/ccr-settings.asciidoc
@@ -0,0 +1,52 @@
[role="xpack"]
[[ccr-settings]]
=== {ccr-cap} settings

These {ccr} settings can be dynamically updated on a live cluster with the
<<cluster-update-settings,cluster update settings API>>.

[float]
[[ccr-recovery-settings]]
==== Remote recovery settings

The following setting can be used to rate-limit the data transmitted during
{stack-ov}/remote-recovery.html[remote recoveries]:

`ccr.indices.recovery.max_bytes_per_sec` (<<cluster-update-settings,Dynamic>>)::
Limits the total inbound and outbound remote recovery traffic on each node.
Since this limit applies on each node, but there may be many nodes performing
remote recoveries concurrently, the total amount of remote recovery bytes may be
much higher than this limit. If you set this limit too high then there is a risk
that ongoing remote recoveries will consume an excess of bandwidth (or other
resources) which could destabilize the cluster. This setting is used by both the
leader and follower clusters. For example if it is set to `20mb` on a leader,
the leader will only send `20mb/s` to the follower even if the follower is
requesting and can accept `60mb/s`. Defaults to `40mb`.
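The behaviour described for `ccr.indices.recovery.max_bytes_per_sec` — each side independently throttling to its own configured limit — amounts to a simple pause-based rate limiter. A minimal sketch, assuming a pause-after-send design; this is not Elasticsearch's actual implementation:

```python
import time

class SimpleRateLimiter:
    """Minimal pause-based rate limiter in the spirit of the one behind
    `ccr.indices.recovery.max_bytes_per_sec` (default `40mb`): after
    each chunk is accounted for, report how long to sleep so that the
    average throughput never exceeds the configured limit."""

    def __init__(self, max_bytes_per_sec: int = 40 * 1024 * 1024):
        self.max_bytes_per_sec = max_bytes_per_sec
        self.window_start = time.monotonic()
        self.bytes_sent = 0

    def pause_for(self, chunk_bytes: int) -> float:
        """Return the pause (in seconds) owed after sending `chunk_bytes`."""
        self.bytes_sent += chunk_bytes
        elapsed = time.monotonic() - self.window_start
        # seconds the transfer *should* have taken at the configured rate
        target = self.bytes_sent / self.max_bytes_per_sec
        return max(0.0, target - elapsed)
```

Because the leader throttles to its own limit regardless of what the follower asks for, a leader set to `20mb` sends at most `20mb/s` even to a follower that could accept `60mb/s` — exactly the example above.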

[float]
[[ccr-advanced-recovery-settings]]
==== Advanced remote recovery settings

The following _expert_ settings can be set to manage the resources consumed by
remote recoveries:

`ccr.indices.recovery.max_concurrent_file_chunks` (<<cluster-update-settings,Dynamic>>)::
Controls the number of file chunk requests that can be sent in parallel per
recovery. As multiple remote recoveries might already be running in parallel,
increasing this expert-level setting might only help in situations where remote
recovery of a single shard is not reaching the total inbound and outbound remote
recovery traffic as configured by `ccr.indices.recovery.max_bytes_per_sec`.
Defaults to `5`. The maximum allowed value is `10`.

`ccr.indices.recovery.chunk_size` (<<cluster-update-settings,Dynamic>>)::
Controls the chunk size requested by the follower during file transfer. Defaults to
`1mb`.

`ccr.indices.recovery.recovery_activity_timeout` (<<cluster-update-settings,Dynamic>>)::
Controls the timeout for recovery activity. This timeout primarily applies on
the leader cluster. The leader cluster must open resources in-memory to supply
data to the follower during the recovery process. If the leader does not
receive recovery requests from the follower for this period of time, it will
close the resources. Defaults to 60 seconds.

`ccr.indices.recovery.internal_action_timeout` (<<cluster-update-settings,Dynamic>>)::
Controls the timeout for individual network requests during the remote recovery
process. An individual action timing out can fail the recovery. Defaults to
60 seconds.
3 changes: 2 additions & 1 deletion docs/reference/settings/configuring-xes.asciidoc
@@ -6,7 +6,8 @@
++++

include::{asciidoc-dir}/../../shared/settings.asciidoc[]
include::ccr-settings.asciidoc[]
include::license-settings.asciidoc[]
include::ml-settings.asciidoc[]
include::notification-settings.asciidoc[]
include::sql-settings.asciidoc[]
include::notification-settings.asciidoc[]
