Commit

Merge pull request #244 from elastic/main
🤖 ESQL: Merge upstream
elasticsearchmachine committed Sep 16, 2022
2 parents 04eaa49 + 3603aa7 commit 9d21dfb
Showing 65 changed files with 822 additions and 282 deletions.
2 changes: 1 addition & 1 deletion distribution/docker/build.gradle
@@ -408,7 +408,7 @@ void addBuildEssDockerImageTask(Architecture architecture) {

from(projectDir.resolve("src/docker/Dockerfile.cloud-ess")) {
expand([
base_image: "elasticsearch${DockerBase.CLOUD.suffix}:${VersionProperties.elasticsearch}"
base_image: "elasticsearch${DockerBase.CLOUD.suffix}:${architecture.classifier}"
])
filter SquashNewlinesFilter
rename ~/Dockerfile\.cloud-ess$/, 'Dockerfile'
5 changes: 5 additions & 0 deletions docs/changelog/90064.yaml
@@ -0,0 +1,5 @@
pr: 90064
summary: Add support for predefined char class regexp on wildcard fields
area: Search
type: bug
issues: []
@@ -0,0 +1,40 @@
++++
<div class="tabs" data-tab-group="host">
<div role="tablist" aria-label="Addressing repeated snapshot policy failures">
<button role="tab"
aria-selected="true"
aria-controls="cloud-tab-repeated-snapshot-failures"
id="cloud-repeated-snapshot-failures">
Elasticsearch Service
</button>
<button role="tab"
aria-selected="false"
aria-controls="self-managed-tab-repeated-snapshot-failures"
id="self-managed-repeated-snapshot-failures"
tabindex="-1">
Self-managed
</button>
</div>
<div tabindex="0"
role="tabpanel"
id="cloud-tab-repeated-snapshot-failures"
aria-labelledby="cloud-repeated-snapshot-failures">
++++

include::repeated-snapshot-failures.asciidoc[tag=cloud]

++++
</div>
<div tabindex="0"
role="tabpanel"
id="self-managed-tab-repeated-snapshot-failures"
aria-labelledby="self-managed-repeated-snapshot-failures"
hidden="">
++++

include::repeated-snapshot-failures.asciidoc[tag=self-managed]

++++
</div>
</div>
++++
@@ -0,0 +1,172 @@
// tag::cloud[]
To check the status of failing {slm} policies, go to {kib} and retrieve the
<<slm-api-get-policy, snapshot lifecycle policy information>>.

**Use {kib}**

//tag::kibana-api-ex[]
. Log in to the {ess-console}[{ecloud} console].
+

. On the **Elasticsearch Service** panel, click the name of your deployment.
+

NOTE: If the name of your deployment is disabled, your {kib} instances might be
unhealthy; in that case, contact https://support.elastic.co[Elastic Support].
If your deployment doesn't include {kib}, all you need to do is
{cloud}/ec-access-kibana.html[enable it first].

. Open your deployment's side navigation menu (located under the Elastic logo in the upper-left corner)
and go to **Dev Tools > Console**.
+
[role="screenshot"]
image::images/kibana-console.png[{kib} Console,align="center"]

. <<slm-api-get-policy, Retrieve>> the {slm} policy:
+
[source,console]
----
GET _slm/policy/<affected-policy-name>
----
// TEST[skip:These policies do not exist]
+
The response will look like this:
+
[source,console-result]
----
{
"affected-policy-name": { <1>
"version": 1,
"modified_date": "2099-05-06T01:30:00.000Z",
"modified_date_millis": 4081757400000,
"policy" : {
"schedule": "0 30 1 * * ?",
"name": "<daily-snap-{now/d}>",
"repository": "my_repository",
"config": {
"indices": ["data-*", "important"],
"ignore_unavailable": false,
"include_global_state": false
},
"retention": {
"expire_after": "30d",
"min_count": 5,
"max_count": 50
}
},
"last_success" : {
"snapshot_name" : "daily-snap-2099.05.30-tme_ivjqswgkpryvnao2lg",
"start_time" : 4083782400000,
"time" : 4083782400000
},
"last_failure" : { <2>
"snapshot_name" : "daily-snap-2099.06.16-ywe-kgh5rfqfrpnchvsujq",
"time" : 4085251200000, <3>
"details" : """{"type":"snapshot_exception","reason":"[daily-snap-2099.06.16-ywe-kgh5rfqfrpnchvsujq] failed to create snapshot successfully, 5 out of 149 total shards failed"}""" <4>
},
"stats": {
"policy": "daily-snapshots",
"snapshots_taken": 0,
"snapshots_failed": 0,
"snapshots_deleted": 0,
"snapshot_deletion_failures": 0
},
"next_execution": "2099-06-17T01:30:00.000Z",
"next_execution_millis": 4085343000000
}
}
----
// TESTRESPONSE[skip:the result is for illustrating purposes only]
<1> The affected snapshot lifecycle policy.
<2> The information about the last failure for the policy.
<3> The time when the failure occurred, in milliseconds. Use the `human=true` request parameter to see a formatted timestamp.
<4> Error details containing the reason for the snapshot failure.
+
Snapshots can fail for a variety of reasons. If the failures are due to configuration errors, consult the
documentation for the repository that the automated snapshots are using. If you are using an Elastic Cloud Enterprise
deployment, refer to the
https://www.elastic.co/guide/en/cloud-enterprise/current/ece-manage-repositories.html[guide on managing repositories in ECE].
One common failure scenario is repository corruption. This occurs most often when multiple instances of {es} write to
the same repository location. There is a <<add-repository, separate troubleshooting guide>> to fix this problem.
If snapshots are failing for other reasons, check the logs on the elected master node during the snapshot
execution period for more information.
//end::kibana-api-ex[]
// end::cloud[]
// tag::self-managed[]
<<slm-api-get-policy, Retrieve>> the {slm} policy:
[source,console]
----
GET _slm/policy/<affected-policy-name>
----
// TEST[skip:These policies do not exist]
The response will look like this:
[source,console-result]
----
{
"affected-policy-name": { <1>
"version": 1,
"modified_date": "2099-05-06T01:30:00.000Z",
"modified_date_millis": 4081757400000,
"policy" : {
"schedule": "0 30 1 * * ?",
"name": "<daily-snap-{now/d}>",
"repository": "my_repository",
"config": {
"indices": ["data-*", "important"],
"ignore_unavailable": false,
"include_global_state": false
},
"retention": {
"expire_after": "30d",
"min_count": 5,
"max_count": 50
}
},
"last_success" : {
"snapshot_name" : "daily-snap-2099.05.30-tme_ivjqswgkpryvnao2lg",
"start_time" : 4083782400000,
"time" : 4083782400000
},
"last_failure" : { <2>
"snapshot_name" : "daily-snap-2099.06.16-ywe-kgh5rfqfrpnchvsujq",
"time" : 4085251200000, <3>
"details" : """{"type":"snapshot_exception","reason":"[daily-snap-2099.06.16-ywe-kgh5rfqfrpnchvsujq] failed to create snapshot successfully, 5 out of 149 total shards failed"}""" <4>
},
"stats": {
"policy": "daily-snapshots",
"snapshots_taken": 0,
"snapshots_failed": 0,
"snapshots_deleted": 0,
"snapshot_deletion_failures": 0
},
"next_execution": "2099-06-17T01:30:00.000Z",
"next_execution_millis": 4085343000000
}
}
----
// TESTRESPONSE[skip:the result is for illustrating purposes only]
<1> The affected snapshot lifecycle policy.
<2> The information about the last failure for the policy.
<3> The time when the failure occurred, in milliseconds. Use the `human=true` request parameter to see a formatted timestamp.
<4> Error details containing the reason for the snapshot failure.
Snapshots can fail for a variety of reasons. If the failures are due to configuration errors, consult the
documentation for the repository that the automated snapshots are using.
One common failure scenario is repository corruption. This occurs most often when multiple instances of {es} write to
the same repository location. There is a <<add-repository, separate troubleshooting guide>> to fix this problem.
If snapshots are failing for other reasons, check the logs on the elected master node during the snapshot
execution period for more information.
// end::self-managed[]
5 changes: 4 additions & 1 deletion docs/reference/troubleshooting.asciidoc
@@ -36,6 +36,7 @@ fix problems that an {es} deployment might encounter.
=== Snapshot and restore
* <<restore-from-snapshot,Restore data from snapshot>>
* <<add-repository,Multiple deployments writing to the same snapshot repository>>
* <<repeated-snapshot-failures,Troubleshooting repeated snapshot failures>>

[discrete]
[[troubleshooting-others]]
@@ -97,6 +98,8 @@ include::troubleshooting/data/restore-from-snapshot.asciidoc[]

include::troubleshooting/snapshot/add-repository.asciidoc[]

include::troubleshooting/snapshot/repeated-snapshot-failures.asciidoc[]

include::troubleshooting/discovery-issues.asciidoc[]

include::monitoring/troubleshooting.asciidoc[]
@@ -105,4 +108,4 @@ include::transform/troubleshooting.asciidoc[leveloffset=+1]

include::../../x-pack/docs/en/watcher/troubleshooting.asciidoc[]

include::troubleshooting/troubleshooting-searches.asciidoc[]
include::troubleshooting/troubleshooting-searches.asciidoc[]
@@ -0,0 +1,18 @@
[[repeated-snapshot-failures]]
== Addressing repeated snapshot policy failures

Repeated snapshot failures are usually an indicator of a problem with your deployment. Continuous failures of automated
snapshots can leave a deployment without recovery options in cases of data loss or outages.

{es} keeps track of the number of repeated failures when executing automated snapshots. If an automated
snapshot fails too many times without a successful execution, the health API reports a warning. The number of
repeated failures allowed before a warning is reported is controlled by the
<<slm-health-failed-snapshot-warn-threshold,`slm.health.failed_snapshot_warn_threshold`>> setting.
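
For example, the warning threshold can be adjusted through the cluster settings API. This is an
illustrative sketch only: the value `10` is arbitrary, and it assumes the setting can be updated
dynamically on your version.

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "slm.health.failed_snapshot_warn_threshold": 10
  }
}
----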

If an automated {slm} policy execution is experiencing repeated failures, follow these steps to get more
information about the problem:

include::{es-repo-dir}/tab-widgets/troubleshooting/snapshot/repeated-snapshot-failures-widget.asciidoc[]



10 changes: 10 additions & 0 deletions docs/reference/upgrade.asciidoc
@@ -1,6 +1,16 @@
[[setup-upgrade]]
= Upgrade {es}

ifeval::["{release-state}"!="released"]
[[upgrade-pre-release]]
IMPORTANT: This documentation is for {es} {version}, which is not yet released.
You can upgrade from a previously released version to a pre-release build if you
follow a supported upgrade path. Upgrading from a pre-release build to any
other build is not supported, and can result in errors or silent data loss. If
you run a pre-release build for testing, discard the contents of the cluster
before upgrading to another build of {es}.
endif::[]

{es} clusters can usually be upgraded one node at a time so upgrading does not
interrupt service. For upgrade instructions, refer to
{stack-ref}/upgrading-elastic-stack.html[Upgrading to Elastic {version}].
@@ -117,11 +117,7 @@ static Map<String, IndexFieldCapabilities> retrieveFieldCaps(
if (filter.test(ft)) {
IndexFieldCapabilities fieldCap = new IndexFieldCapabilities(
field,
// This is a nasty hack so that we expose aggregate_metric_double field,
// when the index is a time series index and the field is marked as metric.
// This code should be reverted once PR https://github.com/elastic/elasticsearch/pull/87849
// is merged.
isTimeSeriesIndex && ft.getMetricType() != null ? ft.typeName() : ft.familyTypeName(),
ft.familyTypeName(),
context.isMetadataField(field),
ft.isSearchable(),
ft.isAggregatable(),
@@ -729,8 +729,10 @@ public Builder customs(Map<String, Custom> customs) {
return this;
}

public Builder fromDiff(boolean fromDiff) {
this.fromDiff = fromDiff;
// set previous cluster state that this builder is created from during diff application
private Builder fromDiff(ClusterState previous) {
this.fromDiff = true;
this.previous = previous;
return this;
}

@@ -901,7 +903,7 @@ public ClusterState apply(ClusterState state) {
builder.metadata(metadata.apply(state.metadata));
builder.blocks(blocks.apply(state.blocks));
builder.customs(customs.apply(state.customs));
builder.fromDiff(true);
builder.fromDiff(state);
return builder.build();
}
}
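
For illustration only (not part of this commit), a minimal sketch of the diff round-trip that ends in the
`builder.fromDiff(state)` call above; it assumes two `ClusterState` instances obtained elsewhere and the
Elasticsearch server classes on the classpath:

[source,java]
----
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.Diff;

class ClusterStateDiffSketch {
    // Compute a diff against a base state and apply it back to that base state.
    // After this change, the applied state also records, via the private
    // fromDiff(previous) builder method, which state it was derived from.
    static ClusterState roundTrip(ClusterState previousState, ClusterState newState) {
        Diff<ClusterState> diff = newState.diff(previousState); // serializes only what changed
        return diff.apply(previousState);                       // reconstructs newState from previousState
    }
}
----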
@@ -235,6 +235,7 @@ public Alias(AliasMetadata aliasMetadata, List<IndexMetadata> indexMetadatas) {
}
isSystem = isSystem && imd.isSystem();
}
this.referenceIndices.sort(Index.COMPARE_BY_NAME);

if (widx == null && indexMetadatas.size() == 1 && indexMetadatas.get(0).getAliases().get(aliasName).writeIndex() == null) {
widx = indexMetadatas.get(0).getIndex();
@@ -1250,6 +1250,10 @@ public Metadata apply(Metadata part) {
builder.templates(templates.apply(part.templates));
builder.customs(customs.apply(part.customs));
builder.put(reservedStateMetadata.apply(part.reservedStateMetadata));
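// Reuse the previous indices lookup when this diff changed neither the indices map nor the data stream metadata.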
if (part.indices == updatedIndices
&& builder.dataStreamMetadata() == part.custom(DataStreamMetadata.TYPE, DataStreamMetadata.EMPTY)) {
builder.previousIndicesLookup = part.indicesLookup;
}
return builder.build(true);
}
}
4 changes: 4 additions & 0 deletions server/src/main/java/org/elasticsearch/index/Index.java
@@ -19,6 +19,7 @@
import org.elasticsearch.xcontent.XContentParser;

import java.io.IOException;
import java.util.Comparator;
import java.util.Objects;

/**
@@ -27,6 +28,9 @@
public class Index implements Writeable, ToXContentObject {

public static final Index[] EMPTY_ARRAY = new Index[0];

public static Comparator<Index> COMPARE_BY_NAME = Comparator.comparing(Index::getName);

private static final String INDEX_UUID_KEY = "index_uuid";
private static final String INDEX_NAME_KEY = "index_name";
private static final ObjectParser<Builder, Void> INDEX_PARSER = new ObjectParser<>("index", Builder::new);
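
For illustration only (not part of this commit), a minimal sketch of the new comparator providing the
deterministic, name-based ordering that the `Alias` constructor above now relies on; the index names and
the `_na_` placeholder UUIDs are hypothetical, and the Elasticsearch server jar is assumed on the classpath:

[source,java]
----
import org.elasticsearch.index.Index;

import java.util.ArrayList;
import java.util.List;

public class SortIndicesByName {
    public static void main(String[] args) {
        List<Index> indices = new ArrayList<>(List.of(
            new Index("logs-2", "_na_"),
            new Index("logs-1", "_na_")
        ));
        indices.sort(Index.COMPARE_BY_NAME);                            // sort by index name
        indices.forEach(index -> System.out.println(index.getName()));  // prints logs-1, then logs-2
    }
}
----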
