Merge remote-tracking branch 'es/main' into rollup/disallow_new_usages

martijnvg committed May 17, 2024
2 parents 3b2b145 + 458e147 commit 2b912b9
Showing 175 changed files with 2,191 additions and 1,090 deletions.
6 changes: 6 additions & 0 deletions docs/changelog/108417.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,6 @@
pr: 108417
summary: Track source for arrays of objects
area: Mapping
type: enhancement
issues:
- 90708
5 changes: 5 additions & 0 deletions docs/changelog/108607.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,5 @@
pr: 108607
summary: Specify parse index when error occurs on multiple datetime parses
area: Infra/Core
type: bug
issues: []
6 changes: 6 additions & 0 deletions docs/changelog/108713.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,6 @@
pr: 108713
summary: Rewrite away type converting functions that do not convert types
area: ES|QL
type: enhancement
issues:
- 107716
5 changes: 5 additions & 0 deletions docs/changelog/108726.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,5 @@
pr: 108726
summary: Allow RA metrics to be reported upon parsing completed or accumulated
area: Infra/Metrics
type: enhancement
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/108761.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,5 @@
pr: 108761
summary: Add some missing timeout params to REST API specs
area: Infra/REST API
type: bug
issues: []
78 changes: 59 additions & 19 deletions docs/reference/esql/esql-kibana.asciidoc
Original file line number Diff line number Diff line change
Expand Up @@ -13,21 +13,28 @@ queries, load the "Sample web logs" sample data set by clicking *Try sample
data* from the {kib} Home, selecting *Other sample data sets*, and clicking *Add
data* on the *Sample web logs* card.

[discrete]
[[esql-kibana-enable]]
=== Enable or disable {esql}

{esql} is enabled by default in {kib}. It can be
disabled using the `enableESQL` setting from the
{kibana-ref}/advanced-options.html[Advanced Settings].

This will hide the {esql} user interface from various applications.
However, users will be able to access existing {esql} artifacts like saved searches and visualizations.

[discrete]
[[esql-kibana-get-started]]
=== Get started with {esql}

// tag::esql-mode[]
To get started with {esql} in Discover, open the main menu and select
*Discover*. Next, from the Data views menu, select *Try ES|QL*.
*Discover*. Next, from the Data views menu, select *Language: ES|QL*.

image::images/esql/esql-data-view-menu.png[align="center",width=33%]
// end::esql-mode[]

The ability to select {esql} from the Data views menu can be enabled and
disabled using the `discover:enableESQL` setting from
{kibana-ref}/advanced-options.html[Advanced Settings].

[discrete]
[[esql-kibana-query-bar]]
=== The query bar
Expand All @@ -47,7 +54,7 @@ A source command can be followed by one or more <<esql-commands,processing
commands>>. In this query, the processing command is <<esql-limit>>. `LIMIT`
limits the number of rows that are retrieved.
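For example, a minimal query of this shape (a sketch against the "Sample web logs" data set) combines a source command with `LIMIT` to retrieve ten rows:

[source,esql]
----
FROM kibana_sample_data_logs
| LIMIT 10
----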

TIP: Click the help icon (image:images/esql/esql-icon-help.svg[]) to open the
TIP: Click the help icon (image:images/esql/esql-icon-help.svg[Static,20]) to open the
in-product reference documentation for all commands and functions.

// tag::autocomplete[]
Expand Down Expand Up @@ -98,6 +105,19 @@ A query may result in warnings, for example when querying an unsupported field
type. When that happens, a warning symbol is shown in the query bar. To see the
detailed warning, expand the query bar, and click *warnings*.

[discrete]
[[esql-kibana-query-history]]
==== Query history

You can reuse your recent {esql} queries in the query bar.
In the query bar, click *Show recent queries*:

image::images/esql/esql-discover-show-recent-query.png[align="center",width=50%]

You can then scroll through your recent queries:

image::images/esql/esql-discover-query-history.png[align="center",width=50%]

[discrete]
[[esql-kibana-results-table]]
=== The results table
Expand Down Expand Up @@ -170,7 +190,7 @@ FROM kibana_sample_data_logs
=== Analyze and visualize data

Between the query bar and the results table, Discover shows a date histogram
visualization. If the indices you're querying do not contain an `@timestamp`
visualization. If the indices you're querying do not contain a `@timestamp`
field, the histogram is not shown.
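A query that groups results over time produces a matching time-based chart. As a sketch (assuming the sample web logs data set and the `BUCKET` grouping function; the bucket count and date range shown are illustrative), the following counts events across 24 buckets:

[source,esql]
----
FROM kibana_sample_data_logs
| STATS count = COUNT(*) BY bucket = BUCKET(@timestamp, 24, "2024-05-01T00:00:00Z", "2024-05-08T00:00:00Z")
----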

The visualization adapts to the query. A query's nature determines the type of
Expand All @@ -189,24 +209,39 @@ The resulting visualization is a bar chart showing the top 3 countries:

image::images/esql/esql-kibana-bar-chart.png[align="center"]
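A query of this shape can produce such a top-3 chart (a sketch; the `geo.dest` field name is assumed from the sample web logs data set):

[source,esql]
----
FROM kibana_sample_data_logs
| STATS count = COUNT(*) BY geo.dest
| SORT count DESC
| LIMIT 3
----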

To change the visualization into another type, click the visualization type
dropdown:

image::images/esql/esql-kibana-visualization-type.png[align="center",width=33%]

To make other changes to the visualization, like the axes and colors, click the
To make changes to the visualization, such as the visualization type, axes, and colors, click the
pencil button (image:images/esql/esql-icon-edit-visualization.svg[]). This opens
an in-line editor:

image::images/esql/esql-kibana-in-line-editor.png[align="center"]
image::images/esql/esql-kibana-in-line-editor.png[align="center",width=66%]

You can save the visualization to a new or existing dashboard by clicking the
save button (image:images/esql/esql-icon-save-visualization.svg[]). Once saved
to a dashboard, you can continue to make changes to visualization. Click the
to a dashboard, you'll be taken to the Dashboards page. You can continue to
make changes to the visualization. Click the
options button in the top-right (image:images/esql/esql-icon-options.svg[]) and
select *Edit ESQL visualization* to open the in-line editor:

image::images/esql/esql-kibana-edit-on-dashboard.png[align="center"]
image::images/esql/esql-kibana-edit-on-dashboard.png[align="center",width=66%]

[discrete]
[[esql-kibana-dashboard-panel]]
==== Add a panel to a dashboard

You can use {esql} queries to create panels on your dashboards.
To add a panel to a dashboard, under *Dashboards*, click the *Add panel* button and select {esql}.

image::images/esql/esql-dashboard-panel.png[align="center",width=50%]

Check the {esql} query by clicking the Panel filters button (image:images/esql/dashboard_panel_filter_button.png[Panel filters button on panel header]):

image::images/esql/esql-dashboard-panel-query.png[align="center",width=50%]

You can also edit the {esql} visualization from here.
Click the options button in the top-right (image:images/esql/esql-icon-options.svg[]) and
select *Edit ESQL visualization* to open the in-line editor.

image::images/esql/esql-dashboard-panel-edit-visualization.png[align="center",width=50%]

[discrete]
[[esql-kibana-enrich]]
Expand All @@ -233,7 +268,14 @@ Finally, click *Create and execute*.

Now, you can use the enrich policy in an {esql} query:

image::images/esql/esql-kibana-enriched-data.png[align="center"]
[source,esql]
----
FROM kibana_sample_data_logs
| STATS total_bytes = SUM(bytes) BY geo.dest
| SORT total_bytes DESC
| LIMIT 3
| ENRICH countries
----

[discrete]
[[esql-kibana-alerting-rule]]
Expand All @@ -254,8 +296,6 @@ image::images/esql/esql-kibana-create-rule.png[align="center",width=50%]
* The user interface to filter data is not enabled when Discover is in {esql}
mode. To filter data, write a query that uses the <<esql-where>> command
instead.
* In {esql} mode, clicking a field in the field list in Discover does not show
quick statistics for that field.
* Discover shows no more than 10,000 rows. This limit only applies to the number
of rows that are retrieved by the query and displayed in Discover. Queries and
aggregations run on the full data set.
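For example, instead of a UI filter, a <<esql-where>> clause narrows the results directly in the query (a sketch; the `geo.dest` field name is assumed from the sample web logs data set):

[source,esql]
----
FROM kibana_sample_data_logs
| WHERE geo.dest == "US"
| LIMIT 10
----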
Expand Down
Binary file modified docs/reference/images/esql/esql-data-view-menu.png
Binary file modified docs/reference/images/esql/esql-expanded-query-bar.png
16 changes: 15 additions & 1 deletion docs/reference/images/esql/esql-icon-help.svg
Binary file modified docs/reference/images/esql/esql-kibana-auto-complete.png
Binary file modified docs/reference/images/esql/esql-kibana-bar-chart.png
Binary file modified docs/reference/images/esql/esql-kibana-edit-on-dashboard.png
Binary file modified docs/reference/images/esql/esql-kibana-enrich-autocomplete.png
Binary file modified docs/reference/images/esql/esql-kibana-in-line-editor.png
39 changes: 21 additions & 18 deletions docs/reference/searchable-snapshots/index.asciidoc
Original file line number Diff line number Diff line change
Expand Up @@ -303,30 +303,33 @@ Because {search-snap} indices are not regular indices, it is not possible to use
a <<snapshots-source-only-repository,source-only repository>> to take snapshots
of {search-snap} indices.

[discrete]
[[searchable-snapshots-reliability]]
=== Reliability of {search-snaps}

[WARNING]
.Reliability of {search-snaps}
====
The sole copy of the data in a {search-snap} index is the underlying snapshot,
stored in the repository. For example:
stored in the repository. If you remove this snapshot, the data will be
permanently lost. Although {es} may have cached some of the data onto local
storage for faster searches, this cached data is incomplete and cannot be used
for recovery if you remove the underlying snapshot. For example:
* You must not unregister a repository while any of the {search-snaps} it
contains are mounted in {es}.
* You must not unregister a repository while any of the searchable snapshots it
contains are mounted in {es}. You also must not delete a snapshot if any of its
indices are mounted as searchable snapshots.
* You must not delete a snapshot if any of its indices are mounted as
{search-snap} indices. The snapshot contains the sole full copy of your data. If
you delete it then the data cannot be recovered from elsewhere.
* If you mount indices from snapshots held in a repository to which a different
cluster has write access then you must make sure that the other cluster does not
delete these snapshots.

* If you delete a snapshot while it is mounted as a searchable snapshot then the
data is lost. Similarly, if the repository fails or corrupts the contents of the
snapshot then the data is lost.

* Although {es} may have cached the data onto local storage, these caches may be
incomplete and cannot be used to recover any data after a repository failure.
You must make sure that your repository is reliable and protects against
corruption of your data while it is at rest in the repository.
delete these snapshots. The snapshot contains the sole full copy of your data.
If you delete it then the data cannot be recovered from elsewhere.
* If the repository fails or corrupts the contents of the snapshot and you
cannot restore it to its previous healthy state then the data is permanently
lost.
+
The blob storage offered by all major public cloud providers typically offers
very good protection against data loss or corruption. If you manage your own
very good protection against failure or corruption. If you manage your own
repository storage then you are responsible for its reliability.
====
Original file line number Diff line number Diff line change
Expand Up @@ -17,6 +17,8 @@
import org.elasticsearch.action.admin.indices.template.put.TransportPutComposableIndexTemplateAction;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.fieldcaps.FieldCapabilitiesRequest;
import org.elasticsearch.action.fieldcaps.FieldCapabilitiesResponse;
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.search.SearchRequest;
Expand Down Expand Up @@ -70,6 +72,7 @@ public class TSDBPassthroughIndexingIT extends ESSingleNodeTestCase {
},
"attributes": {
"type": "passthrough",
"priority": 0,
"dynamic": true,
"time_series_dimension": true
},
Expand Down Expand Up @@ -197,31 +200,20 @@ public void testIndexingGettingAndSearching() throws Exception {
assertMap(attributes.get("pod.ip"), matchesMap().entry("type", "ip").entry("time_series_dimension", true));
assertMap(attributes.get("pod.uid"), matchesMap().entry("type", "keyword").entry("time_series_dimension", true));
assertMap(attributes.get("pod.name"), matchesMap().entry("type", "keyword").entry("time_series_dimension", true));
// alias field mappers:
assertMap(
ObjectPath.eval("properties.metricset", mapping),
matchesMap().entry("type", "alias").entry("path", "attributes.metricset")
);
assertMap(
ObjectPath.eval("properties.number.properties.long", mapping),
matchesMap().entry("type", "alias").entry("path", "attributes.number.long")
);
assertMap(
ObjectPath.eval("properties.number.properties.double", mapping),
matchesMap().entry("type", "alias").entry("path", "attributes.number.double")
);
assertMap(
ObjectPath.eval("properties.pod.properties", mapping),
matchesMap().extraOk().entry("name", matchesMap().entry("type", "alias").entry("path", "attributes.pod.name"))
);
assertMap(
ObjectPath.eval("properties.pod.properties", mapping),
matchesMap().extraOk().entry("uid", matchesMap().entry("type", "alias").entry("path", "attributes.pod.uid"))
);
assertMap(
ObjectPath.eval("properties.pod.properties", mapping),
matchesMap().extraOk().entry("ip", matchesMap().entry("type", "alias").entry("path", "attributes.pod.ip"))
);

FieldCapabilitiesResponse fieldCaps = client().fieldCaps(new FieldCapabilitiesRequest().fields("*").indices("k8s")).actionGet();
assertTrue(fieldCaps.getField("attributes.metricset").get("keyword").isDimension());
assertTrue(fieldCaps.getField("metricset").get("keyword").isDimension());
assertTrue(fieldCaps.getField("attributes.number.long").get("long").isDimension());
assertTrue(fieldCaps.getField("number.long").get("long").isDimension());
assertTrue(fieldCaps.getField("attributes.number.double").get("float").isDimension());
assertTrue(fieldCaps.getField("number.double").get("float").isDimension());
assertTrue(fieldCaps.getField("attributes.pod.ip").get("ip").isDimension());
assertTrue(fieldCaps.getField("pod.ip").get("ip").isDimension());
assertTrue(fieldCaps.getField("attributes.pod.uid").get("keyword").isDimension());
assertTrue(fieldCaps.getField("pod.uid").get("keyword").isDimension());
assertTrue(fieldCaps.getField("attributes.pod.name").get("keyword").isDimension());
assertTrue(fieldCaps.getField("pod.name").get("keyword").isDimension());
}

public void testIndexingGettingAndSearchingShrunkIndex() throws Exception {
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -22,6 +22,7 @@
import java.util.List;

import static org.elasticsearch.rest.RestRequest.Method.PUT;
import static org.elasticsearch.rest.RestUtils.getAckTimeout;
import static org.elasticsearch.rest.RestUtils.getMasterNodeTimeout;

@ServerlessScope(Scope.PUBLIC)
Expand All @@ -43,7 +44,7 @@ protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient cli
PutDataStreamLifecycleAction.Request putLifecycleRequest = PutDataStreamLifecycleAction.Request.parseRequest(parser);
putLifecycleRequest.indices(Strings.splitStringByCommaToArray(request.param("name")));
putLifecycleRequest.masterNodeTimeout(getMasterNodeTimeout(request));
putLifecycleRequest.ackTimeout(request.paramAsTime("timeout", putLifecycleRequest.ackTimeout()));
putLifecycleRequest.ackTimeout(getAckTimeout(request));
putLifecycleRequest.indicesOptions(IndicesOptions.fromRequest(request, putLifecycleRequest.indicesOptions()));
return channel -> client.execute(
PutDataStreamLifecycleAction.INSTANCE,
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -20,6 +20,7 @@
import java.util.List;

import static org.elasticsearch.rest.RestRequest.Method.POST;
import static org.elasticsearch.rest.RestUtils.getAckTimeout;
import static org.elasticsearch.rest.RestUtils.getMasterNodeTimeout;

@ServerlessScope(Scope.PUBLIC)
Expand All @@ -45,7 +46,7 @@ protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient cli
throw new IllegalArgumentException("no data stream actions specified, at least one must be specified");
}
modifyDsRequest.masterNodeTimeout(getMasterNodeTimeout(request));
modifyDsRequest.ackTimeout(request.paramAsTime("timeout", modifyDsRequest.ackTimeout()));
modifyDsRequest.ackTimeout(getAckTimeout(request));
return channel -> client.execute(ModifyDataStreamsAction.INSTANCE, modifyDsRequest, new RestToXContentListener<>(channel));
}

Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -240,7 +240,7 @@ public void setup() throws Exception {
new MetadataFieldMapper[] { dtfm },
Collections.emptyMap()
);
MappingLookup mappingLookup = MappingLookup.fromMappers(mapping, List.of(dtfm, dateFieldMapper), List.of(), List.of());
MappingLookup mappingLookup = MappingLookup.fromMappers(mapping, List.of(dtfm, dateFieldMapper), List.of());
indicesService = DataStreamTestHelper.mockIndicesServices(mappingLookup);
}

Expand Down
