
Commit

Merge commit '550b7c9' into add_cache_read_on_buffer
Tim-Brooks committed May 3, 2024
2 parents 769292d + 550b7c9 commit 5495006
Showing 215 changed files with 5,338 additions and 1,791 deletions.
2 changes: 2 additions & 0 deletions .buildkite/hooks/pre-command.bat
@@ -18,4 +18,6 @@ set JOB_BRANCH=%BUILDKITE_BRANCH%
set GRADLE_BUILD_CACHE_USERNAME=vault read -field=username secret/ci/elastic-elasticsearch/migrated/gradle-build-cache
set GRADLE_BUILD_CACHE_PASSWORD=vault read -field=password secret/ci/elastic-elasticsearch/migrated/gradle-build-cache

+bash.exe -c "nohup bash .buildkite/scripts/setup-monitoring.sh </dev/null >/dev/null 2>&1 &"

exit /b 0
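The `bash.exe` line added above uses a standard detach idiom: `nohup`, redirecting stdin/stdout/stderr away from the console, and a trailing `&` so the monitoring script keeps running after the hook returns. A minimal sketch of the same idiom (the temp file and the `echo` payload are illustrative, not part of the hook):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Start a worker fully detached from this shell's terminal, as the hook does.
outfile=$(mktemp)
nohup bash -c "echo started > '$outfile'" </dev/null >/dev/null 2>&1 &

# The real hook exits immediately after launching; this sketch waits only so
# the worker's side effect can be observed.
wait
cat "$outfile"   # prints: started
```

Redirecting all three standard streams matters on CI: a background child that inherits the hook's pipes can keep them open and stall the build step even after the hook itself exits.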
45 changes: 36 additions & 9 deletions .buildkite/scripts/setup-monitoring.sh
@@ -2,23 +2,50 @@

set -euo pipefail

+AGENT_VERSION="8.10.1"

ELASTIC_AGENT_URL=$(vault read -field=url secret/ci/elastic-elasticsearch/elastic-agent-token)
ELASTIC_AGENT_TOKEN=$(vault read -field=token secret/ci/elastic-elasticsearch/elastic-agent-token)

-if [[ ! -d /opt/elastic-agent ]]; then
-  sudo mkdir /opt/elastic-agent
-  sudo chown -R buildkite-agent:buildkite-agent /opt/elastic-agent
-  cd /opt/elastic-agent
+ELASTIC_AGENT_DIR=/opt/elastic-agent
+IS_WINDOWS=""
+
+# Windows
+if uname -a | grep -q MING; then
+  ELASTIC_AGENT_DIR=/c/elastic-agent
+  IS_WINDOWS="true"
+
+  # Make sudo a no-op on Windows
+  sudo() {
+    "$@"
+  }
+fi

+if [[ ! -d $ELASTIC_AGENT_DIR ]]; then
+  sudo mkdir $ELASTIC_AGENT_DIR
+
-  archive=elastic-agent-8.10.1-linux-x86_64.tar.gz
-  if [ "$(uname -m)" = "arm64" ] || [ "$(uname -m)" = "aarch64" ]; then
-    archive=elastic-agent-8.10.1-linux-arm64.tar.gz
+  if [[ "$IS_WINDOWS" != "true" ]]; then
+    sudo chown -R buildkite-agent:buildkite-agent $ELASTIC_AGENT_DIR
   fi

+  cd $ELASTIC_AGENT_DIR
+
+  archive="elastic-agent-$AGENT_VERSION-linux-x86_64.tar.gz"
+  if [[ "$IS_WINDOWS" == "true" ]]; then
+    archive="elastic-agent-$AGENT_VERSION-windows-x86_64.zip"
+  elif [ "$(uname -m)" = "arm64" ] || [ "$(uname -m)" = "aarch64" ]; then
+    archive="elastic-agent-$AGENT_VERSION-linux-arm64.tar.gz"
+  fi

curl -L -O "https://artifacts.elastic.co/downloads/beats/elastic-agent/$archive"

-  tar xzf "$archive" --directory=. --strip-components=1
+  if [[ "$IS_WINDOWS" == "true" ]]; then
+    unzip "$archive"
+    mv elastic-agent-*/* .
+  else
+    tar xzf "$archive" --directory=. --strip-components=1
+  fi
fi

-cd /opt/elastic-agent
+cd $ELASTIC_AGENT_DIR
sudo ./elastic-agent install -f --url="$ELASTIC_AGENT_URL" --enrollment-token="$ELASTIC_AGENT_TOKEN"
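The reworked script keys all of its branching off a single MINGW check. That selection logic can be sketched in isolation like this (the `is_windows` and `pick_archive` helper names are mine; the version string, archive names, and the `sudo` shim come from the diff):

```shell
#!/usr/bin/env bash
set -euo pipefail

AGENT_VERSION="8.10.1"

# Git Bash / MSYS on Windows reports something like "MINGW64_NT-..." in `uname -a`.
is_windows() {
  uname -a | grep -q MING
}

# Hypothetical helper wrapping the script's archive-name selection.
pick_archive() {
  if is_windows; then
    echo "elastic-agent-$AGENT_VERSION-windows-x86_64.zip"
  elif [ "$(uname -m)" = "arm64" ] || [ "$(uname -m)" = "aarch64" ]; then
    echo "elastic-agent-$AGENT_VERSION-linux-arm64.tar.gz"
  else
    echo "elastic-agent-$AGENT_VERSION-linux-x86_64.tar.gz"
  fi
}

# There is no sudo on Windows, so the script shadows it with a pass-through.
if is_windows; then
  sudo() { "$@"; }
fi

pick_archive
```

On a Linux x86_64 runner this prints the `-linux-x86_64.tar.gz` name; the same branches pick the zip on Windows and the arm64 tarball on aarch64 hosts, which is why the download and extraction steps further down can stay platform-agnostic.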
2 changes: 1 addition & 1 deletion build-tools-internal/version.properties
@@ -14,7 +14,7 @@ log4j = 2.19.0
slf4j = 2.0.6
ecsLogging = 1.2.0
jna = 5.12.1
-netty = 4.1.107.Final
+netty = 4.1.109.Final
commons_lang3 = 3.9
google_oauth_client = 1.34.1

3 changes: 2 additions & 1 deletion distribution/tools/geoip-cli/build.gradle
@@ -17,5 +17,6 @@ dependencies {
compileOnly project(":libs:elasticsearch-cli")
compileOnly project(":libs:elasticsearch-x-content")
testImplementation project(":test:framework")
-testImplementation "org.apache.commons:commons-compress:1.24.0"
+testImplementation "org.apache.commons:commons-compress:1.26.1"
+testImplementation "commons-io:commons-io:2.15.1"
}
5 changes: 5 additions & 0 deletions docs/changelog/107493.yaml
@@ -0,0 +1,5 @@
pr: 107493
summary: Remote cluster - API key security model - cluster privileges
area: Security
type: enhancement
issues: []
6 changes: 6 additions & 0 deletions docs/changelog/108089.yaml
@@ -0,0 +1,6 @@
pr: 108089
summary: "ES|QL: limit query depth to 500 levels"
area: ES|QL
type: bug
issues:
- 107752
6 changes: 6 additions & 0 deletions docs/changelog/108101.yaml
@@ -0,0 +1,6 @@
pr: 108101
summary: "ESQL: Fix error message when failing to resolve aggregate groupings"
area: ES|QL
type: bug
issues:
- 108053
6 changes: 6 additions & 0 deletions docs/changelog/108106.yaml
@@ -0,0 +1,6 @@
pr: 108106
summary: Simulate should succeed if `ignore_missing_pipeline`
area: Ingest Node
type: bug
issues:
- 107314
5 changes: 5 additions & 0 deletions docs/changelog/108144.yaml
@@ -0,0 +1,5 @@
pr: 108144
summary: Bump Tika dependency to 2.9.2
area: Ingest Node
type: upgrade
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/108155.yaml
@@ -0,0 +1,5 @@
pr: 108155
summary: Upgrade to Netty 4.1.109
area: Network
type: upgrade
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/108165.yaml
@@ -0,0 +1,5 @@
pr: 108165
summary: Add `BlockHash` for 3 `BytesRefs`
area: ES|QL
type: enhancement
issues: []
38 changes: 20 additions & 18 deletions docs/reference/ilm/ilm-tutorial.asciidoc
@@ -7,31 +7,33 @@
++++

When you continuously index timestamped documents into {es},
-you typically use a <<data-streams, data stream>> so you can periodically roll over to a
+you typically use a <<data-streams,data stream>> so you can periodically <<index-rollover,roll over>> to a
new index.
-This enables you to implement a hot-warm-cold architecture to meet your performance
+This enables you to implement a <<data-tiers,hot-warm-cold architecture>> to meet your performance
requirements for your newest data, control costs over time, enforce retention policies,
and still get the most out of your data.

-TIP: Data streams are best suited for
+TIP: <<data-streams,Data streams>> are best suited for
<<data-streams-append-only,append-only>> use cases. If you need to update or delete existing time
series data, you can perform update or delete operations directly on the data stream backing index.
If you frequently send multiple documents using the same `_id` expecting last-write-wins, you may
-want to use an index alias with a write index instead. You can still use ILM to manage and rollover
+want to use an index alias with a write index instead. You can still use <<index-lifecycle-management,ILM>> to manage and <<index-rollover,roll over>>
the alias's indices. Skip to <<manage-time-series-data-without-data-streams>>.

[discrete]
[[manage-time-series-data-with-data-streams]]
=== Manage time series data with data streams

To automate rollover and management of a data stream with {ilm-init}, you:

. <<ilm-gs-create-policy, Create a lifecycle policy>> that defines the appropriate
-phases and actions.
-. <<ilm-gs-apply-policy, Create an index template>> to create the data stream and
+<<ilm-index-lifecycle,phases>> and <<ilm-actions,actions>>.
+. <<ilm-gs-apply-policy, Create an index template>> to <<ilm-gs-create-the-data-stream,create the data stream>> and
apply the ILM policy and the indices settings and mappings configurations for the backing
indices.
. <<ilm-gs-check-progress, Verify indices are moving through the lifecycle phases>>
as expected.

+For an introduction to rolling indices, see <<index-rollover>>.

IMPORTANT: When you enable {ilm} for {beats} or the {ls} {es} output plugin,
lifecycle policies are set up automatically.
You do not need to take any other actions.
@@ -41,7 +43,7 @@ or the {ilm-init} APIs.

[discrete]
[[ilm-gs-create-policy]]
-=== Create a lifecycle policy
+==== Create a lifecycle policy

A lifecycle policy specifies the phases in the index lifecycle
and the actions to perform in each phase. A lifecycle can have up to five phases:
@@ -101,7 +103,7 @@ PUT _ilm/policy/timeseries_policy

[discrete]
[[ilm-gs-apply-policy]]
-=== Create an index template to create the data stream and apply the lifecycle policy
+==== Create an index template to create the data stream and apply the lifecycle policy

To set up a data stream, first create an index template to specify the lifecycle policy. Because
the template is for a data stream, it must also include a `data_stream` definition.
@@ -148,7 +150,7 @@ PUT _index_template/timeseries_template

[discrete]
[[ilm-gs-create-the-data-stream]]
-=== Create the data stream
+==== Create the data stream

To get things started, index a document into the name or wildcard pattern defined
in the `index_patterns` of the <<index-templates,index template>>. As long
@@ -184,12 +186,12 @@ stream's write index.
This process repeats each time a rollover condition is met.
You can search across all of the data stream's backing indices, managed by the `timeseries_policy`,
with the `timeseries` data stream name.
-You will point ingest towards the alias which will route write operations to its current write index. Read operations will be handled by all
-backing indices.
+Write operations should be sent to the data stream name, which will route them to its current write index.
+Read operations against the data stream will be handled by all its backing indices.

[discrete]
[[ilm-gs-check-progress]]
-=== Check lifecycle progress
+==== Check lifecycle progress

To get status information for managed indices, you use the {ilm-init} explain API.
This lets you find out things like:
@@ -304,7 +306,7 @@ as expected.

[discrete]
[[ilm-gs-alias-apply-policy]]
-=== Create an index template to apply the lifecycle policy
+==== Create an index template to apply the lifecycle policy

To automatically apply a lifecycle policy to the new write index on rollover,
specify the policy in the index template used to create new indices.
@@ -362,7 +364,7 @@ DELETE _index_template/timeseries_template

[discrete]
[[ilm-gs-alias-bootstrap]]
-=== Bootstrap the initial time series index with a write index alias
+==== Bootstrap the initial time series index with a write index alias

To get things started, you need to bootstrap an initial index and
designate it as the write index for the rollover alias specified in your index template.
@@ -393,11 +395,11 @@ This matches the `timeseries-*` pattern, so the settings from `timeseries_templa

This process repeats each time rollover conditions are met.
You can search across all of the indices managed by the `timeseries_policy` with the `timeseries` alias.
-Write operations are routed to the current write index.
+Write operations should be sent towards the alias, which will route them to its current write index.

[discrete]
[[ilm-gs-alias-check-progress]]
-=== Check lifecycle progress
+==== Check lifecycle progress

Retrieving the status information for managed indices is very similar to the data stream case.
See the data stream <<ilm-gs-check-progress, check progress section>> for more information.
@@ -1,8 +1,6 @@
[[remote-clusters-api-key]]
=== Add remote clusters using API key authentication

-beta::[]

API key authentication enables a local cluster to authenticate itself with a
remote cluster via a <<security-api-create-cross-cluster-api-key,cross-cluster
API key>>. The API key needs to be created by an administrator of the remote
@@ -79,7 +79,6 @@ you configure the remotes.
`cluster.remote.<cluster_alias>.credentials` (<<secure-settings,Secure>>, <<reloadable-secure-settings,Reloadable>>)::
[[remote-cluster-credentials-setting]]

-beta:[]
Per cluster setting for configuring <<remote-clusters-api-key,remote clusters with the API Key based model>>.
This setting takes the encoded value of a
<<security-api-create-cross-cluster-api-key,cross-cluster API key>> and must be set
2 changes: 0 additions & 2 deletions docs/reference/modules/remote-cluster-network.asciidoc
@@ -1,8 +1,6 @@
[[remote-cluster-network-settings]]
==== Advanced remote cluster (API key based model) settings

-beta::[]

Use the following advanced settings to configure the remote cluster interface (API key based model)
independently of the <<transport-settings,transport interface>>. You can also
configure both interfaces together using the <<common-network-settings,network settings>>.
3 changes: 1 addition & 2 deletions docs/reference/modules/remote-clusters.asciidoc
@@ -45,8 +45,7 @@ with either of the connection modes.
==== Security models

API key based security model::
-beta:[]
-For clusters on version 8.10 or later, you can use an API key to authenticate
+For clusters on version 8.14 or later, you can use an API key to authenticate
and authorize cross-cluster operations to a remote cluster. This model offers
administrators of both the local and the remote cluster fine-grained access
controls. <<remote-clusters-api-key>>.
7 changes: 4 additions & 3 deletions docs/reference/release-notes/8.13.0.asciidoc
@@ -7,9 +7,10 @@ Also see <<breaking-changes-8.13,Breaking changes in 8.13>>.
[float]
=== Known issues

-* Cross-cluster searches involving nodes upgraded to 8.13.0 and a coordinator node that is running on
-version 8.12 or earlier can produce duplicate buckets. This occurs when using date_histogram or histogram
-aggregations (issue: {es-issue}108181[#108181]).
+* Searches involving nodes upgraded to 8.13.0 and a coordinator node that is running on version
+8.12 or earlier can produce duplicate buckets when running `date_histogram` or `histogram`
+aggregations. This can happen during a rolling upgrade to 8.13 or while running cross-cluster
+searches. (issue: {es-issue}108181[#108181]).

* Due to a bug in the bundled JDK 22 nodes might crash abruptly under high memory pressure.
We recommend <<jvm-version,downgrading to JDK 21.0.2>> asap to mitigate the issue.
7 changes: 4 additions & 3 deletions docs/reference/release-notes/8.13.1.asciidoc
@@ -6,9 +6,10 @@ Also see <<breaking-changes-8.13,Breaking changes in 8.13>>.
[[bug-8.13.1]]
[float]

-* Cross-cluster searches involving nodes upgraded to 8.13.1 and a coordinator node that is running on
-version 8.12 or earlier can produce duplicate buckets. This occurs when using date_histogram or histogram
-aggregations (issue: {es-issue}108181[#108181]).
+* Searches involving nodes upgraded to 8.13.0 and a coordinator node that is running on version
+8.12 or earlier can produce duplicate buckets when running `date_histogram` or `histogram`
+aggregations. This can happen during a rolling upgrade to 8.13 or while running cross-cluster
+searches. (issue: {es-issue}108181[#108181]).

=== Bug fixes

7 changes: 4 additions & 3 deletions docs/reference/release-notes/8.13.2.asciidoc
@@ -6,9 +6,10 @@ Also see <<breaking-changes-8.13,Breaking changes in 8.13>>.
[[bug-8.13.2]]
[float]

-* Cross-cluster searches involving nodes upgraded to 8.13.2 and a coordinator node that is running on
-version 8.12 or earlier can produce duplicate buckets. This occurs when using date_histogram or histogram
-aggregations (issue: {es-issue}108181[#108181]).
+* Searches involving nodes upgraded to 8.13.0 and a coordinator node that is running on version
+8.12 or earlier can produce duplicate buckets when running `date_histogram` or `histogram`
+aggregations. This can happen during a rolling upgrade to 8.13 or while running cross-cluster
+searches. (issue: {es-issue}108181[#108181]).

=== Bug fixes

@@ -2,8 +2,6 @@
[[security-api-create-cross-cluster-api-key]]
=== Create Cross-Cluster API key API

-beta::[]

++++
<titleabbrev>Create Cross-Cluster API key</titleabbrev>
++++
2 changes: 1 addition & 1 deletion docs/reference/rest-api/security/create-roles.asciidoc
Original file line number Diff line number Diff line change
@@ -74,7 +74,7 @@ that begin with `_` are reserved for system usage.
For more information, see
<<run-as-privilege>>.

-`remote_indices`:: beta:[] (list) A list of remote indices permissions entries.
+`remote_indices`:: (list) A list of remote indices permissions entries.
+
--
NOTE: Remote indices are effective for <<remote-clusters-api-key,remote clusters configured with the API key based model>>.
@@ -2,8 +2,6 @@
[[security-api-update-cross-cluster-api-key]]
=== Update Cross-Cluster API key API

-beta::[]

++++
<titleabbrev>Update Cross-Cluster API key</titleabbrev>
++++
@@ -31,8 +31,7 @@ A role is defined by the following JSON structure:
<4> A list of indices permissions entries. This field is optional (missing `indices`
privileges effectively mean no index level permissions).
<5> A list of application privilege entries. This field is optional.
-<6> beta:[]
-A list of indices permissions entries for
+<6> A list of indices permissions entries for
<<remote-clusters-api-key,remote clusters configured with the API key based model>>.
This field is optional (missing `remote_indices` privileges effectively mean
no index level permissions for any API key based remote clusters).
@@ -165,8 +164,6 @@ no effect, and will not grant any actions in the
[[roles-remote-indices-priv]]
==== Remote indices privileges

-beta::[]

For <<remote-clusters-api-key,remote clusters configured with the API key based model>>, remote indices privileges
can be used to specify desired indices privileges for matching remote clusters. The final
effective index privileges will be an intersection of the remote indices privileges
