From 5a43c9b2afdb1af3596764886301f5403df7e93d Mon Sep 17 00:00:00 2001 From: Colleen McGinnis Date: Wed, 5 Mar 2025 17:41:09 -0600 Subject: [PATCH] fix external links --- .../cloud-enterprise/ece-software-prereq.md | 2 +- .../advanced-elasticsearch-node-scheduling.md | 2 +- .../cloud-on-k8s/create-custom-images.md | 6 +-- .../cloud-on-k8s/deploy-eck-on-openshift.md | 2 +- .../k8s-openshift-deploy-elasticsearch.md | 2 +- .../k8s-openshift-deploy-operator.md | 2 +- .../cloud-on-k8s/nodes-orchestration.md | 2 +- deploy-manage/deploy/self-managed.md | 2 +- .../self-managed/executable-jna-tmpdir.md | 2 +- .../important-settings-configuration.md | 2 +- ...stall-elasticsearch-with-debian-package.md | 2 +- .../install-elasticsearch-with-docker.md | 2 +- .../install-elasticsearch-with-rpm.md | 2 +- .../maintenance/ece/pause-instance.md | 2 +- .../start-stop-elasticsearch.md | 4 +- .../elasticsearch-deprecation-logs.md | 26 +++++----- ...search-log4j-configuration-self-managed.md | 26 +++++----- .../logging-configuration/kibana-logging.md | 2 +- .../update-elasticsearch-logging-levels.md | 26 +++++----- .../approximate-knn-search.md | 2 +- .../optimize-performance/search-speed.md | 2 +- ...g-cipher-suites-for-stronger-encryption.md | 2 +- ...nt-with-customer-managed-encryption-key.md | 22 ++++---- .../snapshot-and-restore/azure-repository.md | 4 +- .../snapshot-and-restore/cloud-on-k8s.md | 2 +- .../ec-aws-custom-repository.md | 4 +- .../ece-aws-custom-repository.md | 2 +- .../minio-on-premise-repository.md | 2 +- .../snapshot-and-restore/s3-repository.md | 50 +++++++++---------- .../prepare-to-upgrade/index-compatibility.md | 2 +- .../_snippets/org-vs-deploy-sso.md | 4 +- .../cluster-or-deployment-auth/kerberos.md | 2 +- .../openid-connect.md | 4 +- 33 files changed, 110 insertions(+), 110 deletions(-) diff --git a/deploy-manage/deploy/cloud-enterprise/ece-software-prereq.md b/deploy-manage/deploy/cloud-enterprise/ece-software-prereq.md index 3ac08bad9f..b2b59652e2 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-software-prereq.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-software-prereq.md @@ -22,7 +22,7 @@ We recommend using kernel 4.15.x or later on Ubuntu. To check your kernel version, run `uname -r`. ::::{note} -Elastic Cloud Enterprise is not supported on Linux distributions that use [cgroups](https://man7.org/linux/man-pages/man7/cgroups.7.md) version 2. +Elastic Cloud Enterprise is not supported on Linux distributions that use [cgroups](https://man7.org/linux/man-pages/man7/cgroups.7.html) version 2. :::: diff --git a/deploy-manage/deploy/cloud-on-k8s/advanced-elasticsearch-node-scheduling.md b/deploy-manage/deploy/cloud-on-k8s/advanced-elasticsearch-node-scheduling.md index 8e0e8d2c87..f53377b5ca 100644 --- a/deploy-manage/deploy/cloud-on-k8s/advanced-elasticsearch-node-scheduling.md +++ b/deploy-manage/deploy/cloud-on-k8s/advanced-elasticsearch-node-scheduling.md @@ -22,7 +22,7 @@ You can combine these features to deploy a production-grade Elasticsearch cluste You can configure Elasticsearch nodes with [one or multiple roles](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/node-settings.md). ::::{tip} -You can use [YAML anchors](https://yaml.org/spec/1.2/spec.md#id2765878) to declare the configuration change once and reuse it across all the node sets. +You can use [YAML anchors](https://yaml.org/spec/1.2/spec.html#id2765878) to declare the configuration change once and reuse it across all the node sets. 
:::: diff --git a/deploy-manage/deploy/cloud-on-k8s/create-custom-images.md b/deploy-manage/deploy/cloud-on-k8s/create-custom-images.md index 8067831804..59e1a7ebea 100644 --- a/deploy-manage/deploy/cloud-on-k8s/create-custom-images.md +++ b/deploy-manage/deploy/cloud-on-k8s/create-custom-images.md @@ -44,7 +44,7 @@ Providing the correct version is always required as ECK reasons about APIs and c :::: -The steps are similar for [Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/tutorial-kubernetes-prepare-acr) and [AWS Elastic Container Registry](https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-basics.md#use-ecr). +The steps are similar for [Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/tutorial-kubernetes-prepare-acr) and [AWS Elastic Container Registry](https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-basics.html#use-ecr). If your custom images follow the naming convention adopted by the official images, and you only want to use your custom images, you can also simply [override the container registry](air-gapped-install.md#k8s-container-registry-override). @@ -53,6 +53,6 @@ For more information, check the following references: * [Elasticsearch documentation on Using custom Docker images](/deploy-manage/deploy/self-managed/install-elasticsearch-with-docker.md#_c_customized_image) * [Google Container Registry](https://cloud.google.com/container-registry/docs/how-to) * [Azure Container Registry](https://docs.microsoft.com/en-us/azure/container-registry/) -* [Amazon Elastic Container Registry](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.md) -* [OpenShift Container Platform registry](https://docs.openshift.com/container-platform/4.12/registry/index.md) +* [Amazon Elastic Container Registry](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) +* [OpenShift Container Platform registry](https://docs.openshift.com/container-platform/4.12/registry/index.html) diff --git a/deploy-manage/deploy/cloud-on-k8s/deploy-eck-on-openshift.md b/deploy-manage/deploy/cloud-on-k8s/deploy-eck-on-openshift.md index 59c9736aeb..887cb60276 100644 --- a/deploy-manage/deploy/cloud-on-k8s/deploy-eck-on-openshift.md +++ b/deploy-manage/deploy/cloud-on-k8s/deploy-eck-on-openshift.md @@ -23,7 +23,7 @@ This section shows how to run ECK on OpenShift. 1. To run the instructions on this page, you must be a `system:admin` user or a user with the privileges to create Projects, CRDs, and RBAC resources at the cluster level. 2. Set virtual memory settings on the Kubernetes nodes. - Before deploying an Elasticsearch cluster with ECK, make sure that the Kubernetes nodes in your cluster have the correct `vm.max_map_count` sysctl setting applied. By default, Pods created by ECK are likely to run with the `restricted` [Security Context Constraint](https://docs.openshift.com/container-platform/4.12/authentication/managing-security-context-constraints.md) (SCC) which restricts privileged access required to change this setting in the underlying Kubernetes nodes. + Before deploying an Elasticsearch cluster with ECK, make sure that the Kubernetes nodes in your cluster have the correct `vm.max_map_count` sysctl setting applied. 
By default, Pods created by ECK are likely to run with the `restricted` [Security Context Constraint](https://docs.openshift.com/container-platform/4.12/authentication/managing-security-context-constraints.html) (SCC) which restricts privileged access required to change this setting in the underlying Kubernetes nodes.

    Alternatively, you can opt for setting `node.store.allow_mmap: false` at the [Elasticsearch node configuration](node-configuration.md) level. This has performance implications and is not recommended for production workloads.

diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-elasticsearch.md b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-elasticsearch.md
index 7b2be08264..853be06ca6 100644
--- a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-elasticsearch.md
+++ b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-elasticsearch.md
@@ -11,7 +11,7 @@ mapped_pages:

 Use the following code to create an Elasticsearch cluster `elasticsearch-sample` and a "passthrough" route to access it:

 ::::{note}
-A namespace other than the default namespaces (default, kube-system, kube-**, openshift-**, etc) is required such that default [Security Context Constraint](https://docs.openshift.com/container-platform/4.12/authentication/managing-security-context-constraints.md) (SCC) permissions are applied automatically. Elastic resources will not work properly in any of the default namespaces.
+A namespace other than the default namespaces (default, kube-system, kube-\*, openshift-\*, etc) is required such that default [Security Context Constraint](https://docs.openshift.com/container-platform/4.12/authentication/managing-security-context-constraints.html) (SCC) permissions are applied automatically. Elastic resources will not work properly in any of the default namespaces.
 ::::
diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-operator.md b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-operator.md
index 90ce478b6c..70482bb90e 100644
--- a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-operator.md
+++ b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-operator.md
@@ -25,7 +25,7 @@ This page shows the installation steps to deploy ECK in Openshift:

 3. Create a namespace to hold the Elastic resources (Elasticsearch, Kibana, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash):

    ::::{note}
-   A namespace other than the default namespaces (default, kube-\*, openshift-\*, etc) is required such that default [Security Context Constraint](https://docs.openshift.com/container-platform/4.12/authentication/managing-security-context-constraints.md) (SCC) permissions are applied automatically. Elastic resources will not work properly in any of the default namespaces.
+   A namespace other than the default namespaces (default, kube-\*, openshift-\*, etc) is required such that default [Security Context Constraint](https://docs.openshift.com/container-platform/4.12/authentication/managing-security-context-constraints.html) (SCC) permissions are applied automatically. Elastic resources will not work properly in any of the default namespaces.
::::

   ```sh
diff --git a/deploy-manage/deploy/cloud-on-k8s/nodes-orchestration.md b/deploy-manage/deploy/cloud-on-k8s/nodes-orchestration.md
index 11384843e2..ac99ae6c59 100644
--- a/deploy-manage/deploy/cloud-on-k8s/nodes-orchestration.md
+++ b/deploy-manage/deploy/cloud-on-k8s/nodes-orchestration.md
@@ -21,7 +21,7 @@ This section covers the following topics:

 NodeSets are used to specify the topology of the Elasticsearch cluster. Each NodeSet represents a group of Elasticsearch nodes that share the same Elasticsearch configuration and Kubernetes Pod configuration.

 ::::{tip}
-You can use [YAML anchors](https://yaml.org/spec/1.2/spec.md#id2765878) to declare the configuration change once and reuse it across all the node sets.
+You can use [YAML anchors](https://yaml.org/spec/1.2/spec.html#id2765878) to declare the configuration change once and reuse it across all the node sets.
 ::::

diff --git a/deploy-manage/deploy/self-managed.md b/deploy-manage/deploy/self-managed.md
index 8f69e5ac21..04391772ac 100644
--- a/deploy-manage/deploy/self-managed.md
+++ b/deploy-manage/deploy/self-managed.md
@@ -5,4 +5,4 @@ mapped_pages:

 # Self-managed cluster [dependencies-versions]

-See [Elastic Stack Third-party Dependencices](https://artifacts.elastic.co/reports/dependencies/dependencies-current.md) for the complete list of dependencies for {{es}}.
\ No newline at end of file
+See [Elastic Stack Third-party Dependencies](https://artifacts.elastic.co/reports/dependencies/dependencies-current.html) for the complete list of dependencies for {{es}}.
\ No newline at end of file
diff --git a/deploy-manage/deploy/self-managed/executable-jna-tmpdir.md b/deploy-manage/deploy/self-managed/executable-jna-tmpdir.md
index 40847b987b..c3d9cef3e8 100644
--- a/deploy-manage/deploy/self-managed/executable-jna-tmpdir.md
+++ b/deploy-manage/deploy/self-managed/executable-jna-tmpdir.md
@@ -33,7 +33,7 @@ To resolve these problems, either remove the `noexec` option from your `/tmp` fi

 If you need finer control over the location of these temporary files, you can also configure the path that JNA uses with the [JVM flag](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/jvm-settings.md#set-jvm-options) `-Djna.tmpdir=<path>` and you can configure the path that `libffi` uses for its temporary files by setting the `LIBFFI_TMPDIR` environment variable. Future versions of {{es}} may need additional configuration, so you should prefer to set `ES_TMPDIR` wherever possible.

 ::::{note}
-{{es}} does not remove its temporary directory. You should remove leftover temporary directories while {{es}} is not running. It is best to do this automatically, for instance on each reboot. If you are running on Linux, you can achieve this by using the [tmpfs](https://www.kernel.org/doc/html/latest/filesystems/tmpfs.md) file system.
+{{es}} does not remove its temporary directory. You should remove leftover temporary directories while {{es}} is not running. It is best to do this automatically, for instance on each reboot. If you are running on Linux, you can achieve this by using the [tmpfs](https://www.kernel.org/doc/html/latest/filesystems/tmpfs.html) file system.
:::: diff --git a/deploy-manage/deploy/self-managed/important-settings-configuration.md b/deploy-manage/deploy/self-managed/important-settings-configuration.md index 398a0166a6..8b6b288908 100644 --- a/deploy-manage/deploy/self-managed/important-settings-configuration.md +++ b/deploy-manage/deploy/self-managed/important-settings-configuration.md @@ -184,7 +184,7 @@ By default, {{es}} enables garbage collection (GC) logs. These are configured in You can reconfigure JVM logging using the command line options described in [JEP 158: Unified JVM Logging](https://openjdk.java.net/jeps/158). Unless you change the default `jvm.options` file directly, the {{es}} default configuration is applied in addition to your own settings. To disable the default configuration, first disable logging by supplying the `-Xlog:disable` option, then supply your own command line options. This disables *all* JVM logging, so be sure to review the available options and enable everything that you require. -To see further options not contained in the original JEP, see [Enable Logging with the JVM Unified Logging Framework](https://docs.oracle.com/en/java/javase/13/docs/specs/man/java.md#enable-logging-with-the-jvm-unified-logging-framework). +To see further options not contained in the original JEP, see [Enable Logging with the JVM Unified Logging Framework](https://docs.oracle.com/en/java/javase/13/docs/specs/man/java.html#enable-logging-with-the-jvm-unified-logging-framework). ### Examples [_examples] diff --git a/deploy-manage/deploy/self-managed/install-elasticsearch-with-debian-package.md b/deploy-manage/deploy/self-managed/install-elasticsearch-with-debian-package.md index dab8ea2e55..f91553579f 100644 --- a/deploy-manage/deploy/self-managed/install-elasticsearch-with-debian-package.md +++ b/deploy-manage/deploy/self-managed/install-elasticsearch-with-debian-package.md @@ -173,7 +173,7 @@ To list journal entries for the elasticsearch service starting from a given time sudo journalctl --unit elasticsearch --since "2016-10-30 18:17:16" ``` -Check `man journalctl` or [https://www.freedesktop.org/software/systemd/man/journalctl.html](https://www.freedesktop.org/software/systemd/man/journalctl.md) for more command line options. +Check `man journalctl` or [https://www.freedesktop.org/software/systemd/man/journalctl.html](https://www.freedesktop.org/software/systemd/man/journalctl.html) for more command line options. ::::{admonition} Startup timeouts with older `systemd` versions :class: tip diff --git a/deploy-manage/deploy/self-managed/install-elasticsearch-with-docker.md b/deploy-manage/deploy/self-managed/install-elasticsearch-with-docker.md index 4d53003d00..f3ca14670c 100644 --- a/deploy-manage/deploy/self-managed/install-elasticsearch-with-docker.md +++ b/deploy-manage/deploy/self-managed/install-elasticsearch-with-docker.md @@ -397,7 +397,7 @@ vm.max_map_count = 262144 By default, {{es}} runs inside the container as user `elasticsearch` using uid:gid `1000:0`. ::::{important} -One exception is [Openshift](https://docs.openshift.com/container-platform/3.6/creating_images/guidelines.md#openshift-specific-guidelines), which runs containers using an arbitrarily assigned user ID. Openshift presents persistent volumes with the gid set to `0`, which works without any adjustments. +One exception is [Openshift](https://docs.openshift.com/container-platform/3.6/creating_images/guidelines.html#openshift-specific-guidelines), which runs containers using an arbitrarily assigned user ID. 
Openshift presents persistent volumes with the gid set to `0`, which works without any adjustments. :::: diff --git a/deploy-manage/deploy/self-managed/install-elasticsearch-with-rpm.md b/deploy-manage/deploy/self-managed/install-elasticsearch-with-rpm.md index 47010a4254..9ebea21cf7 100644 --- a/deploy-manage/deploy/self-managed/install-elasticsearch-with-rpm.md +++ b/deploy-manage/deploy/self-managed/install-elasticsearch-with-rpm.md @@ -177,7 +177,7 @@ To list journal entries for the elasticsearch service starting from a given time sudo journalctl --unit elasticsearch --since "2016-10-30 18:17:16" ``` -Check `man journalctl` or [https://www.freedesktop.org/software/systemd/man/journalctl.html](https://www.freedesktop.org/software/systemd/man/journalctl.md) for more command line options. +Check `man journalctl` or [https://www.freedesktop.org/software/systemd/man/journalctl.html](https://www.freedesktop.org/software/systemd/man/journalctl.html) for more command line options. ::::{admonition} Startup timeouts with older `systemd` versions :class: tip diff --git a/deploy-manage/maintenance/ece/pause-instance.md b/deploy-manage/maintenance/ece/pause-instance.md index 3f51c58d80..b84ef6ac67 100644 --- a/deploy-manage/maintenance/ece/pause-instance.md +++ b/deploy-manage/maintenance/ece/pause-instance.md @@ -10,7 +10,7 @@ applies_to: If an individual instance is experiencing issues, then you can stop it by selecting **Pause instance** from its menu. -Pausing an instance immediately suspends the container without completing existing requests by running either [Docker `stop`](https://docs.docker.com/reference/cli/docker/container/stop/) or [Podman `stop`](https://docs.podman.io/en/stable/markdown/podman-stop.1.md), as applicable. The instance will then be marked as **Paused**. +Pausing an instance immediately suspends the container without completing existing requests by running either [Docker `stop`](https://docs.docker.com/reference/cli/docker/container/stop/) or [Podman `stop`](https://docs.podman.io/en/stable/markdown/podman-stop.1.html), as applicable. The instance will then be marked as **Paused**. You can start an instance by selecting **Resume instance** from the menu. diff --git a/deploy-manage/maintenance/start-stop-services/start-stop-elasticsearch.md b/deploy-manage/maintenance/start-stop-services/start-stop-elasticsearch.md index 27cd822ad2..57d09abb4f 100644 --- a/deploy-manage/maintenance/start-stop-services/start-stop-elasticsearch.md +++ b/deploy-manage/maintenance/start-stop-services/start-stop-elasticsearch.md @@ -156,7 +156,7 @@ To list journal entries for the elasticsearch service starting from a given time sudo journalctl --unit elasticsearch --since "2016-10-30 18:17:16" ``` -Check `man journalctl` or [https://www.freedesktop.org/software/systemd/man/journalctl.html](https://www.freedesktop.org/software/systemd/man/journalctl.md) for more command line options. +Check `man journalctl` or [https://www.freedesktop.org/software/systemd/man/journalctl.html](https://www.freedesktop.org/software/systemd/man/journalctl.html) for more command line options. 
::::{admonition} Startup timeouts with older `systemd` versions :class: tip @@ -240,7 +240,7 @@ To list journal entries for the elasticsearch service starting from a given time sudo journalctl --unit elasticsearch --since "2016-10-30 18:17:16" ``` -Check `man journalctl` or [https://www.freedesktop.org/software/systemd/man/journalctl.html](https://www.freedesktop.org/software/systemd/man/journalctl.md) for more command line options. +Check `man journalctl` or [https://www.freedesktop.org/software/systemd/man/journalctl.html](https://www.freedesktop.org/software/systemd/man/journalctl.html) for more command line options. ::::{admonition} Startup timeouts with older `systemd` versions :class: tip diff --git a/deploy-manage/monitor/logging-configuration/elasticsearch-deprecation-logs.md b/deploy-manage/monitor/logging-configuration/elasticsearch-deprecation-logs.md index 934d23e214..d1d86b254e 100644 --- a/deploy-manage/monitor/logging-configuration/elasticsearch-deprecation-logs.md +++ b/deploy-manage/monitor/logging-configuration/elasticsearch-deprecation-logs.md @@ -49,9 +49,9 @@ Files in `%ES_HOME%` risk deletion during an upgrade. In production, we strongly If you run {{es}} from the command line, {{es}} prints logs to the standard output (`stdout`). -## Logging configuration [logging-configuration] +## Logging configuration [logging-configuration] -::::{important} +::::{important} Elastic strongly recommends using the Log4j 2 configuration that is shipped by default. :::: @@ -115,7 +115,7 @@ appender.rolling_old.filePattern = ${sys:es.logs.base_path}${sys:file.separator} 1. The configuration for `old style` pattern appenders. These logs will be saved in `*.log` files and if archived will be in `* .log.gz` files. Note that these should be considered deprecated and will be removed in the future. -::::{note} +::::{note} Log4j’s configuration parsing gets confused by any extraneous whitespace; if you copy and paste any Log4j settings on this page, or enter any Log4j configuration in general, be sure to trim any leading and trailing whitespace. :::: @@ -143,10 +143,10 @@ appender.rolling.strategy.action.condition.nested_condition.age = 7D <7> 7. Retain logs for seven days -Multiple configuration files can be loaded (in which case they will get merged) as long as they are named `log4j2.properties` and have the Elasticsearch config directory as an ancestor; this is useful for plugins that expose additional loggers. The logger section contains the java packages and their corresponding log level. The appender section contains the destinations for the logs. Extensive information on how to customize logging and all the supported appenders can be found on the [Log4j documentation](https://logging.apache.org/log4j/2.x/manual/configuration.md). +Multiple configuration files can be loaded (in which case they will get merged) as long as they are named `log4j2.properties` and have the Elasticsearch config directory as an ancestor; this is useful for plugins that expose additional loggers. The logger section contains the java packages and their corresponding log level. The appender section contains the destinations for the logs. Extensive information on how to customize logging and all the supported appenders can be found on the [Log4j documentation](https://logging.apache.org/log4j/2.x/manual/configuration.html). 
-## Configuring logging levels [configuring-logging-levels] +## Configuring logging levels [configuring-logging-levels] Log4J 2 log messages include a *level* field, which is one of the following (in order of increasing verbosity): @@ -205,18 +205,18 @@ Other ways to change log levels include: This is most appropriate when you already need to change your Log4j 2 configuration for other reasons. For example, you may want to send logs for a particular logger to another file. However, these use cases are rare. -::::{important} +::::{important} {{es}}'s application logs are intended for humans to read and interpret. Different versions of {{es}} may report information in these logs in different ways, perhaps adding extra detail, removing unnecessary information, formatting the same information in different ways, renaming the logger or adjusting the log level for specific messages. Do not rely on the contents of the application logs remaining precisely the same between versions. :::: -::::{note} +::::{note} To prevent leaking sensitive information in logs, {{es}} suppresses certain log messages by default even at the highest verbosity levels. To disable this protection on a node, set the Java system property `es.insecure_network_trace_enabled` to `true`. This feature is primarily intended for test systems which do not contain any sensitive information. If you set this property on a system which contains sensitive information, you must protect your logs from unauthorized access. :::: -## Deprecation logging [deprecation-logging] +## Deprecation logging [deprecation-logging] {{es}} also writes deprecation logs to the log directory. These logs record a message when you use deprecated {{es}} functionality. You can use the deprecation logs to update your application before upgrading {{es}} to a new major version. @@ -263,12 +263,12 @@ You can identify what is triggering deprecated functionality if `X-Opaque-Id` wa Deprecation logs can be indexed into `.logs-deprecation.elasticsearch-default` data stream `cluster.deprecation_indexing.enabled` setting is set to true. -### Deprecation logs throttling [_deprecation_logs_throttling] +### Deprecation logs throttling [_deprecation_logs_throttling] -Deprecation logs are deduplicated based on a deprecated feature key and x-opaque-id so that if a feature is repeatedly used, it will not overload the deprecation logs. This applies to both indexed deprecation logs and logs emitted to log files. You can disable the use of `x-opaque-id` in throttling by changing `cluster.deprecation_indexing.x_opaque_id_used.enabled` to false, refer to this class [javadoc](https://snapshots.elastic.co/javadoc/org/elasticsearch/elasticsearch/9.0.0-beta1-SNAPSHOT/org.elasticsearch.server/org/elasticsearch/common/logging/RateLimitingFilter.md) for more details. +Deprecation logs are deduplicated based on a deprecated feature key and x-opaque-id so that if a feature is repeatedly used, it will not overload the deprecation logs. This applies to both indexed deprecation logs and logs emitted to log files. You can disable the use of `x-opaque-id` in throttling by changing `cluster.deprecation_indexing.x_opaque_id_used.enabled` to false, refer to this class [javadoc](https://artifacts.elastic.co/javadoc/org/elasticsearch/elasticsearch/8.17.3/org.elasticsearch.server/org/elasticsearch/common/logging/RateLimitingFilter.html) for more details. -## JSON log format [json-logging] +## JSON log format [json-logging] To make parsing Elasticsearch logs easier, logs are now printed in a JSON format. 
This is configured by a Log4J layout property `appender.rolling.layout.type = ECSJsonLayout`. This layout requires a `dataset` attribute to be set which is used to distinguish logs streams when parsing. @@ -277,9 +277,9 @@ appender.rolling.layout.type = ECSJsonLayout appender.rolling.layout.dataset = elasticsearch.server ``` -Each line contains a single JSON document with the properties configured in `ECSJsonLayout`. See this class [javadoc](https://snapshots.elastic.co/javadoc/org/elasticsearch/elasticsearch/9.0.0-beta1-SNAPSHOT/org.elasticsearch.server/org/elasticsearch/common/logging/ESJsonLayout.md) for more details. However if a JSON document contains an exception, it will be printed over multiple lines. The first line will contain regular properties and subsequent lines will contain the stacktrace formatted as a JSON array. +Each line contains a single JSON document with the properties configured in `ECSJsonLayout`. See this class [javadoc](https://artifacts.elastic.co/javadoc/org/elasticsearch/elasticsearch/8.17.3/org.elasticsearch.server/org/elasticsearch/common/logging/ESJsonLayout.html) for more details. However if a JSON document contains an exception, it will be printed over multiple lines. The first line will contain regular properties and subsequent lines will contain the stacktrace formatted as a JSON array. -::::{note} +::::{note} You can still use your own custom layout. To do that replace the line `appender.rolling.layout.type` with a different layout. See sample below: :::: diff --git a/deploy-manage/monitor/logging-configuration/elasticsearch-log4j-configuration-self-managed.md b/deploy-manage/monitor/logging-configuration/elasticsearch-log4j-configuration-self-managed.md index 65817eec2e..4277d3ca26 100644 --- a/deploy-manage/monitor/logging-configuration/elasticsearch-log4j-configuration-self-managed.md +++ b/deploy-manage/monitor/logging-configuration/elasticsearch-log4j-configuration-self-managed.md @@ -46,9 +46,9 @@ Files in `%ES_HOME%` risk deletion during an upgrade. In production, we strongly If you run {{es}} from the command line, {{es}} prints logs to the standard output (`stdout`). -## Logging configuration [logging-configuration] +## Logging configuration [logging-configuration] -::::{important} +::::{important} Elastic strongly recommends using the Log4j 2 configuration that is shipped by default. :::: @@ -112,7 +112,7 @@ appender.rolling_old.filePattern = ${sys:es.logs.base_path}${sys:file.separator} 1. The configuration for `old style` pattern appenders. These logs will be saved in `*.log` files and if archived will be in `* .log.gz` files. Note that these should be considered deprecated and will be removed in the future. -::::{note} +::::{note} Log4j’s configuration parsing gets confused by any extraneous whitespace; if you copy and paste any Log4j settings on this page, or enter any Log4j configuration in general, be sure to trim any leading and trailing whitespace. :::: @@ -140,10 +140,10 @@ appender.rolling.strategy.action.condition.nested_condition.age = 7D <7> 7. Retain logs for seven days -Multiple configuration files can be loaded (in which case they will get merged) as long as they are named `log4j2.properties` and have the Elasticsearch config directory as an ancestor; this is useful for plugins that expose additional loggers. The logger section contains the java packages and their corresponding log level. The appender section contains the destinations for the logs. 
Extensive information on how to customize logging and all the supported appenders can be found on the [Log4j documentation](https://logging.apache.org/log4j/2.x/manual/configuration.md). +Multiple configuration files can be loaded (in which case they will get merged) as long as they are named `log4j2.properties` and have the Elasticsearch config directory as an ancestor; this is useful for plugins that expose additional loggers. The logger section contains the java packages and their corresponding log level. The appender section contains the destinations for the logs. Extensive information on how to customize logging and all the supported appenders can be found on the [Log4j documentation](https://logging.apache.org/log4j/2.x/manual/configuration.html). -## Configuring logging levels [configuring-logging-levels] +## Configuring logging levels [configuring-logging-levels] Log4J 2 log messages include a *level* field, which is one of the following (in order of increasing verbosity): @@ -202,18 +202,18 @@ Other ways to change log levels include: This is most appropriate when you already need to change your Log4j 2 configuration for other reasons. For example, you may want to send logs for a particular logger to another file. However, these use cases are rare. -::::{important} +::::{important} {{es}}'s application logs are intended for humans to read and interpret. Different versions of {{es}} may report information in these logs in different ways, perhaps adding extra detail, removing unnecessary information, formatting the same information in different ways, renaming the logger or adjusting the log level for specific messages. Do not rely on the contents of the application logs remaining precisely the same between versions. :::: -::::{note} +::::{note} To prevent leaking sensitive information in logs, {{es}} suppresses certain log messages by default even at the highest verbosity levels. To disable this protection on a node, set the Java system property `es.insecure_network_trace_enabled` to `true`. This feature is primarily intended for test systems which do not contain any sensitive information. If you set this property on a system which contains sensitive information, you must protect your logs from unauthorized access. :::: -## Deprecation logging [deprecation-logging] +## Deprecation logging [deprecation-logging] {{es}} also writes deprecation logs to the log directory. These logs record a message when you use deprecated {{es}} functionality. You can use the deprecation logs to update your application before upgrading {{es}} to a new major version. @@ -260,12 +260,12 @@ You can identify what is triggering deprecated functionality if `X-Opaque-Id` wa Deprecation logs can be indexed into `.logs-deprecation.elasticsearch-default` data stream `cluster.deprecation_indexing.enabled` setting is set to true. -### Deprecation logs throttling [_deprecation_logs_throttling] +### Deprecation logs throttling [_deprecation_logs_throttling] -Deprecation logs are deduplicated based on a deprecated feature key and x-opaque-id so that if a feature is repeatedly used, it will not overload the deprecation logs. This applies to both indexed deprecation logs and logs emitted to log files. 
You can disable the use of `x-opaque-id` in throttling by changing `cluster.deprecation_indexing.x_opaque_id_used.enabled` to false, refer to this class [javadoc](https://snapshots.elastic.co/javadoc/org/elasticsearch/elasticsearch/9.0.0-beta1-SNAPSHOT/org.elasticsearch.server/org/elasticsearch/common/logging/RateLimitingFilter.md) for more details. +Deprecation logs are deduplicated based on a deprecated feature key and x-opaque-id so that if a feature is repeatedly used, it will not overload the deprecation logs. This applies to both indexed deprecation logs and logs emitted to log files. You can disable the use of `x-opaque-id` in throttling by changing `cluster.deprecation_indexing.x_opaque_id_used.enabled` to false, refer to this class [javadoc](https://artifacts.elastic.co/javadoc/org/elasticsearch/elasticsearch/8.17.3/org.elasticsearch.server/org/elasticsearch/common/logging/RateLimitingFilter.html) for more details. -## JSON log format [json-logging] +## JSON log format [json-logging] To make parsing Elasticsearch logs easier, logs are now printed in a JSON format. This is configured by a Log4J layout property `appender.rolling.layout.type = ECSJsonLayout`. This layout requires a `dataset` attribute to be set which is used to distinguish logs streams when parsing. @@ -274,9 +274,9 @@ appender.rolling.layout.type = ECSJsonLayout appender.rolling.layout.dataset = elasticsearch.server ``` -Each line contains a single JSON document with the properties configured in `ECSJsonLayout`. See this class [javadoc](https://snapshots.elastic.co/javadoc/org/elasticsearch/elasticsearch/9.0.0-beta1-SNAPSHOT/org.elasticsearch.server/org/elasticsearch/common/logging/ESJsonLayout.md) for more details. However if a JSON document contains an exception, it will be printed over multiple lines. The first line will contain regular properties and subsequent lines will contain the stacktrace formatted as a JSON array. +Each line contains a single JSON document with the properties configured in `ECSJsonLayout`. See this class [javadoc](https://artifacts.elastic.co/javadoc/org/elasticsearch/elasticsearch/8.17.3/org.elasticsearch.server/org/elasticsearch/common/logging/ESJsonLayout.html) for more details. However if a JSON document contains an exception, it will be printed over multiple lines. The first line will contain regular properties and subsequent lines will contain the stacktrace formatted as a JSON array. -::::{note} +::::{note} You can still use your own custom layout. To do that replace the line `appender.rolling.layout.type` with a different layout. See sample below: :::: diff --git a/deploy-manage/monitor/logging-configuration/kibana-logging.md b/deploy-manage/monitor/logging-configuration/kibana-logging.md index c7708a20ef..e0dd1e2649 100644 --- a/deploy-manage/monitor/logging-configuration/kibana-logging.md +++ b/deploy-manage/monitor/logging-configuration/kibana-logging.md @@ -53,7 +53,7 @@ There are two types of layout supported at the moment: [`pattern`](#pattern-layo With `pattern` layout it’s possible to define a string pattern with special placeholders `%conversion_pattern` that will be replaced with data from the actual log message. By default the following pattern is used: `[%date][%level][%logger] %message`. ::::{note} -The `pattern` layout uses a sub-set of [log4j 2 pattern syntax](https://logging.apache.org/log4j/2.x/manual/layouts.md#PatternLayout) and **doesn’t implement** all `log4j 2` capabilities. 
+The `pattern` layout uses a sub-set of [log4j 2 pattern syntax](https://logging.apache.org/log4j/2.x/manual/layouts.html#PatternLayout) and **doesn’t implement** all `log4j 2` capabilities. :::: diff --git a/deploy-manage/monitor/logging-configuration/update-elasticsearch-logging-levels.md b/deploy-manage/monitor/logging-configuration/update-elasticsearch-logging-levels.md index f99f769ed1..63bd147452 100644 --- a/deploy-manage/monitor/logging-configuration/update-elasticsearch-logging-levels.md +++ b/deploy-manage/monitor/logging-configuration/update-elasticsearch-logging-levels.md @@ -49,9 +49,9 @@ Files in `%ES_HOME%` risk deletion during an upgrade. In production, we strongly If you run {{es}} from the command line, {{es}} prints logs to the standard output (`stdout`). -## Logging configuration [logging-configuration] +## Logging configuration [logging-configuration] -::::{important} +::::{important} Elastic strongly recommends using the Log4j 2 configuration that is shipped by default. :::: @@ -115,7 +115,7 @@ appender.rolling_old.filePattern = ${sys:es.logs.base_path}${sys:file.separator} 1. The configuration for `old style` pattern appenders. These logs will be saved in `*.log` files and if archived will be in `* .log.gz` files. Note that these should be considered deprecated and will be removed in the future. -::::{note} +::::{note} Log4j’s configuration parsing gets confused by any extraneous whitespace; if you copy and paste any Log4j settings on this page, or enter any Log4j configuration in general, be sure to trim any leading and trailing whitespace. :::: @@ -143,10 +143,10 @@ appender.rolling.strategy.action.condition.nested_condition.age = 7D <7> 7. Retain logs for seven days -Multiple configuration files can be loaded (in which case they will get merged) as long as they are named `log4j2.properties` and have the Elasticsearch config directory as an ancestor; this is useful for plugins that expose additional loggers. The logger section contains the java packages and their corresponding log level. The appender section contains the destinations for the logs. Extensive information on how to customize logging and all the supported appenders can be found on the [Log4j documentation](https://logging.apache.org/log4j/2.x/manual/configuration.md). +Multiple configuration files can be loaded (in which case they will get merged) as long as they are named `log4j2.properties` and have the Elasticsearch config directory as an ancestor; this is useful for plugins that expose additional loggers. The logger section contains the java packages and their corresponding log level. The appender section contains the destinations for the logs. Extensive information on how to customize logging and all the supported appenders can be found on the [Log4j documentation](https://logging.apache.org/log4j/2.x/manual/configuration.html). -## Configuring logging levels [configuring-logging-levels] +## Configuring logging levels [configuring-logging-levels] Log4J 2 log messages include a *level* field, which is one of the following (in order of increasing verbosity): @@ -205,18 +205,18 @@ Other ways to change log levels include: This is most appropriate when you already need to change your Log4j 2 configuration for other reasons. For example, you may want to send logs for a particular logger to another file. However, these use cases are rare. -::::{important} +::::{important} {{es}}'s application logs are intended for humans to read and interpret. 
Different versions of {{es}} may report information in these logs in different ways, perhaps adding extra detail, removing unnecessary information, formatting the same information in different ways, renaming the logger or adjusting the log level for specific messages. Do not rely on the contents of the application logs remaining precisely the same between versions. :::: -::::{note} +::::{note} To prevent leaking sensitive information in logs, {{es}} suppresses certain log messages by default even at the highest verbosity levels. To disable this protection on a node, set the Java system property `es.insecure_network_trace_enabled` to `true`. This feature is primarily intended for test systems which do not contain any sensitive information. If you set this property on a system which contains sensitive information, you must protect your logs from unauthorized access. :::: -## Deprecation logging [deprecation-logging] +## Deprecation logging [deprecation-logging] {{es}} also writes deprecation logs to the log directory. These logs record a message when you use deprecated {{es}} functionality. You can use the deprecation logs to update your application before upgrading {{es}} to a new major version. @@ -263,12 +263,12 @@ You can identify what is triggering deprecated functionality if `X-Opaque-Id` wa Deprecation logs can be indexed into `.logs-deprecation.elasticsearch-default` data stream `cluster.deprecation_indexing.enabled` setting is set to true. -### Deprecation logs throttling [_deprecation_logs_throttling] +### Deprecation logs throttling [_deprecation_logs_throttling] -Deprecation logs are deduplicated based on a deprecated feature key and x-opaque-id so that if a feature is repeatedly used, it will not overload the deprecation logs. This applies to both indexed deprecation logs and logs emitted to log files. You can disable the use of `x-opaque-id` in throttling by changing `cluster.deprecation_indexing.x_opaque_id_used.enabled` to false, refer to this class [javadoc](https://snapshots.elastic.co/javadoc/org/elasticsearch/elasticsearch/9.0.0-beta1-SNAPSHOT/org.elasticsearch.server/org/elasticsearch/common/logging/RateLimitingFilter.md) for more details. +Deprecation logs are deduplicated based on a deprecated feature key and x-opaque-id so that if a feature is repeatedly used, it will not overload the deprecation logs. This applies to both indexed deprecation logs and logs emitted to log files. You can disable the use of `x-opaque-id` in throttling by changing `cluster.deprecation_indexing.x_opaque_id_used.enabled` to false, refer to this class [javadoc](https://artifacts.elastic.co/javadoc/org/elasticsearch/elasticsearch/8.17.3/org.elasticsearch.server/org/elasticsearch/common/logging/RateLimitingFilter.html) for more details. -## JSON log format [json-logging] +## JSON log format [json-logging] To make parsing Elasticsearch logs easier, logs are now printed in a JSON format. This is configured by a Log4J layout property `appender.rolling.layout.type = ECSJsonLayout`. This layout requires a `dataset` attribute to be set which is used to distinguish logs streams when parsing. @@ -277,9 +277,9 @@ appender.rolling.layout.type = ECSJsonLayout appender.rolling.layout.dataset = elasticsearch.server ``` -Each line contains a single JSON document with the properties configured in `ECSJsonLayout`. 
See this class [javadoc](https://snapshots.elastic.co/javadoc/org/elasticsearch/elasticsearch/9.0.0-beta1-SNAPSHOT/org.elasticsearch.server/org/elasticsearch/common/logging/ESJsonLayout.md) for more details. However if a JSON document contains an exception, it will be printed over multiple lines. The first line will contain regular properties and subsequent lines will contain the stacktrace formatted as a JSON array. +Each line contains a single JSON document with the properties configured in `ECSJsonLayout`. See this class [javadoc](https://artifacts.elastic.co/javadoc/org/elasticsearch/elasticsearch/8.17.3/org.elasticsearch.server/org/elasticsearch/common/logging/ESJsonLayout.html) for more details. However if a JSON document contains an exception, it will be printed over multiple lines. The first line will contain regular properties and subsequent lines will contain the stacktrace formatted as a JSON array. -::::{note} +::::{note} You can still use your own custom layout. To do that replace the line `appender.rolling.layout.type` with a different layout. See sample below: :::: diff --git a/deploy-manage/production-guidance/optimize-performance/approximate-knn-search.md b/deploy-manage/production-guidance/optimize-performance/approximate-knn-search.md index 3c493cb70f..858e36783e 100644 --- a/deploy-manage/production-guidance/optimize-performance/approximate-knn-search.md +++ b/deploy-manage/production-guidance/optimize-performance/approximate-knn-search.md @@ -107,7 +107,7 @@ Search can cause a lot of randomized read I/O. When the underlying block device Most Linux distributions use a sensible readahead value of `128KiB` for a single plain device, however, when using software raid, LVM or dm-crypt the resulting block device (backing Elasticsearch [path.data](../../deploy/self-managed/important-settings-configuration.md#path-settings)) may end up having a very large readahead value (in the range of several MiB). This usually results in severe page (filesystem) cache thrashing adversely affecting search (or [update](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-document)) performance. -You can check the current value in `KiB` using `lsblk -o NAME,RA,MOUNTPOINT,TYPE,SIZE`. Consult the documentation of your distribution on how to alter this value (for example with a `udev` rule to persist across reboots, or via [blockdev --setra](https://man7.org/linux/man-pages/man8/blockdev.8.md) as a transient setting). We recommend a value of `128KiB` for readahead. +You can check the current value in `KiB` using `lsblk -o NAME,RA,MOUNTPOINT,TYPE,SIZE`. Consult the documentation of your distribution on how to alter this value (for example with a `udev` rule to persist across reboots, or via [blockdev --setra](https://man7.org/linux/man-pages/man8/blockdev.8.html) as a transient setting). We recommend a value of `128KiB` for readahead. ::::{warning} `blockdev` expects values in 512 byte sectors whereas `lsblk` reports values in `KiB`. As an example, to temporarily set readahead to `128KiB` for `/dev/nvme0n1`, specify `blockdev --setra 256 /dev/nvme0n1`. diff --git a/deploy-manage/production-guidance/optimize-performance/search-speed.md b/deploy-manage/production-guidance/optimize-performance/search-speed.md index a2eb961719..a732bed112 100644 --- a/deploy-manage/production-guidance/optimize-performance/search-speed.md +++ b/deploy-manage/production-guidance/optimize-performance/search-speed.md @@ -17,7 +17,7 @@ Search can cause a lot of randomized read I/O. 
When the underlying block device

 Most Linux distributions use a sensible readahead value of `128KiB` for a single plain device, however, when using software raid, LVM or dm-crypt the resulting block device (backing Elasticsearch [path.data](../../deploy/self-managed/important-settings-configuration.md#path-settings)) may end up having a very large readahead value (in the range of several MiB). This usually results in severe page (filesystem) cache thrashing adversely affecting search (or [update](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-document)) performance.

-You can check the current value in `KiB` using `lsblk -o NAME,RA,MOUNTPOINT,TYPE,SIZE`. Consult the documentation of your distribution on how to alter this value (for example with a `udev` rule to persist across reboots, or via [blockdev --setra](https://man7.org/linux/man-pages/man8/blockdev.8.md) as a transient setting). We recommend a value of `128KiB` for readahead.
+You can check the current value in `KiB` using `lsblk -o NAME,RA,MOUNTPOINT,TYPE,SIZE`. Consult the documentation of your distribution on how to alter this value (for example with a `udev` rule to persist across reboots, or via [blockdev --setra](https://man7.org/linux/man-pages/man8/blockdev.8.html) as a transient setting). We recommend a value of `128KiB` for readahead.

 ::::{warning}
 `blockdev` expects values in 512 byte sectors whereas `lsblk` reports values in `KiB`. As an example, to temporarily set readahead to `128KiB` for `/dev/nvme0n1`, specify `blockdev --setra 256 /dev/nvme0n1`.
diff --git a/deploy-manage/security/enabling-cipher-suites-for-stronger-encryption.md b/deploy-manage/security/enabling-cipher-suites-for-stronger-encryption.md
index 67232ef966..25ee8bd123 100644
--- a/deploy-manage/security/enabling-cipher-suites-for-stronger-encryption.md
+++ b/deploy-manage/security/enabling-cipher-suites-for-stronger-encryption.md
@@ -7,7 +7,7 @@ mapped_pages:

 The TLS and SSL protocols use a cipher suite that determines the strength of encryption used to protect the data. You may want to increase the strength of encryption used when using a Oracle JVM; the IcedTea OpenJDK ships without these restrictions in place. This step is not required to successfully use encrypted communication.

-The *Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files* enable the use of additional cipher suites for Java in a separate JAR file that you need to add to your Java installation. You can download this JAR file from Oracle’s [download page](http://www.oracle.com/technetwork/java/javase/downloads/index.md). The *JCE Unlimited Strength Jurisdiction Policy Files`* are required for encryption with key lengths greater than 128 bits, such as 256-bit AES encryption.
+The *Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files* enable the use of additional cipher suites for Java in a separate JAR file that you need to add to your Java installation. You can download this JAR file from Oracle’s [download page](http://www.oracle.com/technetwork/java/javase/downloads/index.html). The *JCE Unlimited Strength Jurisdiction Policy Files* are required for encryption with key lengths greater than 128 bits, such as 256-bit AES encryption.

 After installation, all cipher suites in the JCE are available for use but requires configuration in order to use them.
To enable the use of stronger cipher suites with {{es}} {{security-features}}, configure the [`cipher_suites` parameter](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#ssl-tls-settings).
diff --git a/deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md b/deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md
index b7030b9beb..d1336ab9e1 100644
--- a/deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md
+++ b/deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md
@@ -34,7 +34,7 @@ When a deployment encrypted with a customer-managed key is deleted or terminated

:::::::{tab-set}
::::::{tab-item} AWS
-* Have permissions on AWS KMS to [create a symmetric AWS KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.md#symmetric-cmks) and to configure AWS IAM roles.
+* Have permissions on AWS KMS to [create a symmetric AWS KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#symmetric-cmks) and to configure AWS IAM roles.
* Consider the cloud regions where you need your deployment to live. Refer to the [list of available regions, deployment templates, and instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md) supported by Elastic Cloud.
::::::

@@ -67,7 +67,7 @@ At this time, the following features are not supported:
* Disabling encryption on a deployment
* AWS:

-  * Encrypting deployments using keys from [key stores external to AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/keystore-external.md)
+  * Encrypting deployments using keys from [key stores external to AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/keystore-external.html)

* Azure:

@@ -85,7 +85,7 @@ At this time, the following features are not supported:
:::::::{tab-set}
::::::{tab-item} AWS
-1. Create a symmetric [single-region key](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.md) or [multi-region replica key](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-replicate.md). The key must be available in each region in which you have deployments to encrypt. You can use the same key to encrypt multiple deployments. Later, you will need to provide the Amazon Resource Name (ARN) of that key or key alias to Elastic Cloud.
+1. Create a symmetric [single-region key](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html) or [multi-region replica key](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-replicate.html). The key must be available in each region in which you have deployments to encrypt. You can use the same key to encrypt multiple deployments. Later, you will need to provide the Amazon Resource Name (ARN) of that key or key alias to Elastic Cloud.

::::{note}
Use an alias ARN instead of the key ARN itself if you plan on doing manual key rotations. When using a key ARN directly, only automatic rotations are supported.
@@ -116,12 +116,12 @@ At this time, the following features are not supported:
}
```

- 1. [kms:Decrypt](https://docs.aws.amazon.com/kms/latest/APIReference/API_Decrypt.md) - This operation is used to decrypt data encryption keys stored on the deployment’s host, as well as decrypting snapshots stored in S3.
- 2. [kms:Encrypt](https://docs.aws.amazon.com/kms/latest/APIReference/API_Encrypt.md) - This operation is used to encrypt the data encryption keys generated by the KMS as well as encrypting your snapshots.
- 3. [kms:GetKeyRotationStatus](https://docs.aws.amazon.com/kms/latest/APIReference/API_GetKeyRotationStatus.md) - This operation is used to determine whether automatic key rotation is enabled.
- 4. [kms:GenerateDataKey](https://docs.aws.amazon.com/kms/latest/APIReference/API_GenerateDataKey.md) - This operation is used to generate a data encryption key along with an encrypted version of it. The system leverages the randomness provided by the KMS to produce the data encryption key and your actual customer-managed key to encrypt the data encryption key.
- 5. [kms:DescribeKey](https://docs.aws.amazon.com/kms/latest/APIReference/API_DescribeKey.md) - This operation is used to check whether your key is properly configured for Elastic Cloud. In addition, Elastic Cloud uses this to check if a manual key rotation was performed by comparing underlying key IDs associated with an alias.
- 6. This condition allows the accounts associated with the Elastic Cloud production infrastructure to access your key. Under typical circumstances, Elastic Cloud will only be accessing your key via two AWS accounts: the account your deployment’s host is in and the account your S3 bucket containing snapshots is in. However, determining these particular account IDs prior to the deployment creation is not possible at the moment. This encompasses all of the possibilities. For more on this, check the [AWS documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.md#condition-keys-principalorgpaths).
+ 1. [kms:Decrypt](https://docs.aws.amazon.com/kms/latest/APIReference/API_Decrypt.html) - This operation is used to decrypt data encryption keys stored on the deployment’s host, as well as decrypting snapshots stored in S3.
+ 2. [kms:Encrypt](https://docs.aws.amazon.com/kms/latest/APIReference/API_Encrypt.html) - This operation is used to encrypt the data encryption keys generated by the KMS as well as encrypting your snapshots.
+ 3. [kms:GetKeyRotationStatus](https://docs.aws.amazon.com/kms/latest/APIReference/API_GetKeyRotationStatus.html) - This operation is used to determine whether automatic key rotation is enabled.
+ 4. [kms:GenerateDataKey](https://docs.aws.amazon.com/kms/latest/APIReference/API_GenerateDataKey.html) - This operation is used to generate a data encryption key along with an encrypted version of it. The system leverages the randomness provided by the KMS to produce the data encryption key and your actual customer-managed key to encrypt the data encryption key.
+ 5. [kms:DescribeKey](https://docs.aws.amazon.com/kms/latest/APIReference/API_DescribeKey.html) - This operation is used to check whether your key is properly configured for Elastic Cloud. In addition, Elastic Cloud uses this to check if a manual key rotation was performed by comparing underlying key IDs associated with an alias.
+ 6. This condition allows the accounts associated with the Elastic Cloud production infrastructure to access your key. Under typical circumstances, Elastic Cloud will only be accessing your key via two AWS accounts: the account your deployment’s host is in and the account your S3 bucket containing snapshots is in. However, determining these particular account IDs prior to the deployment creation is not possible at the moment, so this condition covers all of the accounts that Elastic Cloud might use. For more on this, check the [AWS documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-principalorgpaths).

::::::
::::::{tab-item} Azure
@@ -171,7 +171,7 @@ Provide your key identifier without the key version identifier so Elastic Cloud
  * Choose a **cloud region** and a **deployment template** (also called hardware profile) for your deployment from the [list of available regions, deployment templates, and instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md).
  * [Get a valid Elastic Cloud API key](/deploy-manage/api-keys/elastic-cloud-api-keys.md) with the **Organization owner** role or the **Admin** role on deployments. These roles allow you to create new deployments.
-  * Get the ARN of the symmetric AWS KMS key or of its alias. Use an alias if you are planning to do manual key rotations as specified in the [AWS documentation](https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.md).
+  * Get the ARN of the symmetric AWS KMS key or of its alias. Use an alias if you are planning to do manual key rotations as specified in the [AWS documentation](https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html).
  * Use these parameters to create a new deployment with the [Elastic Cloud API](https://www.elastic.co/docs/api/doc/cloud/group/endpoint-deployments). For example:

    ```bash
@@ -364,7 +364,7 @@ You can check that your hosted deployment is correctly encrypted with the key yo
::::::{tab-item} AWS
Elastic Cloud will automatically rotate the keys every 31 days as a security best practice.

-You can also trigger a manual rotation [in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.md), which will take effect in Elastic Cloud within 30 minutes. **For manual rotations to work, you must use an alias when creating the deployment. We do not currently support [on-demand rotations](https://docs.aws.amazon.com/kms/latest/APIReference/API_RotateKeyOnDemand.md) but plan on supporting this in the future.**
+You can also trigger a manual rotation [in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html), which will take effect in Elastic Cloud within 30 minutes. **For manual rotations to work, you must use an alias when creating the deployment. We do not currently support [on-demand rotations](https://docs.aws.amazon.com/kms/latest/APIReference/API_RotateKeyOnDemand.html) but plan on supporting this in the future.**
::::::

::::::{tab-item} Azure
diff --git a/deploy-manage/tools/snapshot-and-restore/azure-repository.md b/deploy-manage/tools/snapshot-and-restore/azure-repository.md
index 002dd76cc8..6dd3fe6d12 100644
--- a/deploy-manage/tools/snapshot-and-restore/azure-repository.md
+++ b/deploy-manage/tools/snapshot-and-restore/azure-repository.md
@@ -103,7 +103,7 @@ The following list describes the available client settings. Those that must be s
: A shared access signature (SAS) token, which the repository’s internal Azure client uses for authentication. The SAS token must have read (r), write (w), list (l), and delete (d) permissions for the repository base path and all its contents. These permissions must be granted for the blob service (b) and apply to resource types service (s), container (c), and object (o). Alternatively, use `key`.
`azure.client.CLIENT_NAME.timeout`
-: The client side timeout for any single request to Azure, as a [time unit](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/rest-apis/api-conventions.md#time-units). For example, a value of `5s` specifies a 5 second timeout. There is no default value, which means that {{es}} uses the [default value](https://azure.github.io/azure-storage-java/com/microsoft/azure/storage/RequestOptions.md#setTimeoutIntervalInMs(java.lang.Integer)) set by the Azure client.
+: The client side timeout for any single request to Azure, as a [time unit](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/rest-apis/api-conventions.md#time-units). For example, a value of `5s` specifies a 5 second timeout. There is no default value, which means that {{es}} uses the [default value](https://azure.github.io/azure-storage-java/com/microsoft/azure/storage/RequestOptions.html#setTimeoutIntervalInMs(java.lang.Integer)) set by the Azure client.

`azure.client.CLIENT_NAME.endpoint`
: The Azure endpoint to connect to. It must include the protocol used to connect to Azure.

@@ -120,7 +120,7 @@ If you specify neither the `key` nor the `sas_token` settings for a client then
When running {{es}} on an [Azure Virtual Machine](https://azure.microsoft.com/en-gb/products/virtual-machines), you should use [Azure Managed Identity](https://learn.microsoft.com/en-us/entra/identity/managed-identities-azure-resources/overview) to provide credentials to {{es}}. To use Azure Managed Identity, assign a suitably authorized identity to the Azure Virtual Machine on which {{es}} is running.

-When running {{es}} in [Azure Kubernetes Service](https://azure.microsoft.com/en-gb/products/kubernetes-service), for instance using [{{eck}}](cloud-on-k8s.md#k8s-azure-workload-identity), you should use [Azure Workload Identity](https://azure.github.io/azure-workload-identity/docs/introduction.md) to provide credentials to {{es}}. To use Azure Workload Identity, mount the `azure-identity-token` volume as a subdirectory of the [{{es}} config directory](../../deploy/self-managed/configure-elasticsearch.md#config-files-location) and set the `AZURE_FEDERATED_TOKEN_FILE` environment variable to point to a file called `azure-identity-token` within the mounted volume.
+When running {{es}} in [Azure Kubernetes Service](https://azure.microsoft.com/en-gb/products/kubernetes-service), for instance using [{{eck}}](cloud-on-k8s.md#k8s-azure-workload-identity), you should use [Azure Workload Identity](https://azure.github.io/azure-workload-identity/docs/introduction.html) to provide credentials to {{es}}. To use Azure Workload Identity, mount the `azure-identity-token` volume as a subdirectory of the [{{es}} config directory](../../deploy/self-managed/configure-elasticsearch.md#config-files-location) and set the `AZURE_FEDERATED_TOKEN_FILE` environment variable to point to a file called `azure-identity-token` within the mounted volume.

The Azure SDK has several other mechanisms to automatically obtain credentials from its environment, but the two methods described above are the only ones that are tested and supported for use in {{es}}.
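To illustrate how a named client is consumed once its settings are in place, a repository registration might look like the following sketch; the repository name `my_azure_backups`, the client name `secondary`, and the container name are illustrative placeholders, not values from this page:

```console
# Hypothetical sketch: register an Azure repository that uses a client named "secondary"
PUT _snapshot/my_azure_backups
{
  "type": "azure",
  "settings": {
    "client": "secondary",
    "container": "my-container"
  }
}
```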
diff --git a/deploy-manage/tools/snapshot-and-restore/cloud-on-k8s.md b/deploy-manage/tools/snapshot-and-restore/cloud-on-k8s.md
index a41be11ee0..d1c11f2c3b 100644
--- a/deploy-manage/tools/snapshot-and-restore/cloud-on-k8s.md
+++ b/deploy-manage/tools/snapshot-and-restore/cloud-on-k8s.md
@@ -376,7 +376,7 @@ Follow the [Azure documentation](https://learn.microsoft.com/en-us/azure/aks/wor
    1. Specify the Kubernetes secret created in the previous step to configure the Azure storage account name as a secure setting.
    2. This is the service account created earlier in the steps from the [Azure Workload Identity](https://learn.microsoft.com/en-us/azure/aks/workload-identity-deploy-cluster#create-a-kubernetes-service-account) tutorial.
-   3. The corresponding volume is injected by the [Azure Workload Identity Mutating Admission Webhook](https://azure.github.io/azure-workload-identity/docs/installation/mutating-admission-webhook.md). For Elasticsearch to be able to access the token, the mount needs to be in a sub-directory of the Elasticsearch config directory. The corresponding environment variable needs to be adjusted as well.
+   3. The corresponding volume is injected by the [Azure Workload Identity Mutating Admission Webhook](https://azure.github.io/azure-workload-identity/docs/installation/mutating-admission-webhook.html). For Elasticsearch to be able to access the token, the mount needs to be in a sub-directory of the Elasticsearch config directory. The corresponding environment variable needs to be adjusted as well.

11. Create a snapshot repository of type `azure` through the Elasticsearch API, or through [*Elastic Stack configuration policies*](../../deploy/cloud-on-k8s/elastic-stack-configuration-policies.md).
diff --git a/deploy-manage/tools/snapshot-and-restore/ec-aws-custom-repository.md b/deploy-manage/tools/snapshot-and-restore/ec-aws-custom-repository.md
index a1df801173..7ca9d2ab54 100644
--- a/deploy-manage/tools/snapshot-and-restore/ec-aws-custom-repository.md
+++ b/deploy-manage/tools/snapshot-and-restore/ec-aws-custom-repository.md
@@ -36,10 +36,10 @@ Next, create an IAM user, copy the access key ID and secret, and configure the f
}
```

-1. The version of the policy language syntax rules. For more information, refer to the [AWS documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-reference-policy-checks.md#access-analyzer-reference-policy-checks-error-invalid-version).
+1. The version of the policy language syntax rules. For more information, refer to the [AWS documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-reference-policy-checks.html#access-analyzer-reference-policy-checks-error-invalid-version).

-For more information on S3 and IAM, refer to AWS' [S3-documentation](http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.md) and [IAM-documentation](http://aws.amazon.com/documentation/iam/).
+For more information on S3 and IAM, refer to the AWS [S3 documentation](http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html) and [IAM documentation](http://aws.amazon.com/documentation/iam/).

::::{note}
For a full list of settings that are supported for your S3 bucket, refer to [S3 repository](s3-repository.md) in the {{es}} Guide.
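For context, once the IAM user's access key and secret have been stored as client credentials, registering the bucket as a repository is a single API call. This is only a sketch; the repository and bucket names are hypothetical placeholders:

```console
# Hypothetical sketch: register the bucket covered by the IAM policy above
PUT _snapshot/my_s3_repository
{
  "type": "s3",
  "settings": {
    "bucket": "my-bucket-name"
  }
}
```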
diff --git a/deploy-manage/tools/snapshot-and-restore/ece-aws-custom-repository.md b/deploy-manage/tools/snapshot-and-restore/ece-aws-custom-repository.md
index 1a38e3fe5b..3a84be9ed2 100644
--- a/deploy-manage/tools/snapshot-and-restore/ece-aws-custom-repository.md
+++ b/deploy-manage/tools/snapshot-and-restore/ece-aws-custom-repository.md
@@ -17,7 +17,7 @@ To add a snapshot repository:
3. Select **Add Repository** to add an existing repository.
4. Provide a name for the repository configuration.

-  ECE Snapshot Repository names are now required to meet the same standards as S3 buckets. Refer to the official AWS documentation on [Bucket naming rules](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.md).
+  ECE Snapshot Repository names are now required to meet the same standards as S3 buckets. Refer to the official AWS documentation on [Bucket naming rules](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html).

5. Select one of the supported repository types and specify the necessary settings:
diff --git a/deploy-manage/tools/snapshot-and-restore/minio-on-premise-repository.md b/deploy-manage/tools/snapshot-and-restore/minio-on-premise-repository.md
index 2d957c8aca..9d1011054f 100644
--- a/deploy-manage/tools/snapshot-and-restore/minio-on-premise-repository.md
+++ b/deploy-manage/tools/snapshot-and-restore/minio-on-premise-repository.md
@@ -70,7 +70,7 @@ How you create the AWS S3 bucket depends on what version of Elasticsearch you ar
* For versions 8.0 and later, {{es}} has built-in support for AWS S3 repositories; no repository plugin is needed. Use the Minio browser or an S3 client application to create an S3 bucket to store your snapshots.

::::{tip}
-Don’t forget to make the bucket name DNS-friendly, for example no underscores or uppercase letters. For more details, read the [bucket restrictions](https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.md).
+Don’t forget to make the bucket name DNS-friendly, for example no underscores or uppercase letters. For more details, read the [bucket restrictions](https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html).
::::
diff --git a/deploy-manage/tools/snapshot-and-restore/s3-repository.md b/deploy-manage/tools/snapshot-and-restore/s3-repository.md
index 8ec2d4938b..49bbc9f4e9 100644
--- a/deploy-manage/tools/snapshot-and-restore/s3-repository.md
+++ b/deploy-manage/tools/snapshot-and-restore/s3-repository.md
@@ -1,16 +1,16 @@
----
+---
mapped_urls:
  - https://www.elastic.co/guide/en/elasticsearch/reference/current/repository-s3.html
applies_to:
  deployment:
-    self: 
+    self:
---

# S3 repository [repository-s3]

You can use AWS S3 as a repository for [Snapshot/Restore](../snapshot-and-restore.md).

-::::{note} 
+::::{note}
If you are looking for a hosted solution of Elasticsearch on AWS, please visit [https://www.elastic.co/cloud/](https://www.elastic.co/cloud/).
::::

@@ -18,7 +18,7 @@ See [this video](https://www.youtube.com/watch?v=ACqfyzWf-xs) for a walkthrough
## Getting started [repository-s3-usage]

-To register an S3 repository, specify the type as `s3` when creating the repository. The repository defaults to using [ECS IAM Role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.md) credentials for authentication. You can also use [Kubernetes service accounts](#iam-kubernetes-service-accounts) for authentication.
+To register an S3 repository, specify the type as `s3` when creating the repository. The repository defaults to using [ECS IAM Role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html) credentials for authentication. You can also use [Kubernetes service accounts](#iam-kubernetes-service-accounts) for authentication.

The only mandatory setting is the bucket name:
@@ -87,7 +87,7 @@ The following list contains the available client settings. Those that must be st
: An S3 session token. If set, the `access_key` and `secret_key` settings must also be specified.

`endpoint`
-: The S3 service endpoint to connect to. This defaults to `s3.amazonaws.com` but the [AWS documentation](https://docs.aws.amazon.com/general/latest/gr/rande.md#s3_region) lists alternative S3 endpoints. If you are using an [S3-compatible service](#repository-s3-compatible-services) then you should set this to the service’s endpoint.
+: The S3 service endpoint to connect to. This defaults to `s3.amazonaws.com` but the [AWS documentation](https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region) lists alternative S3 endpoints. If you are using an [S3-compatible service](#repository-s3-compatible-services) then you should set this to the service’s endpoint.

`protocol`
: The protocol to use to connect to S3. Valid values are either `http` or `https`. Defaults to `https`. When using HTTPS, this repository type validates the repository’s certificate chain using the JVM-wide truststore. Ensure that the root certificate authority is in this truststore using the JVM’s `keytool` tool. If you have a custom certificate authority for your S3 repository and you use the {{es}} [bundled JDK](../../deploy/self-managed/installing-elasticsearch.md#jvm-version), then you will need to reinstall your CA certificate every time you upgrade {{es}}.
@@ -120,9 +120,9 @@ The following list contains the available client settings. Those that must be st
: Whether retries should be throttled (i.e. should back off). Must be `true` or `false`. Defaults to `true`.

`path_style_access`
-: Whether to force the use of the path style access pattern. If `true`, the path style access pattern will be used. If `false`, the access pattern will be automatically determined by the AWS Java SDK (See [AWS documentation](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Builder.md#setPathStyleAccessEnabled-java.lang.Boolean-) for details). Defaults to `false`.
+: Whether to force the use of the path style access pattern. If `true`, the path style access pattern will be used. If `false`, the access pattern will be automatically determined by the AWS Java SDK (See [AWS documentation](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Builder.html#setPathStyleAccessEnabled-java.lang.Boolean-) for details). Defaults to `false`.

-::::{note} 
+::::{note}
:name: repository-s3-path-style-deprecation

In versions `7.0`, `7.1`, `7.2` and `7.3` all bucket operations used the [now-deprecated](https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/) path style access pattern. If your deployment requires the path style access pattern then you should set this setting to `true` when upgrading.
@@ -130,13 +130,13 @@

`disable_chunked_encoding`
-: Whether chunked encoding should be disabled or not. If `false`, chunked encoding is enabled and will be used where appropriate. If `true`, chunked encoding is disabled and will not be used, which may mean that snapshot operations consume more resources and take longer to complete. It should only be set to `true` if you are using a storage service that does not support chunked encoding. See the [AWS Java SDK documentation](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Builder.md#disableChunkedEncoding--) for details. Defaults to `false`.
+: Whether chunked encoding should be disabled or not. If `false`, chunked encoding is enabled and will be used where appropriate. If `true`, chunked encoding is disabled and will not be used, which may mean that snapshot operations consume more resources and take longer to complete. It should only be set to `true` if you are using a storage service that does not support chunked encoding. See the [AWS Java SDK documentation](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Builder.html#disableChunkedEncoding--) for details. Defaults to `false`.

`region`
-: Allows specifying the signing region to use. Specificing this setting manually should not be necessary for most use cases. Generally, the SDK will correctly guess the signing region to use. It should be considered an expert level setting to support S3-compatible APIs that require [v4 signatures](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.md) and use a region other than the default `us-east-1`. Defaults to empty string which means that the SDK will try to automatically determine the correct signing region.
+: Allows specifying the signing region to use. Specifying this setting manually should not be necessary for most use cases. Generally, the SDK will correctly guess the signing region to use. It should be considered an expert level setting to support S3-compatible APIs that require [v4 signatures](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) and use a region other than the default `us-east-1`. Defaults to empty string which means that the SDK will try to automatically determine the correct signing region.

`signer_override`
-: Allows specifying the name of the signature algorithm to use for signing requests by the S3 client. Specifying this setting should not be necessary for most use cases. It should be considered an expert level setting to support S3-compatible APIs that do not support the signing algorithm that the SDK automatically determines for them. See the [AWS Java SDK documentation](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/ClientConfiguration.md#setSignerOverride-java.lang.String-) for details. Defaults to empty string which means that no signing algorithm override will be used.
+: Allows specifying the name of the signature algorithm to use for signing requests by the S3 client. Specifying this setting should not be necessary for most use cases. It should be considered an expert level setting to support S3-compatible APIs that do not support the signing algorithm that the SDK automatically determines for them. See the [AWS Java SDK documentation](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/ClientConfiguration.html#setSignerOverride-java.lang.String-) for details. Defaults to empty string which means that no signing algorithm override will be used.

## Repository settings [repository-s3-repository]
@@ -159,7 +159,7 @@ The following settings are supported:

`bucket`
: (Required) Name of the S3 bucket to use for snapshots.
-  The bucket name must adhere to Amazon’s [S3 bucket naming rules](https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.md#bucketnamingrules).
+  The bucket name must adhere to Amazon’s [S3 bucket naming rules](https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html#bucketnamingrules).

`client`
@@ -168,13 +168,13 @@ The following settings are supported:

`base_path`
: Specifies the path to the repository data within its bucket. Defaults to an empty string, meaning that the repository is at the root of the bucket. The value of this setting should not start or end with a `/`.

-  ::::{note} 
+  ::::{note}
  Don’t set `base_path` when configuring a snapshot repository for {{ECE}}. {{ECE}} automatically generates the `base_path` for each deployment so that multiple deployments may share the same bucket.
  ::::

`chunk_size`
-: ([byte value](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/rest-apis/api-conventions.md#byte-units)) The maximum size of object that {{es}} will write to the repository when creating a snapshot. Files which are larger than `chunk_size` will be chunked into several smaller objects. {{es}} may also split a file across multiple objects to satisfy other constraints such as the `max_multipart_parts` limit. Defaults to `5TB` which is the [maximum size of an object in AWS S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/qfacts.md).
+: ([byte value](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/rest-apis/api-conventions.md#byte-units)) The maximum size of object that {{es}} will write to the repository when creating a snapshot. Files which are larger than `chunk_size` will be chunked into several smaller objects. {{es}} may also split a file across multiple objects to satisfy other constraints such as the `max_multipart_parts` limit. Defaults to `5TB` which is the [maximum size of an object in AWS S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/qfacts.html).

`compress`
: When set to `true` metadata files are stored in compressed format. This setting doesn’t affect index files that are already compressed by default. Defaults to `true`.
@@ -192,7 +192,7 @@ The following settings are supported:
  If `false`, the cluster can write to the repository and create snapshots in it. Defaults to `false`.

-  ::::{important} 
+  ::::{important}
  If you register the same snapshot repository with multiple clusters, only one cluster should have write access to the repository. Having multiple clusters write to the repository at the same time risks corrupting the contents of the repository.
  ::::

`server_side_encryption`
: When set to `true` files are encrypted on server side using AES256 algorithm. Defaults to `false`.

`buffer_size`
-: ([byte value](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/rest-apis/api-conventions.md#byte-units)) Minimum threshold below which the chunk is uploaded using a single request. Beyond this threshold, the S3 repository will use the [AWS Multipart Upload API](https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.md) to split the chunk into several parts, each of `buffer_size` length, and to upload each part in its own request. Note that setting a buffer size lower than `5mb` is not allowed since it will prevent the use of the Multipart API and may result in upload errors. It is also not possible to set a buffer size greater than `5gb` as it is the maximum upload size allowed by S3. Defaults to `100mb` or `5%` of JVM heap, whichever is smaller.
+: ([byte value](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/rest-apis/api-conventions.md#byte-units)) Minimum threshold below which the chunk is uploaded using a single request. Beyond this threshold, the S3 repository will use the [AWS Multipart Upload API](https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html) to split the chunk into several parts, each of `buffer_size` length, and to upload each part in its own request. Note that setting a buffer size lower than `5mb` is not allowed since it will prevent the use of the Multipart API and may result in upload errors. It is also not possible to set a buffer size greater than `5gb` as it is the maximum upload size allowed by S3. Defaults to `100mb` or `5%` of JVM heap, whichever is smaller.

`max_multipart_parts`
-: (integer) The maximum number of parts that {{es}} will write during a multipart upload of a single object. Files which are larger than `buffer_size × max_multipart_parts` will be chunked into several smaller objects. {{es}} may also split a file across multiple objects to satisfy other constraints such as the `chunk_size` limit. Defaults to `10000` which is the [maximum number of parts in a multipart upload in AWS S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/qfacts.md).
+: (integer) The maximum number of parts that {{es}} will write during a multipart upload of a single object. Files which are larger than `buffer_size × max_multipart_parts` will be chunked into several smaller objects. {{es}} may also split a file across multiple objects to satisfy other constraints such as the `chunk_size` limit. Defaults to `10000` which is the [maximum number of parts in a multipart upload in AWS S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/qfacts.html).

`canned_acl`
-: The S3 repository supports all [S3 canned ACLs](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.md#canned-acl) : `private`, `public-read`, `public-read-write`, `authenticated-read`, `log-delivery-write`, `bucket-owner-read`, `bucket-owner-full-control`. Defaults to `private`. You could specify a canned ACL using the `canned_acl` setting. When the S3 repository creates buckets and objects, it adds the canned ACL into the buckets and objects.
+: The S3 repository supports all [S3 canned ACLs](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl): `private`, `public-read`, `public-read-write`, `authenticated-read`, `log-delivery-write`, `bucket-owner-read`, `bucket-owner-full-control`. Defaults to `private`. You can specify a canned ACL using the `canned_acl` setting. When the S3 repository creates buckets and objects, it adds the canned ACL into the buckets and objects.

`storage_class`
: Sets the S3 storage class for objects written to the repository. Values may be `standard`, `reduced_redundancy`, `standard_ia`, `onezone_ia` and `intelligent_tiering`. Defaults to `standard`. See [S3 storage classes](#repository-s3-storage-classes) for more information.

`delete_objects_max_size`
-: (integer) Sets the maxmimum batch size, betewen 1 and 1000, used for `DeleteObjects` requests. Defaults to 1000 which is the maximum number supported by the [AWS DeleteObjects API](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.md).
+: (integer) Sets the maximum batch size, between 1 and 1000, used for `DeleteObjects` requests. Defaults to 1000 which is the maximum number supported by the [AWS DeleteObjects API](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html).

`max_multipart_upload_cleanup_size`
-: (integer) Sets the maximum number of possibly-dangling multipart uploads to clean up in each batch of snapshot deletions. Defaults to `1000` which is the maximum number supported by the [AWS ListMultipartUploads API](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.md). If set to `0`, {{es}} will not attempt to clean up dangling multipart uploads.
+: (integer) Sets the maximum number of possibly-dangling multipart uploads to clean up in each batch of snapshot deletions. Defaults to `1000` which is the maximum number supported by the [AWS ListMultipartUploads API](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html). If set to `0`, {{es}} will not attempt to clean up dangling multipart uploads.

`throttled_delete_retry.delay_increment`
: ([time value](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/rest-apis/api-conventions.md#time-units)) This value is used as the delay before the first retry and the amount the delay is incremented by on each subsequent retry. Default is 50ms, minimum is 0ms.
@@ -231,7 +231,7 @@ The following settings are supported:

`get_register_retry_delay`
: ([time value](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/rest-apis/api-conventions.md#time-units)) Sets the time to wait before trying again if an attempt to read a [linearizable register](#repository-s3-linearizable-registers) fails. Defaults to `5s`.

-::::{note} 
+::::{note}
The option of defining client settings in the repository settings as documented below is considered deprecated, and will be removed in a future version.
::::

@@ -267,7 +267,7 @@ You may use an S3 Lifecycle Policy to adjust the storage class of existing objec
You may use the `intelligent_tiering` storage class to automatically manage the class of objects, but you must not enable the optional Archive Access or Deep Archive Access tiers. If you use these tiers then you may permanently lose access to your repository contents.

-For more information about S3 storage classes, see [AWS Storage Classes Guide](https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.md).
+For more information about S3 storage classes, see [AWS Storage Classes Guide](https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html).

## Recommended S3 permissions [repository-s3-permissions]
@@ -352,7 +352,7 @@ You may further restrict the permissions by specifying a prefix within the bucke
The bucket needs to exist to register a repository for snapshots. If you did not create the bucket then the repository registration will fail.

-#### Using IAM roles for Kubernetes service accounts for authentication [iam-kubernetes-service-accounts] 
+#### Using IAM roles for Kubernetes service accounts for authentication [iam-kubernetes-service-accounts]

If you want to use [Kubernetes service accounts](https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/) for authentication, you need to add a symlink to the `$AWS_WEB_IDENTITY_TOKEN_FILE` environment variable (which should be automatically set by a Kubernetes pod) in the S3 repository config directory, so the repository can have the read access for the service account (a repository can’t read any files outside its config directory).
For example:
@@ -361,7 +361,7 @@ mkdir -p "${ES_PATH_CONF}/repository-s3"
ln -s $AWS_WEB_IDENTITY_TOKEN_FILE "${ES_PATH_CONF}/repository-s3/aws-web-identity-token-file"
```

-::::{important} 
+::::{important}
The symlink must be created on all data and master eligible nodes and be readable by the `elasticsearch` user. By default, {{es}} runs as user `elasticsearch` using uid:gid `1000:0`.
::::

@@ -371,7 +371,7 @@ If the symlink exists, it will be used by default by all S3 repositories that do

## AWS VPC bandwidth settings [repository-s3-aws-vpc]

-AWS instances resolve S3 endpoints to a public IP. If the Elasticsearch instances reside in a private subnet in an AWS VPC then all traffic to S3 will go through the VPC’s NAT instance. If your VPC’s NAT instance is a smaller instance size (e.g. a t2.micro) or is handling a high volume of network traffic your bandwidth to S3 may be limited by that NAT instance’s networking bandwidth limitations. Instead we recommend creating a [VPC endpoint](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.md) that enables connecting to S3 in instances that reside in a private subnet in an AWS VPC. This will eliminate any limitations imposed by the network bandwidth of your VPC’s NAT instance.
+AWS instances resolve S3 endpoints to a public IP. If the Elasticsearch instances reside in a private subnet in an AWS VPC then all traffic to S3 will go through the VPC’s NAT instance. If your VPC’s NAT instance is a smaller instance size (e.g. a t2.micro) or is handling a high volume of network traffic, your bandwidth to S3 may be limited by that NAT instance’s networking bandwidth limitations. Instead, we recommend creating a [VPC endpoint](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html) that enables connecting to S3 in instances that reside in a private subnet in an AWS VPC. This will eliminate any limitations imposed by the network bandwidth of your VPC’s NAT instance.

Instances residing in a public subnet in an AWS VPC will connect to S3 via the VPC’s internet gateway and not be bandwidth limited by the VPC’s NAT instance.
@@ -403,7 +403,7 @@ PUT /_cluster/settings
}
```

-Collect the Elasticsearch logs covering the time period of the failed analysis from all nodes in your cluster and share them with the supplier of your storage system along with the analysis response so they can use them to determine the problem. See the [AWS Java SDK](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/java-dg-../../monitor/logging-configuration/elasticsearch-log4j-configuration-self-managed.md) documentation for further information, including details about other loggers that can be used to obtain even more verbose logs. When you have finished collecting the logs needed by your supplier, set the logger settings back to `null` to return to the default logging configuration and disable insecure network trace logging again. See [Logger](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/miscellaneous-cluster-settings.md#cluster-logger) and [Cluster update settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) for more information.
+Collect the Elasticsearch logs covering the time period of the failed analysis from all nodes in your cluster and share them with the supplier of your storage system along with the analysis response so they can use them to determine the problem. See the [AWS Java SDK](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/java-dg-logging.html) documentation for further information, including details about other loggers that can be used to obtain even more verbose logs. When you have finished collecting the logs needed by your supplier, set the logger settings back to `null` to return to the default logging configuration and disable insecure network trace logging again. See [Logger](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/miscellaneous-cluster-settings.md#cluster-logger) and [Cluster update settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) for more information.

## Linearizable register implementation [repository-s3-linearizable-registers]
diff --git a/deploy-manage/upgrade/prepare-to-upgrade/index-compatibility.md b/deploy-manage/upgrade/prepare-to-upgrade/index-compatibility.md
index cdeba0875c..02ec9af778 100644
--- a/deploy-manage/upgrade/prepare-to-upgrade/index-compatibility.md
+++ b/deploy-manage/upgrade/prepare-to-upgrade/index-compatibility.md
@@ -34,6 +34,6 @@ To upgrade to 9.0.0-beta1 from 7.16 or an earlier version, **you must first upgr

## FIPS Compliance and Java 17 [upgrade-fips-java17]

-{{es}} 8.0+ requires Java 17 or later. {{es}} 8.13+ has been tested with [Bouncy Castle](https://www.bouncycastle.org/java.md)'s Java 17 [certified](https://csrc.nist.gov/projects/cryptographic-module-validation-program/certificate/4616) FIPS implementation and is the recommended Java security provider when running {{es}} in FIPS 140-2 mode. Note - {{es}} does not ship with a FIPS certified security provider and requires explicit installation and configuration.
+{{es}} 8.0+ requires Java 17 or later. {{es}} 8.13+ has been tested with [Bouncy Castle](https://www.bouncycastle.org/java.html)'s Java 17 [certified](https://csrc.nist.gov/projects/cryptographic-module-validation-program/certificate/4616) FIPS implementation, which is the recommended Java security provider when running {{es}} in FIPS 140-2 mode. Note: {{es}} does not ship with a FIPS-certified security provider and requires explicit installation and configuration.

Alternatively, consider using {{ech}} in the [FedRAMP-certified GovCloud region](https://www.elastic.co/industries/public-sector/fedramp).
diff --git a/deploy-manage/users-roles/_snippets/org-vs-deploy-sso.md b/deploy-manage/users-roles/_snippets/org-vs-deploy-sso.md
index 9fe33241ce..6bfa560b0f 100644
--- a/deploy-manage/users-roles/_snippets/org-vs-deploy-sso.md
+++ b/deploy-manage/users-roles/_snippets/org-vs-deploy-sso.md
@@ -1,4 +1,4 @@
-For {{ech}} deployments, you can configure SSO at the [organization level](/deploy-manage/users-roles/cloud-organization/configure-saml-authentication.md), the [deployment level](/deploy-manage/users-roles/cluster-or-deployment-auth.md), or both. 
+For {{ech}} deployments, you can configure SSO at the [organization level](/deploy-manage/users-roles/cloud-organization/configure-saml-authentication.md), the [deployment level](/deploy-manage/users-roles/cluster-or-deployment-auth.md), or both.
The option that you choose depends on your requirements:
@@ -6,7 +6,7 @@
| --- | --- | --- |
| **Management experience** | Manage authentication and role mapping centrally for all deployments in the organization | Configure SSO for each deployment individually |
| **Authentication protocols** | SAML only | Multiple protocols, including LDAP, OIDC, and SAML |
-| **Role mapping** | [Organization-level roles and instance access roles](../../../deploy-manage/users-roles/cloud-organization/user-roles.md), Serverless project [custom roles](https://docs.elastic.co/serverless/custom-roles.md) | [Built-in](../../../deploy-manage/users-roles/cluster-or-deployment-auth/built-in-roles.md) and [custom](../../../deploy-manage/users-roles/cluster-or-deployment-auth/defining-roles.md) stack-level roles |
+| **Role mapping** | [Organization-level roles and instance access roles](../../../deploy-manage/users-roles/cloud-organization/user-roles.md), Serverless project [custom roles](/deploy-manage/users-roles/serverless-custom-roles.md) | [Built-in](../../../deploy-manage/users-roles/cluster-or-deployment-auth/built-in-roles.md) and [custom](../../../deploy-manage/users-roles/cluster-or-deployment-auth/defining-roles.md) stack-level roles |
| **User experience** | Users interact with Cloud | Users interact with the deployment directly |

If you want to avoid exposing users to the {{ecloud}} Console, or have users who only interact with some deployments, then you might prefer users to interact with your deployment directly.
diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md b/deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md
index 6e28afabf8..c47599d256 100644
--- a/deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md
+++ b/deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md
@@ -73,7 +73,7 @@ Before you set up a Kerberos realm, you must have the Kerberos infrastructure se
Kerberos requires a lot of external services to function properly, such as time synchronization between all machines and working forward and reverse DNS mappings in your domain. Refer to your Kerberos documentation for more details.
::::

-These instructions do not cover setting up and configuring your Kerberos deployment. Where examples are provided, they pertain to an MIT Kerberos V5 deployment. For more information, see [MIT Kerberos documentation](http://web.mit.edu/kerberos/www/index.md)
+These instructions do not cover setting up and configuring your Kerberos deployment. Where examples are provided, they pertain to an MIT Kerberos V5 deployment. For more information, see the [MIT Kerberos documentation](http://web.mit.edu/kerberos/www/index.html).

If you're using a self-managed cluster, then perform the following additional steps:
diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md b/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md
index 6217ff8cd1..14091b99ca 100644
--- a/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md
+++ b/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md
@@ -190,13 +190,13 @@ An **OpenID Connect Claim** is a piece of information asserted by the OP for the
The RP requests specific scopes during the authentication request. If the OP Privacy Policy allows it and the authenticating user consents to it, the related claims are returned to the RP (either in the ID Token or as a UserInfo response).
-The list of the supported claims will vary depending on the OP you are using, but [standard claims](https://openid.net/specs/openid-connect-core-1_0.md#StandardClaims) are usually supported.
+The list of the supported claims will vary depending on the OP you are using, but [standard claims](https://openid.net/specs/openid-connect-core-1_0.html#StandardClaims) are usually supported.

### How claims appear in user metadata [oidc-user-metadata]

By default, users who authenticate through OpenID Connect have additional metadata fields. These fields include every OpenID claim that is provided in the authentication response, regardless of whether it is mapped to an {{es}} user property.

-For example, in the metadata field `oidc(claim_name)`, "claim_name" is the name of the claim as it was contained in the ID Token or in the User Info response. Note that these will include all the [ID Token claims](https://openid.net/specs/openid-connect-core-1_0.md#IDToken) that pertain to the authentication event, rather than the user themselves.
+For example, in the metadata field `oidc(claim_name)`, "claim_name" is the name of the claim as it was contained in the ID Token or in the User Info response. Note that these will include all the [ID Token claims](https://openid.net/specs/openid-connect-core-1_0.html#IDToken) that pertain to the authentication event, rather than the user themselves.

This behavior can be disabled by adding `populate_user_metadata: false` as a setting in the OIDC realm.
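As a sketch of how these claims can feed into authorization, a role mapping can match on the realm and on mapped user properties. The realm name `oidc1`, the group value `engineering`, and the `viewer` role below are illustrative assumptions rather than values taken from this page, and the sketch assumes the realm's `claims.groups` setting maps a groups claim onto the user:

```console
# Hypothetical sketch: map users of an OIDC realm to the built-in viewer role
PUT _security/role_mapping/oidc_viewers
{
  "roles": [ "viewer" ],
  "enabled": true,
  "rules": {
    "all": [
      { "field": { "realm.name": "oidc1" } },
      { "field": { "groups": "engineering" } }
    ]
  }
}
```

Any claim left unmapped still remains visible under `oidc(claim_name)` in the user's metadata, unless `populate_user_metadata` is set to `false` as described above.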