From 80b268fa086be14ab2c7cfa01e0f51a5c25deb72 Mon Sep 17 00:00:00 2001 From: Colleen McGinnis Date: Fri, 11 Apr 2025 17:29:40 -0500 Subject: [PATCH] misc fixes --- .../autoscaling/autoscaling-in-eck.md | 4 +-- .../add-custom-bundles-plugins.md | 2 +- .../cloud-enterprise/configure-host-rhel.md | 4 +-- .../cloud-enterprise/configure-host-suse.md | 2 +- .../cloud-enterprise/configure-host-ubuntu.md | 2 +- .../ece-include-additional-kibana-plugin.md | 12 +++---- .../migrate-ece-to-podman-hosts.md | 4 +-- .../cloud-on-k8s/advanced-configuration.md | 2 +- .../deploy/cloud-on-k8s/configure-eck.md | 2 +- .../configure-validating-webhook.md | 26 +++++++------- .../deploy-eck-on-gke-autopilot.md | 2 +- .../elastic-stack-configuration-policies.md | 2 +- .../k8s-openshift-deploy-elasticsearch.md | 6 ++-- .../k8s-openshift-deploy-kibana.md | 2 +- .../managing-deployments-using-helm-chart.md | 26 +++++++------- .../deploy/cloud-on-k8s/node-configuration.md | 6 ++-- ...t-cross-namespace-resource-associations.md | 6 ++-- .../deploy/cloud-on-k8s/virtual-memory.md | 2 +- .../elastic-cloud/azure-native-isv-service.md | 2 +- .../self-managed/_snippets/ca-fingerprint.md | 4 +-- .../self-managed/_snippets/enroll-systemd.md | 4 +-- .../_snippets/systemd-startup-timeout.md | 4 +-- .../install-elasticsearch-docker-compose.md | 2 +- .../install-elasticsearch-with-rpm.md | 2 +- ...stall-elasticsearch-with-zip-on-windows.md | 8 ++--- .../self-managed/install-kibana-on-windows.md | 4 +-- .../self-managed/install-kibana-with-rpm.md | 2 +- .../kibana-reporting-configuration.md | 7 ++-- .../license/manage-your-license-in-eck.md | 4 +-- .../kibana-task-manager-health-monitoring.md | 2 +- .../configure-stack-monitoring-alerts.md | 34 +++++++++---------- .../optimize-performance/size-shards.md | 8 ++--- .../remote-clusters/eck-remote-clusters.md | 10 +++--- .../aws-privatelink-traffic-filters.md | 2 +- .../azure-private-link-traffic-filters.md | 2 +- ...private-service-connect-traffic-filters.md | 2 +- .../security/k8s-network-policies.md | 28 +++++++-------- .../snapshot-and-restore/cloud-on-k8s.md | 2 +- .../snapshot-and-restore/create-snapshots.md | 2 +- .../orchestrator/upgrade-cloud-on-k8s.md | 4 +-- ...ge-authentication-for-multiple-clusters.md | 2 +- .../oidc-examples.md | 2 +- .../cluster-or-deployment-auth/saml.md | 1 - ...icsearch-service-with-logstash-as-proxy.md | 22 +++++++----- reference/fleet/elastic-agent-inputs-list.md | 4 +-- reference/fleet/otel-agent-transform.md | 4 +-- reference/fleet/upgrade-elastic-agent.md | 8 ++--- .../tutorial-monitor-java-application.md | 2 +- .../observability/logs/stream-any-log-file.md | 4 +-- .../learning-to-rank-model-training.md | 2 +- troubleshoot/ingest/fleet/common-problems.md | 2 +- troubleshoot/kibana/capturing-diagnostics.md | 4 +-- troubleshoot/kibana/task-manager.md | 2 +- 53 files changed, 156 insertions(+), 154 deletions(-) diff --git a/deploy-manage/autoscaling/autoscaling-in-eck.md b/deploy-manage/autoscaling/autoscaling-in-eck.md index 4cb12bbcf5..b4c214870d 100644 --- a/deploy-manage/autoscaling/autoscaling-in-eck.md +++ b/deploy-manage/autoscaling/autoscaling-in-eck.md @@ -47,7 +47,7 @@ kind: ElasticsearchAutoscaler metadata: name: autoscaling-sample spec: - ## The name of the {{es}} cluster to be scaled automatically. + ## The name of the Elasticsearch cluster to be scaled automatically. elasticsearchRef: name: elasticsearch-sample ## The autoscaling policies. 
@@ -301,7 +301,7 @@ You should adjust those settings manually to match the size of your deployment w ## Autoscaling stateless applications on ECK [k8s-stateless-autoscaling] -::::{note} +::::{note} This section only applies to stateless applications. Check [{{es}} autoscaling](#k8s-autoscaling) for more details about automatically scaling {{es}}. :::: diff --git a/deploy-manage/deploy/cloud-enterprise/add-custom-bundles-plugins.md b/deploy-manage/deploy/cloud-enterprise/add-custom-bundles-plugins.md index c5438ba870..edc988f5fa 100644 --- a/deploy-manage/deploy/cloud-enterprise/add-custom-bundles-plugins.md +++ b/deploy-manage/deploy/cloud-enterprise/add-custom-bundles-plugins.md @@ -262,7 +262,7 @@ To import a JVM trust store: 1. The URL for the bundle ZIP file must be always available. Make sure you host the plugin artefacts internally in a highly available environment. 2. Wildcards are allowed here, since the certificates are independent from the {{es}} version. -4. (Optional) If you prefer to use a different file name and/or password for the trust store, you also need to add an additional configuration section to the cluster metadata before adding the bundle. This configuration should be added to the `{{es}} cluster data` section of the [advanced configuration](./advanced-cluster-configuration.md) page: +4. (Optional) If you prefer to use a different file name and/or password for the trust store, you also need to add an additional configuration section to the cluster metadata before adding the bundle. This configuration should be added to the `Elasticsearch cluster data` section of the [advanced configuration](./advanced-cluster-configuration.md) page: ```sh "jvm_trust_store": { diff --git a/deploy-manage/deploy/cloud-enterprise/configure-host-rhel.md b/deploy-manage/deploy/cloud-enterprise/configure-host-rhel.md index 7120e27e6d..abed8ad678 100644 --- a/deploy-manage/deploy/cloud-enterprise/configure-host-rhel.md +++ b/deploy-manage/deploy/cloud-enterprise/configure-host-rhel.md @@ -72,7 +72,7 @@ Verify that required traffic is allowed. Check the [Networking prerequisites](ec ``` 4. Install Podman: - + * For Podman 4 * Install the latest available version `4.*` using dnf. @@ -322,7 +322,7 @@ Verify that required traffic is allowed. Check the [Networking prerequisites](ec vm.max_map_count=262144 # enable forwarding so the Docker networking works as expected net.ipv4.ip_forward=1 - # Decrease the maximum number of TCP retransmissions to 5 as recommended for {{es}} TCP retransmission timeout. + # Decrease the maximum number of TCP retransmissions to 5 as recommended for Elasticsearch TCP retransmission timeout. # See /deploy-manage/deploy/self-managed/system-config-tcpretries.md net.ipv4.tcp_retries2=5 # Make sure the host doesn't swap too early diff --git a/deploy-manage/deploy/cloud-enterprise/configure-host-suse.md b/deploy-manage/deploy/cloud-enterprise/configure-host-suse.md index e0f85efc5e..eb2ab5b95e 100644 --- a/deploy-manage/deploy/cloud-enterprise/configure-host-suse.md +++ b/deploy-manage/deploy/cloud-enterprise/configure-host-suse.md @@ -159,7 +159,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage vm.max_map_count=262144 # enable forwarding so the Docker networking works as expected net.ipv4.ip_forward=1 - # Decrease the maximum number of TCP retransmissions to 5 as recommended for {{es}} TCP retransmission timeout. + # Decrease the maximum number of TCP retransmissions to 5 as recommended for Elasticsearch TCP retransmission timeout. 
# See https://www.elastic.co/guide/en/elasticsearch/reference/current/system-config-tcpretries.html net.ipv4.tcp_retries2=5 # Make sure the host doesn't swap too early diff --git a/deploy-manage/deploy/cloud-enterprise/configure-host-ubuntu.md b/deploy-manage/deploy/cloud-enterprise/configure-host-ubuntu.md index 00c416fdde..a5f2acf513 100644 --- a/deploy-manage/deploy/cloud-enterprise/configure-host-ubuntu.md +++ b/deploy-manage/deploy/cloud-enterprise/configure-host-ubuntu.md @@ -136,7 +136,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage vm.max_map_count=262144 # enable forwarding so the Docker networking works as expected net.ipv4.ip_forward=1 - # Decrease the maximum number of TCP retransmissions to 5 as recommended for {{es}} TCP retransmission timeout. + # Decrease the maximum number of TCP retransmissions to 5 as recommended for Elasticsearch TCP retransmission timeout. # See https://www.elastic.co/guide/en/elasticsearch/reference/current/system-config-tcpretries.html net.ipv4.tcp_retries2=5 # Make sure the host doesn't swap too early diff --git a/deploy-manage/deploy/cloud-enterprise/ece-include-additional-kibana-plugin.md b/deploy-manage/deploy/cloud-enterprise/ece-include-additional-kibana-plugin.md index 9ba95caf0b..557b420ed8 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-include-additional-kibana-plugin.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-include-additional-kibana-plugin.md @@ -17,7 +17,7 @@ The process involves two main steps: 2. [Update the {{stack}} pack included in your ECE installation to point to your modified Docker image.](#ece-modify-stack-pack) -## Before you begin [ece_before_you_begin_5] +## Before you begin [ece_before_you_begin_5] Note the following restrictions: @@ -27,7 +27,7 @@ Note the following restrictions: * The Dockerfile used in this example includes an optimization process that is relatively expensive and may require a machine with several GB of RAM to run successfully. -## Extend a {{kib}} Docker image to include additional plugins [ece-create-modified-docker-image] +## Extend a {{kib}} Docker image to include additional plugins [ece-create-modified-docker-image] This example runs a Dockerfile to install the [analyze_api_ui plugin](https://github.com/johtani/analyze-api-ui-plugin) or [kibana-enhanced-table](https://github.com/fbaligand/kibana-enhanced-table) into different versions of {{kib}} Docker image. The contents of the Dockerfile varies depending on the version of the {{stack}} pack that you want to modify. @@ -46,7 +46,7 @@ This example runs a Dockerfile to install the [analyze_api_ui plugin](https://gi * The version of the image * The plugin name and version number - ::::{important} + ::::{important} When you modify a {{kib}} Docker image, make sure you maintain the original image structure and only add the additional plugins. 
:::: @@ -73,7 +73,7 @@ This example runs a Dockerfile to install the [analyze_api_ui plugin](https://gi -## Modify the {{stack}} pack to point to your modified image [ece-modify-stack-pack] +## Modify the {{stack}} pack to point to your modified image [ece-modify-stack-pack] Follow these steps to update the {{stack}} pack zip files in your ECE setup to point to your modified Docker image: @@ -85,7 +85,7 @@ Follow these steps to update the {{stack}} pack zip files in your ECE setup to p set -eo pipefail - # Repack a stackpack to modify the {{kib}} image it points to + # Repack a stackpack to modify the Kibana image it points to NO_COLOR='\033[0m' ERROR_COLOR='\033[1;31m' @@ -152,7 +152,7 @@ Follow these steps to update the {{stack}} pack zip files in your ECE setup to p -## Common causes of problems [ece-custom-plugin-problems] +## Common causes of problems [ece-custom-plugin-problems] 1. If the custom Docker image is not available, make sure that the image has been uploaded to your Docker repository or loaded locally onto each ECE allocator. 2. If the container takes a long time to start, the problem might be that the `reoptimize` step in the Dockerfile did not complete successfully. diff --git a/deploy-manage/deploy/cloud-enterprise/migrate-ece-to-podman-hosts.md b/deploy-manage/deploy/cloud-enterprise/migrate-ece-to-podman-hosts.md index a3bfd2a1c0..ad0680c3ec 100644 --- a/deploy-manage/deploy/cloud-enterprise/migrate-ece-to-podman-hosts.md +++ b/deploy-manage/deploy/cloud-enterprise/migrate-ece-to-podman-hosts.md @@ -102,7 +102,7 @@ Using Docker or Podman as container runtime is a configuration local to the host ``` 4. Install Podman: - + * For Podman 4 * Install the latest available version `4.*` using dnf. @@ -352,7 +352,7 @@ Using Docker or Podman as container runtime is a configuration local to the host vm.max_map_count=262144 # enable forwarding so the Docker networking works as expected net.ipv4.ip_forward=1 - # Decrease the maximum number of TCP retransmissions to 5 as recommended for {{es}} TCP retransmission timeout. + # Decrease the maximum number of TCP retransmissions to 5 as recommended for Elasticsearch TCP retransmission timeout. # See /deploy-manage/deploy/self-managed/system-config-tcpretries.md net.ipv4.tcp_retries2=5 # Make sure the host doesn't swap too early diff --git a/deploy-manage/deploy/cloud-on-k8s/advanced-configuration.md b/deploy-manage/deploy/cloud-on-k8s/advanced-configuration.md index b0087fc2f3..f7cbaec810 100644 --- a/deploy-manage/deploy/cloud-on-k8s/advanced-configuration.md +++ b/deploy-manage/deploy/cloud-on-k8s/advanced-configuration.md @@ -160,7 +160,7 @@ Now that you know how to use the APM keystore and customize the server configura secret: defaultMode: 420 optional: false - secretName: es-ca # This is the secret that holds the {{es}} CA cert + secretName: es-ca # This is the secret that holds the Elasticsearch CA cert ``` diff --git a/deploy-manage/deploy/cloud-on-k8s/configure-eck.md b/deploy-manage/deploy/cloud-on-k8s/configure-eck.md index a0c088145e..ec6b3eb55f 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configure-eck.md +++ b/deploy-manage/deploy/cloud-on-k8s/configure-eck.md @@ -119,7 +119,7 @@ If you use [Operator Lifecycle Manager (OLM)](https://github.com/operator-framew * Update your [Subscription](https://github.com/operator-framework/operator-lifecycle-manager/blob/master/doc/design/subscription-config.md) to mount the ConfigMap under `/conf`. 
- ```yaml + ```yaml subs=true apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: diff --git a/deploy-manage/deploy/cloud-on-k8s/configure-validating-webhook.md b/deploy-manage/deploy/cloud-on-k8s/configure-validating-webhook.md index 66c7861503..6b3182811e 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configure-validating-webhook.md +++ b/deploy-manage/deploy/cloud-on-k8s/configure-validating-webhook.md @@ -23,7 +23,7 @@ Validating webhooks are defined using a `ValidatingWebhookConfiguration` object * Failure policy if the webhook is unavailable (block the operation or continue without validation) -## Defaults provided by ECK [k8s-webhook-defaults] +## Defaults provided by ECK [k8s-webhook-defaults] When using the default `operator.yaml` manifest, ECK is installed with a `ValidatingWebhookConfiguration` configured as follows: @@ -32,12 +32,12 @@ When using the default `operator.yaml` manifest, ECK is installed with a `Valida * The operator generates a certificate for the webhook and stores it in a secret named `elastic-webhook-server-cert` in the `elastic-system` namespace. This certificate is automatically rotated by the operator when it is due to expire. -## Manual configuration [k8s-webhook-manual-config] +## Manual configuration [k8s-webhook-manual-config] If you installed ECK without the webhook and want to enable it later on, or if you want to customise the configuration such as providing your own certificates, this section describes the options available to you. -### Configuration options [k8s-webhook-config-options] +### Configuration options [k8s-webhook-config-options] You can customise almost all aspects of the webhook setup by changing the [operator configuration](configure-eck.md). @@ -51,7 +51,7 @@ You can customise almost all aspects of the webhook setup by changing the [opera | `webhook-port` | 9443 | Port to listen for incoming validation requests. | -### Using your own certificates [k8s-webhook-existing-certs] +### Using your own certificates [k8s-webhook-existing-certs] This section describes how you can use your own certificates for the webhook instead of letting the operator manage them automatically. There are a few important things to be aware of when going down this route: @@ -60,7 +60,7 @@ This section describes how you can use your own certificates for the webhook ins * You must update the `caBundle` fields in the `ValidatingWebhookConfiguration` yourself. This must be done at the beginning and whenever the certificate is rotated. -#### Use a certificate signed by your own CA [k8s-webhook-own-ca] +#### Use a certificate signed by your own CA [k8s-webhook-own-ca] * The certificate must have a Subject Alternative Name (SAN) of the form `..svc` (for example `elastic-webhook-server.elastic-system.svc`). 
A typical OpenSSL command to generate such a certificate would be as follows: @@ -81,7 +81,7 @@ This section describes how you can use your own certificates for the webhook ins * Set `webhook-secret` to the name of the secret you have just created (`elastic-webhook-server-custom-cert`) -::::{note} +::::{note} If you are using the [Helm chart installation method](install-using-helm-chart.md), you can install the operator by running this command: ```sh @@ -95,7 +95,7 @@ helm install elastic-operator elastic/eck-operator -n elastic-system --create-na -#### Use a certificate from cert-manager [k8s-webhook-cert-manager] +#### Use a certificate from cert-manager [k8s-webhook-cert-manager] This section describes how to use [cert-manager](https://cert-manager.io/) to manage the webhook certificate. It assumes that there is a `ClusterIssuer` named `self-signing-issuer` available. @@ -138,7 +138,7 @@ This section describes how to use [cert-manager](https://cert-manager.io/) to ma * Set `webhook-secret` to the name of the certificate secret (`elastic-webhook-server-cert`) -::::{note} +::::{note} If you are using the [Helm chart installation method](install-using-helm-chart.md), you can install the operator by running the following command: ```sh @@ -152,7 +152,7 @@ helm install elastic-operator elastic/eck-operator -n elastic-system --create-na -## Disable the webhook [k8s-disable-webhook] +## Disable the webhook [k8s-disable-webhook] To disable the webhook, set the [`enable-webhook`](configure-eck.md) operator configuration flag to `false` and remove the `ValidatingWebhookConfiguration` named `elastic-webhook.k8s.elastic.co`: @@ -161,12 +161,12 @@ kubectl delete validatingwebhookconfigurations.admissionregistration.k8s.io elas ``` -## Troubleshooting [k8s-webhook-troubleshooting] +## Troubleshooting [k8s-webhook-troubleshooting] You might get errors in your Kubernetes API server logs indicating that it cannot reach the operator service (`elastic-webhook-server`). This could be because no operator pods are available to handle request or because a network policy or a firewall rule is preventing the control plane from accessing the service. To help with troubleshooting, you can change the [`failurePolicy`](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#failure-policy) of the webhook configuration to `Fail`. This will cause create or update operations to fail if there is an error contacting the webhook. Usually the error message will contain helpful information about the failure that will allow you to diagnose the root cause. -### Resource creation taking too long or timing out [k8s-webhook-troubleshooting-timeouts] +### Resource creation taking too long or timing out [k8s-webhook-troubleshooting-timeouts] Webhooks require network connectivity between the Kubernetes API server and the operator. If the creation of an {{es}} resource times out with an error message similar to the following, then the Kubernetes API server might be unable to connect to the webhook to validate the manifest. @@ -228,10 +228,10 @@ spec: ``` -### Updates failing due to validation errors [k8s-webhook-troubleshooting-validation-failure] +### Updates failing due to validation errors [k8s-webhook-troubleshooting-validation-failure] If your attempts to update a resource fail with an error message similar to the following, you can force the webhook to ignore it by removing the `kubectl.kubernetes.io/last-applied-configuration` annotation from your resource. 
-``` +```txt subs=true admission webhook "elastic-es-validation-v1.k8s.elastic.co" denied the request: {{es}}.elasticsearch.k8s.elastic.co "quickstart" is invalid: some-misspelled-field: Invalid value: "some-misspelled-field": some-misspelled-field field found in the kubectl.kubernetes.io/last-applied-configuration annotation is unknown ``` diff --git a/deploy-manage/deploy/cloud-on-k8s/deploy-eck-on-gke-autopilot.md b/deploy-manage/deploy/cloud-on-k8s/deploy-eck-on-gke-autopilot.md index 3380e1dd3a..04ead2b45f 100644 --- a/deploy-manage/deploy/cloud-on-k8s/deploy-eck-on-gke-autopilot.md +++ b/deploy-manage/deploy/cloud-on-k8s/deploy-eck-on-gke-autopilot.md @@ -51,7 +51,7 @@ spec: # node.store.allow_mmap: false podTemplate: spec: - # This init container ensures that the `max_map_count` setting has been applied before starting {{es}}. + # This init container ensures that the `max_map_count` setting has been applied before starting Elasticsearch. # This is not required, but is encouraged when using the previously mentioned Daemonset to set max_map_count. # Do not use this if setting config.node.store.allow_mmap: false initContainers: diff --git a/deploy-manage/deploy/cloud-on-k8s/elastic-stack-configuration-policies.md b/deploy-manage/deploy/cloud-on-k8s/elastic-stack-configuration-policies.md index 6ebea5803d..e2e5999f0a 100644 --- a/deploy-manage/deploy/cloud-on-k8s/elastic-stack-configuration-policies.md +++ b/deploy-manage/deploy/cloud-on-k8s/elastic-stack-configuration-policies.md @@ -294,7 +294,7 @@ kubectl get -n b scp test-err-stack-config-policy -o jsonpath="{.status}" | jq . Important events are also reported through Kubernetes events, such as when two config policies conflict or you don’t have the appropriate license: ```sh -54s Warning Unexpected stackconfigpolicy/config-test conflict: resource {{es}} ns1/cluster-a already configured by StackConfigpolicy default/config-test-2 +54s Warning Unexpected stackconfigpolicy/config-test conflict: resource Elasticsearch ns1/cluster-a already configured by StackConfigpolicy default/config-test-2 ``` ```sh diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-elasticsearch.md b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-elasticsearch.md index 830d404cc2..256bee2462 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-elasticsearch.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-elasticsearch.md @@ -10,14 +10,14 @@ mapped_pages: Use the following code to create an {{es}} cluster `elasticsearch-sample` and a "passthrough" route to access it: -::::{note} +::::{note} A namespace other than the default namespaces (default, kube-system, kube-**, openshift-**, etc) is required such that default [Security Context Constraint](https://docs.openshift.com/container-platform/4.12/authentication/managing-security-context-constraints.html) (SCC) permissions are applied automatically. Elastic resources will not work properly in any of the default namespaces. :::: ```shell cat <[-].) 
tls: - termination: passthrough # {{es}} is the TLS endpoint + termination: passthrough # Elasticsearch is the TLS endpoint insecureEdgeTerminationPolicy: Redirect to: kind: Service diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-kibana.md b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-kibana.md index 1683687834..5b96e6be6c 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-kibana.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-kibana.md @@ -37,7 +37,7 @@ metadata: spec: #host: kibana.example.com # override if you don't want to use the host that is automatically generated by OpenShift ([-].) tls: - termination: passthrough # {{kib}} is the TLS endpoint + termination: passthrough # Kibana is the TLS endpoint insecureEdgeTerminationPolicy: Redirect to: kind: Service diff --git a/deploy-manage/deploy/cloud-on-k8s/managing-deployments-using-helm-chart.md b/deploy-manage/deploy/cloud-on-k8s/managing-deployments-using-helm-chart.md index d2e6ff1528..e4a53fa629 100644 --- a/deploy-manage/deploy/cloud-on-k8s/managing-deployments-using-helm-chart.md +++ b/deploy-manage/deploy/cloud-on-k8s/managing-deployments-using-helm-chart.md @@ -16,7 +16,7 @@ helm repo add elastic https://helm.elastic.co helm repo update ``` -::::{note} +::::{note} The minimum supported version of Helm is 3.2.0. :::: @@ -28,54 +28,54 @@ The chart enables you to deploy the core components ({{es}} and {{kib}}) togethe All the provided examples deploy the applications in a namespace named `elastic-stack`. Consider adapting the commands to your use case. :::: -## {{es}} and {{kib}} [k8s-install-elasticsearch-kibana-helm] +## {{es}} and {{kib}} [k8s-install-elasticsearch-kibana-helm] Similar to the quickstart examples for {{es}} and {{kib}}, this section describes how to setup an {{es}} cluster with a simple {{kib}} instance managed by ECK, and how to customize a deployment using the eck-stack Helm chart’s values. ```sh -# Install an eck-managed {{es}} and {{kib}} using the default values, which deploys the quickstart examples. +# Install an eck-managed Elasticsearch and Kibana using the default values, which deploys the quickstart examples. helm install es-kb-quickstart elastic/eck-stack -n elastic-stack --create-namespace ``` -### Customize {{es}} and {{kib}} installation with example values [k8s-eck-stack-helm-customize] +### Customize {{es}} and {{kib}} installation with example values [k8s-eck-stack-helm-customize] You can find example Helm values files for deploying and managing more advanced {{es}} and {{kib}} setups [in the project repository](https://github.com/elastic/cloud-on-k8s/tree/2.16/deploy/eck-stack/examples). To use one or more of these example configurations, use the `--values` Helm option, as seen in the following section. ```sh -# Install an eck-managed {{es}} and {{kib}} using the {{es}} node roles example with hot, warm, and cold data tiers, and the {{kib}} example customizing the http service. +# Install an eck-managed Elasticsearch and Kibana using the Elasticsearch node roles example with hot, warm, and cold data tiers, and the Kibana example customizing the http service. 
helm install es-quickstart elastic/eck-stack -n elastic-stack --create-namespace \ --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/deploy/eck-stack/examples/elasticsearch/hot-warm-cold.yaml \ --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/deploy/eck-stack/examples/kibana/http-configuration.yaml ``` -## Fleet Server with Elastic Agents along with {{es}} and {{kib}} [k8s-install-fleet-agent-elasticsearch-kibana-helm] +## Fleet Server with Elastic Agents along with {{es}} and {{kib}} [k8s-install-fleet-agent-elasticsearch-kibana-helm] The following section builds upon the previous section, and allows installing Fleet Server, and Fleet-managed Elastic Agents along with {{es}} and {{kib}}. ```sh -# Install an eck-managed {{es}}, {{kib}}, Fleet Server, and managed Elastic Agents using custom values. +# Install an eck-managed Elasticsearch, Kibana, Fleet Server, and managed Elastic Agents using custom values. helm install eck-stack-with-fleet elastic/eck-stack \ --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/deploy/eck-stack/examples/agent/fleet-agents.yaml -n elastic-stack ``` -## Logstash along with {{es}}, {{kib}} and Beats [k8s-install-logstash-elasticsearch-kibana-helm] +## Logstash along with {{es}}, {{kib}} and Beats [k8s-install-logstash-elasticsearch-kibana-helm] The following section builds upon the previous sections, and allows installing Logstash along with {{es}}, {{kib}} and Beats. ```sh -# Install an eck-managed {{es}}, {{kib}}, Beats and Logstash using custom values. +# Install an eck-managed Elasticsearch, Kibana, Beats and Logstash using custom values. helm install eck-stack-with-logstash elastic/eck-stack \ --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/deploy/eck-stack/examples/logstash/basic-eck.yaml -n elastic-stack ``` -## Standalone Elastic APM Server along with {{es}} and {{kib}} [k8s-install-apm-server-elasticsearch-kibana-helm] +## Standalone Elastic APM Server along with {{es}} and {{kib}} [k8s-install-apm-server-elasticsearch-kibana-helm] The following section builds upon the previous sections, and allows installing a standalone Elastic APM Server along with {{es}} and {{kib}}. ```sh -# Install an eck-managed {{es}}, {{kib}}, and standalone APM Server using custom values. +# Install an eck-managed Elasticsearch, Kibana, and standalone APM Server using custom values. helm install eck-stack-with-apm-server elastic/eck-stack \ --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/deploy/eck-stack/examples/apm-server/basic.yaml -n elastic-stack ``` @@ -103,10 +103,10 @@ helm install es-quickstart elastic/eck-stack -n elastic-stack --create-namespace helm install es-quickstart elastic/eck-elasticsearch -n elastic-stack --create-namespace ``` -## Adding Ingress to the {{stack}} [k8s-eck-stack-ingress] +## Adding Ingress to the {{stack}} [k8s-eck-stack-ingress] :::{admonition} Support scope for Ingress Controllers -[Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) is a standard Kubernetes concept. While ECK-managed workloads can be publicly exposed using ingress resources, and we provide [example configurations](/deploy-manage/deploy/cloud-on-k8s/recipes.md), setting up an Ingress controller requires in-house Kubernetes expertise. +[Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) is a standard Kubernetes concept. 
While ECK-managed workloads can be publicly exposed using ingress resources, and we provide [example configurations](/deploy-manage/deploy/cloud-on-k8s/recipes.md), setting up an Ingress controller requires in-house Kubernetes expertise. If ingress configuration is challenging or unsupported in your environment, consider using standard `LoadBalancer` services as a simpler alternative. ::: diff --git a/deploy-manage/deploy/cloud-on-k8s/node-configuration.md b/deploy-manage/deploy/cloud-on-k8s/node-configuration.md index bc45f32765..c630b9161d 100644 --- a/deploy-manage/deploy/cloud-on-k8s/node-configuration.md +++ b/deploy-manage/deploy/cloud-on-k8s/node-configuration.md @@ -18,14 +18,14 @@ spec: - name: masters count: 3 config: - # On {{es}} versions before 7.9.0, replace the node.roles configuration with the following: + # On Elasticsearch versions before 7.9.0, replace the node.roles configuration with the following: # node.master: true node.roles: ["master"] xpack.ml.enabled: true - name: data count: 10 config: - # On {{es}} versions before 7.9.0, replace the node.roles configuration with the following: + # On Elasticsearch versions before 7.9.0, replace the node.roles configuration with the following: # node.master: false # node.data: true # node.ingest: true @@ -34,7 +34,7 @@ spec: node.roles: ["data", "ingest", "ml", "transform"] ``` -::::{warning} +::::{warning} ECK parses {{es}} configuration and normalizes it to YAML. Consequently, some {{es}} configuration schema are impossible to express with ECK and, therefore, must be set using [dynamic cluster settings](/deploy-manage/deploy/self-managed/configure-elasticsearch.md#cluster-setting-types). For example: ```yaml spec: diff --git a/deploy-manage/deploy/cloud-on-k8s/restrict-cross-namespace-resource-associations.md b/deploy-manage/deploy/cloud-on-k8s/restrict-cross-namespace-resource-associations.md index 3e5b1476c9..8efeb620d0 100644 --- a/deploy-manage/deploy/cloud-on-k8s/restrict-cross-namespace-resource-associations.md +++ b/deploy-manage/deploy/cloud-on-k8s/restrict-cross-namespace-resource-associations.md @@ -16,7 +16,7 @@ The enforcement of access control rules for cross-namespace associations is disa Associations are allowed as long as the `ServiceAccount` used by the associated resource can execute HTTP `GET` requests against the referenced {{es}} object. -::::{important} +::::{important} ECK automatically removes any associations that do not have the correct access rights. If you have existing associations, do not enable this feature without creating the required `Roles` and `RoleBindings` as described in the following sections. :::: @@ -74,14 +74,14 @@ To enable the restriction of cross-namespace associations, start the operator wi elasticsearchRef: name: "elasticsearch-sample" namespace: "elasticsearch-ns" - # Service account used by this resource to get access to an {{es}} cluster + # Service account used by this resource to get access to an Elasticsearch cluster serviceAccountName: associated-resource-sa ``` In this example, `associated-resource` can be of any `Kind` that requires an association to be created, for example `Kibana` or `ApmServer`. You can find [a complete example in the ECK GitHub repository](https://github.com/elastic/cloud-on-k8s/blob/2.16/config/recipes/associations-rbac/apm_es_kibana_rbac.yaml). 
-::::{note} +::::{note} If the `serviceAccountName` is not set, ECK uses the default service account assigned to the pod by the [Service Account Admission Controller](https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#service-account-admission-controller). :::: diff --git a/deploy-manage/deploy/cloud-on-k8s/virtual-memory.md b/deploy-manage/deploy/cloud-on-k8s/virtual-memory.md index 1a588a1d5b..3a0ae3350a 100644 --- a/deploy-manage/deploy/cloud-on-k8s/virtual-memory.md +++ b/deploy-manage/deploy/cloud-on-k8s/virtual-memory.md @@ -111,7 +111,7 @@ spec: # node.store.allow_mmap: false podTemplate: spec: - # This init container ensures that the `max_map_count` setting has been applied before starting {{es}}. + # This init container ensures that the `max_map_count` setting has been applied before starting Elasticsearch. # This is not required, but is encouraged when using the previous Daemonset to set max_map_count. # Do not use this if setting config.node.store.allow_mmap: false initContainers: diff --git a/deploy-manage/deploy/elastic-cloud/azure-native-isv-service.md b/deploy-manage/deploy/elastic-cloud/azure-native-isv-service.md index b96cc249d0..0fe3408186 100644 --- a/deploy-manage/deploy/elastic-cloud/azure-native-isv-service.md +++ b/deploy-manage/deploy/elastic-cloud/azure-native-isv-service.md @@ -25,7 +25,7 @@ The {{ecloud}} Azure Native ISV Service allows you to deploy managed instances o ::::{tip} -The full product name in the Azure integrated marketplace is `{{ecloud}} (Elasticsearch) - An Azure Native ISV Service`. +The full product name in the Azure integrated marketplace is _{{ecloud}} (Elasticsearch) - An Azure Native ISV Service_. :::: diff --git a/deploy-manage/deploy/self-managed/_snippets/ca-fingerprint.md b/deploy-manage/deploy/self-managed/_snippets/ca-fingerprint.md index 413d05829d..493e2d5fff 100644 --- a/deploy-manage/deploy/self-managed/_snippets/ca-fingerprint.md +++ b/deploy-manage/deploy/self-managed/_snippets/ca-fingerprint.md @@ -6,9 +6,9 @@ If the auto-configuration process already completed, you can still obtain the fi openssl x509 -fingerprint -sha256 -in config/certs/http_ca.crt ``` -The command returns the security certificate, including the fingerprint. The `issuer` should be `{{es}} security auto-configuration HTTP CA`. +The command returns the security certificate, including the fingerprint. The `issuer` should be `Elasticsearch security auto-configuration HTTP CA`. ```sh -issuer= /CN={{es}} security auto-configuration HTTP CA +issuer= /CN=Elasticsearch security auto-configuration HTTP CA SHA256 Fingerprint= ``` \ No newline at end of file diff --git a/deploy-manage/deploy/self-managed/_snippets/enroll-systemd.md b/deploy-manage/deploy/self-managed/_snippets/enroll-systemd.md index e0cbb45659..1b8d52e21d 100644 --- a/deploy-manage/deploy/self-managed/_snippets/enroll-systemd.md +++ b/deploy-manage/deploy/self-managed/_snippets/enroll-systemd.md @@ -9,11 +9,11 @@ * A host address to access {{kib}} * A six digit verification code - + For example: ```sh - {{kib}} has not been configured. + Kibana has not been configured. Go to http://:5601/?code= to get started. 
``` diff --git a/deploy-manage/deploy/self-managed/_snippets/systemd-startup-timeout.md b/deploy-manage/deploy/self-managed/_snippets/systemd-startup-timeout.md index 9c58340ef7..33f3baafe0 100644 --- a/deploy-manage/deploy/self-managed/_snippets/systemd-startup-timeout.md +++ b/deploy-manage/deploy/self-managed/_snippets/systemd-startup-timeout.md @@ -14,11 +14,11 @@ Versions of `systemd` prior to 238 do not support the timeout extension mechanis However the `systemd` logs will report that the startup timed out: ```text -Jan 31 01:22:30 debian systemd[1]: Starting {{es}}... +Jan 31 01:22:30 debian systemd[1]: Starting Elasticsearch... Jan 31 01:37:15 debian systemd[1]: elasticsearch.service: Start operation timed out. Terminating. Jan 31 01:37:15 debian systemd[1]: elasticsearch.service: Main process exited, code=killed, status=15/TERM Jan 31 01:37:15 debian systemd[1]: elasticsearch.service: Failed with result 'timeout'. -Jan 31 01:37:15 debian systemd[1]: Failed to start {{es}}. +Jan 31 01:37:15 debian systemd[1]: Failed to start Elasticsearch. ``` To avoid this, upgrade your `systemd` to at least version 238. You can also temporarily work around the problem by extending the `TimeoutStartSec` parameter. diff --git a/deploy-manage/deploy/self-managed/install-elasticsearch-docker-compose.md b/deploy-manage/deploy/self-managed/install-elasticsearch-docker-compose.md index 89a0a61427..f443916a38 100644 --- a/deploy-manage/deploy/self-managed/install-elasticsearch-docker-compose.md +++ b/deploy-manage/deploy/self-managed/install-elasticsearch-docker-compose.md @@ -54,7 +54,7 @@ Use Docker Compose to start a three-node {{es}} cluster with {{kib}}. Docker Com ```txt ... - # Port to expose {{es}} HTTP API to the host + # Port to expose Elasticsearch HTTP API to the host #ES_PORT=9200 ES_PORT=127.0.0.1:9200 ... diff --git a/deploy-manage/deploy/self-managed/install-elasticsearch-with-rpm.md b/deploy-manage/deploy/self-managed/install-elasticsearch-with-rpm.md index 0d95ebc2c4..0f1234553a 100644 --- a/deploy-manage/deploy/self-managed/install-elasticsearch-with-rpm.md +++ b/deploy-manage/deploy/self-managed/install-elasticsearch-with-rpm.md @@ -55,7 +55,7 @@ You have several options for installing the {{es}} RPM package: Create a file called `elasticsearch.repo` in the `/etc/yum.repos.d/` directory for RedHat based distributions, or in the `/etc/zypp/repos.d/` directory for OpenSuSE based distributions, containing: -```ini +```ini subs=true [elasticsearch] name={{es}} repository for 9.x packages baseurl=https://artifacts.elastic.co/packages/9.x/yum diff --git a/deploy-manage/deploy/self-managed/install-elasticsearch-with-zip-on-windows.md b/deploy-manage/deploy/self-managed/install-elasticsearch-with-zip-on-windows.md index 1989855c74..68134e2ebf 100644 --- a/deploy-manage/deploy/self-managed/install-elasticsearch-with-zip-on-windows.md +++ b/deploy-manage/deploy/self-managed/install-elasticsearch-with-zip-on-windows.md @@ -110,8 +110,8 @@ You can install {{es}} as a service that runs in the background or starts automa ```sh subs=true C:\Program Files\elasticsearch-{{stack-version}}\bin>elasticsearch-service.bat install ``` - - Response: + + Response: ``` Installing service : "elasticsearch-service-x64" Using ES_JAVA_HOME (64-bit): "C:\jvm\jdk1.8" @@ -183,8 +183,8 @@ The {{es}} service can be configured prior to installation by setting the follow | `SERVICE_ID` | A unique identifier for the service. Useful if installing multiple instances on the same machine. 
Defaults to `elasticsearch-service-x64`. | | `SERVICE_USERNAME` | The user to run as, defaults to the local system account. | | `SERVICE_PASSWORD` | The password for the user specified in `%SERVICE_USERNAME%`. | -| `SERVICE_DISPLAY_NAME` | The name of the service. Defaults to `{{es}} %SERVICE_ID%`. | -| `SERVICE_DESCRIPTION` | The description of the service. Defaults to `{{es}} Windows Service - https://elastic.co`. | +| `SERVICE_DISPLAY_NAME` | The name of the service. Defaults to `Elasticsearch %SERVICE_ID%`. | +| `SERVICE_DESCRIPTION` | The description of the service. Defaults to `Elasticsearch Windows Service - https://elastic.co`. | | `ES_JAVA_HOME` | The installation directory of the desired JVM to run the service under. | | `SERVICE_LOG_DIR` | Service log directory, defaults to `%ES_HOME%\logs`. Note that this does not control the path for the {{es}} logs; the path for these is set via the setting `path.logs` in the `elasticsearch.yml` configuration file, or on the command line. | | `ES_PATH_CONF` | Configuration file directory (which needs to include `elasticsearch.yml`, `jvm.options`, and `log4j2.properties` files), defaults to `%ES_HOME%\config`. | diff --git a/deploy-manage/deploy/self-managed/install-kibana-on-windows.md b/deploy-manage/deploy/self-managed/install-kibana-on-windows.md index e1d71480a0..ac55f2e036 100644 --- a/deploy-manage/deploy/self-managed/install-kibana-on-windows.md +++ b/deploy-manage/deploy/self-managed/install-kibana-on-windows.md @@ -71,8 +71,8 @@ This is very convenient because you don’t have to create any directories to st | home | {{kib}} home directory or `$KIBANA_HOME` | Directory created by unpacking the archive | | | bin | Binary scripts including `kibana` to start the {{kib}} server and `kibana-plugin` to install plugins | `$KIBANA_HOME\bin` | | | config | Configuration files including `kibana.yml` | `$KIBANA_HOME\config` | `[KBN_PATH_CONF](configure-kibana.md)` | -| | data | `The location of the data files written to disk by {{kib}} and its plugins` | `$KIBANA_HOME\data` | -| | plugins | `Plugin files location. Each plugin will be contained in a subdirectory.` | `$KIBANA_HOME\plugins` | +| | data | The location of the data files written to disk by {{kib}} and its plugins | `$KIBANA_HOME\data` | +| | plugins | Plugin files location. Each plugin will be contained in a subdirectory. | `$KIBANA_HOME\plugins` | ## Next steps diff --git a/deploy-manage/deploy/self-managed/install-kibana-with-rpm.md b/deploy-manage/deploy/self-managed/install-kibana-with-rpm.md index 3beec4b484..6e097a77c1 100644 --- a/deploy-manage/deploy/self-managed/install-kibana-with-rpm.md +++ b/deploy-manage/deploy/self-managed/install-kibana-with-rpm.md @@ -43,7 +43,7 @@ You have the following options for installing the {{es}} RPM package: Create a file called `kibana.repo` in the `/etc/yum.repos.d/` directory for RedHat based distributions, or in the `/etc/zypp/repos.d/` directory for OpenSuSE based distributions, containing: -```sh +```sh subs=true [kibana-9.X] name={{kib}} repository for 9.x packages baseurl=https://artifacts.elastic.co/packages/9.x/yum diff --git a/deploy-manage/kibana-reporting-configuration.md b/deploy-manage/kibana-reporting-configuration.md index 4864f90e44..8a507df5fe 100644 --- a/deploy-manage/kibana-reporting-configuration.md +++ b/deploy-manage/kibana-reporting-configuration.md @@ -99,16 +99,15 @@ When security is enabled, you grant users access to {{report-features}} with [{{ If you have a Basic license, sub-feature privileges are unavailable. 
::: + :::{note} + If the **Reporting** options for application features are unavailable, and the cluster license is higher than Basic, contact your administrator. + ::: :::{image} /deploy-manage/images/kibana-kibana-privileges-with-reporting.png :alt: {{kib}} privileges with Reporting options, Gold or higher license :screenshot: ::: - :::{note} - If the **Reporting** options for application features are unavailable, and the cluster license is higher than Basic, contact your administrator. - ::: - 5. Click **Add {{kib}} privilege**. 4. Click **Create role**. diff --git a/deploy-manage/license/manage-your-license-in-eck.md b/deploy-manage/license/manage-your-license-in-eck.md index a434c085de..9751cab516 100644 --- a/deploy-manage/license/manage-your-license-in-eck.md +++ b/deploy-manage/license/manage-your-license-in-eck.md @@ -151,10 +151,10 @@ elastic_licensing_enterprise_resource_units_total{license_level="enterprise"} 1 # HELP elastic_licensing_memory_gibibytes_apm Memory used by APM server in GiB # TYPE elastic_licensing_memory_gibibytes_apm gauge elastic_licensing_memory_gibibytes_apm{license_level="enterprise"} 0.5 -# HELP elastic_licensing_memory_gibibytes_elasticsearch Memory used by {{es}} in GiB +# HELP elastic_licensing_memory_gibibytes_elasticsearch Memory used by Elasticsearch in GiB # TYPE elastic_licensing_memory_gibibytes_elasticsearch gauge elastic_licensing_memory_gibibytes_elasticsearch{license_level="enterprise"} 18 -# HELP elastic_licensing_memory_gibibytes_kibana Memory used by {{kib}} in GiB +# HELP elastic_licensing_memory_gibibytes_kibana Memory used by Kibana in GiB # TYPE elastic_licensing_memory_gibibytes_kibana gauge elastic_licensing_memory_gibibytes_kibana{license_level="enterprise"} 1 # HELP elastic_licensing_memory_gibibytes_logstash Memory used by Logstash in GiB diff --git a/deploy-manage/monitor/kibana-task-manager-health-monitoring.md b/deploy-manage/monitor/kibana-task-manager-health-monitoring.md index dce450e4a6..267f76d5be 100644 --- a/deploy-manage/monitor/kibana-task-manager-health-monitoring.md +++ b/deploy-manage/monitor/kibana-task-manager-health-monitoring.md @@ -85,7 +85,7 @@ By default, the health API runs at a regular cadence, and each time it runs, it This message looks like: -```txt +```txt subs=true Detected potential performance issue with Task Manager. Set 'xpack.task_manager.monitored_stats_health_verbose_log.enabled: true' in your {{kib}}.yml to enable debug logging` ``` diff --git a/deploy-manage/monitor/monitoring-data/configure-stack-monitoring-alerts.md b/deploy-manage/monitor/monitoring-data/configure-stack-monitoring-alerts.md index d24a87f331..549819e86c 100644 --- a/deploy-manage/monitor/monitoring-data/configure-stack-monitoring-alerts.md +++ b/deploy-manage/monitor/monitoring-data/configure-stack-monitoring-alerts.md @@ -43,7 +43,7 @@ To review and modify existing **{{stack-monitor-app}}** rules, click **Enter set Alternatively, to manage all rules, including create and delete functionality go to **{{stack-manage-app}} > {{rules-ui}}**. ::: -1. On any card showing available alerts, select the **alerts** indicator. Use the menu to select the type of alert for which you’d like to be notified. +1. On any card showing available alerts, select the **alerts** indicator. Use the menu to select the type of alert for which you’d like to be notified. 2. In the **Edit rule** pane, set how often to check for the condition and how often to send notifications. 3. 
In the **Actions** section, select the connector that you'd like to use for notifications. 4. Configure the connector message contents and select **Save**. @@ -52,70 +52,70 @@ Alternatively, to manage all rules, including create and delete functionality go The following rules are [preconfigured](#_create_default_rules) for stack monitoring. -:::{dropdown} CPU usage threshold +:::{dropdown} CPU usage threshold $$$kibana-alerts-cpu-threshold$$$ -This rule checks for {{es}} nodes that run a consistently high CPU load. +This rule checks for {{es}} nodes that run a consistently high CPU load. By default, the condition is set at 85% or more averaged over the last 5 minutes. The default rule checks on a schedule time of 1 minute with a re-notify interval of 1 day. ::: -:::{dropdown} Disk usage threshold +:::{dropdown} Disk usage threshold $$$kibana-alerts-disk-usage-threshold$$$ -This rule checks for {{es}} nodes that are nearly at disk capacity. +This rule checks for {{es}} nodes that are nearly at disk capacity. By default, the condition is set at 80% or more averaged over the last 5 minutes. The default rule checks on a schedule time of 1 minute with a re-notify interval of 1 day. ::: -:::{dropdown} JVM memory threshold +:::{dropdown} JVM memory threshold $$$kibana-alerts-jvm-memory-threshold$$$ -This rule checks for {{es}} nodes that use a high amount of JVM memory. +This rule checks for {{es}} nodes that use a high amount of JVM memory. By default, the condition is set at 85% or more averaged over the last 5 minutes. The default rule checks on a schedule time of 1 minute with a re-notify interval of 1 day. ::: -:::{dropdown} Missing monitoring data +:::{dropdown} Missing monitoring data $$$kibana-alerts-missing-monitoring-data$$$ -This rule checks for {{es}} nodes that stop sending monitoring data. +This rule checks for {{es}} nodes that stop sending monitoring data. By default, the condition is set to missing for 15 minutes looking back 1 day. The default rule checks on a schedule time of 1 minute with a re-notify interval of 6 hours. ::: -:::{dropdown} Thread pool rejections (search/write) +:::{dropdown} Thread pool rejections (search/write) $$$kibana-alerts-thread-pool-rejections$$$ -This rule checks for {{es}} nodes that experience thread pool rejections. +This rule checks for {{es}} nodes that experience thread pool rejections. By default, the condition is set at 300 or more over the last 5 minutes. The default rule checks on a schedule time of 1 minute with a re-notify interval of 1 day. Thresholds can be set independently for `search` and `write` type rejections. ::: -:::{dropdown} CCR read exceptions +:::{dropdown} CCR read exceptions $$$kibana-alerts-ccr-read-exceptions$$$ -This rule checks for read exceptions on any of the replicated {{es}} clusters. +This rule checks for read exceptions on any of the replicated {{es}} clusters. The condition is met if 1 or more read exceptions are detected in the last hour. The default rule checks on a schedule time of 1 minute with a re-notify interval of 6 hours. ::: -:::{dropdown} Large shard size +:::{dropdown} Large shard size $$$kibana-alerts-large-shard-size$$$ -This rule checks for a large average shard size (across associated primaries) on any of the specified data views in an {{es}} cluster. +This rule checks for a large average shard size (across associated primaries) on any of the specified data views in an {{es}} cluster. The condition is met if an index’s average shard size is 55gb or higher in the last 5 minutes. 
The default rule matches the pattern of `-.*` by running checks on a schedule time of 1 minute with a re-notify interval of 12 hours. ::: -::::{dropdown} Cluster alerting +::::{dropdown} Cluster alerting $$$kibana-alerts-cluster-alerts$$$ @@ -142,6 +142,6 @@ An action is triggered if any of the following conditions are met within the las The 60-day and 30-day thresholds are skipped for Trial licenses, which are only valid for 30 days. :::{note} -For the `{{es}} nodes changed` alert, if you have only one master node in your cluster, during the master node vacate no notification will be sent. {{kib}} needs to communicate with the master node in order to send a notification. One way to avoid this is by shipping your deployment metrics to a dedicated monitoring cluster. +For the `Elasticsearch nodes changed` alert, if you have only one master node in your cluster, during the master node vacate no notification will be sent. {{kib}} needs to communicate with the master node in order to send a notification. One way to avoid this is by shipping your deployment metrics to a dedicated monitoring cluster. ::: :::: \ No newline at end of file diff --git a/deploy-manage/production-guidance/optimize-performance/size-shards.md b/deploy-manage/production-guidance/optimize-performance/size-shards.md index 47c7c8137a..616e7b12b0 100644 --- a/deploy-manage/production-guidance/optimize-performance/size-shards.md +++ b/deploy-manage/production-guidance/optimize-performance/size-shards.md @@ -29,13 +29,13 @@ To avoid either of these states, implement the following guidelines: ### Shard distribution guidelines -To ensure that each node is working optimally, distribute shards evenly across nodes. Uneven distribution can cause some nodes to work harder than others, leading to performance degradation and instability. +To ensure that each node is working optimally, distribute shards evenly across nodes. Uneven distribution can cause some nodes to work harder than others, leading to performance degradation and instability. While {{es}} automatically balances shards, you need to configure indices with an appropriate number of shards and replicas to allow for even distribution across nodes. -If you are using [data streams](/manage-data/data-store/data-streams.md), each data stream is backed by a sequence of indices, each index potentially having multiple shards. +If you are using [data streams](/manage-data/data-store/data-streams.md), each data stream is backed by a sequence of indices, each index potentially having multiple shards. -Despite these general guidelines, it is good to develop a tailored [sharding strategy](#create-a-sharding-strategy) that considers your specific infrastructure, use case, and performance expectations. +Despite these general guidelines, it is good to develop a tailored [sharding strategy](#create-a-sharding-strategy) that considers your specific infrastructure, use case, and performance expectations. ## Create a sharding strategy [create-a-sharding-strategy] @@ -377,7 +377,7 @@ See this [fixing "max shards open" video](https://www.youtube.com/watch?v=tZKbDe Each {{es}} shard is a separate Lucene index, so it shares Lucene’s [`MAX_DOC` limit](https://github.com/apache/lucene/issues/5176) of having at most 2,147,483,519 (`(2^31)-129`) documents. This per-shard limit applies to the sum of `docs.count` plus `docs.deleted` as reported by the [Index stats API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-stats). 
Exceeding this limit will result in errors like the following: -```txt +```txt subs=true {{es}} exception [type=illegal_argument_exception, reason=Number of documents in the shard cannot exceed [2147483519]] ``` diff --git a/deploy-manage/remote-clusters/eck-remote-clusters.md b/deploy-manage/remote-clusters/eck-remote-clusters.md index f28eddc1ae..03b071f79c 100644 --- a/deploy-manage/remote-clusters/eck-remote-clusters.md +++ b/deploy-manage/remote-clusters/eck-remote-clusters.md @@ -39,7 +39,7 @@ To enable the API key security model you must first enable the remote cluster se ```yaml apiVersion: elasticsearch.k8s.elastic.co/v1 -kind: {{es}} +kind: Elasticsearch metadata: name: cluster-two namespace: ns-two @@ -63,7 +63,7 @@ Permissions have to be included under the `apiKey` field. The API model of the { ```yaml apiVersion: elasticsearch.k8s.elastic.co/v1 -kind: {{es}} +kind: Elasticsearch metadata: name: cluster-one namespace: ns-one @@ -99,7 +99,7 @@ The following example describes how to configure `cluster-two` as a remote clust ```yaml apiVersion: elasticsearch.k8s.elastic.co/v1 -kind: {{es}} +kind: Elasticsearch metadata: name: cluster-one namespace: ns-one @@ -172,7 +172,7 @@ If `cluster-two` is also managed by an ECK instance, proceed as follows: ```yaml apiVersion: elasticsearch.k8s.elastic.co/v1 - kind: {{es}} + kind: Elasticsearch metadata: name: cluster-two spec: @@ -195,7 +195,7 @@ Expose the transport layer of `cluster-one`. ```yaml apiVersion: elasticsearch.k8s.elastic.co/v1 -kind: {{es}} +kind: Elasticsearch metadata: name: cluster-one spec: diff --git a/deploy-manage/security/aws-privatelink-traffic-filters.md b/deploy-manage/security/aws-privatelink-traffic-filters.md index 07e79ebd65..dcba8d29a5 100644 --- a/deploy-manage/security/aws-privatelink-traffic-filters.md +++ b/deploy-manage/security/aws-privatelink-traffic-filters.md @@ -279,7 +279,7 @@ Response: ::::{note} If you are using AWS PrivateLink together with Fleet, and enrolling the Elastic Agent with a PrivateLink URL, you need to configure Fleet Server to use and propagate the PrivateLink URL by updating the **Fleet Server hosts** field in the **Fleet settings** section of {{kib}}. Otherwise, Elastic Agent will reset to use a default address instead of the PrivateLink URL. The URL needs to follow this pattern: `https://.fleet.:443`. -Similarly, the {{es}} host needs to be updated to propagate the Privatelink URL. The {{es}} URL needs to follow this pattern: `https://<{{es}} cluster ID/deployment alias>.es.:443`. +Similarly, the {{es}} host needs to be updated to propagate the Privatelink URL. The {{es}} URL needs to follow this pattern: `https://.es.:443`. The settings `xpack.fleet.agents.fleet_server.hosts` and `xpack.fleet.outputs` that are needed to enable this configuration in {{kib}} are currently available on-prem only, and not in the [{{kib}} settings in {{ecloud}}](/deploy-manage/deploy/elastic-cloud/edit-stack-settings.md). 
diff --git a/deploy-manage/security/azure-private-link-traffic-filters.md b/deploy-manage/security/azure-private-link-traffic-filters.md index f17c2b0e2e..346bee2afc 100644 --- a/deploy-manage/security/azure-private-link-traffic-filters.md +++ b/deploy-manage/security/azure-private-link-traffic-filters.md @@ -262,7 +262,7 @@ Response: ::::{note} If you are using Azure Private Link together with Fleet, and enrolling the Elastic Agent with a Private Link URL, you need to configure Fleet Server to use and propagate the Private Link URL by updating the **Fleet Server hosts** field in the **Fleet settings** section of {{kib}}. Otherwise, Elastic Agent will reset to use a default address instead of the Private Link URL. The URL needs to follow this pattern: `https://.fleet.:443`. -Similarly, the {{es}} host needs to be updated to propagate the Private Link URL. The {{es}} URL needs to follow this pattern: `https://<{{es}} cluster ID/deployment alias>.es.:443`. +Similarly, the {{es}} host needs to be updated to propagate the Private Link URL. The {{es}} URL needs to follow this pattern: `https://.es.:443`. :::: diff --git a/deploy-manage/security/gcp-private-service-connect-traffic-filters.md b/deploy-manage/security/gcp-private-service-connect-traffic-filters.md index 8b6035ce3c..e46e6b92a9 100644 --- a/deploy-manage/security/gcp-private-service-connect-traffic-filters.md +++ b/deploy-manage/security/gcp-private-service-connect-traffic-filters.md @@ -224,7 +224,7 @@ Response: ::::{note} If you are using Private Service Connect together with Fleet, and enrolling the Elastic Agent with a Private Service Connect URL, you need to configure Fleet Server to use and propagate the Private Service Connect URL by updating the **Fleet Server hosts** field in the **Fleet settings** section of {{kib}}. Otherwise, Elastic Agent will reset to use a default address instead of the Private Service Connect URL. The URL needs to follow this pattern: `https://.fleet.:443`. -Similarly, the {{es}} host needs to be updated to propagate the Private Service Connect URL. The {{es}} URL needs to follow this pattern: `https://<{{es}} cluster ID/deployment alias>.es.:443`. +Similarly, the {{es}} host needs to be updated to propagate the Private Service Connect URL. The {{es}} URL needs to follow this pattern: `https://.es.:443`. The settings `xpack.fleet.agents.fleet_server.hosts` and `xpack.fleet.outputs` that are needed to enable this configuration in {{kib}} are currently available on-prem only, and not in the [{{kib}} settings in {{ecloud}}](/deploy-manage/deploy/elastic-cloud/edit-stack-settings.md). diff --git a/deploy-manage/security/k8s-network-policies.md b/deploy-manage/security/k8s-network-policies.md index a6287426f2..8baf7d2661 100644 --- a/deploy-manage/security/k8s-network-policies.md +++ b/deploy-manage/security/k8s-network-policies.md @@ -19,7 +19,7 @@ Note that network policies alone are not sufficient for security. You should com {{eck}} also supports [IP traffic filtering](/deploy-manage/security/ip-filtering-basic.md). ::: -::::{note} +::::{note} There are several efforts to support multi-tenancy on Kubernetes, including the [official working group for multi-tenancy](https://github.com/kubernetes-sigs/multi-tenancy) and community extensions such as [loft](https://loft.sh) and [kiosk](https://github.com/kiosk-sh/kiosk), that can make configuration and management easier. 
You might need to employ network policies such as the ones described in this section to have fine-grained control over {{stack}} applications deployed by your tenants. :::: @@ -44,7 +44,7 @@ The operator Pod label depends on how the operator has been installed. Check the | YAML manifests | `control-plane: elastic-operator`<br>
| | Helm Charts | `app.kubernetes.io/name: elastic-operator`
| -::::{note} +::::{note} The examples in this section assume that the ECK operator has been installed using the Helm chart. :::: @@ -52,11 +52,11 @@ The examples in this section assume that the ECK operator has been installed usi Run `kubectl get endpoints kubernetes -n default` to obtain the API server IP address for your cluster. -::::{note} +::::{note} The following examples assume that the Kubernetes API server IP address is `10.0.0.1`. :::: -## Isolating the operator [k8s-network-policies-operator-isolation] +## Isolating the operator [k8s-network-policies-operator-isolation] The minimal set of permissions required are as follows: @@ -109,7 +109,7 @@ spec: ``` -## Isolating {{es}} [k8s-network-policies-elasticsearch-isolation] +## Isolating {{es}} [k8s-network-policies-elasticsearch-isolation] | | | | --- | --- | @@ -171,7 +171,7 @@ spec: ``` -## Isolating {{kib}} [k8s-network-policies-kibana-isolation] +## Isolating {{kib}} [k8s-network-policies-kibana-isolation] | | | | --- | --- | @@ -201,7 +201,7 @@ spec: - ports: - port: 53 protocol: UDP - # [Optional] If Agent is deployed, this is to allow {{kib}} to access the Elastic Package Registry (https://epr.elastic.co). + # [Optional] If Agent is deployed, this is to allow Kibana to access the Elastic Package Registry (https://epr.elastic.co). # - port: 443 # protocol: TCP ingress: @@ -222,7 +222,7 @@ spec: ``` -## Isolating APM Server [k8s-network-policies-apm-server-isolation] +## Isolating APM Server [k8s-network-policies-apm-server-isolation] | | | | --- | --- | @@ -277,9 +277,9 @@ spec: common.k8s.elastic.co/type: apm-server ``` -## Isolating Beats [k8s-network-policies-beats-isolation] +## Isolating Beats [k8s-network-policies-beats-isolation] -::::{note} +::::{note} Some {{beats}} may require additional access rules than what is listed here. For example, {{heartbeat}} will require a rule to allow access to the endpoint it is monitoring. :::: @@ -325,9 +325,9 @@ spec: ``` -## Isolating Elastic Agent and Fleet [k8s-network-policies-agent-isolation] +## Isolating Elastic Agent and Fleet [k8s-network-policies-agent-isolation] -::::{note} +::::{note} Some {{agent}} policies may require additional access rules other than those listed here. :::: @@ -396,9 +396,9 @@ spec: common.k8s.elastic.co/type: agent ``` -## Isolating Logstash [k8s-network-policies-logstash-isolation] +## Isolating Logstash [k8s-network-policies-logstash-isolation] -::::{note} +::::{note} {{ls}} may require additional access rules than those listed here, depending on plugin usage. 
:::: diff --git a/deploy-manage/tools/snapshot-and-restore/cloud-on-k8s.md b/deploy-manage/tools/snapshot-and-restore/cloud-on-k8s.md index e78b7fb830..493c06483f 100644 --- a/deploy-manage/tools/snapshot-and-restore/cloud-on-k8s.md +++ b/deploy-manage/tools/snapshot-and-restore/cloud-on-k8s.md @@ -79,7 +79,7 @@ Using ECK, you can automatically inject secure settings into a cluster node by p name: elasticsearch-sample spec: version: 8.16.1 - # Inject secure settings into {{es}} nodes from a k8s secret reference + # Inject secure settings into Elasticsearch nodes from a k8s secret reference secureSettings: - secretName: gcs-credentials ``` diff --git a/deploy-manage/tools/snapshot-and-restore/create-snapshots.md b/deploy-manage/tools/snapshot-and-restore/create-snapshots.md index 707546b6d4..9071f8f244 100644 --- a/deploy-manage/tools/snapshot-and-restore/create-snapshots.md +++ b/deploy-manage/tools/snapshot-and-restore/create-snapshots.md @@ -282,7 +282,7 @@ The API returns: }, { "name": "kibana", - "description": "Manages {{kib}} configuration and reports" + "description": "Manages Kibana configuration and reports" }, { "name": "security", diff --git a/deploy-manage/upgrade/orchestrator/upgrade-cloud-on-k8s.md b/deploy-manage/upgrade/orchestrator/upgrade-cloud-on-k8s.md index 8711c98f5b..bdc59573ca 100644 --- a/deploy-manage/upgrade/orchestrator/upgrade-cloud-on-k8s.md +++ b/deploy-manage/upgrade/orchestrator/upgrade-cloud-on-k8s.md @@ -3,7 +3,7 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-upgrading-eck.html applies_to: deployment: - eck: ga 3.0.0 + eck: ga 3.0.0 --- # Upgrade {{eck}} [k8s-upgrading-eck] @@ -118,7 +118,7 @@ Exclude Elastic resources from being managed by the operator: ```shell ANNOTATION='eck.k8s.elastic.co/managed=false' -# Exclude a single {{es}} resource named "quickstart" +# Exclude a single Elasticsearch resource named "quickstart" kubectl annotate --overwrite elasticsearch quickstart $ANNOTATION # Exclude all resources in the current namespace diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/manage-authentication-for-multiple-clusters.md b/deploy-manage/users-roles/cluster-or-deployment-auth/manage-authentication-for-multiple-clusters.md index 9ead7ddea7..b84a15adf1 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/manage-authentication-for-multiple-clusters.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/manage-authentication-for-multiple-clusters.md @@ -327,7 +327,7 @@ spec: ```yaml apiVersion: elasticsearch.k8s.elastic.co/v1 -kind: {{es}} +kind: Elasticsearch metadata: name: quickstart namespace: kvalliy diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/oidc-examples.md b/deploy-manage/users-roles/cluster-or-deployment-auth/oidc-examples.md index 6549e45399..a694eee5dc 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/oidc-examples.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/oidc-examples.md @@ -285,7 +285,7 @@ For more information about OpenID connect in Okta, refer to [Okta OAuth 2.0 docu ::: 2. For the **Platform** page settings, select **Web** then **Next**. - 3. In the **Application settings** choose a **Name** for your application, for example `{{kib}} OIDC`. + 3. In the **Application settings** choose a **Name** for your application, for example _{{kib}} OIDC_. 4. Set the **Base URI** to `KIBANA_ENDPOINT_URL`. 5. Set the **Login redirect URI**. 
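For the `secureSettings` example above that references a `gcs-credentials` secret, the secret would typically carry the GCS service account key under the corresponding {{es}} keystore setting name. A minimal sketch, assuming a hypothetical local path to the service account JSON file:

```sh
# Hypothetical sketch: the secret key name becomes the Elasticsearch keystore entry,
# so the repository-gcs client reads it as gcs.client.default.credentials_file.
kubectl create secret generic gcs-credentials \
  --from-file=gcs.client.default.credentials_file=/path/to/gcs-service-account.json
```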
diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md b/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md index 0fbf9a7e96..d44af1e8b3 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md @@ -152,7 +152,6 @@ idp.metadata.path :::{tip} If you want to pass a file path, then review the following: - * File path settings are resolved relative to the {{es}} config directory. {{es}} will automatically monitor this file for changes and will reload the configuration whenever it is updated. * If you're using {{ece}} or {{ech}}, then you must upload the file [as a custom bundle](/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md) before it can be referenced. * If you're using {{eck}}, then install the file as [custom configuration files](/deploy-manage/deploy/cloud-on-k8s/custom-configuration-files-plugins.md#use-a-volume-and-volume-mount-together-with-a-configmap-or-secret). diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md b/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md index 18f374e827..8eaae3f995 100644 --- a/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md +++ b/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md @@ -94,16 +94,18 @@ sudo ./metricbeat setup \ ``` 1. Specify the Cloud ID of your {{ech}} or {{ece}} deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. -2. Specify the username and password provided to you when creating the deployment. Make sure to keep the colon between ** and **.::::{important} -Depending on variables including the installation location, environment and local permissions, you might need to [change the ownership](beats://reference/libbeat/config-file-permissions.md) of the metricbeat.yml. +2. Specify the username and password provided to you when creating the deployment. Make sure to keep the colon between ** and **. -You might encounter similar permissions hurdles as you work through multiple sections of this document. These permission requirements are there for a good reason, a security safeguard to prevent unauthorized access and modification of key Elastic files. + ::::{important} + Depending on variables including the installation location, environment and local permissions, you might need to [change the ownership](beats://reference/libbeat/config-file-permissions.md) of the metricbeat.yml. -If this isn’t a production environment and you want a fast-pass with less permissions hassles, then you can disable strict permission checks from the command line by using `--strict.perms=false` when executing Beats (for example, `./metricbeat --strict.perms=false`). + You might encounter similar permissions hurdles as you work through multiple sections of this document. These permission requirements are there for a good reason, a security safeguard to prevent unauthorized access and modification of key Elastic files. 
-Depending on your system, you may also find that some commands need to be run as root, by prefixing `sudo` to the command. + If this isn’t a production environment and you want a fast-pass with less permissions hassles, then you can disable strict permission checks from the command line by using `--strict.perms=false` when executing Beats (for example, `./metricbeat --strict.perms=false`). -:::: + Depending on your system, you may also find that some commands need to be run as root, by prefixing `sudo` to the command. + + :::: @@ -182,9 +184,11 @@ sudo ./filebeat setup \ ``` 1. Specify the Cloud ID of your {{ech}} or {{ece}} deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. -2. Specify the username and password provided to you when creating the deployment. Make sure to keep the colon between ** and **.::::{important} -Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](beats://reference/libbeat/config-file-permissions.md) of the filebeat.yml. -:::: +2. Specify the username and password provided to you when creating the deployment. Make sure to keep the colon between ** and **. + + ::::{important} + Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](beats://reference/libbeat/config-file-permissions.md) of the filebeat.yml. + :::: diff --git a/reference/fleet/elastic-agent-inputs-list.md b/reference/fleet/elastic-agent-inputs-list.md index 530afab799..463a0b1ca5 100644 --- a/reference/fleet/elastic-agent-inputs-list.md +++ b/reference/fleet/elastic-agent-inputs-list.md @@ -36,14 +36,14 @@ When you [configure inputs](/reference/fleet/elastic-agent-input-configuration.m | `containerd/metrics` | [beta] Collects cpu, memory and blkio statistics about running containers controlled by containerd runtime. | [Containerd module](beats://reference/metricbeat/metricbeat-module-containerd.md) ({{metricbeat}} docs) | | `docker/metrics` | Fetches metrics from [Docker](https://www.docker.com/) containers. | [Docker module](beats://reference/metricbeat/metricbeat-module-docker.md) ({{metricbeat}} docs) | | `elasticsearch/metrics` | Collects metrics about {{es}}. | [Elasticsearch module](beats://reference/metricbeat/metricbeat-module-elasticsearch.md) ({{metricbeat}} docs) | -| `etcd/metrics` | This module targets Etcd V2 and V3. When using V2, metrics are collected using [Etcd v2 API](https://coreos.com/etcd/docs/latest/v2/api.md). When using V3, metrics are retrieved from the `/metrics`` endpoint as intended for [Etcd v3](https://coreos.com/etcd/docs/latest/metrics.md). | [Etcd module](beats://reference/metricbeat/metricbeat-module-etcd.md) ({{metricbeat}} docs) | +| `etcd/metrics` | This module targets Etcd V2 and V3. When using V2, metrics are collected using [Etcd v2 API](https://coreos.com/etcd/docs/latest/v2/api.md). When using V3, metrics are retrieved from the `/metrics` endpoint as intended for [Etcd v3](https://coreos.com/etcd/docs/latest/metrics.md). 
| [Etcd module](beats://reference/metricbeat/metricbeat-module-etcd.md) ({{metricbeat}} docs) | | `gcp/metrics` | Periodically fetches monitoring metrics from Google Cloud Platform using [Stackdriver Monitoring API](https://cloud.google.com/monitoring/api/metrics_gcp) for Google Cloud Platform services. | [Google Cloud Platform module](beats://reference/metricbeat/metricbeat-module-gcp.md) ({{metricbeat}} docs) | | `haproxy/metrics` | Collects stats from [HAProxy](http://www.haproxy.org/). It supports collection from TCP sockets, UNIX sockets, or HTTP with or without basic authentication. | [HAProxy module](beats://reference/metricbeat/index.md) ({{metricbeat}} docs) | | `http/metrics` | Used to call arbitrary HTTP endpoints for which a dedicated Metricbeat module is not available. | [HTTP module](beats://reference/metricbeat/metricbeat-module-http.md) ({{metricbeat}} docs) | | `iis/metrics` | Periodically retrieve IIS web server related metrics. | [IIS module](beats://reference/metricbeat/metricbeat-module-iis.md) ({{metricbeat}} docs) | | `jolokia/metrics` | Collects metrics from [Jolokia agents](https://jolokia.org/reference/html/agents.html) running on a target JMX server or dedicated proxy server. | [Jolokia module](beats://reference/metricbeat/metricbeat-module-jolokia.md) ({{metricbeat}} docs) | | `kafka/metrics` | Collects metrics from the [Apache Kafka](https://kafka.apache.org/intro) event streaming platform. | [Kafka module](beats://reference/metricbeat/metricbeat-module-kafka.md) ({{metricbeat}} docs) | -| `kibana/metrics` | Collects metrics about {{Kibana}}. | [{{kib}} module](beats://reference/metricbeat/metricbeat-module-kibana.md) ({{metricbeat}} docs) | +| `kibana/metrics` | Collects metrics about {{kib}}. | [{{kib}} module](beats://reference/metricbeat/metricbeat-module-kibana.md) ({{metricbeat}} docs) | | `kubernetes/metrics` | As one of the main pieces provided for Kubernetes monitoring, this module is capable of fetching metrics from several components. | [Kubernetes module](beats://reference/metricbeat/metricbeat-module-kubernetes.md) ({{metricbeat}} docs) | | `linux/metrics` | [beta] Reports on metrics exclusive to the Linux kernel and GNU/Linux OS. | [Linux module](beats://reference/metricbeat/metricbeat-module-linux.md) ({{metricbeat}} docs) | | `logstash/metrics` | collects metrics about {{ls}}. | [{{ls}} module](beats://reference/metricbeat/metricbeat-module-logstash.md) ({{metricbeat}} docs) | diff --git a/reference/fleet/otel-agent-transform.md b/reference/fleet/otel-agent-transform.md index 5acb2fdfa2..dc4d893cd2 100644 --- a/reference/fleet/otel-agent-transform.md +++ b/reference/fleet/otel-agent-transform.md @@ -34,8 +34,8 @@ You’ll need the following: To change a running standalone {{agent}} to run as an OTel Collector: -1. Create a directory where the OTel Collector can save its state. In this example we use `<{{agent}} install directory>/data/otelcol`. -2. Open the `<{{agent}} install directory>/otel_samples/platformlogs_hostmetrics.yml` file for editing. +1. Create a directory where the OTel Collector can save its state. In this example we use `/data/otelcol`. +2. Open the `/otel_samples/platformlogs_hostmetrics.yml` file for editing. 3. 
Set environment details to be used by OTel Collector: * **Option 1:** Define environment variables for the {{agent}} service: diff --git a/reference/fleet/upgrade-elastic-agent.md b/reference/fleet/upgrade-elastic-agent.md index ad912f8ae7..248b6c22cb 100644 --- a/reference/fleet/upgrade-elastic-agent.md +++ b/reference/fleet/upgrade-elastic-agent.md @@ -225,13 +225,13 @@ For installation steps refer to [Install {{fleet}}-managed {{agent}}s](/referenc 1. Download the {{agent}} Debian install package for the release that you want to upgrade to: - ```bash + ```bash subs=true curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{{stack-version}}-amd64.deb ``` 2. Upgrade {{agent}} to the target release: - ```bash + ```bash subs=true sudo dpkg -i elastic-agent-{{stack-version}}-amd64.deb ``` @@ -242,13 +242,13 @@ For installation steps refer to [Install {{fleet}}-managed {{agent}}s](/referenc 1. Download the {{agent}} RPM install package for the release that you want to upgrade to: - ```bash + ```bash subs=true curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{{stack-version}}-x86_64.rpm ``` 2. Upgrade {{agent}} to the target release: - ```bash + ```bash subs=true sudo rpm -U elastic-agent-{{stack-version}}-x86_64.rpm ``` diff --git a/solutions/observability/applications/tutorial-monitor-java-application.md b/solutions/observability/applications/tutorial-monitor-java-application.md index 4724463958..2b8766214a 100644 --- a/solutions/observability/applications/tutorial-monitor-java-application.md +++ b/solutions/observability/applications/tutorial-monitor-java-application.md @@ -1476,7 +1476,7 @@ java -jar /tmp/apm-agent-attach-1.17.0-standalone.jar --pid 30730 \ This above message will return something like this: ```text -2020-07-10 15:04:48.144 INFO Attaching the Elastic {{apm-agent}} to 30730 +2020-07-10 15:04:48.144 INFO Attaching the Elastic APM agent to 30730 2020-07-10 15:04:49.649 INFO Done ``` diff --git a/solutions/observability/logs/stream-any-log-file.md b/solutions/observability/logs/stream-any-log-file.md index e4f770b39f..467f10ebea 100644 --- a/solutions/observability/logs/stream-any-log-file.md +++ b/solutions/observability/logs/stream-any-log-file.md @@ -93,7 +93,7 @@ Expand-Archive .\elastic-agent-{{stack-version}}-windows-x86_64.zip ::::::{tab-item} DEB -:::tip +:::{tip} To simplify upgrading to future versions of Elastic Agent, we recommended that you use the tarball distribution instead of the RPM distribution. You can install Elastic Agent in an unprivileged mode that does not require root privileges. ::: @@ -106,7 +106,7 @@ sudo dpkg -i elastic-agent-{{stack-version}}-amd64.deb ::::::{tab-item} RPM -:::tip +:::{tip} To simplify upgrading to future versions of Elastic Agent, we recommended that you use the tarball distribution instead of the RPM distribution. You can install Elastic Agent in an unprivileged mode that does not require root privileges. ::: diff --git a/solutions/search/ranking/learning-to-rank-model-training.md b/solutions/search/ranking/learning-to-rank-model-training.md index 19a6c54598..881a610c93 100644 --- a/solutions/search/ranking/learning-to-rank-model-training.md +++ b/solutions/search/ranking/learning-to-rank-model-training.md @@ -102,7 +102,7 @@ Building your dataset is a critical step in the training process. 
This involves ```python from eland.ml.ltr import FeatureLogger -# Create a feature logger that will be used to query {{es}} to retrieve the features: +# Create a feature logger that will be used to query Elasticsearch to retrieve the features: feature_logger = FeatureLogger(es_client, MOVIE_INDEX, ltr_config) ``` diff --git a/troubleshoot/ingest/fleet/common-problems.md b/troubleshoot/ingest/fleet/common-problems.md index 2318a06237..64b93b0b87 100644 --- a/troubleshoot/ingest/fleet/common-problems.md +++ b/troubleshoot/ingest/fleet/common-problems.md @@ -252,7 +252,7 @@ You will also need to set `ssl.verification_mode: none` in the Output settings i To enroll in {{fleet}}, {{agent}} must connect to the {{fleet-server}} instance. If the agent is unable to connect, you see the following failure: ```txt -fail to enroll: fail to execute request to {{fleet-server}}:Post http://fleet-server:8220/api/fleet/agents/enroll?: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) +fail to enroll: fail to execute request to Fleet Server:Post http://fleet-server:8220/api/fleet/agents/enroll?: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) ``` Here are several steps to help you troubleshoot the problem. diff --git a/troubleshoot/kibana/capturing-diagnostics.md b/troubleshoot/kibana/capturing-diagnostics.md index 1208d8a2ec..e5fda597ad 100644 --- a/troubleshoot/kibana/capturing-diagnostics.md +++ b/troubleshoot/kibana/capturing-diagnostics.md @@ -112,6 +112,6 @@ The following are common errors that you might encounter when running the diagno The provided user has insufficient admin permissions to run the diagnostic tool. Use another user, or grant the user `role:superuser` privileges. -* `{{kib}} Server is not Ready yet` +* `Kibana Server is not Ready yet` - This indicates issues with {{kib}}'s dependencies blocking full start-up. To investigate, check [Error: {{kib}}} server is not ready yet](/troubleshoot/kibana/error-server-not-ready.md). + This indicates issues with {{kib}}'s dependencies blocking full start-up. To investigate, check [Error: {{kib}} server is not ready yet](/troubleshoot/kibana/error-server-not-ready.md). diff --git a/troubleshoot/kibana/task-manager.md b/troubleshoot/kibana/task-manager.md index 69249af45b..19ac85f57e 100644 --- a/troubleshoot/kibana/task-manager.md +++ b/troubleshoot/kibana/task-manager.md @@ -924,7 +924,7 @@ For details on scaling Task Manager, see [Scaling guidance](../../deploy-manage/ Tasks are not running, and the server logs contain the following error message: ```txt -[warning][plugins][taskManager] Task Manager cannot operate when inline scripts are disabled in {{es}} +[warning][plugins][taskManager] Task Manager cannot operate when inline scripts are disabled in Elasticsearch ``` **Solution**: