From 8c9eb27f8619b24e9ad7b4afc375e5f468d83f50 Mon Sep 17 00:00:00 2001 From: Florent Le Borgne Date: Fri, 3 Oct 2025 17:42:27 +0200 Subject: [PATCH 1/7] data management menu changes for explore-analyze --- .../machine-learning-in-kibana/inference-processing.md | 2 +- explore-analyze/machine-learning/nlp/ml-nlp-inference.md | 4 ++-- explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md | 2 +- .../nlp/ml-nlp-text-emb-vector-search-example.md | 2 +- .../machine-learning/setting-up-machine-learning.md | 4 ++-- explore-analyze/transforms/ecommerce-transforms.md | 2 +- explore-analyze/transforms/transform-checkpoints.md | 2 +- explore-analyze/transforms/transform-limitations.md | 2 +- 8 files changed, 10 insertions(+), 10 deletions(-) diff --git a/explore-analyze/machine-learning/machine-learning-in-kibana/inference-processing.md b/explore-analyze/machine-learning/machine-learning-in-kibana/inference-processing.md index d803bc8b9d..51ac0e20ca 100644 --- a/explore-analyze/machine-learning/machine-learning-in-kibana/inference-processing.md +++ b/explore-analyze/machine-learning/machine-learning-in-kibana/inference-processing.md @@ -106,7 +106,7 @@ Here, you’ll be able to: Inference processors added to your index-specific ML {{infer}} pipelines are normal Elasticsearch pipelines. Once created, each processor will have options to **View in Stack Management** and **Delete Pipeline**. Deleting an {{infer}} processor from within the **Content** UI deletes the pipeline and also removes its reference from your index-specific ML {{infer}} pipeline. -These pipelines can also be viewed, edited, and deleted in Kibana via **Stack Management → Ingest Pipelines**, just like all other Elasticsearch ingest pipelines. You may also use the [Ingest pipeline APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-ingest). If you delete any of these pipelines outside of the **Content** UI in Kibana, make sure to edit the ML {{infer}} pipelines that reference them. +These pipelines can also be viewed, edited, and deleted in Kibana from the **Ingest Pipelines** management page, just like all other Elasticsearch ingest pipelines. You may also use the [Ingest pipeline APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-ingest). If you delete any of these pipelines outside of the **Content** UI in Kibana, make sure to edit the ML {{infer}} pipelines that reference them. ## Test your ML {{infer}} pipeline [ingest-pipeline-search-inference-test-inference-pipeline] diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-inference.md b/explore-analyze/machine-learning/nlp/ml-nlp-inference.md index 2f6a4d1997..42e9a0434a 100644 --- a/explore-analyze/machine-learning/nlp/ml-nlp-inference.md +++ b/explore-analyze/machine-learning/nlp/ml-nlp-inference.md @@ -19,10 +19,10 @@ After you [deploy a trained model in your cluster](ml-nlp-deploy-models.md), you ## Add an {{infer}} processor to an ingest pipeline [ml-nlp-inference-processor] -In {{kib}}, you can create and edit pipelines in **{{stack-manage-app}}** > **Ingest Pipelines**. To open **Ingest Pipelines**, find **{{stack-manage-app}}** in the main menu, or use the [global search field](../../find-and-organize/find-apps-and-objects.md). +In {{kib}}, you can create and edit pipelines from the **Ingest Pipelines** management page. You can find this page in the main menu or using the [global search field](../../find-and-organize/find-apps-and-objects.md). 
:::{image} /explore-analyze/images/machine-learning-ml-nlp-pipeline-lang.png -:alt: Creating a pipeline in the Stack Management app +:alt: Creating a pipeline :screenshot: ::: diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md b/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md index 66f3ab67fa..b939cbe5ff 100644 --- a/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md +++ b/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md @@ -117,7 +117,7 @@ Using the example text "Elastic is headquartered in Mountain View, California.", You can perform bulk {{infer}} on documents as they are ingested by using an [{{infer}} processor](elasticsearch://reference/enrich-processor/inference-processor.md) in your ingest pipeline. The novel *Les Misérables* by Victor Hugo is used as an example for {{infer}} in the following example. [Download](https://github.com/elastic/stack-docs/blob/8.5/docs/en/stack/ml/nlp/data/les-miserables-nd.json) the novel text split by paragraph as a JSON file, then upload it by using the [Data Visualizer](../../../manage-data/ingest/upload-data-files.md). Give the new index the name `les-miserables` when uploading the file. -Now create an ingest pipeline either in the [Stack management UI](ml-nlp-inference.md#ml-nlp-inference-processor) or by using the API: +Now create an ingest pipeline either from the [Ingest pipelines](ml-nlp-inference.md#ml-nlp-inference-processor) page in {{kib}} or by using the API: ```js PUT _ingest/pipeline/ner diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md b/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md index 4f1b7953a8..3e9d6c9909 100644 --- a/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md +++ b/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md @@ -116,7 +116,7 @@ Upload the file by using the [Data Visualizer](../../../manage-data/ingest/uploa Process the initial data with an [{{infer}} processor](elasticsearch://reference/enrich-processor/inference-processor.md). It adds an embedding for each passage. For this, create a text embedding ingest pipeline and then reindex the initial data with this pipeline. -Now create an ingest pipeline either in the [{{stack-manage-app}} UI](ml-nlp-inference.md#ml-nlp-inference-processor) or by using the API: +Now create an ingest pipeline either from the [Ingest Pipelines](ml-nlp-inference.md#ml-nlp-inference-processor) page in {{kib}} or by using the API: ```js PUT _ingest/pipeline/text-embeddings diff --git a/explore-analyze/machine-learning/setting-up-machine-learning.md b/explore-analyze/machine-learning/setting-up-machine-learning.md index 1e07e7f3e9..f3f9dd23de 100644 --- a/explore-analyze/machine-learning/setting-up-machine-learning.md +++ b/explore-analyze/machine-learning/setting-up-machine-learning.md @@ -37,7 +37,7 @@ Assigning security privileges affects how users access {{ml-features}}. Consider You can configure these privileges -* under **Security**. To open Security, find **{{stack-manage-app}}** in the main menu or use the [global search field](../find-and-organize/find-apps-and-objects.md). +* under **Roles** or **Spaces** management pages. Find these pages in the main menu or use the [global search field](../find-and-organize/find-apps-and-objects.md). * via the respective {{es}} security APIs. 
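To make the second bullet concrete, here is a minimal sketch of granting {{ml}} access through the {{es}} security API. The role name `ml_analyst` and the index pattern are illustrative only (they are not part of this changeset), and the built-in `machine_learning_admin` and `machine_learning_user` roles already cover the common cases without any custom role:

```console
PUT _security/role/ml_analyst
{
  "cluster": [ "monitor_ml" ],
  "indices": [
    {
      "names": [ "my-ml-source-*" ],
      "privileges": [ "read", "view_index_metadata" ]
    }
  ]
}
```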
### {{es}} API user [es-security-privileges]
@@ -68,7 +68,7 @@ Granting `All` or `Read` {{kib}} feature privilege for {{ml-app}} will also gran

#### Feature visibility in Spaces [kib-visibility-spaces]

-In {{kib}}, the {{ml-features}} must be visible in your [space](../../deploy-manage/manage-spaces.md). To manage which features are visible in your space, go to **{{stack-manage-app}}** > **{{kib}}** > **Spaces** or use the [global search field](../find-and-organize/find-apps-and-objects.md) to locate **Spaces** directly.
+In {{kib}}, the {{ml-features}} must be visible in your [space](../../deploy-manage/manage-spaces.md). To manage which features are visible in your space, navigate to the **Spaces** management page or use the [global search field](../find-and-organize/find-apps-and-objects.md) to locate it directly.

:::{image} /explore-analyze/images/machine-learning-spaces.jpg
:alt: Manage spaces in {{kib}}
diff --git a/explore-analyze/transforms/ecommerce-transforms.md b/explore-analyze/transforms/ecommerce-transforms.md
index fc1c9001b7..0566245497 100644
--- a/explore-analyze/transforms/ecommerce-transforms.md
+++ b/explore-analyze/transforms/ecommerce-transforms.md
@@ -23,7 +23,7 @@ products:
For example, you might want to group the data by product ID and calculate the total number of sales for each product and its average price. Alternatively, you might want to look at the behavior of individual customers and calculate how much each customer spent in total and how many different categories of products they purchased. Or you might want to take the currencies or geographies into consideration. What are the most interesting ways you can transform and interpret this data?
- Go to **Management** > **Stack Management** > **Data** > **Transforms** in {{kib}} and use the wizard to create a {{transform}}:
+ Go to the **Transforms** management page in {{kib}} and use the wizard to create a {{transform}}:
:::{image} /explore-analyze/images/elasticsearch-reference-ecommerce-pivot1.png
:alt: Creating a simple {{transform}} in {{kib}}
:screenshot:
:::
diff --git a/explore-analyze/transforms/transform-checkpoints.md b/explore-analyze/transforms/transform-checkpoints.md
index ad12fa4c2a..d4cc48947f 100644
--- a/explore-analyze/transforms/transform-checkpoints.md
+++ b/explore-analyze/transforms/transform-checkpoints.md
@@ -45,7 +45,7 @@ If the cluster experiences unsuitable performance degradation due to the {{trans
In most cases, it is strongly recommended to use the ingest timestamp of the source indices for syncing the {{transform}}. This is the most optimal way for {{transforms}} to be able to identify new changes. If your data source follows the [ECS standard](ecs://reference/index.md), you might already have an [`event.ingested`](ecs://reference/ecs-event.md#field-event-ingested) field. In this case, use `event.ingested` as the `sync`.`time`.`field` property of your {{transform}}.

-If you don’t have a `event.ingested` field or it isn’t populated, you can set it by using an ingest pipeline. Create an ingest pipeline either using the [ingest pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-put-pipeline) (like the example below) or via {{kib}} under **Stack Management > Ingest Pipelines**. Use a [`set` processor](elasticsearch://reference/enrich-processor/set-processor.md) to set the field and associate it with the value of the ingest timestamp.
+If you don’t have an `event.ingested` field or it isn’t populated, you can set it by using an ingest pipeline.
Create an ingest pipeline either by using the [ingest pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-put-pipeline) (like the example below) or from {{kib}}'s **Ingest Pipelines** management page. Use a [`set` processor](elasticsearch://reference/enrich-processor/set-processor.md) to set the field and associate it with the value of the ingest timestamp.

```console
PUT _ingest/pipeline/set_ingest_time
diff --git a/explore-analyze/transforms/transform-limitations.md b/explore-analyze/transforms/transform-limitations.md
index a94cf6ec2b..a7e3b9f12e 100644
--- a/explore-analyze/transforms/transform-limitations.md
+++ b/explore-analyze/transforms/transform-limitations.md
@@ -132,7 +132,7 @@ When running a large number of SLO {{transforms}}, two types of limitations can

#### {{transforms-cap}} can return inaccurate errors that suggest deletion [transforms-inaccurate-errors]

-The {{transforms-cap}} API and the {{transforms-cap}} page in {{kib}} (**Stack Management** > **{{transforms-cap}})** may display misleading error messages for {{transforms}} created by service level objectives (SLOs).
+The {{transforms-cap}} API and the {{transforms-cap}} management page in {{kib}} may display misleading error messages for {{transforms}} created by service level objectives (SLOs).

The message typically reads:

From 52b2fcba8b88890216af9c19f83c7239e3c34c26 Mon Sep 17 00:00:00 2001
From: Florent Le Borgne
Date: Fri, 3 Oct 2025 17:50:34 +0200
Subject: [PATCH 2/7] data management menu changes for troubleshoot

---
 troubleshoot/elasticsearch/decrease-disk-usage-data-node.md | 2 +-
 .../elasticsearch/diagnosing-corrupted-repositories.md | 4 ++--
 troubleshoot/elasticsearch/troubleshoot-ingest-pipelines.md | 2 +-
 troubleshoot/observability/apm/common-problems.md | 2 +-
 troubleshoot/security/elastic-defend.md | 2 +-
 5 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/troubleshoot/elasticsearch/decrease-disk-usage-data-node.md b/troubleshoot/elasticsearch/decrease-disk-usage-data-node.md
index 0acd8678d2..b2916436ff 100644
--- a/troubleshoot/elasticsearch/decrease-disk-usage-data-node.md
+++ b/troubleshoot/elasticsearch/decrease-disk-usage-data-node.md
@@ -34,7 +34,7 @@ Reducing the replicas of an index can potentially reduce search throughput and d

If the name of your deployment is disabled your {{kib}} instances might be unhealthy, in which case contact [Elastic Support](https://support.elastic.co). If your deployment doesn’t include {{kib}}, all you need to do is [enable it first](../../deploy-manage/deploy/elastic-cloud/access-kibana.md).
::::

-3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Stack Management > Index Management**.
+3. Go to the **Index Management** page. You can find this page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
4. In the list of all your indices, click the `Replicas` column twice to sort the indices based on their number of replicas starting with the one that has the most. Go through the indices and pick one by one the index with the least importance and higher number of replicas.
::::{warning}
diff --git a/troubleshoot/elasticsearch/diagnosing-corrupted-repositories.md b/troubleshoot/elasticsearch/diagnosing-corrupted-repositories.md
index 4bfe0d55ba..a9e4f2fc97 100644
--- a/troubleshoot/elasticsearch/diagnosing-corrupted-repositories.md
+++ b/troubleshoot/elasticsearch/diagnosing-corrupted-repositories.md
@@ -33,7 +33,7 @@ First mark the repository as read-only on the secondary deployments:

If the name of your deployment is disabled your {{kib}} instances might be unhealthy, in which case contact [Elastic Support](https://support.elastic.co). If your deployment doesn’t include {{kib}}, all you need to do is [enable it first](../../deploy-manage/deploy/elastic-cloud/access-kibana.md).
::::

-3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Stack Management > Snapshot and Restore > Repositories**.
+3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Snapshot and Restore > Repositories**. You can find the **Snapshot and Restore** management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).

:::{image} /troubleshoot/images/elasticsearch-reference-repositories.png
:alt: {{kib}} Console
:screenshot:
:::
@@ -46,7 +46,7 @@ At this point, it’s only the primary (current) deployment that has the reposit

Note that we’re now configuring the primary (current) deployment.

-1. Open the primary deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Stack Management > Snapshot and Restore > Repositories**.
+1. In the primary deployment, go to **Snapshot and Restore > Repositories**. You can find the **Snapshot and Restore** management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).

:::{image} /troubleshoot/images/elasticsearch-reference-repositories.png
:alt: {{kib}} Console
diff --git a/troubleshoot/elasticsearch/troubleshoot-ingest-pipelines.md b/troubleshoot/elasticsearch/troubleshoot-ingest-pipelines.md
index 4b26e84778..d2edca3b1c 100644
--- a/troubleshoot/elasticsearch/troubleshoot-ingest-pipelines.md
+++ b/troubleshoot/elasticsearch/troubleshoot-ingest-pipelines.md
@@ -15,7 +15,7 @@ products:

{{es}} [Ingest Pipelines](https://www.elastic.co/docs/manage-data/ingest/transform-enrich/ingest-pipelines) allow you to transform data during ingest. Per [write model](https://www.elastic.co/docs/deploy-manage/distributed-architecture/reading-and-writing-documents#basic-write-model), they run from `ingest` [node roles](https://www.elastic.co/docs/deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles) under the `write` [thread pool](https://www.elastic.co/docs/reference/elasticsearch/configuration-reference/thread-pool-settings).

-You can edit ingest pipelines under {{kib}}'s **Stack Management > Ingest Pipelines** or from {{es}}'s [Modify Pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-put-pipeline). They store under {{es}}'s [cluster state](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-state) as accessed from [List Pipelines](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-get-pipeline).
+You can edit ingest pipelines in {{kib}}'s **Ingest Pipelines** page or from {{es}}'s [Modify Pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-put-pipeline). They store under {{es}}'s [cluster state](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-state) as accessed from [List Pipelines](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-get-pipeline). Ingest pipelines can be [Simulated](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-simulate) during testing, but after go-live are triggered during event ingest from diff --git a/troubleshoot/observability/apm/common-problems.md b/troubleshoot/observability/apm/common-problems.md index 470d480a3b..b50b0e8697 100644 --- a/troubleshoot/observability/apm/common-problems.md +++ b/troubleshoot/observability/apm/common-problems.md @@ -252,7 +252,7 @@ In Elasticsearch, index templates are used to define settings and mappings that As an example, some APM agents store cookie values in `http.request.cookies`. Since `http.request` has disabled dynamic indexing, and `http.request.cookies` is not declared in a custom mapping, the values in `http.request.cookies` are not indexed and thus not searchable. -**Ensure an APM data view exists** As a first step, you should ensure the correct data view exists. In {{kib}}, go to **Stack Management** > **Data views**. You should see the APM data view—the default is `traces-apm*,apm-*,logs-apm*,apm-*,metrics-apm*,apm-*`. If you don’t, the data view doesn’t exist. To fix this, navigate to the Applications UI in {{kib}} and select **Add data**. In the APM tutorial, click **Load Kibana objects** to create the APM data view. +**Ensure an APM data view exists** As a first step, you should ensure the correct data view exists. In {{kib}}, go to **Stack Management** > **Data Views**. You should see the APM data view—the default is `traces-apm*,apm-*,logs-apm*,apm-*,metrics-apm*,apm-*`. If you don’t, the data view doesn’t exist. To fix this, navigate to the Applications UI in {{kib}} and select **Add data**. In the APM tutorial, click **Load Kibana objects** to create the APM data view. **Ensure a field is searchable** There are two things you can do to if you’d like to ensure a field is searchable: diff --git a/troubleshoot/security/elastic-defend.md b/troubleshoot/security/elastic-defend.md index 284e340e1b..7af8e95ecf 100644 --- a/troubleshoot/security/elastic-defend.md +++ b/troubleshoot/security/elastic-defend.md @@ -88,7 +88,7 @@ If you encounter a `“Required transform failed”` notice on the Endpoints pag To restart a transform that’s not running: -1. Go to **Kibana** → **Stack Management** → **Data** → **Transforms**. +1. Go to {{kib}}'s **Transforms** management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). 2. Enter `endpoint.metadata` in the search box to find the transforms for {{elastic-defend}}. 3. 
Click the **Actions** menu (**…**) and do one of the following for each transform, depending on the value in the **Status** column:

From 5de1f115d71f542c61efd653817c7a4729d48e13 Mon Sep 17 00:00:00 2001
From: Florent Le Borgne
Date: Fri, 3 Oct 2025 18:06:51 +0200
Subject: [PATCH 3/7] data management menu changes for reference

---
 reference/fleet/data-streams-pipeline-tutorial.md | 13 +++++++------
 reference/fleet/data-streams-scenario1.md | 4 ++--
 reference/fleet/data-streams-scenario2.md | 4 ++--
 reference/fleet/data-streams-scenario3.md | 8 ++++----
 reference/fleet/data-streams-scenario4.md | 2 +-
 reference/fleet/data-streams.md | 4 ++--
 .../fleet/migrate-from-beats-to-elastic-agent.md | 2 +-
 7 files changed, 19 insertions(+), 18 deletions(-)

diff --git a/reference/fleet/data-streams-pipeline-tutorial.md b/reference/fleet/data-streams-pipeline-tutorial.md
index 053a6671a0..bed4c41504 100644
--- a/reference/fleet/data-streams-pipeline-tutorial.md
+++ b/reference/fleet/data-streams-pipeline-tutorial.md
@@ -19,9 +19,10 @@ This tutorial explains how to add a custom ingest pipeline to an Elastic Integra

Create a custom ingest pipeline that will be called by the default integration pipeline. In this tutorial, we’ll create a pipeline that adds a new field to our documents.

-1. In {{kib}}, navigate to **Stack Management** → **Ingest Pipelines** → **Create pipeline** → **New pipeline**.
-2. Name your pipeline. We’ll call this one, `add_field`.
-3. Select **Add a processor**. Fill out the following information:
+1. In {{kib}}, go to the **Ingest Pipelines** management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
+1. Select **Create pipeline** → **New pipeline**.
+1. Name your pipeline. We’ll call this one `add_field`.
+1. Select **Add a processor**. Fill out the following information:

    * Processor: "Set"
    * Field: `test`
@@ -29,8 +30,8 @@ Create a custom ingest pipeline that will be called by the default integration p

    The [Set processor](elasticsearch://reference/enrich-processor/set-processor.md) sets a document field and associates it with the specified value.

-4. Click **Add**.
-5. Click **Create pipeline**.
+1. Click **Add**.
+1. Click **Create pipeline**.

## Step 2: Apply your ingest pipeline [data-streams-pipeline-two]
@@ -55,7 +56,7 @@ Most integrations write to multiple data streams. You’ll need to add the custo

1. Find the first data stream you wish to edit and select **Change defaults**. For this tutorial, find the data stream configuration titled, **Collect metrics from System instances**.
2. Scroll to **System CPU metrics** and under **Advanced options** select **Add custom pipeline**.

-    This will take you to the **Create pipeline** workflow in **Stack management**.
+    This will take you to the **Create pipeline** workflow.



diff --git a/reference/fleet/data-streams-scenario1.md b/reference/fleet/data-streams-scenario1.md
index 3caa3f741b..f006d59aa7 100644
--- a/reference/fleet/data-streams-scenario1.md
+++ b/reference/fleet/data-streams-scenario1.md
@@ -22,7 +22,7 @@ This tutorial explains how to apply a custom index lifecycle policy to all of th

## Step 1: Create an index lifecycle policy [data-streams-scenario1-step1]

-1. To open **Lifecycle Policies**, find **Stack Management** in the main menu or use the [global search field](/get-started/the-stack.md#kibana-navigation-search).
+1.
Go to the **Index Lifecycle Policies** management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). 2. Click **Create policy**. Name your new policy. For this tutorial, you can use `my-ilm-policy`. Customize the policy to your liking, and when you’re done, click **Save policy**. @@ -32,7 +32,7 @@ Name your new policy. For this tutorial, you can use `my-ilm-policy`. Customize The **Index Templates** view in {{kib}} shows you all of the index templates available to automatically apply settings, mappings, and aliases to indices: -1. To open **Index Management**, find **Stack Management** in the main menu or use the [global search field](/get-started/the-stack.md#kibana-navigation-search). +1. Go to the **Index Management** page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). 2. Select **Index Templates**. 3. Search for `system` to see all index templates associated with the System integration. 4. Select any `logs-*` index template to view the associated component templates. For example, you can select the `logs-system.application` index template. diff --git a/reference/fleet/data-streams-scenario2.md b/reference/fleet/data-streams-scenario2.md index c8a1cfe201..429d78aa54 100644 --- a/reference/fleet/data-streams-scenario2.md +++ b/reference/fleet/data-streams-scenario2.md @@ -17,7 +17,7 @@ This tutorial explains how to apply a custom index lifecycle policy to the `logs ## Step 1: Create an index lifecycle policy [data-streams-scenario2-step1] -1. To open **Lifecycle Policies**, find **Stack Management** in the main menu or use the [global search field](/get-started/the-stack.md#kibana-navigation-search). +1. Go to the **Index Lifecycle Policies** management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). 2. Click **Create policy**. Name your new policy. For this tutorial, you can use `my-ilm-policy`. Customize the policy to your liking, and when you’re done, click **Save policy**. @@ -27,7 +27,7 @@ Name your new policy. For this tutorial, you can use `my-ilm-policy`. Customize The **Index Templates** view in {{kib}} shows you all of the index templates available to automatically apply settings, mappings, and aliases to indices: -1. To open **Index Management**, find **Stack Management** in the main menu or use the [global search field](/get-started/the-stack.md#kibana-navigation-search). +1. Go to the **Index Management** management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). 2. Select **Index Templates**. 3. Search for `system` to see all index templates associated with the System integration. 4. Select the index template that matches the data stream for which you want to set up an ILM policy. For this example, you can select the `logs-system.auth` index template. diff --git a/reference/fleet/data-streams-scenario3.md b/reference/fleet/data-streams-scenario3.md index 499cc220e7..9ac8ee874c 100644 --- a/reference/fleet/data-streams-scenario3.md +++ b/reference/fleet/data-streams-scenario3.md @@ -26,7 +26,7 @@ In this scenario, you have {{agent}}s collecting system metrics with the System The **Data Streams** view in {{kib}} shows you the data streams, index templates, and {{ilm-init}} policies associated with a given integration. -1. 
Navigate to **{{stack-manage-app}}** > **Index Management** > **Data Streams**. +1. Go to the **Index Management** page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), and open the **Data Streams** tab. 2. Search for `system` to see all data streams associated with the System integration. 3. Select the `metrics-system.network-{{namespace}}` data stream to view its associated index template and {{ilm-init}} policy. As you can see, the data stream follows the [Data stream naming scheme](/reference/fleet/data-streams.md#data-streams-naming-scheme) and starts with its type, `metrics-`. @@ -51,7 +51,7 @@ For example, to create custom index settings for the `system.network` data strea metrics-system.network-production@custom ``` -1. Navigate to **{{stack-manage-app}}** > **Index Management** > **Component Templates** +1. Go to the **Index Management** page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), and open the **Component Templates** tab. 2. Click **Create component template**. 3. Use the template above to set the name—in this case, `metrics-system.network-production@custom`. Click **Next**. 4. Under **Index settings**, set the {{ilm-init}} policy name under the `lifecycle.name` key: @@ -86,7 +86,7 @@ Now that you’ve created a component template, you need to create an index temp :::: -1. Navigate to **{{stack-manage-app}}** > **Index Management** > **Index Templates**. +1. Go to the **Index Management** page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), and open the **Index Templates** tab. 2. Find the index template you want to clone. The index template will have the `` and `` in its name, but not the ``. In this case, it’s `metrics-system.network`. 3. Select **Actions** > **Clone**. 4. Set the name of the new index template to `metrics-system.network-production`. @@ -144,7 +144,7 @@ If you cloned an index template to customize the data retention policy on an {{e To update the cloned index template: -1. Navigate to **{{stack-manage-app}}** > **Index Management** > **Index Templates**. +1. Go to the **Index Management** page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), and open the **Index Templates** tab. 2. Find the index template you cloned. The index template will have the `` and `` in its name. 3. Select **Manage** > **Edit**. 4. Select **(2) Component templates** diff --git a/reference/fleet/data-streams-scenario4.md b/reference/fleet/data-streams-scenario4.md index c78c12db2c..607bd78e92 100644 --- a/reference/fleet/data-streams-scenario4.md +++ b/reference/fleet/data-streams-scenario4.md @@ -16,7 +16,7 @@ If you’ve created a custom integration package, you can apply a single ILM pol ## Step 1: Define the ILM policy [data-streams-scenario4-step1] -1. In {{kib}}, go to **Stack Management** and select **Index Lifecycle Policies**. You can also use the [global search field](/get-started/the-stack.md#kibana-navigation-search). +1. In {{kib}}, go to the **Index Lifecycle Policies** management page. You can also use the [global search field](/get-started/the-stack.md#kibana-navigation-search). 2. Click **Create policy**. 3. Name the policy, configure it as needed, and click **Save policy**. 
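For reference, the `@custom` component template that scenario 3 builds in the UI maps onto a single API call. This is a sketch, not part of the changeset; it assumes an {{ilm-init}} policy named `my-ilm-policy`, the illustrative name these tutorials use:

```console
PUT _component_template/metrics-system.network-production@custom
{
  "template": {
    "settings": {
      "index.lifecycle.name": "my-ilm-policy"
    }
  }
}
```

New backing indices only pick up the policy at rollover, so you would either wait for the next automatic rollover or force one with `POST metrics-system.network-production/_rollover`.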
diff --git a/reference/fleet/data-streams.md b/reference/fleet/data-streams.md index f3d89cc1c7..247dda3343 100644 --- a/reference/fleet/data-streams.md +++ b/reference/fleet/data-streams.md @@ -98,10 +98,10 @@ The `@custom` component template specific to a datastream has higher precedence You can edit a `@custom` component template to customize your {{es}} indices: -1. Open {{kib}} and navigate to to **{{stack-manage-app}}** > **Index Management** > **Data Streams**. +1. Open {{kib}} and go to the **Index Management** page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), then open the **Data Streams** tab. 2. Find and click the name of the integration data stream, such as `logs-cisco_ise.log-default`. 3. Click the index template link for the data stream to see the list of associated component templates. -4. Navigate to **{{stack-manage-app}}** > **Index Management** > **Component Templates**. +4. Go to the **Index Management** page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), and open the **Component Templates** tab. 5. Search for the name of the data stream’s custom component template and click the edit icon. 6. Add any custom index settings, metadata, or mappings. For example, you may want to: diff --git a/reference/fleet/migrate-from-beats-to-elastic-agent.md b/reference/fleet/migrate-from-beats-to-elastic-agent.md index 72bc0a0f1f..850752f9d9 100644 --- a/reference/fleet/migrate-from-beats-to-elastic-agent.md +++ b/reference/fleet/migrate-from-beats-to-elastic-agent.md @@ -308,7 +308,7 @@ When you migrate from {{beats}} to {{agent}}, you have a couple of options for m If you have existing index lifecycle policies for {{beats}}, it’s highly recommended that you modify the lifecycle policies for {{agent}} to match your previous policy. To do this: - 1. In {{kib}}, go to **{{stack-manage-app}} > Index Lifecycle Policies** and search for a {{beats}} policy, for example, **filebeat**. Under **Linked indices**, notice you can view indices linked to the policy. Click the policy name to see the settings. + 1. In {{kib}}, go to the **Index Lifecycle Policies** management page and search for a {{beats}} policy, for example, **filebeat**. Under **Linked indices**, notice you can view indices linked to the policy. Click the policy name to see the settings. 2. Click the **logs** policy and, if necessary, edit the settings to match the old policy. 3. Under **Index Lifecycle Policies**, search for another {{beats}} policy, for example, **metricbeat**. 4. Click the **metrics** policy and edit the settings to match the old policy. 
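One practical aside for the migration steps above: before editing the **logs** policy in the UI, you can fetch both policies over the API and compare their phases directly. A sketch, assuming the default policy names mentioned in the doc:

```console
GET _ilm/policy/filebeat

GET _ilm/policy/logs
```

Comparing the rollover conditions (for example, `max_primary_shard_size` and `max_age`) and the `delete` phase `min_age` across the two responses shows exactly which values to mirror into the **logs** policy.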
From bc60bf760c8ec4c3340f1fa8fde44bd8bd797b8a Mon Sep 17 00:00:00 2001 From: Florent Le Borgne Date: Fri, 3 Oct 2025 18:15:44 +0200 Subject: [PATCH 4/7] small edits --- explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md | 2 +- .../machine-learning/setting-up-machine-learning.md | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md b/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md index b939cbe5ff..9738a6a611 100644 --- a/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md +++ b/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md @@ -117,7 +117,7 @@ Using the example text "Elastic is headquartered in Mountain View, California.", You can perform bulk {{infer}} on documents as they are ingested by using an [{{infer}} processor](elasticsearch://reference/enrich-processor/inference-processor.md) in your ingest pipeline. The novel *Les Misérables* by Victor Hugo is used as an example for {{infer}} in the following example. [Download](https://github.com/elastic/stack-docs/blob/8.5/docs/en/stack/ml/nlp/data/les-miserables-nd.json) the novel text split by paragraph as a JSON file, then upload it by using the [Data Visualizer](../../../manage-data/ingest/upload-data-files.md). Give the new index the name `les-miserables` when uploading the file. -Now create an ingest pipeline either from the [Ingest pipelines](ml-nlp-inference.md#ml-nlp-inference-processor) page in {{kib}} or by using the API: +Now create an ingest pipeline either from the [Ingest Pipelines](ml-nlp-inference.md#ml-nlp-inference-processor) page in {{kib}} or by using the API: ```js PUT _ingest/pipeline/ner diff --git a/explore-analyze/machine-learning/setting-up-machine-learning.md b/explore-analyze/machine-learning/setting-up-machine-learning.md index f3f9dd23de..30d8089264 100644 --- a/explore-analyze/machine-learning/setting-up-machine-learning.md +++ b/explore-analyze/machine-learning/setting-up-machine-learning.md @@ -37,7 +37,7 @@ Assigning security privileges affects how users access {{ml-features}}. Consider You can configure these privileges -* under **Roles** or **Spaces** management pages. Find these pages in the main menu or use the [global search field](../find-and-organize/find-apps-and-objects.md). +* under the **Roles** and **Spaces** management pages. Find these pages in the main menu or use the [global search field](../find-and-organize/find-apps-and-objects.md). * via the respective {{es}} security APIs. ### {{es}} API user [es-security-privileges] @@ -68,7 +68,7 @@ Granting `All` or `Read` {{kib}} feature privilege for {{ml-app}} will also gran #### Feature visibility in Spaces [kib-visibility-spaces] -In {{kib}}, the {{ml-features}} must be visible in your [space](../../deploy-manage/manage-spaces.md). To manage which features are visible in your space, navigate to the **Spaces** management page or use the [global search field](../find-and-organize/find-apps-and-objects.md) to locate it directly. +In {{kib}}, the {{ml-features}} must be visible in your [space](../../deploy-manage/manage-spaces.md). To manage which features are visible in your space, go to the **Spaces** management page using the navigation menu or the [global search field](../find-and-organize/find-apps-and-objects.md). 
:::{image} /explore-analyze/images/machine-learning-spaces.jpg :alt: Manage spaces in {{kib}} From 06f6c9dbd8436755e956256b2f2807da4cbf227d Mon Sep 17 00:00:00 2001 From: florent-leborgne Date: Mon, 6 Oct 2025 10:29:46 +0200 Subject: [PATCH 5/7] Apply suggestion from @nastasha-solomon Co-authored-by: Nastasha Solomon <79124755+nastasha-solomon@users.noreply.github.com> --- explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md b/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md index 9738a6a611..d4d6b092d6 100644 --- a/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md +++ b/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md @@ -117,7 +117,7 @@ Using the example text "Elastic is headquartered in Mountain View, California.", You can perform bulk {{infer}} on documents as they are ingested by using an [{{infer}} processor](elasticsearch://reference/enrich-processor/inference-processor.md) in your ingest pipeline. The novel *Les Misérables* by Victor Hugo is used as an example for {{infer}} in the following example. [Download](https://github.com/elastic/stack-docs/blob/8.5/docs/en/stack/ml/nlp/data/les-miserables-nd.json) the novel text split by paragraph as a JSON file, then upload it by using the [Data Visualizer](../../../manage-data/ingest/upload-data-files.md). Give the new index the name `les-miserables` when uploading the file. -Now create an ingest pipeline either from the [Ingest Pipelines](ml-nlp-inference.md#ml-nlp-inference-processor) page in {{kib}} or by using the API: +Now create an ingest pipeline either from the [Ingest Pipelines](ml-nlp-inference.md#ml-nlp-inference-processor) management page in {{kib}} or by using the API: ```js PUT _ingest/pipeline/ner From 5a64b5fb5831895c9b85316598d5473057f52d4a Mon Sep 17 00:00:00 2001 From: florent-leborgne Date: Mon, 6 Oct 2025 10:36:55 +0200 Subject: [PATCH 6/7] Apply suggestions from code review Co-authored-by: Nastasha Solomon <79124755+nastasha-solomon@users.noreply.github.com> Co-authored-by: shainaraskas <58563081+shainaraskas@users.noreply.github.com> --- .../nlp/ml-nlp-text-emb-vector-search-example.md | 2 +- explore-analyze/transforms/ecommerce-transforms.md | 2 +- troubleshoot/elasticsearch/diagnosing-corrupted-repositories.md | 2 +- troubleshoot/elasticsearch/troubleshoot-ingest-pipelines.md | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md b/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md index 3e9d6c9909..fa329929cd 100644 --- a/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md +++ b/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md @@ -116,7 +116,7 @@ Upload the file by using the [Data Visualizer](../../../manage-data/ingest/uploa Process the initial data with an [{{infer}} processor](elasticsearch://reference/enrich-processor/inference-processor.md). It adds an embedding for each passage. For this, create a text embedding ingest pipeline and then reindex the initial data with this pipeline. 
-Now create an ingest pipeline either from the [Ingest Pipelines](ml-nlp-inference.md#ml-nlp-inference-processor) page in {{kib}} or by using the API: +Now create an ingest pipeline either from the [Ingest Pipelines](ml-nlp-inference.md#ml-nlp-inference-processor) management page in {{kib}} or by using the API: ```js PUT _ingest/pipeline/text-embeddings diff --git a/explore-analyze/transforms/ecommerce-transforms.md b/explore-analyze/transforms/ecommerce-transforms.md index 0566245497..76653d4431 100644 --- a/explore-analyze/transforms/ecommerce-transforms.md +++ b/explore-analyze/transforms/ecommerce-transforms.md @@ -23,7 +23,7 @@ products: For example, you might want to group the data by product ID and calculate the total number of sales for each product and its average price. Alternatively, you might want to look at the behavior of individual customers and calculate how much each customer spent in total and how many different categories of products they purchased. Or you might want to take the currencies or geographies into consideration. What are the most interesting ways you can transform and interpret this data? - Go to the **Transforms** management page in {{kib}} and use the wizard to create a {{transform}}: + Go to the **Transforms** management page in {{kib}} using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), then use the wizard to create a {{transform}}: :::{image} /explore-analyze/images/elasticsearch-reference-ecommerce-pivot1.png :alt: Creating a simple {{transform}} in {{kib}} :screenshot: diff --git a/troubleshoot/elasticsearch/diagnosing-corrupted-repositories.md b/troubleshoot/elasticsearch/diagnosing-corrupted-repositories.md index a9e4f2fc97..b4cf270f78 100644 --- a/troubleshoot/elasticsearch/diagnosing-corrupted-repositories.md +++ b/troubleshoot/elasticsearch/diagnosing-corrupted-repositories.md @@ -33,7 +33,7 @@ First mark the repository as read-only on the secondary deployments: If the name of your deployment is disabled your {{kib}} instances might be unhealthy, in which case contact [Elastic Support](https://support.elastic.co). If your deployment doesn’t include {{kib}}, all you need to do is [enable it first](../../deploy-manage/deploy/elastic-cloud/access-kibana.md). :::: -3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Snapshot and Restore > Repositories**. You can find the **Snapshot and Restore** management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). +3. Go to **Snapshot and Restore > Repositories**. You can find the **Snapshot and Restore** management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). :::{image} /troubleshoot/images/elasticsearch-reference-repositories.png :alt: {{kib}} Console diff --git a/troubleshoot/elasticsearch/troubleshoot-ingest-pipelines.md b/troubleshoot/elasticsearch/troubleshoot-ingest-pipelines.md index d2edca3b1c..d2f5f15ed0 100644 --- a/troubleshoot/elasticsearch/troubleshoot-ingest-pipelines.md +++ b/troubleshoot/elasticsearch/troubleshoot-ingest-pipelines.md @@ -15,7 +15,7 @@ products: {{es}} [Ingest Pipelines](https://www.elastic.co/docs/manage-data/ingest/transform-enrich/ingest-pipelines) allow you to transform data during ingest. 
Per [write model](https://www.elastic.co/docs/deploy-manage/distributed-architecture/reading-and-writing-documents#basic-write-model), they run from `ingest` [node roles](https://www.elastic.co/docs/deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles) under the `write` [thread pool](https://www.elastic.co/docs/reference/elasticsearch/configuration-reference/thread-pool-settings).

-You can edit ingest pipelines in {{kib}}'s **Ingest Pipelines** page or from {{es}}'s [Modify Pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-put-pipeline). They store under {{es}}'s [cluster state](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-state) as accessed from [List Pipelines](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-get-pipeline).
+You can edit ingest pipelines in {{kib}}'s **Ingest Pipelines** management page or from {{es}}'s [Modify Pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-put-pipeline). They are stored in {{es}}'s [cluster state](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-state), which you can retrieve with [List Pipelines](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-get-pipeline).

Ingest pipelines can be [Simulated](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-simulate) during testing, but after go-live are triggered during event ingest from

From e22deeeaf0049ad053eaf9ec64cddea834d94e37 Mon Sep 17 00:00:00 2001
From: florent-leborgne
Date: Mon, 6 Oct 2025 21:35:37 +0200
Subject: [PATCH 7/7] Update reference/fleet/data-streams-scenario2.md

Co-authored-by: Colleen McGinnis
---
 reference/fleet/data-streams-scenario2.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/reference/fleet/data-streams-scenario2.md b/reference/fleet/data-streams-scenario2.md
index 429d78aa54..dba76329a2 100644
--- a/reference/fleet/data-streams-scenario2.md
+++ b/reference/fleet/data-streams-scenario2.md
@@ -27,7 +27,7 @@ The **Index Templates** view in {{kib}} shows you all of the index templates available to automatically apply settings, mappings, and aliases to indices:

-1. Go to the **Index Management** management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
+1. Go to the **Index Management** page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
2. Select **Index Templates**.
3. Search for `system` to see all index templates associated with the System integration.
4. Select the index template that matches the data stream for which you want to set up an ILM policy. For this example, you can select the `logs-system.auth` index template.
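As a closing cross-check for this last scenario: after an {{ilm-init}} policy has been attached to the `logs-system.auth` index template (typically through its `@custom` component template, as in the earlier scenarios), you can verify the resolved settings over the API instead of the UI. A sketch, assuming the `default` namespace:

```console
GET _index_template/logs-system.auth

POST _index_template/_simulate_index/logs-system.auth-default
```

The simulate response shows the settings a new backing index would receive, including the effective `index.lifecycle.name`, which confirms whether the custom component template took precedence.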