@@ -106,7 +106,7 @@ Here, you’ll be able to:

Inference processors added to your index-specific ML {{infer}} pipelines are normal Elasticsearch pipelines. Once created, each processor will have options to **View in Stack Management** and **Delete Pipeline**. Deleting an {{infer}} processor from within the **Content** UI deletes the pipeline and also removes its reference from your index-specific ML {{infer}} pipeline.

These pipelines can also be viewed, edited, and deleted in Kibana via **Stack Management → Ingest Pipelines**, just like all other Elasticsearch ingest pipelines. You may also use the [Ingest pipeline APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-ingest). If you delete any of these pipelines outside of the **Content** UI in Kibana, make sure to edit the ML {{infer}} pipelines that reference them.
These pipelines can also be viewed, edited, and deleted in Kibana from the **Ingest Pipelines** management page, just like all other Elasticsearch ingest pipelines. You may also use the [Ingest pipeline APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-ingest). If you delete any of these pipelines outside of the **Content** UI in Kibana, make sure to edit the ML {{infer}} pipelines that reference them.
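
For example, a pipeline created this way can be inspected or removed with a couple of API calls. The pipeline name below is hypothetical; substitute the name shown in the **Content** UI:

```console
GET _ingest/pipeline/my-index@ml-inference

DELETE _ingest/pipeline/my-index@ml-inference
```

If you delete a pipeline this way, remember to also update any ML {{infer}} pipeline that still references it.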

## Test your ML {{infer}} pipeline [ingest-pipeline-search-inference-test-inference-pipeline]

4 changes: 2 additions & 2 deletions explore-analyze/machine-learning/nlp/ml-nlp-inference.md
@@ -19,10 +19,10 @@ After you [deploy a trained model in your cluster](ml-nlp-deploy-models.md), you

## Add an {{infer}} processor to an ingest pipeline [ml-nlp-inference-processor]

In {{kib}}, you can create and edit pipelines in **{{stack-manage-app}}** > **Ingest Pipelines**. To open **Ingest Pipelines**, find **{{stack-manage-app}}** in the main menu, or use the [global search field](../../find-and-organize/find-apps-and-objects.md).
In {{kib}}, you can create and edit pipelines from the **Ingest Pipelines** management page. You can find this page in the main menu or using the [global search field](../../find-and-organize/find-apps-and-objects.md).

:::{image} /explore-analyze/images/machine-learning-ml-nlp-pipeline-lang.png
:alt: Creating a pipeline in the Stack Management app
:alt: Creating a pipeline
:screenshot:
:::

@@ -117,7 +117,7 @@ Using the example text "Elastic is headquartered in Mountain View, California.",

You can perform bulk {{infer}} on documents as they are ingested by using an [{{infer}} processor](elasticsearch://reference/enrich-processor/inference-processor.md) in your ingest pipeline. The novel *Les Misérables* by Victor Hugo is used for {{infer}} in the following example. [Download](https://github.com/elastic/stack-docs/blob/8.5/docs/en/stack/ml/nlp/data/les-miserables-nd.json) the novel text split by paragraph as a JSON file, then upload it by using the [Data Visualizer](../../../manage-data/ingest/upload-data-files.md). Give the new index the name `les-miserables` when uploading the file.

Now create an ingest pipeline either in the [Stack management UI](ml-nlp-inference.md#ml-nlp-inference-processor) or by using the API:
Now create an ingest pipeline either from the [Ingest Pipelines](ml-nlp-inference.md#ml-nlp-inference-processor) management page in {{kib}} or by using the API:

```js
PUT _ingest/pipeline/ner
@@ -116,7 +116,7 @@ Upload the file by using the [Data Visualizer](../../../manage-data/ingest/uploa

Process the initial data with an [{{infer}} processor](elasticsearch://reference/enrich-processor/inference-processor.md). It adds an embedding for each passage. For this, create a text embedding ingest pipeline and then reindex the initial data with this pipeline.

Now create an ingest pipeline either in the [{{stack-manage-app}} UI](ml-nlp-inference.md#ml-nlp-inference-processor) or by using the API:
Now create an ingest pipeline either from the [Ingest Pipelines](ml-nlp-inference.md#ml-nlp-inference-processor) management page in {{kib}} or by using the API:

```js
PUT _ingest/pipeline/text-embeddings
@@ -37,7 +37,7 @@ Assigning security privileges affects how users access {{ml-features}}. Consider

You can configure these privileges:

* under **Security**. To open Security, find **{{stack-manage-app}}** in the main menu or use the [global search field](../find-and-organize/find-apps-and-objects.md).
* on the **Roles** and **Spaces** management pages. Find these pages in the main menu or use the [global search field](../find-and-organize/find-apps-and-objects.md).
* via the respective {{es}} security APIs.

### {{es}} API user [es-security-privileges]
@@ -68,7 +68,7 @@ Granting `All` or `Read` {{kib}} feature privilege for {{ml-app}} will also gran

#### Feature visibility in Spaces [kib-visibility-spaces]

In {{kib}}, the {{ml-features}} must be visible in your [space](../../deploy-manage/manage-spaces.md). To manage which features are visible in your space, go to **{{stack-manage-app}}** > **{{kib}}** > **Spaces** or use the [global search field](../find-and-organize/find-apps-and-objects.md) to locate **Spaces** directly.
In {{kib}}, the {{ml-features}} must be visible in your [space](../../deploy-manage/manage-spaces.md). To manage which features are visible in your space, go to the **Spaces** management page using the navigation menu or the [global search field](../find-and-organize/find-apps-and-objects.md).

:::{image} /explore-analyze/images/machine-learning-spaces.jpg
:alt: Manage spaces in {{kib}}
2 changes: 1 addition & 1 deletion explore-analyze/transforms/ecommerce-transforms.md
@@ -23,7 +23,7 @@ products:

For example, you might want to group the data by product ID and calculate the total number of sales for each product and its average price. Alternatively, you might want to look at the behavior of individual customers and calculate how much each customer spent in total and how many different categories of products they purchased. Or you might want to take the currencies or geographies into consideration. What are the most interesting ways you can transform and interpret this data?

Go to **Management** > **Stack Management** > **Data** > **Transforms** in {{kib}} and use the wizard to create a {{transform}}:
Go to the **Transforms** management page in {{kib}} using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), then use the wizard to create a {{transform}}:
:::{image} /explore-analyze/images/elasticsearch-reference-ecommerce-pivot1.png
:alt: Creating a simple {{transform}} in {{kib}}
:screenshot:
2 changes: 1 addition & 1 deletion explore-analyze/transforms/transform-checkpoints.md
@@ -45,7 +45,7 @@ If the cluster experiences unsuitable performance degradation due to the {{trans

In most cases, it is strongly recommended to use the ingest timestamp of the source indices for syncing the {{transform}}. This is the optimal way for {{transforms}} to identify new changes. If your data source follows the [ECS standard](ecs://reference/index.md), you might already have an [`event.ingested`](ecs://reference/ecs-event.md#field-event-ingested) field. In this case, use `event.ingested` as the `sync`.`time`.`field` property of your {{transform}}.
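
As a sketch, the corresponding fragment of a {{transform}} definition would then look like this (the `delay` value is illustrative and depends on your ingest latency):

```console
"sync": {
  "time": {
    "field": "event.ingested",
    "delay": "60s"
  }
}
```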

If you don’t have an `event.ingested` field or it isn’t populated, you can set it by using an ingest pipeline. Create an ingest pipeline either using the [ingest pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-put-pipeline) (like the example below) or via {{kib}} under **Stack Management > Ingest Pipelines**. Use a [`set` processor](elasticsearch://reference/enrich-processor/set-processor.md) to set the field and associate it with the value of the ingest timestamp.
If you don’t have an `event.ingested` field or it isn’t populated, you can set it by using an ingest pipeline. Create an ingest pipeline either using the [ingest pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-put-pipeline) (like the example below) or via {{kib}}'s **Ingest Pipelines** management page. Use a [`set` processor](elasticsearch://reference/enrich-processor/set-processor.md) to set the field and associate it with the value of the ingest timestamp.

```console
PUT _ingest/pipeline/set_ingest_time
2 changes: 1 addition & 1 deletion explore-analyze/transforms/transform-limitations.md
@@ -132,7 +132,7 @@ When running a large number of SLO {{transforms}}, two types of limitations can

#### {{transforms-cap}} can return inaccurate errors that suggest deletion [transforms-inaccurate-errors]

The {{transforms-cap}} API and the {{transforms-cap}} page in {{kib}} (**Stack Management** > **{{transforms-cap}}**) may display misleading error messages for {{transforms}} created by service level objectives (SLOs).
The {{transforms-cap}} API and the {{transforms-cap}} management page in {{kib}} may display misleading error messages for {{transforms}} created by service level objectives (SLOs).

The message typically reads:

13 changes: 7 additions & 6 deletions reference/fleet/data-streams-pipeline-tutorial.md
@@ -19,18 +19,19 @@ This tutorial explains how to add a custom ingest pipeline to an Elastic Integra

Create a custom ingest pipeline that will be called by the default integration pipeline. In this tutorial, we’ll create a pipeline that adds a new field to our documents.

1. In {{kib}}, navigate to **Stack Management** → **Ingest Pipelines** → **Create pipeline** → **New pipeline**.
2. Name your pipeline. We’ll call this one `add_field`.
3. Select **Add a processor**. Fill out the following information:
1. In {{kib}}, go to the **Ingest Pipelines** management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
1. Select **Create pipeline** → **New pipeline**.
1. Name your pipeline. We’ll call this one `add_field`.
1. Select **Add a processor**. Fill out the following information:

* Processor: "Set"
* Field: `test`
* Value: `true`

The [Set processor](elasticsearch://reference/enrich-processor/set-processor.md) sets a document field and associates it with the specified value.

4. Click **Add**.
5. Click **Create pipeline**.
1. Click **Add**.
1. Click **Create pipeline**.
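
The steps above produce the same pipeline as this API request (a sketch; `add_field` is the pipeline name chosen above):

```console
PUT _ingest/pipeline/add_field
{
  "processors": [
    {
      "set": {
        "field": "test",
        "value": true
      }
    }
  ]
}
```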


## Step 2: Apply your ingest pipeline [data-streams-pipeline-two]
@@ -55,7 +56,7 @@ Most integrations write to multiple data streams. You’ll need to add the custo
1. Find the first data stream you wish to edit and select **Change defaults**. For this tutorial, find the data stream configuration titled **Collect metrics from System instances**.
2. Scroll to **System CPU metrics** and under **Advanced options** select **Add custom pipeline**.

This will take you to the **Create pipeline** workflow in **Stack management**.
This will take you to the **Create pipeline** workflow.



4 changes: 2 additions & 2 deletions reference/fleet/data-streams-scenario1.md
@@ -22,7 +22,7 @@ This tutorial explains how to apply a custom index lifecycle policy to all of th

## Step 1: Create an index lifecycle policy [data-streams-scenario1-step1]

1. To open **Lifecycle Policies**, find **Stack Management** in the main menu or use the [global search field](/get-started/the-stack.md#kibana-navigation-search).
1. Go to the **Index Lifecycle Policies** management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
2. Click **Create policy**.

Name your new policy. For this tutorial, you can use `my-ilm-policy`. Customize the policy to your liking, and when you’re done, click **Save policy**.
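
If you prefer the API, a minimal equivalent policy might look like the following (the phases and thresholds are illustrative, not a recommendation):

```console
PUT _ilm/policy/my-ilm-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "50gb",
            "max_age": "30d"
          }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```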
@@ -32,7 +32,7 @@ Name your new policy. For this tutorial, you can use `my-ilm-policy`. Customize

The **Index Templates** view in {{kib}} shows you all of the index templates available to automatically apply settings, mappings, and aliases to indices:

1. To open **Index Management**, find **Stack Management** in the main menu or use the [global search field](/get-started/the-stack.md#kibana-navigation-search).
1. Go to the **Index Management** page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
2. Select **Index Templates**.
3. Search for `system` to see all index templates associated with the System integration.
4. Select any `logs-*` index template to view the associated component templates. For example, you can select the `logs-system.application` index template.
4 changes: 2 additions & 2 deletions reference/fleet/data-streams-scenario2.md
@@ -17,7 +17,7 @@ This tutorial explains how to apply a custom index lifecycle policy to the `logs

## Step 1: Create an index lifecycle policy [data-streams-scenario2-step1]

1. To open **Lifecycle Policies**, find **Stack Management** in the main menu or use the [global search field](/get-started/the-stack.md#kibana-navigation-search).
1. Go to the **Index Lifecycle Policies** management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
2. Click **Create policy**.

Name your new policy. For this tutorial, you can use `my-ilm-policy`. Customize the policy to your liking, and when you’re done, click **Save policy**.
@@ -27,7 +27,7 @@ Name your new policy. For this tutorial, you can use `my-ilm-policy`. Customize

The **Index Templates** view in {{kib}} shows you all of the index templates available to automatically apply settings, mappings, and aliases to indices:

1. To open **Index Management**, find **Stack Management** in the main menu or use the [global search field](/get-started/the-stack.md#kibana-navigation-search).
1. Go to the **Index Management** page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
2. Select **Index Templates**.
3. Search for `system` to see all index templates associated with the System integration.
4. Select the index template that matches the data stream for which you want to set up an ILM policy. For this example, you can select the `logs-system.auth` index template.
8 changes: 4 additions & 4 deletions reference/fleet/data-streams-scenario3.md
@@ -26,7 +26,7 @@ In this scenario, you have {{agent}}s collecting system metrics with the System

The **Data Streams** view in {{kib}} shows you the data streams, index templates, and {{ilm-init}} policies associated with a given integration.

1. Navigate to **{{stack-manage-app}}** > **Index Management** > **Data Streams**.
1. Go to the **Index Management** page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), and open the **Data Streams** tab.
2. Search for `system` to see all data streams associated with the System integration.
3. Select the `metrics-system.network-{{namespace}}` data stream to view its associated index template and {{ilm-init}} policy. As you can see, the data stream follows the [Data stream naming scheme](/reference/fleet/data-streams.md#data-streams-naming-scheme) and starts with its type, `metrics-`.

@@ -51,7 +51,7 @@ For example, to create custom index settings for the `system.network` data strea
metrics-system.network-production@custom
```

1. Navigate to **{{stack-manage-app}}** > **Index Management** > **Component Templates**
1. Go to the **Index Management** page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), and open the **Component Templates** tab.
2. Click **Create component template**.
3. Use the template above to set the name—in this case, `metrics-system.network-production@custom`. Click **Next**.
4. Under **Index settings**, set the {{ilm-init}} policy name under the `lifecycle.name` key:
@@ -86,7 +86,7 @@ Now that you’ve created a component template, you need to create an index templ
::::


1. Navigate to **{{stack-manage-app}}** > **Index Management** > **Index Templates**.
1. Go to the **Index Management** page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), and open the **Index Templates** tab.
2. Find the index template you want to clone. The index template will have the `<type>` and `<dataset>` in its name, but not the `<namespace>`. In this case, it’s `metrics-system.network`.
3. Select **Actions** > **Clone**.
4. Set the name of the new index template to `metrics-system.network-production`.
@@ -144,7 +144,7 @@ If you cloned an index template to customize the data retention policy on an {{e

To update the cloned index template:

1. Navigate to **{{stack-manage-app}}** > **Index Management** > **Index Templates**.
1. Go to the **Index Management** page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), and open the **Index Templates** tab.
2. Find the index template you cloned. The index template will have the `<type>` and `<dataset>` in its name.
3. Select **Manage** > **Edit**.
4. Select **(2) Component templates**
2 changes: 1 addition & 1 deletion reference/fleet/data-streams-scenario4.md
@@ -16,7 +16,7 @@ If you’ve created a custom integration package, you can apply a single ILM pol

## Step 1: Define the ILM policy [data-streams-scenario4-step1]

1. In {{kib}}, go to **Stack Management** and select **Index Lifecycle Policies**. You can also use the [global search field](/get-started/the-stack.md#kibana-navigation-search).
1. In {{kib}}, go to the **Index Lifecycle Policies** management page. You can also use the [global search field](/get-started/the-stack.md#kibana-navigation-search).
2. Click **Create policy**.
3. Name the policy, configure it as needed, and click **Save policy**.

4 changes: 2 additions & 2 deletions reference/fleet/data-streams.md
@@ -98,10 +98,10 @@ The `@custom` component template specific to a datastream has higher precedence

You can edit a `@custom` component template to customize your {{es}} indices:

1. Open {{kib}} and navigate to **{{stack-manage-app}}** > **Index Management** > **Data Streams**.
1. Open {{kib}} and go to the **Index Management** page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), then open the **Data Streams** tab.
2. Find and click the name of the integration data stream, such as `logs-cisco_ise.log-default`.
3. Click the index template link for the data stream to see the list of associated component templates.
4. Navigate to **{{stack-manage-app}}** > **Index Management** > **Component Templates**.
4. Go to the **Index Management** page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), and open the **Component Templates** tab.
5. Search for the name of the data stream’s custom component template and click the edit icon.
6. Add any custom index settings, metadata, or mappings. For example, you may want to:

Original file line number Diff line number Diff line change
Expand Up @@ -308,7 +308,7 @@ When you migrate from {{beats}} to {{agent}}, you have a couple of options for m

If you have existing index lifecycle policies for {{beats}}, it’s highly recommended that you modify the lifecycle policies for {{agent}} to match your previous policy. To do this:

1. In {{kib}}, go to **{{stack-manage-app}} > Index Lifecycle Policies** and search for a {{beats}} policy, for example, **filebeat**. Under **Linked indices**, notice you can view indices linked to the policy. Click the policy name to see the settings.
1. In {{kib}}, go to the **Index Lifecycle Policies** management page and search for a {{beats}} policy, for example, **filebeat**. Under **Linked indices**, notice you can view indices linked to the policy. Click the policy name to see the settings.
2. Click the **logs** policy and, if necessary, edit the settings to match the old policy.
3. Under **Index Lifecycle Policies**, search for another {{beats}} policy, for example, **metricbeat**.
4. Click the **metrics** policy and edit the settings to match the old policy.
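
To compare the old and new settings side by side, you can also retrieve both policies through the API (the {{beats}} policy names here are defaults; yours may differ):

```console
GET _ilm/policy/filebeat

GET _ilm/policy/logs
```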