You can specify the loss function to be used during {{reganalysis}} when you create the {{dfanalytics-job}}.

Consult [the Jupyter notebook on regression loss functions](https://github.com/elastic/examples/tree/master/Machine%20Learning/Regression%20Loss%20Functions) to learn more.

::::{tip}
The default loss function parameter values work well in most cases. We highly recommend using the default values unless you fully understand the impact of the different loss function parameters.
::::
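
For reference, a create request that overrides the default might look like the following sketch. The job and index names are hypothetical; `huber` is one of the available loss function types, and `loss_function_parameter` tunes it:

```console
PUT _ml/data_frame/analytics/house-price-regression
{
  "source": { "index": "house-prices" },
  "dest": { "index": "house-prices-predictions" },
  "analysis": {
    "regression": {
      "dependent_variable": "price",
      "loss_function": "huber",
      "loss_function_parameter": 1.0
    }
  }
}
```

If `loss_function` is omitted, the default (`mse`) is used.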


You can view the hyperparameter values that were ultimately chosen by expanding the {{dfanalytics-job}} details in {{kib}}.

Different hyperparameters may affect the model performance to different degrees. To estimate the importance of the optimized hyperparameters, analysis of variance decomposition is used. The resulting `absolute importance` shows how much the variation of a hyperparameter impacts the variation in the validation loss. Additionally, `relative importance` is also computed, which gives the importance of the hyperparameter compared to the rest of the tunable hyperparameters. The sum of all relative importances is 1. You can check these results in the response of the [get {{dfanalytics-job}} stats API](https://www.elastic.co/guide/en/elasticsearch/reference/current/get-dfanalytics-stats.html).
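
For example, to retrieve these results for a hypothetical job named `house-price-regression`:

```console
GET _ml/data_frame/analytics/house-price-regression/_stats
```

The chosen hyperparameter values and their importances are reported in the `analysis_stats` section of the response.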

::::{tip}
Unless you fully understand the purpose of a hyperparameter, it is highly recommended that you leave it unset and allow hyperparameter optimization to occur.
::::


This section explains the more complex concepts of the Elastic {{ml}} {{dfanalytics}} feature:
* [Loss functions for {{regression}} analyses](dfa-regression-lossfunction.md)
* [Hyperparameter optimization](hyperparameters.md)
* [Trained models](ml-trained-models.md)

When you create or edit an {{dfanalytics-job}} in {{kib}}, it simplifies the creation of custom URLs.

For each custom URL, you must supply a label. You can also optionally supply a time range. When you link to **Discover** or a {{kib}} dashboard, you’ll have additional options for specifying the pertinent {{data-source}} or dashboard name and query entities.


## String substitution in custom URLs [ml-dfa-url-strings]

You can use dollar sign ($) delimited tokens in a custom URL. These tokens are substituted for the values of the corresponding fields in the result index. For example, a custom URL might resolve to `discover#/?_g=(time:(from:'$earliest$',mode:absolute,to:'$latest$'))&_a=(filters:!(),index:'4b899bcb-fb10-4094-ae70-207d43183ffc',query:(language:kuery,query:'Carrier:"$Carrier$"'))`. In this case, the pertinent value of the `Carrier` field is passed to the target page when you click the link.
When you create your custom URL in {{kib}}, the **Query entities** option is shown only when there are appropriate fields in the index.
::::


The `$earliest$` and `$latest$` tokens pass the beginning and end of the time span of the data to the target page. The tokens are substituted with date-time strings in ISO-8601 format. For example, the following API updates a job to add a custom URL that uses `$earliest$` and `$latest$` tokens:

```console
POST _ml/data_frame/analytics/flight-delay-regression/_update
{
  "_meta": {
    "custom_urls": [
      {
        "url_name": "Raw data",
        "time_range": "2h",
        "url_value": "discover#/?_g=(time:(from:'$earliest$',mode:absolute,to:'$latest$'))&_a=(index:'4b899bcb-fb10-4094-ae70-207d43183ffc')"
      }
    ]
  }
}
```
When you click this custom URL, it opens up the **Discover** page and displays source data for the period one hour before and after the date of the default global settings.

::::{tip}

* The custom URL links use pop-ups. You must configure your web browser so that it does not block pop-up windows or create an exception for your {{kib}} URL.
* When creating a link to a {{kib}} dashboard, the URLs for dashboards can be very long. Be careful of typos, end of line characters, and URL encoding. Also ensure you use the appropriate index ID for the target {{kib}} {{data-source}}.
* The dates substituted for `$earliest$` and `$latest$` tokens are in ISO-8601 format and the target system must understand this format.
::::
---
mapped_pages:
  - https://www.elastic.co/guide/en/machine-learning/current/ml-dfa-limitations.html
---



# Limitations [ml-dfa-limitations]


The following limitations and known problems apply to the 9.0.0-beta1 release of the Elastic {{dfanalytics}} feature. The limitations are grouped into the following categories:

* [Platform limitations](#dfa-platform-limitations) are related to the platform that hosts the {{ml}} feature of the {{stack}}.
* [Configuration limitations](#dfa-config-limitations) apply to the configuration process of the {{dfanalytics-jobs}}.
* [Operational limitations](#dfa-operational-limitations) affect the behavior of the {{dfanalytics-jobs}} that are running.

## Platform limitations [dfa-platform-limitations]


### CPU scheduling improvements apply to Linux and macOS only [dfa-scheduling-priority]

When many {{ml}} jobs run at the same time and CPU resources are insufficient, the JVM performance must be prioritized so search and indexing latency remain acceptable. To that end, when CPU is constrained on Linux and macOS environments, the CPU scheduling priority of native analysis processes is reduced to favor the {{es}} JVM. This improvement does not apply to Windows environments.

## Configuration limitations [dfa-config-limitations]


### {{ccs-cap}} is not supported [dfa-ccs-limitations]

{{ccs-cap}} is not supported for {{dfanalytics}}.


### Nested fields are not supported [dfa-nested-fields-limitations]

Nested fields are not supported for {{dfanalytics-jobs}}. These fields are ignored during the analysis. If a nested field is selected as the dependent variable for {{classification}} or {{reganalysis}}, an error occurs.


### {{dfanalytics-jobs-cap}} cannot be updated [dfa-update-limitations]

You cannot update {{dfanalytics}} configurations. Instead, delete the {{dfanalytics-job}} and create a new one.


### {{dfanalytics-cap}} memory limitation [dfa-dataframe-size-limitations]

{{dfanalytics-cap}} can only perform analyses that fit into the memory available for {{ml}}. Overspill to disk is not currently possible. For general {{ml}} settings, see [{{ml-cap}} settings in {{es}}](https://www.elastic.co/guide/en/elasticsearch/reference/current/ml-settings.html).

If the {{infer}} step of a {{dfanalytics-job}} fails because the model is too large to fit into the JVM, follow the steps in [this GitHub issue](https://github.com/elastic/elasticsearch/issues/76093) for a workaround.
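
If a job needs more headroom, you can raise its memory budget at creation time with `model_memory_limit`. A minimal sketch, using hypothetical job and index names:

```console
PUT _ml/data_frame/analytics/model-flight-delays
{
  "source": { "index": "kibana_sample_data_flights" },
  "dest": { "index": "df-flight-delays" },
  "analysis": {
    "regression": { "dependent_variable": "FlightDelayMin" }
  },
  "model_memory_limit": "2gb"
}
```

The job must still fit into the memory available for {{ml}} on the node.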


### {{dfanalytics-jobs-cap}} cannot use more than 2^32 documents for training [dfa-training-docs]

A {{dfanalytics-job}} that would use more than 2^32 documents for training cannot be started. The limitation applies only to documents that participate in training the model. If your source index contains more than 2^32 documents, set `training_percent` to a value that selects fewer than 2^32 documents for training.
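
As a sketch, a job over a very large source index might cap training like this (names are hypothetical); with `training_percent: 5`, only 5% of eligible documents are used for training:

```console
PUT _ml/data_frame/analytics/large-index-classification
{
  "source": { "index": "very-large-index" },
  "dest": { "index": "very-large-index-results" },
  "analysis": {
    "classification": {
      "dependent_variable": "label",
      "training_percent": 5
    }
  }
}
```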


### Trained models created in 7.8 are not backwards compatible [dfa-inference-bwc]

Trained models created in version 7.8.0 are not backwards compatible with older node versions. In a mixed cluster environment, all nodes must be at least 7.8.0 to use a model created on a 7.8.0 node.

## Operational limitations [dfa-operational-limitations]


### Deleting a {{dfanalytics-job}} does not delete the destination index [dfa-deletion-limitations]

The [delete {{dfanalytics-job}} API](https://www.elastic.co/guide/en/elasticsearch/reference/current/delete-dfanalytics.html) does not delete the destination index that contains the annotated data of the {{dfanalytics}}. That index must be deleted separately.
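
For example, removing a hypothetical job and then its destination index takes two separate calls:

```console
DELETE _ml/data_frame/analytics/model-flight-delays
DELETE df-flight-delays
```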


### {{dfanalytics-jobs-cap}} runtime may vary [dfa-time-limitations]

The runtime of {{dfanalytics-jobs}} depends on numerous factors, such as the number of data points in the data set, the type of analytics, the number of fields that are included in the analysis, the supplied [hyperparameters](hyperparameters.md), the type of analyzed fields, and so on. For this reason, a general runtime value that applies to all or most of the situations does not exist. The runtime of a {{dfanalytics-job}} may take from a couple of minutes up to many hours in extreme cases.

The runtime increases with an increasing number of analyzed fields in a nearly linear fashion. For data sets of more than 100,000 points, start with a low training percent. Run a few {{dfanalytics-jobs}} to see how the runtime scales with the increased number of data points and how the quality of results scales with an increased training percentage.


### {{dfanalytics-jobs-cap}} may restart after an {{es}} upgrade [dfa-restart]

A {{dfanalytics-job}} may be restarted from the beginning in the following cases:


If any of these conditions applies, the destination index of the {{dfanalytics-job}} is deleted and the job starts again from the beginning, regardless of which phase the job was in.


### Documents with values of multi-element arrays in analyzed fields are skipped [dfa-multi-arrays-limitations]

If the value of an analyzed field (a field that is the subject of the {{dfanalytics}}) in a document is an array with more than one element, the document that contains this field is skipped during the analysis.


### {{oldetection-cap}} field types [dfa-od-field-type-docs-limitations]

{{oldetection-cap}} requires numeric or boolean data to analyze. The algorithms don’t support missing values; therefore, fields that have data types other than numeric or boolean are ignored. Documents where included fields contain missing values, null values, or an array are also ignored. Therefore, a destination index may contain documents that don’t have an {{olscore}}. These documents are still reindexed from the source index to the destination index, but they are not included in the {{oldetection}} analysis, so no {{olscore}} is computed for them.


### {{regression-cap}} field types [dfa-regression-field-type-docs-limitations]

{{regression-cap}} supports fields that have numeric, boolean, text, keyword, or ip data types. It is also tolerant of missing values. Supported fields are included in the analysis; other fields are ignored. Documents where included fields contain an array are also ignored. Documents in the destination index that don’t contain a results field are not included in the {{reganalysis}}.


### {{classification-cap}} field types [dfa-classification-field-type-docs-limitations]

{{classification-cap}} supports fields that have numeric, boolean, text, keyword, or ip data types. It is also tolerant of missing values. Supported fields are included in the analysis; other fields are ignored. Documents where included fields contain an array are also ignored. Documents in the destination index that don’t contain a results field are not included in the {{classanalysis}}.


### Imbalanced class sizes affect {{classification}} performance [dfa-classification-imbalanced-classes]

If your training data is very imbalanced, {{classanalysis}} may not provide good predictions. Try to avoid highly imbalanced situations. We recommend having at least 50 examples of each class and a ratio of no more than 10 to 1 for the majority to minority class labels in the training data. If your training data set is very imbalanced, consider downsampling the majority class, upsampling the minority class, or gathering more data.


### Deeply nested objects affect {{infer}} performance [dfa-inference-nested-limitation]

If the data that you run inference against contains documents that have a series of combinations of dot delimited and nested fields (for example: `{"a.b": "c", "a": {"b": "c"},...}`), the performance of the operation might be slightly slower. Consider using as simple a mapping as possible for the best performance profile.


### Analytics runtime performance may significantly slow down with {{feat-imp}} computation [dfa-feature-importance-limitation]

For complex models (such as those with many deep trees), the calculation of {{feat-imp}} takes significantly more time. If a reduction in runtime is important to you, try strategies such as disabling {{feat-imp}}, reducing the amount of training data (for example by decreasing the training percentage), setting [hyperparameter](hyperparameters.md) values, or only selecting fields that are relevant for analysis.


# How data frame analytics jobs work [ml-dfa-phases]


A {{dfanalytics-job}} is essentially a persistent {{es}} task. During its life cycle, it goes through four or five main phases depending on the analysis type:

* reindexing,
* loading data,
* analyzing,
* writing results,
* {{infer}} (for {{regression}} and {{classification}} jobs only).

Let’s take a look at the phases one-by-one.
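
You can follow a running job through these phases: the get stats API response includes a `progress` array that reports a `progress_percent` for each phase. The job name below is hypothetical:

```console
GET _ml/data_frame/analytics/model-flight-delays/_stats
```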


## Reindexing [ml-dfa-phases-reindex]

During the reindexing phase the documents from the source index or indices are copied to the destination index. If you want to define settings or mappings, create the index before you start the job. Otherwise, the job creates it using default settings.

Once the destination index is built, the {{dfanalytics-job}} task calls the {{es}} [Reindex API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html) to launch the reindexing task.
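
For example, a destination index with explicit settings and mappings could be created before the job starts. This is a sketch; the index name and fields are hypothetical:

```console
PUT house-prices-predictions
{
  "settings": { "number_of_shards": 1 },
  "mappings": {
    "properties": {
      "price": { "type": "long" }
    }
  }
}
```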


## Loading data [ml-dfa-phases-load]

After the reindexing is finished, the job fetches the needed data from the destination index. It converts the data into the format that the analysis process expects, then sends it to the analysis process.


## Analyzing [ml-dfa-phases-analyze]

In this phase, the job generates a {{ml}} model for analyzing the data. The specific phases of analysis vary depending on the type of {{dfanalytics-job}}.

1. `feature_selection`: Identifies the fields that are most relevant for predicting the {{depvar}}.
2. `coarse_parameter_search`: Identifies initial values for undefined hyperparameters.
3. `fine_tuning_parameters`: Identifies final values for undefined hyperparameters. See [hyperparameter optimization](hyperparameters.md).
4. `final_training`: Trains the {{ml}} model.


## Writing results [ml-dfa-phases-write]

After the loaded data is analyzed, the analysis process sends back the results. Only the additional fields that the analysis calculated are written back, the ones that have been loaded in the loading data phase are not. The {{dfanalytics-job}} matches the results with the data rows in the destination index, merges them, and indexes them back to the destination index.


## {{infer-cap}} [ml-dfa-phases-inference]

This phase exists only for {{regression}} and {{classification}} jobs. In this phase, the job validates the trained model against the test split of the data set.

Finally, after all phases are completed, the task is marked as completed and the {{dfanalytics-job}} stops. Your data is ready to be evaluated.
This section contains further resources for using {{dfanalytics}}.

* [Limitations](ml-dfa-limitations.md)