[DOCS] Updates terms in machine learning datafeed APIs (#44883)

lcawl committed Jul 26, 2019
1 parent d4b2d21 commit cef375f

Showing 7 changed files with 49 additions and 45 deletions.
15 changes: 8 additions & 7 deletions docs/java-rest/high-level/ml/delete-datafeed.asciidoc
@@ -4,29 +4,30 @@
:response: AcknowledgedResponse
--
[id="{upid}-delete-datafeed"]
-=== Delete Datafeed API
+=== Delete datafeed API

Deletes an existing datafeed.

[id="{upid}-{api}-request"]
-==== Delete Datafeed Request
+==== Delete datafeed request

A +{request}+ object requires a non-null `datafeedId` and can optionally set `force`.

["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
include-tagged::{doc-tests-file}[{api}-request]
---------------------------------------------------
-<1> Use to forcefully delete a started datafeed;
-this method is quicker than stopping and deleting the datafeed.
-Defaults to `false`.
+<1> Use to forcefully delete a started datafeed. This method is quicker than
+stopping and deleting the datafeed. Defaults to `false`.

include::../execution.asciidoc[]

[id="{upid}-{api}-response"]
-==== Delete Datafeed Response
+==== Delete datafeed response

The returned +{response}+ object indicates the acknowledgement of the request:
["source","java",subs="attributes,callouts,macros"]
---------------------------------------------------
include-tagged::{doc-tests-file}[{api}-response]
---------------------------------------------------
-<1> `isAcknowledged` was the deletion request acknowledged or not
+<1> `isAcknowledged` was the deletion request acknowledged or not.
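
For orientation, the following is a minimal sketch of the whole flow these
tagged snippets document. It assumes a configured `RestHighLevelClient` named
`client` and a hypothetical datafeed ID; the class and method names are those
of the 7.x Java high-level REST client as best understood here.

["source","java"]
--------------------------------------------------
// Sketch only: `client` and the datafeed ID "datafeed-total-requests" are assumptions.
DeleteDatafeedRequest deleteRequest = new DeleteDatafeedRequest("datafeed-total-requests");
deleteRequest.setForce(true); // forcefully delete a started datafeed (defaults to false)

AcknowledgedResponse deleteResponse =
    client.machineLearning().deleteDatafeed(deleteRequest, RequestOptions.DEFAULT);
boolean acknowledged = deleteResponse.isAcknowledged(); // whether the deletion was acknowledged
--------------------------------------------------
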
27 changes: 14 additions & 13 deletions docs/java-rest/high-level/ml/put-datafeed.asciidoc
@@ -4,25 +4,24 @@
:response: PutDatafeedResponse
--
[id="{upid}-{api}"]
-=== Put Datafeed API
+=== Put datafeed API

-The Put Datafeed API can be used to create a new {ml} datafeed
-in the cluster. The API accepts a +{request}+ object
+Creates a new {ml} datafeed in the cluster. The API accepts a +{request}+ object
as a request and returns a +{response}+.

[id="{upid}-{api}-request"]
-==== Put Datafeed Request
+==== Put datafeed request

A +{request}+ requires the following argument:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests-file}[{api}-request]
--------------------------------------------------
-<1> The configuration of the {ml} datafeed to create
+<1> The configuration of the {ml} datafeed to create.

[id="{upid}-{api}-config"]
-==== Datafeed Configuration
+==== Datafeed configuration

The `DatafeedConfig` object contains all the details about the {ml} datafeed
configuration.
@@ -33,10 +32,10 @@ A `DatafeedConfig` requires the following arguments:
--------------------------------------------------
include-tagged::{doc-tests-file}[{api}-config]
--------------------------------------------------
-<1> The datafeed ID and the job ID
-<2> The indices that contain the data to retrieve and feed into the job
+<1> The datafeed ID and the {anomaly-job} ID.
+<2> The indices that contain the data to retrieve and feed into the {anomaly-job}.
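
As an illustrative sketch, the two required arguments map onto the builder as
follows. The IDs and index name are hypothetical, and `DatafeedConfig.builder`
is assumed from the 7.x client:

["source","java"]
--------------------------------------------------
// Hypothetical IDs: the referenced anomaly job must already exist.
DatafeedConfig.Builder datafeedBuilder =
    DatafeedConfig.builder("datafeed-total-requests", "total-requests") // datafeed ID and anomaly job ID
        .setIndices("server-metrics");                                  // indices to retrieve data from
--------------------------------------------------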

-==== Optional Arguments
+==== Optional arguments
The following arguments are optional:

["source","java",subs="attributes,callouts,macros"]
@@ -49,7 +48,8 @@ include-tagged::{doc-tests-file}[{api}-config-set-chunking-config]
--------------------------------------------------
include-tagged::{doc-tests-file}[{api}-config-set-frequency]
--------------------------------------------------
-<1> The interval at which scheduled queries are made while the datafeed runs in real time.
+<1> The interval at which scheduled queries are made while the datafeed runs in
+real time.

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
@@ -72,8 +72,9 @@ The window must be larger than the Job's bucket size, but smaller than 24 hours,
and span less than 10,000 buckets.
Defaults to `null`, which causes an appropriate window span to be calculated when
the datafeed runs.
-The default `check_window` span calculation is the max between `2h` or `8 * bucket_span`.
-To explicitly disable, pass `DelayedDataCheckConfig.disabledDelayedDataCheckConfig()`.
+The default `check_window` span calculation is the max between `2h` or
+`8 * bucket_span`. To explicitly disable, pass
+`DelayedDataCheckConfig.disabledDelayedDataCheckConfig()`.
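
For example, a sketch of both forms, continuing the `datafeedBuilder` from the
earlier sketch (the one-hour window is an arbitrary illustrative value):

["source","java"]
--------------------------------------------------
// Enable the delayed data check with an explicit one-hour window...
datafeedBuilder.setDelayedDataCheckConfig(
    DelayedDataCheckConfig.enabledDelayedDataCheckConfig(TimeValue.timeValueHours(1)));
// ...or disable the check entirely.
datafeedBuilder.setDelayedDataCheckConfig(
    DelayedDataCheckConfig.disabledDelayedDataCheckConfig());
--------------------------------------------------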

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
@@ -101,4 +102,4 @@ default values:
--------------------------------------------------
include-tagged::{doc-tests-file}[{api}-response]
--------------------------------------------------
-<1> The created datafeed
+<1> The created datafeed.
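
Putting it together, a minimal sketch of the request and response handling,
under the same assumptions as the earlier sketches (`client` and
`datafeedBuilder` are carried over):

["source","java"]
--------------------------------------------------
PutDatafeedRequest putRequest = new PutDatafeedRequest(datafeedBuilder.build());
PutDatafeedResponse putResponse =
    client.machineLearning().putDatafeed(putRequest, RequestOptions.DEFAULT);
DatafeedConfig createdDatafeed = putResponse.getResponse(); // the created datafeed
--------------------------------------------------
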
13 changes: 6 additions & 7 deletions docs/java-rest/high-level/ml/start-datafeed.asciidoc
@@ -4,14 +4,13 @@
:response: StartDatafeedResponse
--
[id="{upid}-{api}"]
-=== Start Datafeed API
+=== Start datafeed API

-The Start Datafeed API provides the ability to start a {ml} datafeed in the cluster.
-It accepts a +{request}+ object and responds
-with a +{response}+ object.
+Starts a {ml} datafeed in the cluster. It accepts a +{request}+ object and
+responds with a +{response}+ object.

[id="{upid}-{api}-request"]
-==== Start Datafeed Request
+==== Start datafeed request

A +{request}+ object is created referencing a non-null `datafeedId`.
All other fields are optional for the request.
@@ -20,9 +19,9 @@ All other fields are optional for the request.
--------------------------------------------------
include-tagged::{doc-tests-file}[{api}-request]
--------------------------------------------------
-<1> Constructing a new request referencing an existing `datafeedId`
+<1> Constructing a new request referencing an existing `datafeedId`.
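
A sketch of that construction, with one optional setter and synchronous
execution; the datafeed ID and start time are hypothetical, and `client` is an
assumed `RestHighLevelClient`:

["source","java"]
--------------------------------------------------
StartDatafeedRequest startRequest = new StartDatafeedRequest("datafeed-total-requests");
startRequest.setStart("2019-07-01T00:00:00Z"); // optional: analyze data from this timestamp onward

StartDatafeedResponse startResponse =
    client.machineLearning().startDatafeed(startRequest, RequestOptions.DEFAULT);
boolean started = startResponse.isStarted(); // whether the datafeed was started
--------------------------------------------------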

-==== Optional Arguments
+==== Optional arguments

The following arguments are optional.

24 changes: 13 additions & 11 deletions docs/java-rest/high-level/ml/update-datafeed.asciidoc
@@ -4,14 +4,13 @@
:response: PutDatafeedResponse
--
[id="{upid}-{api}"]
-=== Update Datafeed API
+=== Update datafeed API

-The Update Datafeed API can be used to update a {ml} datafeed
-in the cluster. The API accepts a +{request}+ object
+Updates a {ml} datafeed in the cluster. The API accepts a +{request}+ object
as a request and returns a +{response}+.

[id="{upid}-{api}-request"]
-==== Update Datafeed Request
+==== Update datafeed request

A +{request}+ requires the following argument:

@@ -22,7 +21,7 @@ include-tagged::{doc-tests-file}[{api}-request]
<1> The updated configuration of the {ml} datafeed

[id="{upid}-{api}-config"]
-==== Updated Datafeed Arguments
+==== Updated datafeed arguments

A `DatafeedUpdate` requires an existing non-null `datafeedId` and
allows updating various settings.
@@ -31,12 +30,15 @@ allows updating various settings.
--------------------------------------------------
include-tagged::{doc-tests-file}[{api}-config]
--------------------------------------------------
-<1> Mandatory, non-null `datafeedId` referencing an existing {ml} datafeed
-<2> Optional, set the datafeed Aggregations for data gathering
-<3> Optional, the indices that contain the data to retrieve and feed into the job
+<1> Mandatory, non-null `datafeedId` referencing an existing {ml} datafeed.
+<2> Optional, set the datafeed aggregations for data gathering.
+<3> Optional, the indices that contain the data to retrieve and feed into the
+{anomaly-job}.
<4> Optional, specifies how data searches are split into time chunks.
-<5> Optional, the interval at which scheduled queries are made while the datafeed runs in real time.
-<6> Optional, a query to filter the search results by. Defaults to the `match_all` query.
+<5> Optional, the interval at which scheduled queries are made while the
+datafeed runs in real time.
+<6> Optional, a query to filter the search results by. Defaults to the
+`match_all` query.
<7> Optional, the time interval behind real time that data is queried.
<8> Optional, allows the use of script fields.
<9> Optional, the `size` parameter used in the searches.
@@ -53,4 +55,4 @@ the updated {ml} datafeed if it has been successfully updated.
--------------------------------------------------
include-tagged::{doc-tests-file}[{api}-response]
--------------------------------------------------
-<1> The updated datafeed
+<1> The updated datafeed.
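
A closing sketch of the update flow under the same assumptions (hypothetical
IDs; the 30-second frequency is an arbitrary illustration):

["source","java"]
--------------------------------------------------
DatafeedUpdate datafeedUpdate = new DatafeedUpdate.Builder("datafeed-total-requests")
    .setFrequency(TimeValue.timeValueSeconds(30)) // optional: query interval while running in real time
    .build();

UpdateDatafeedRequest updateRequest = new UpdateDatafeedRequest(datafeedUpdate);
PutDatafeedResponse updateResponse =
    client.machineLearning().updateDatafeed(updateRequest, RequestOptions.DEFAULT);
DatafeedConfig updatedDatafeed = updateResponse.getResponse(); // the updated datafeed
--------------------------------------------------
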
docs/reference/ml/apis/put-datafeed.asciidoc

@@ -18,15 +18,15 @@ Instantiates a {dfeed}.
[[ml-put-datafeed-prereqs]]
==== {api-prereq-title}

-* You must create a job before you create a {dfeed}.
+* You must create an {anomaly-job} before you create a {dfeed}.
* If {es} {security-features} are enabled, you must have `manage_ml` or `manage`
cluster privileges to use this API. See
{stack-ov}/security-privileges.html[Security privileges].

[[ml-put-datafeed-desc]]
==== {api-description-title}

-You can associate only one {dfeed} to each job.
+You can associate only one {dfeed} to each {anomaly-job}.

[IMPORTANT]
====
@@ -75,7 +75,7 @@ those same roles.

`job_id`::
(Required, string) A numerical character string that uniquely identifies the
-job.
+{anomaly-job}.

`query`::
(Optional, object) The {es} query domain-specific language (DSL). This value
docs/reference/ml/apis/start-datafeed.asciidoc

@@ -18,8 +18,8 @@ Starts one or more {dfeeds}.
[[ml-start-datafeed-prereqs]]
==== {api-prereq-title}

-* Before you can start a {dfeed}, the job must be open. Otherwise, an error
-occurs.
+* Before you can start a {dfeed}, the {anomaly-job} must be open. Otherwise, an
+error occurs.
* If {es} {security-features} are enabled, you must have `manage_ml` or `manage`
cluster privileges to use this API. See
{stack-ov}/security-privileges.html[Security privileges].
@@ -36,7 +36,8 @@ If you want to analyze from the beginning of a dataset, you can specify any date
earlier than that beginning date.

If you do not specify a start time and the {dfeed} is associated with a new
-job, the analysis starts from the earliest time for which data is available.
+{anomaly-job}, the analysis starts from the earliest time for which data is
+available.

When you start a {dfeed}, you can also specify an end time. If you do so, the
job analyzes data from the start time until the end time, at which point the
docs/reference/ml/apis/update-datafeed.asciidoc

@@ -67,7 +67,7 @@ The following properties can be updated after the {dfeed} is created:

`job_id`::
(Optional, string) A numerical character string that uniquely identifies the
-job.
+{anomaly-job}.

`query`::
(Optional, object) The {es} query domain-specific language (DSL). This value