Adds job definitions to the Glossary #238

Merged
8 commits merged on Jan 15, 2020
docs/en/glossary/glossary.asciidoc (46 changes: 37 additions & 9 deletions)
@@ -42,8 +42,17 @@ include::{es-repo-dir}/glossary.asciidoc[tag=analysis-def]
--

endif::elasticsearch-terms[]
ifdef::cloud-terms[]
ifdef::xpack-terms[]
[[glossary-anomaly-detection-job]] {anomaly-job} ::

{anomaly-jobs-cap} contain the configuration information and metadata
necessary to perform an analytics task. See
{ml-docs}/ml-jobs.html[{ml-jobs-cap}] and the
{ref}/ml-put-job.html[create {anomaly-job} API].
+
//Source: X-Pack
endif::xpack-terms[]
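For orientation only (this sketch is not part of the PR), a minimal {anomaly-job} as it might be sent to the create {anomaly-job} API; the job ID, field names, and bucket span are illustrative placeholders:

[source,console]
----
PUT _ml/anomaly_detectors/example-response-times
{
  "description": "Illustrative job: mean response time per bucket",
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      { "function": "mean", "field_name": "responsetime" }
    ]
  },
  "data_description": {
    "time_field": "timestamp"
  }
}
----

The `analysis_config` carries the analytics task itself (bucket span and detectors), while `data_description.time_field` identifies the field that orders the time series.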
ifdef::cloud-terms[]
[[glossary-zone]] availability zone ::

Contains resources available to a {ece} installation that are isolated from
@@ -57,7 +66,6 @@ entire availability zone. Also see
//Source: Cloud
endif::cloud-terms[]
ifdef::cloud-terms[]

[[glossary-beats-runner]] beats runner ::

Used to send Filebeat and Metricbeat information to the logging cluster.
@@ -70,7 +78,7 @@ ifdef::xpack-terms[]

The {ml-features} use the concept of a bucket to divide the time
series into batches for processing. The _bucket span_ is part of the
-configuration information for a job. It defines the time interval that is used
+configuration information for {anomaly-jobs}. It defines the time interval that is used
to summarize and model the data. This is typically between 5 minutes and 1 hour
and it depends on your data characteristics. When you set the bucket span,
take into account the granularity at which you want to analyze, the frequency
@@ -190,11 +198,21 @@ Alternatively you can post data from any source directly to a {ml} API.
//Source: X-Pack
endif::xpack-terms[]
ifdef::xpack-terms[]
[[glossary-dataframe-job]] {dfanalytics-job} ::

{dfanalytics-jobs-cap} contain the configuration information and metadata
necessary to perform {ml} analytics tasks on a source index and store the
outcome in a destination index. See
{ml-docs}/ml-dfa-overview.html[{dfanalytics-cap} overview] and the
{ref}/put-dfanalytics.html[create {dfanalytics-job} API].
//Source: X-Pack
endif::xpack-terms[]
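Again as an editorial sketch rather than part of this change, a minimal {dfanalytics-job} as it might be sent to the create {dfanalytics-job} API; the index names are placeholders and outlier detection is only one of the available analysis types:

[source,console]
----
PUT _ml/data_frame/analytics/example-outliers
{
  "source": { "index": "my-source-index" },
  "dest": { "index": "my-dest-index" },
  "analysis": { "outlier_detection": {} }
}
----

The job reads documents from `source.index`, runs the configured analysis, and writes the annotated results into `dest.index`.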
ifdef::xpack-terms[]

[[glossary-ml-detector]] detector ::

-As part of the configuration information that is associated with an
-{anomaly-job}, detectors define the type of analysis that needs to be done. They
+As part of the configuration information that is associated with {anomaly-jobs},
+detectors define the type of analysis that needs to be done. They
also specify which fields to analyze. You can have more than one detector in a
job, which is more efficient than running multiple jobs against the same data.
+
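To make the multiple-detector point concrete (an illustrative sketch, not part of this PR; the functions and field names are placeholders), a single {anomaly-job} can run several detectors over the same data in one pass:

[source,console]
----
PUT _ml/anomaly_detectors/example-two-detectors
{
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      { "function": "mean", "field_name": "responsetime", "by_field_name": "airline" },
      { "function": "count" }
    ]
  },
  "data_description": { "time_field": "timestamp" }
}
----

Both detectors share the same buckets and the same pass over the data, which is why this is cheaper than running two separate jobs against the same index.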
@@ -368,11 +386,12 @@ file, syslog, redis, and beats.
endif::logstash-terms[]
ifdef::xpack-terms[]

-[[glossary-ml-job]] job ::
+[[glossary-ml-job]][[glossary-job]] job ::

-Machine learning jobs contain the configuration information and metadata
-necessary to perform an analytics task. There are two types: {anomaly-jobs} and
-{dfanalytics-jobs}.
+{ml-cap} jobs contain the configuration information and metadata
+necessary to perform an analytics task. There are two types:
+<<glossary-anomaly-detection-job,{anomaly-jobs}>> and
+<<glossary-dataframe-job,{dfanalytics-jobs}>>. See also <<glossary-rollup-job>>.
+
//Source: X-Pack
endif::xpack-terms[]
@@ -565,6 +584,15 @@ by making sure that only authorized hosts become part of the installation.
+
//Source: Cloud
endif::cloud-terms[]
ifdef::xpack-terms[]

[[glossary-rollup-job]] {rollup-job}::

A {rollup-job} contains all the details about how the job should run, when it
indexes documents, and what future queries will be able to execute against the
rollup index. See {ref}/xpack-rollup.html[Rolling up historical data].
//Source: X-Pack
endif::xpack-terms[]
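As a rough illustration of what that configuration can contain (not part of this PR; the index pattern, cron schedule, and field names are placeholders), a {rollup-job} might be created like this:

[source,console]
----
PUT _rollup/job/example-sensor-rollup
{
  "index_pattern": "sensor-*",
  "rollup_index": "sensor_rollup",
  "cron": "*/30 * * * * ?",
  "page_size": 1000,
  "groups": {
    "date_histogram": { "field": "timestamp", "fixed_interval": "60m" }
  },
  "metrics": [
    { "field": "temperature", "metrics": ["min", "max", "avg"] }
  ]
}
----

The `groups` and `metrics` sections determine which aggregations future queries can run against the rollup index, which is the forward-looking part of the definition above.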
ifdef::elasticsearch-terms[]

[[glossary-routing]] routing ::