diff --git a/README.md b/README.md index 349684aef62..9ff139f8d66 100644 --- a/README.md +++ b/README.md @@ -11,7 +11,7 @@ For questions and feature requests, visit the [discussion forum](https://discuss ## Getting Started -To get started with APM, see our [Quick start guide](https://www.elastic.co/guide/en/apm/get-started/current/install-and-run.html). +To get started with APM, see our [Quick start guide](https://www.elastic.co/guide/en/apm/guide/current/apm-quick-start.html). ## APM Server Development @@ -154,5 +154,13 @@ Building pre-release images can be done by running `make package-docker-snapshot ## Documentation -[Documentation](https://www.elastic.co/guide/en/apm/server/current/index.html) -for the APM Server can be found in the `docs` and `dev_docs` folders. +Documentation for the APM Server can be found in the [APM guide](https://www.elastic.co/guide/en/apm/guide/8.11/index.html). Most documentation files live in the [elastic/observability-docs](https://github.com/elastic/observability-docs) repo's [`docs/en/apm-server/` directory](https://github.com/elastic/observability-docs/tree/8.11/docs/en/apm-server). + +However, the following content lives in this repo: + +* The **changelog** page listing all release notes is in [`CHANGELOG.asciidoc`](/CHANGELOG.asciidoc). +* Each minor version's **release notes** are documented in individual files in the [`changelogs/`](/changelogs/) directory. +* A list of all **breaking changes** is documented in [`changelogs/all-breaking-changes.asciidoc`](/changelogs/all-breaking-changes.asciidoc). +* **Sample data sets** that are injected into the docs are in the [`docs/data/`](/docs/data/) directory. +* **Specifications** that are injected into the docs are in the [`docs/spec/`](/docs/spec/) directory. + diff --git a/dev_docs/RELEASES.md b/dev_docs/RELEASES.md index f16f9d73c08..4021fbe449f 100644 --- a/dev_docs/RELEASES.md +++ b/dev_docs/RELEASES.md @@ -9,7 +9,7 @@ For patch releases, only the version on the existing major and minor version bra ## Feature Freeze -* For patch releases, ensure all relevant backport PRs are merged. +* For patch releases, ensure all relevant backport PRs are merged. We use backport labels on PRs and automation to ensure labels are set. * Update Changelog @@ -30,9 +30,9 @@ For patch releases, only the version on the existing major and minor version bra Update versions and ensure that the `BEATS_VERSION` in the Makefile is updated, e.g. [#2803](https://github.com/elastic/apm-server/pull/2803/files). Trigger a new beats update, once the beats branch is also created. - Remove the [changelogs/head.asciidoc](https://github.com/elastic/apm-server/blob/main/changelogs/head.asciidoc) file from the release branch. + Remove the [changelogs/head.asciidoc](https://github.com/elastic/apm-server/blob/main/changelogs/head.asciidoc) file from the release branch. - * Main branch: + * Main branch: Update [.mergify.yml](https://github.com/elastic/apm-server/blob/main/.mergify.yml) with a new backport rule for the next version, and update versions to the next minor version, e.g. [#2804](https://github.com/elastic/apm-server/pull/2804). @@ -86,7 +86,7 @@ For patch releases, only the version on the existing major and minor version bra ## When compatibility between Agents & Server changes -* Update the [agent/server compatibility matrix](https://github.com/elastic/apm-server/blob/main/docs/guide/agent-server-compatibility.asciidoc). 
+* Update the [agent/server compatibility matrix](https://github.com/elastic/observability-docs/blob/main/docs/en/observability/apm/agent-server-compatibility.asciidoc) in the elastic/observability-docs repo. ## Templates diff --git a/docs/README.md b/docs/README.md new file mode 100644 index 00000000000..0116255f34c --- /dev/null +++ b/docs/README.md @@ -0,0 +1,10 @@ +> [!IMPORTANT] +> Most documentation source files have moved to the [elastic/observability-docs](https://github.com/elastic/observability-docs) repo ([`docs/en/apm-server/`](https://github.com/elastic/observability-docs/tree/8.11/docs/en/apm-server)). +> +> However, the following content still lives in this repo: +> +> * The **changelog** page listing all release notes is in [`CHANGELOG.asciidoc`](/CHANGELOG.asciidoc). +> * Each minor version's **release notes** are documented in individual files in the [`changelogs/`](/changelogs/) directory. +> * A list of all **breaking changes** is documented in [`changelogs/all-breaking-changes.asciidoc`](/changelogs/all-breaking-changes.asciidoc). +> * **Sample data sets** that are injected into the docs are in the [`docs/data/`](/docs/data/) directory. +> * **Specifications** that are injected into the docs are in the [`docs/spec/`](/docs/spec/) directory. diff --git a/docs/access-api-keys.asciidoc b/docs/access-api-keys.asciidoc deleted file mode 100644 index fa5b9fe8c77..00000000000 --- a/docs/access-api-keys.asciidoc +++ /dev/null @@ -1,173 +0,0 @@ -[role="xpack"] -[[beats-api-keys]] -=== Grant access using API keys - -Instead of using usernames and passwords, you can use API keys to grant -access to {es} resources. You can set API keys to expire at a certain time, -and you can explicitly invalidate them. Any user with the `manage_api_key` -or `manage_own_api_key` cluster privilege can create API keys. - -{beatname_uc} instances typically send both collected data and monitoring -information to {es}. If you are sending both to the same cluster, you can use the same -API key. For different clusters, you need to use an API key per cluster. - -NOTE: For security reasons, we recommend using a unique API key per {beatname_uc} instance. -You can create as many API keys per user as necessary. - -[float] -[[beats-api-key-publish]] -==== Create an API key for writing events - -In {kib}, navigate to **{stack-manage-app}** > **API keys** and click **Create API key**. - -[role="screenshot"] -image::images/server-api-key-create.png[API key creation] - -Enter a name for your API key and select **Restrict privileges**. -In the role descriptors box, assign the appropriate privileges to the new API key. For example: - -[source,json,subs="attributes,callouts"] ---- { "{beat_default_index_prefix}_writer": { "index": [ { "names": ["{beat_default_index_prefix}-*"], "privileges": ["create_index", "create_doc"] } ] }, "{beat_default_index_prefix}_sourcemap": { "index": [ { "names": [".apm-source-map"], "privileges": ["read"] } ] }, "{beat_default_index_prefix}_agentcfg": { "index": [ { "names": [".apm-agent-configuration"], "privileges": ["read"] } ] } } ---- -NOTE: This example only provides privileges for **writing data**. -See <> for additional privileges and information. - -To set an expiration date for the API key, select **Expire after time** -and input the lifetime of the API key in days. - -Click **Create API key**. In the dropdown, switch to **{beats}** and copy the API key. 
- -You can now use this API key in your +{beatname_lc}.yml+ configuration file: - -["source","yml",subs="attributes"] --------------------- -output.elasticsearch: - api_key: TiNAGG4BaaMdaH1tRfuU:KnR6yE41RrSowb0kQ0HWoA <1> --------------------- -<1> Format is `id:api_key` (as shown in the {beats} dropdown) - -[float] -[[beats-api-key-monitor]] -==== Create an API key for monitoring - -In {kib}, navigate to **{stack-manage-app}** > **API keys** and click **Create API key**. - -[role="screenshot"] -image::images/server-api-key-create.png[API key creation] - -Enter a name for your API key and select **Restrict privileges**. -In the role descriptors box, assign the appropriate privileges to the new API key. -For example: - -[source,json,subs="attributes,callouts"] ----- -{ - "{beat_default_index_prefix}_monitoring": { - "index": [ - { - "names": [".monitoring-beats-*"], - "privileges": ["create_index", "create_doc"] - } - ] - } -} ----- - -NOTE: This example only provides privileges for **publishing monitoring data**. -See <> for additional privileges and information. - -To set an expiration date for the API key, select **Expire after time** -and input the lifetime of the API key in days. - -Click **Create API key**. In the dropdown, switch to **{beats}** and copy the API key. - -You can now use this API key in your +{beatname_lc}.yml+ configuration file like this: - -["source","yml",subs="attributes"] --------------------- -monitoring.elasticsearch: - api_key: TiNAGG4BaaMdaH1tRfuU:KnR6yE41RrSowb0kQ0HWoA <1> --------------------- -<1> Format is `id:api_key` (as shown in the {beats} dropdown) - -[float] -[[beats-api-key-es]] -==== Create an API key with {es} APIs - -You can also use {es}'s {ref}/security-api-create-api-key.html[Create API key API] to create a new API key. -For example: - -[source,console,subs="attributes,callouts"] ------------------------------------------------------------- -POST /_security/api_key -{ - "name": "{beat_default_index_prefix}_host001", <1> - "role_descriptors": { - "{beat_default_index_prefix}_writer": { <2> - "index": [ - { - "names": ["{beat_default_index_prefix}-*"], - "privileges": ["create_index", "create_doc"] - } - ] - }, - "{beat_default_index_prefix}_sourcemap": { - "index": [ - { - "names": [".apm-source-map"], - "privileges": ["read"] - } - ] - }, - "{beat_default_index_prefix}_agentcfg": { - "index": [ - { - "names": [".apm-agent-configuration"], - "privileges": ["read"] - } - ] - } - } -} ------------------------------------------------------------- -<1> Name of the API key -<2> Granted privileges, see <> - -See the {ref}/security-api-create-api-key.html[Create API key] reference for more information. - -[[learn-more-api-keys]] -[float] -==== Learn more about API keys - -See the {es} API key documentation for more information: - -* {ref}/security-api-create-api-key.html[Create API key] -* {ref}/security-api-get-api-key.html[Get API key information] -* {ref}/security-api-invalidate-api-key.html[Invalidate API key] diff --git a/docs/agent-server-compatibility.asciidoc b/docs/agent-server-compatibility.asciidoc deleted file mode 100644 index d092b0450de..00000000000 --- a/docs/agent-server-compatibility.asciidoc +++ /dev/null @@ -1,51 +0,0 @@ -[[agent-server-compatibility]] -=== {apm-agent} compatibility - -The chart below outlines the compatibility between different versions of Elastic APM agents and extensions with the APM integration. 
- -[options="header"] -|==== -|Language |{apm-agent} version |APM integration version -// APM AWS Lambda extension -.1+|**APM AWS Lambda extension** -|`1.x` |≥ `8.2` - -// Go -.2+|**Go agent** -|`1.x` |≥ `6.5` -|`2.x` |≥ `6.5` - -// iOS -.1+|**iOS agent** -|`0.x` |≥ `7.14` - -// Java -.1+|**Java agent** -|`1.x`|≥ `6.5` - -// .NET -.1+|**.NET agent** -|`1.x` |≥ `6.5` - -// Node -.1+|**Node.js agent** -|`3.x` |≥ `6.6` - -// PHP -.1+|**PHP agent** -|`1.x` |≥ `7.0` - -// Python -.1+|**Python agent** -|`6.x` |≥ `6.6` - -// Ruby -.2+|**Ruby agent** -|`3.x` |≥ `6.5` -|`4.x` |≥ `6.5` - -// RUM -.2+|**JavaScript RUM agent** -|`4.x` |≥ `6.5` -|`5.x` |≥ `7.0` -|==== diff --git a/docs/anonymous-auth.asciidoc b/docs/anonymous-auth.asciidoc deleted file mode 100644 index d1b9bce778f..00000000000 --- a/docs/anonymous-auth.asciidoc +++ /dev/null @@ -1,63 +0,0 @@ -[[anonymous-auth]] -=== Anonymous authentication - -Elastic APM agents can send unauthenticated (anonymous) events to the APM Server. -An event is considered to be anonymous if no authentication token can be extracted from the incoming request. -The APM Server's default response to these requests depends on its configuration: - -[options="header"] -|==== -|Configuration |Default -|An <> or <> is configured | Anonymous requests are rejected and an authentication error is returned. -|No API key or secret token is configured | Anonymous requests are accepted by the APM Server. -|==== - -In some cases, however, it makes sense to allow both authenticated and anonymous requests. -For example, it isn't possible to authenticate requests from front-end services as -the secret token or API key can't be protected. This is the case with the Real User Monitoring (RUM) -agent running in a browser, or the iOS/Swift agent running in a user application. -However, you still likely want to authenticate requests from back-end services. -To solve this problem, you can enable anonymous authentication in the APM Server to allow the -ingestion of unauthenticated client-side APM data while still requiring authentication for server-side services. - -[float] -[[anonymous-auth-config]] -=== Configuring anonymous auth for client-side services - -[NOTE] -==== -You can only enable and configure anonymous authentication if an <> or -<> is configured. If neither is configured, these settings will be ignored. -==== - -include::{tab-widget-dir}/anonymous-auth-widget.asciidoc[] - -[float] -[[derive-client-ip]] -=== Deriving an incoming request's `client.ip` address - -The remote IP address of an incoming request might be different -from the end-user's actual IP address, for example, because of a proxy. For this reason, -the APM Server attempts to derive the IP address of an incoming request from HTTP headers. -The supported headers are parsed in the following order: - -1. `Forwarded` -2. `X-Real-Ip` -3. `X-Forwarded-For` - -If none of these headers are present, the remote address for the incoming request is used. - -[float] -[[derive-client-ip-concerns]] -==== Using a reverse proxy or load balancer - -HTTP headers are easily modified; -it's possible for anyone to spoof the derived `client.ip` value by changing or setting, -for example, the value of the `X-Forwarded-For` header. -For this reason, if any of your clients are not trusted, -we recommend setting up a reverse proxy or load balancer in front of the APM Server. - -Using a proxy allows you to clear any existing IP-forwarding HTTP headers, -and replace them with one set by the proxy. 
-This prevents malicious users from cycling spoofed IP addresses to bypass the -APM Server's rate limiting feature. diff --git a/docs/api-config.asciidoc b/docs/api-config.asciidoc deleted file mode 100644 index ef78db2272d..00000000000 --- a/docs/api-config.asciidoc +++ /dev/null @@ -1,98 +0,0 @@ -[[api-config]] -=== Elastic APM agent configuration API - -APM Server exposes API endpoints that allow Elastic APM agents to query the APM Server for configuration changes. -More information on this feature is available in {kibana-ref}/agent-configuration.html[{apm-agent} configuration in {kib}]. - -[float] -[[api-config-endpoint]] -=== Agent configuration endpoints - -[options="header"] -|==== -|Name |Endpoint -|Agent configuration intake |`/config/v1/agents` -|RUM configuration intake |`/config/v1/rum/agents` -|==== - -The Agent configuration endpoints accept both `HTTP GET` and `HTTP POST` requests. -If an <> or <> is configured, requests to these endpoints must be authenticated. - -[float] -[[api-config-api-get]] -==== HTTP GET - -`service.name` is a required query string parameter. - -[source,bash] ------------------------------------------------------------- -http(s)://{hostname}:{port}/config/v1/agents?service.name=SERVICE_NAME ------------------------------------------------------------- - -[float] -[[api-config-api-post]] -==== HTTP POST - -Encode parameters as a JSON object in the body. -`service.name` is a required parameter. - -[source,bash] ------------------------------------------------------------- -http(s)://{hostname}:{port}/config/v1/agents -{ - "service": { - "name": "test-service", - "environment": "all" - }, - "CAPTURE_BODY": "off" } ------------------------------------------------------------- - -[float] -[[api-config-api-response]] -==== Responses - -* Successful - `200` -* APM Server is configured to fetch agent configuration from {es} but the configuration is invalid - `403` -* APM Server is starting up or {es} is unreachable - `503` - -[float] -[[api-config-api-example]] -==== Example request - -Example Agent configuration `GET` request including the service name "test-service": - -["source","sh",subs="attributes"] --------------------------------------------------------------------------- -curl -i http://127.0.0.1:8200/config/v1/agents?service.name=test-service --------------------------------------------------------------------------- - -Example Agent configuration `POST` request including the service name "test-service": - -["source","sh",subs="attributes"] --------------------------------------------------------------------------- -curl -X POST http://127.0.0.1:8200/config/v1/agents \ - -H "Authorization: Bearer secret_token" \ - -H 'content-type: application/json' \ - -d '{"service": {"name": "test-service"}}' --------------------------------------------------------------------------- - -[float] -[[api-config-api-ex-response]] -==== Example response - -["source","sh",subs="attributes"] --------------------------------------------------------------------------- -HTTP/1.1 200 OK -Cache-Control: max-age=30, must-revalidate -Content-Type: application/json -Etag: "7b23d63c448a863fa" -Date: Mon, 24 Feb 2020 20:53:07 GMT -Content-Length: 98 - -{ - "capture_body": "off", - "transaction_max_spans": "500", - "transaction_sample_rate": "0.3" } --------------------------------------------------------------------------- diff --git a/docs/api-error.asciidoc b/docs/api-error.asciidoc deleted file mode 100644 index 22e3c9da4cb..00000000000 --- a/docs/api-error.asciidoc +++ 
/dev/null @@ -1,16 +0,0 @@ -[[api-error]] -==== Errors - -An error or a logged error message captured by an agent occurring in a monitored service. - -[float] -[[api-error-schema]] -==== Error Schema - -APM Server uses JSON Schema to validate requests. The specification for errors is defined on -{github_repo_link}/docs/spec/v2/error.json[GitHub] and included below: - -[source,json] ----- -include::./spec/v2/error.json[] ----- diff --git a/docs/api-event-example.asciidoc b/docs/api-event-example.asciidoc deleted file mode 100644 index 9297e327c78..00000000000 --- a/docs/api-event-example.asciidoc +++ /dev/null @@ -1,9 +0,0 @@ -[[api-event-example]] -==== Example request body - -A request body example containing one event for all currently supported event types. - -[source,json] ----- -include::./data/intake-api/generated/events.ndjson[] ----- diff --git a/docs/api-events.asciidoc b/docs/api-events.asciidoc deleted file mode 100644 index e83604972f5..00000000000 --- a/docs/api-events.asciidoc +++ /dev/null @@ -1,153 +0,0 @@ -[[api-events]] -=== Elastic APM events intake API - -NOTE: Most users do not need to interact directly with the events intake API. - -The events intake API is what we call the internal protocol that APM agents use to talk to the APM Server. -Agents communicate with the Server by sending events -- captured pieces of information -- in an HTTP request. -Events can be: - -* Transactions -* Spans -* Errors -* Metrics - -Each event is sent as its own line in the HTTP request body. -This is known as http://ndjson.org[newline delimited JSON (NDJSON)]. - -With NDJSON, agents can open an HTTP POST request and use chunked encoding to stream events to the APM Server -as soon as they are recorded in the agent. -This makes it simple for agents to serialize each event to a stream of newline delimited JSON. -The APM Server also treats the HTTP body as a compressed stream and thus reads and handles each event independently. - -See the <> to learn more about the different types of events. - -[[api-events-endpoint]] -[float] -=== Endpoints - -APM Server exposes the following endpoints for Elastic APM agent data intake: - -[options="header"] -|==== -|Name |Endpoint -|APM agent event intake |`/intake/v2/events` -|RUM event intake (v2) |`/intake/v2/rum/events` -|RUM event intake (v3) |`/intake/v3/rum/events` -|==== - -[[api-events-example]] -[float] -=== Request - -Send an `HTTP POST` request to the APM Server `intake/v2/events` endpoint: - -[source,bash] ------------------------------------------------------------- -http(s)://{hostname}:{port}/intake/v2/events ------------------------------------------------------------- - -From version `8.5.0` onwards, the APM Server supports asynchronous processing of batches. -To request asynchronous processing, the `async` query parameter can be set in the POST request -to the `intake/v2/events` endpoint: - -[source,bash] ------------------------------------------------------------- -http(s)://{hostname}:{port}/intake/v2/events?async=true ------------------------------------------------------------- - -NOTE: Since asynchronous processing defers some of the event processing to the -background and takes place after the client has closed the request, some errors -can't be communicated back to the client and are logged by the APM Server. -Furthermore, asynchronous processing requests will only be scheduled if the APM Server can -service the incoming request; requests that cannot be serviced will receive a -`503` "queue is full" error. 
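To make the request shape concrete, here is a minimal sketch in Python of posting two abbreviated example events as gzipped NDJSON. This is an illustration, not an official client: the server address and secret token are placeholders, the event bodies are abbreviated, it assumes the third-party `requests` library, and real payloads must satisfy the schemas referenced below.

[source,python]
----
import gzip
import json

import requests  # third-party HTTP client, assumed available

APM_SERVER = "http://127.0.0.1:8200"  # placeholder address
SECRET_TOKEN = "secret_token"         # placeholder; match your server's configuration

# One metadata line followed by one (abbreviated) transaction event.
events = [
    {"metadata": {"service": {"name": "test-service",
                              "agent": {"name": "python", "version": "6.0.0"}}}},
    {"transaction": {"id": "4340a8e0df1906ec",
                     "trace_id": "0123456789abcdef0123456789abcdef",
                     "name": "GET /", "type": "request",
                     "duration": 32.592, "span_count": {"started": 0}}},
]
ndjson = "\n".join(json.dumps(event) for event in events) + "\n"

resp = requests.post(
    f"{APM_SERVER}/intake/v2/events?async=true",  # drop ?async=true for synchronous handling
    data=gzip.compress(ndjson.encode("utf-8")),
    headers={"Content-Type": "application/x-ndjson",
             "Content-Encoding": "gzip",
             "Authorization": f"Bearer {SECRET_TOKEN}"},
)
print(resp.status_code)  # expect 202 Accepted on success
----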
- -For <>, send an `HTTP POST` request to the APM Server `intake/v3/rum/events` endpoint instead: - -[source,bash] ------------------------------------------------------------- -http(s)://{hostname}:{port}/intake/v3/rum/events ------------------------------------------------------------- - -[[api-events-response]] -[float] -=== Response - -On success, the server will respond with a 202 Accepted status code and no body. - -Keep in mind that events can succeed and fail independently of each other. Only if all events succeed does the server respond with a 202. - -[[api-events-errors]] -[float] -=== Errors - -There are two types of errors that the APM Server may return to an agent: - -* Event-related errors (typically validation errors) -* Non-event-related errors - -The APM Server processes events one after the other. -If an error is encountered while processing an event, -the error encountered as well as the document causing the error are added to an internal array. -The APM Server will only save 5 event-related errors. -If it encounters more than 5 event-related errors, -the additional errors will not be returned to the agent. -Once all events have been processed, -the error response is sent. - -Some errors, not relating to specific events, -may terminate the request immediately. -For example: IP rate limit reached, wrong metadata, etc. -If at any point one of these errors is encountered, -it is added to the internal array and immediately returned. - -An example error response might look something like this: - -[source,json] ------------------------------------------------------------- -{ - "errors": [ - { - "message": "", <1> - "document": "" <2> - },{ - "message": "", - "document": "" - },{ - "message": "", - "document": "" - },{ - "message": "too many requests" <3> - }, - ], - "accepted": 2320 <4> -} ------------------------------------------------------------- - -<1> An event-related error -<2> The document causing the error -<3> An immediately returning non-event-related error -<4> The number of accepted events - -If you're developing an agent, these errors can be useful for debugging. - -[[api-events-schema-definition]] -[float] -=== Event API Schemas - -The APM Server uses a collection of JSON Schemas for validating requests to the intake API: - -* <> -* <> -* <> -* <> -* <> -* <> - -include::./api-metadata.asciidoc[] -include::./api-transaction.asciidoc[] -include::./api-span.asciidoc[] -include::./api-error.asciidoc[] -include::./api-metricset.asciidoc[] -include::./api-event-example.asciidoc[] diff --git a/docs/api-info.asciidoc b/docs/api-info.asciidoc deleted file mode 100644 index 3bd7544ed54..00000000000 --- a/docs/api-info.asciidoc +++ /dev/null @@ -1,39 +0,0 @@ -[[api-info]] -=== APM Server information API - -The APM Server exposes an API endpoint to query general server information. -This lightweight endpoint is useful as a server up/down health check. - -[float] -[[api-info-endpoint]] -=== Server Information endpoint - -Send an `HTTP GET` request to the server information endpoint: - -[source,bash] ------------------------------------------------------------- -http(s)://{hostname}:{port}/ ------------------------------------------------------------- - -This endpoint always returns an HTTP 200. - -If an <> or a <> is configured, requests to this endpoint must be authenticated. 
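Because a healthy server answers with HTTP 200, the endpoint is easy to wrap in a scripted up/down check. A minimal sketch in Python, assuming a local server, an optional secret token, and the third-party `requests` library:

[source,python]
----
import requests  # third-party HTTP client, assumed available

def apm_server_is_up(base_url="http://127.0.0.1:8200", secret_token=None):
    """Return True if the APM Server information endpoint answers with HTTP 200."""
    headers = {"Authorization": f"Bearer {secret_token}"} if secret_token else {}
    try:
        return requests.get(base_url + "/", headers=headers, timeout=5).status_code == 200
    except requests.RequestException:  # connection refused, timeout, DNS failure, ...
        return False
----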
- -[float] -[[api-info-examples]] -==== Example - -Example APM Server information request: - -["source","sh",subs="attributes"] --------------------------------------------------------------------------- -curl -X GET http://127.0.0.1:8200/ \ - -H "Authorization: Bearer secret_token" - -{ - "build_date": "2021-12-18T19:59:06Z", - "build_sha": "24fe620eeff5a19e2133c940c7e5ce1ceddb1445", - "publish_ready": true, - "version": "{version}" -} --------------------------------------------------------------------------- diff --git a/docs/api-jaeger.asciidoc b/docs/api-jaeger.asciidoc deleted file mode 100644 index af4a8add47c..00000000000 --- a/docs/api-jaeger.asciidoc +++ /dev/null @@ -1,14 +0,0 @@ -[[api-jaeger]] -=== Jaeger event intake - -Elastic APM natively supports Jaeger, an open-source, distributed tracing system. -<>. - -**Jaeger/gRPC paths** - -[options="header"] -|==== -|Name |Endpoint -|Jaeger span intake |`/jaeger.api_v2.CollectorService/PostSpans` -|Sampling endpoint |`/jaeger.api_v2.SamplingManager/GetSamplingStrategy` -|==== \ No newline at end of file diff --git a/docs/api-keys.asciidoc b/docs/api-keys.asciidoc deleted file mode 100644 index 418724db856..00000000000 --- a/docs/api-keys.asciidoc +++ /dev/null @@ -1,321 +0,0 @@ -[[api-key]] -=== API keys - -IMPORTANT: API keys are sent as plain-text, -so they only provide security when used in combination with <>. - -When enabled, API keys are used to authorize requests to the APM Server. -API keys are not applicable for APM agents running on clients, like the RUM agent, -as there is no way to prevent them from being publicly exposed. - -You can assign one or more unique privileges to each API key: - -* *Agent configuration* (`config_agent:read`): Required for agents to read -{kibana-ref}/agent-configuration.html[Agent configuration remotely]. -* *Ingest* (`event:write`): Required for ingesting agent events. - -To secure the communication between APM Agents and the APM Server with API keys, -make sure <> is enabled, then complete these steps: - -. <> -. <> -. <> -. <> - -[[enable-api-key]] -[float] -=== Enable API keys - -include::{tab-widget-dir}/api-key-widget.asciidoc[] - -[[create-api-key-user]] -[float] -=== Create an API key user in {kib} - -API keys can never have more access rights than the user that creates them. -Instead of using a superuser account to create API keys, you can create a role with the minimum required -privileges. - -The user creating an {apm-agent} API key must have at least the `manage_own_api_key` cluster privilege -and the APM application-level privileges that they wish to grant. -In addition, when creating an API key from the {apm-app}, -you'll need the appropriate {kib} Space and Feature privileges. - -The example below uses the {kib} {kibana-ref}/role-management-api.html[role management API] -to create a role named `apm_agent_key_role`. - -[source,js] ---- -POST /_security/role/apm_agent_key_role -{ - "cluster": [ "manage_own_api_key" ], - "applications": [ - { - "application":"apm", - "privileges":[ - "event:write", - "config_agent:read" - ], - "resources":[ "*" ] - }, - { - "application":"kibana-.kibana", - "privileges":[ "feature_apm.all" ], - "resources":[ "space:default" ] <1> - } - ] -} ---- -<1> This example assigns privileges for the default space. - -Assign the newly created `apm_agent_key_role` role to any user that wishes to create {apm-agent} API keys. 
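For example, the assignment can be scripted against the {es} {ref}/security-api-put-user.html[create or update users API]. The sketch below is illustrative only: the user name, passwords, and cluster address are placeholders, and it assumes the third-party `requests` library:

[source,python]
----
import requests  # third-party HTTP client, assumed available

ES_URL = "https://localhost:9200"  # placeholder cluster address

# Create (or update) a user that carries the role defined above.
resp = requests.post(
    f"{ES_URL}/_security/user/apm_key_admin",  # hypothetical user name
    json={"password": "<a strong password>",
          "roles": ["apm_agent_key_role"],
          "full_name": "APM agent key admin"},
    auth=("elastic", "<password>"),            # any user allowed to manage users
    verify="/path/to/ca.crt",                  # placeholder CA bundle for TLS verification
)
print(resp.json())  # {'created': True} the first time the user is created
----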
- -[[create-an-api-key]] -[float] -=== Create an API key in the {apm-app} - -The {apm-app} has a built-in workflow that you can use to easily create and view {apm-agent} API keys. -Only API keys created in the {apm-app} will show up here. - -Using a superuser account, or a user with the role created in the previous step, -open {kib} and navigate to **{observability}** > **APM** > **Settings** > **Agent keys**. -Enter a name for your API key and select at least one privilege. - -For example, to create an API key that can be used to ingest APM events -and read agent central configuration, select `config_agent:read` and `event:write`. - -// lint ignore apm-agent -Click **Create APM Agent key** and copy the Base64 encoded API key. -You will need this for the next step, and you will not be able to view it again. - -[role="screenshot"] -image::images/apm-ui-api-key.png[{apm-app} API key] - -[[agent-api-key]] -[float] -=== Set the API key in your APM agents - -You can now apply your newly created API keys in the configuration of each of your APM agents. -See the relevant agent documentation for additional information: - -// Not relevant for RUM and iOS -* *Go agent*: {apm-go-ref}/configuration.html#config-api-key[`ELASTIC_APM_API_KEY`] -* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-api-key[`ApiKey`] -* *Java agent*: {apm-java-ref}/config-reporter.html#config-api-key[`api_key`] -* *Node.js agent*: {apm-node-ref}/configuration.html#api-key[`apiKey`] -* *PHP agent*: {apm-php-ref-v}/configuration-reference.html#config-api-key[`api_key`] -* *Python agent*: {apm-py-ref}/configuration.html#config-api-key[`api_key`] -* *Ruby agent*: {apm-ruby-ref}/configuration.html#config-api-key[`api_key`] - -[[configure-api-key-alternative]] -[float] -=== Alternate API key creation methods - -API keys can also be created and validated outside of {kib}: - -* <> -* <> - -[[create-api-key-workflow-apm-server]] -[float] -==== APM Server API key workflow - -This API creation method only works with the APM Server binary. - -deprecated::[8.6.0, Users should create API Keys through {kib} or the {es} REST API] - -APM Server provides a command line interface for creating, retrieving, invalidating, and verifying API keys. -Keys created using this method can only be used for communication with APM Server. - -[[create-api-key-subcommands]] -[float] -===== `apikey` subcommands - -include::{docdir}/command-reference.asciidoc[tag=apikey-subcommands] - -[[create-api-key-privileges]] -[float] -===== Privileges - -If privileges are not specified at creation time, the created key will have all privileges. - -* `--agent-config` grants the `config_agent:read` privilege -* `--ingest` grants the `event:write` privilege -* `--sourcemap` grants the `sourcemap:write` privilege - -[[create-api-key-workflow]] -[float] -===== Create an API key - -Create an API key with the `create` subcommand. - -The following example creates an API key with a `name` of `java-001`, -and gives the "agent configuration" and "ingest" privileges. - -["source","sh",subs="attributes"] ------ -{beatname_lc} apikey create --ingest --agent-config --name java-001 ------ - -The response will look similar to this: - -[source,console-result] --------------------------------------------------- -Name ........... java-001 -Expiration ..... never -Id ............. qT4tz28B1g59zC3uAXfW -API Key ........ rH55zKd5QT6wvs3UbbkxOA (won't be shown again) -Credentials .... 
cVQ0dHoyOEIxZzU5ekMzdUFYZlc6ckg1NXpLZDVRVDZ3dnMzVWJia3hPQQ== (won't be shown again) --------------------------------------------------- - -You should always verify the privileges of an API key after creating it. -Verification can be done using the `verify` subcommand. - -The following example verifies that the `java-001` API key has the "agent configuration" and "ingest" privileges. - -["source","sh",subs="attributes"] ------ -{beatname_lc} apikey verify --agent-config --ingest --credentials cVQ0dHoyOEIxZzU5ekMzdUFYZlc6ckg1NXpLZDVRVDZ3dnMzVWJia3hPQQ== ------ - -If the API key has the requested privileges, the response will look similar to this: - -[source,console-result] --------------------------------------------------- -Authorized for privilege "event:write"...: Yes -Authorized for privilege "config_agent:read"...: Yes --------------------------------------------------- - -To invalidate an API key, use the `invalidate` subcommand. -Due to {es} caching, there may be a delay between when this subcommand is executed and when it takes effect. - -The following example invalidates the `java-001` API key. - -["source","sh",subs="attributes"] ------ -{beatname_lc} apikey invalidate --name java-001 ------ - -The response will look similar to this: - -[source,console-result] --------------------------------------------------- -Invalidated keys ... qT4tz28B1g59zC3uAXfW -Error count ........ 0 --------------------------------------------------- - -A full list of `apikey` subcommands and flags is available in the <>. - -[[create-api-key-workflow-es]] -[float] -==== {es} API key workflow - -It is also possible to create API keys using the {es} -{ref}/security-api-create-api-key.html[create API key API]. - -This example creates an API key named `java-002`: - -[source,kibana] ----- -POST /_security/api_key -{ - "name": "java-002", <1> - "expiration": "1d", <2> - "role_descriptors": { - "apm": { - "applications": [ - { - "application": "apm", - "privileges": ["sourcemap:write", "event:write", "config_agent:read"], <3> - "resources": ["*"] - } - ] - } - } -} ----- -<1> The name of the API key -<2> The expiration time of the API key -<3> Any assigned privileges - -The response will look similar to this: - -[source,console-result] ----- -{ - "id" : "GnrUT3QB7yZbSNxKET6d", - "name" : "java-002", - "expiration" : 1599153532262, - "api_key" : "RhHKisTmQ1aPCHC_TPwOvw" -} ----- - -The `credential` string, which is what agents use to communicate with APM Server, -is a base64 encoded representation of the API key's `id:api_key`. 
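In a script, the same credential can be computed directly; for example, a small Python sketch using the `id` and `api_key` values from the example response above:

[source,python]
----
import base64

api_key_id = "GnrUT3QB7yZbSNxKET6d"  # "id" from the create API key response
api_key = "RhHKisTmQ1aPCHC_TPwOvw"   # "api_key" from the same response

credentials = base64.b64encode(f"{api_key_id}:{api_key}".encode("ascii")).decode("ascii")
print(credentials)  # value for the agent's api_key setting
----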
-From the command line, it can be created like this: - -[source,console-result] -------------------------------------------------- -echo -n GnrUT3QB7yZbSNxKET6d:RhHKisTmQ1aPCHC_TPwOvw | base64 -------------------------------------------------- - -You can verify your API key has been base64-encoded correctly with the -{ref}/security-api-authenticate.html[Authenticate API]: - -["source","sh",subs="attributes"] ----- -curl -H "Authorization: ApiKey R0gzRWIzUUI3eVpiU054S3pYSy06bXQyQWl4TlZUeEcyUjd4cUZDS0NlUQ==" localhost:9200/_security/_authenticate ----- - -If the API key has been encoded correctly, you'll see a response similar to the following: - -[source,console-result] ---- -{ - "username":"1325298603", - "roles":[], - "full_name":null, - "email":null, - "metadata":{ - "saml_nameid_format":"urn:oasis:names:tc:SAML:2.0:nameid-format:transient", - "saml(http://saml.elastic-cloud.com/attributes/principal)":[ - "1325298603" - ], - "saml_roles":[ - "superuser" - ], - "saml_principal":[ - "1325298603" - ], - "saml_nameid":"_7b0ab93bbdbc21d825edf7dca9879bd8d44c0be2", - "saml(http://saml.elastic-cloud.com/attributes/roles)":[ - "superuser" - ] - }, - "enabled":true, - "authentication_realm":{ - "name":"_es_api_key", - "type":"_es_api_key" - }, - "lookup_realm":{ - "name":"_es_api_key", - "type":"_es_api_key" - } -} ---- - -You can then use the APM Server CLI to verify that the API key has the requested privileges: - -["source","sh",subs="attributes"] ----- -{beatname_lc} apikey verify --credentials R25yVVQzUUI3eVpiU054S0VUNmQ6UmhIS2lzVG1RMWFQQ0hDX1RQd092dw== ----- - -If the API key has the requested privileges, the response will look similar to this: - -[source,console-result] ---- -Authorized for privilege "config_agent:read"...: Yes -Authorized for privilege "event:write"...: Yes -Authorized for privilege "sourcemap:write"...: Yes ---- diff --git a/docs/api-metadata.asciidoc b/docs/api-metadata.asciidoc deleted file mode 100644 index 04addb0c512..00000000000 --- a/docs/api-metadata.asciidoc +++ /dev/null @@ -1,66 +0,0 @@ -[[api-metadata]] -==== Metadata - -Every new connection to the APM Server starts with a `metadata` stanza. -This provides general metadata concerning the other objects in the stream. - -Rather than send this metadata information from the agent multiple times, -the APM Server hangs on to this information and applies it to other objects in the stream as necessary. - -TIP: Metadata is stored under `context` when viewing documents in {es}. - -* <> -* <> - -[[api-kubernetes-data]] -[float] -==== Kubernetes data - -APM agents automatically read Kubernetes data and send it to the APM Server. -In most instances, agents are able to read this data from inside the container. -If this is not the case, or if you wish to override this data, you can set environment variables for the agents to read. -These environment variables are set via the Kubernetes https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#use-pod-fields-as-values-for-environment-variables[Downward API]. 
-Here's how you would add the environment variables to your Kubernetes pod spec: - -[source,yaml] ----- - - name: KUBERNETES_NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - - name: KUBERNETES_POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: KUBERNETES_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - - name: KUBERNETES_POD_UID - valueFrom: - fieldRef: - fieldPath: metadata.uid ----- - -The table below maps these environment variables to the APM metadata event field: - -[options="header"] -|===== -|Environment variable |Metadata field name -| `KUBERNETES_NODE_NAME` |system.kubernetes.node.name -| `KUBERNETES_POD_NAME` |system.kubernetes.pod.name -| `KUBERNETES_NAMESPACE` |system.kubernetes.namespace -| `KUBERNETES_POD_UID` |system.kubernetes.pod.uid -|===== - -[[api-metadata-schema]] -[float] -==== Metadata Schema - -APM Server uses JSON Schema to validate requests. The specification for metadata is defined on -{github_repo_link}/docs/spec/v2/metadata.json[GitHub] and included below: - -[source,json] ----- -include::./spec/v2/metadata.json[] ----- \ No newline at end of file diff --git a/docs/api-metricset.asciidoc b/docs/api-metricset.asciidoc deleted file mode 100644 index d59ea85d460..00000000000 --- a/docs/api-metricset.asciidoc +++ /dev/null @@ -1,16 +0,0 @@ -[[api-metricset]] -==== Metrics - -Metrics contain application metric data captured by an {apm-agent}. - -[[api-metricset-schema]] -[float] -==== Metric Schema - -APM Server uses JSON Schema to validate requests. The specification for metrics is defined on -{github_repo_link}/docs/spec/v2/metricset.json[GitHub] and included below: - -[source,json] ----- -include::./spec/v2/metricset.json[] ----- diff --git a/docs/api-otlp.asciidoc b/docs/api-otlp.asciidoc deleted file mode 100644 index 99f583c4669..00000000000 --- a/docs/api-otlp.asciidoc +++ /dev/null @@ -1,37 +0,0 @@ -[[api-otlp]] -=== OpenTelemetry intake API - -APM Server supports receiving traces, metrics, and logs over the -https://opentelemetry.io/docs/specs/otlp/[OpenTelemetry Protocol (OTLP)]. -OTLP is the default transfer protocol for OpenTelemetry and is supported natively by APM Server. - -APM Server supports two OTLP communication protocols on the same port: - -* OTLP/HTTP (protobuf) -* OTLP/gRPC - -[discrete] -=== OTLP/gRPC paths - -[options="header"] -|==== -|Name |Endpoint -|OTLP metrics intake |`/opentelemetry.proto.collector.metrics.v1.MetricsService/Export` -|OTLP trace intake |`/opentelemetry.proto.collector.trace.v1.TraceService/Export` -|OTLP logs intake |`/opentelemetry.proto.collector.logs.v1.LogsService/Export` -|==== - -[discrete] -==== OTLP/HTTP paths - -[options="header"] -|==== -|Name |Endpoint -|OTLP metrics intake |`/v1/metrics` -|OTLP trace intake |`/v1/traces` -|OTLP logs intake |`/v1/logs` -|==== - -TIP: See our OpenTelemetry documentation to learn how to send data to the APM Server from an -<> or -<>. diff --git a/docs/api-span.asciidoc b/docs/api-span.asciidoc deleted file mode 100644 index 96d75d31d75..00000000000 --- a/docs/api-span.asciidoc +++ /dev/null @@ -1,16 +0,0 @@ -[[api-span]] -==== Spans - -Spans are events captured by an agent occurring in a monitored service. - -[[api-span-schema]] -[float] -==== Span Schema - -APM Server uses JSON Schema to validate requests. 
The specification for spans is defined on -{github_repo_link}/docs/spec/v2/span.json[GitHub] and included below: - -[source,json] ----- -include::./spec/v2/span.json[] ----- diff --git a/docs/api-transaction.asciidoc b/docs/api-transaction.asciidoc deleted file mode 100644 index 758496ebcb2..00000000000 --- a/docs/api-transaction.asciidoc +++ /dev/null @@ -1,16 +0,0 @@ -[[api-transaction]] -==== Transactions - -Transactions are events corresponding to an incoming request or similar task occurring in a monitored service. - -[[api-transaction-schema]] -[float] -==== Transaction Schema - -APM Server uses JSON Schema to validate requests. The specification for transactions is defined on -{github_repo_link}/docs/spec/v2/transaction.json[GitHub] and included below: - -[source,json] ----- -include::./spec/v2/transaction.json[] ----- diff --git a/docs/api.asciidoc b/docs/api.asciidoc deleted file mode 100644 index 61e36d21a7a..00000000000 --- a/docs/api.asciidoc +++ /dev/null @@ -1,16 +0,0 @@ -[[api]] -== API - -The APM Server exposes endpoints for: - -* <> -* <> -* <> -* <> -* <> - -include::./api-info.asciidoc[] -include::./api-events.asciidoc[] -include::./api-config.asciidoc[] -include::./api-otlp.asciidoc[] -include::./api-jaeger.asciidoc[] diff --git a/docs/apm-breaking.asciidoc b/docs/apm-breaking.asciidoc deleted file mode 100644 index 58335f86b1b..00000000000 --- a/docs/apm-breaking.asciidoc +++ /dev/null @@ -1,289 +0,0 @@ -:issue: https://github.com/elastic/apm-server/issues/ -:pull: https://github.com/elastic/apm-server/pull/ - -[[apm-breaking]] -=== Breaking Changes - -This section describes the breaking changes and deprecations introduced in this release -and previous minor versions. - -// tag::811-bc[] -[float] -[[breaking-changes-8.11]] -=== 8.11 - -The following breaking changes are introduced in APM version 8.11.0: - -- The `ecs.version` field has been removed from APM data streams. -This change should not impact most users as this field is not used by the APM UI. -For more details, see https://github.com/elastic/apm-server/pull/11632[PR #11632] -// end::811-bc[] - -// tag::810-bc[] -[float] -[[breaking-changes-8.10]] -=== 8.10 - -The following breaking changes are introduced in APM version 8.10.0: - -- Aggregated metrics now consider global labels to be part of a service's identity, and high cardinality global labels may cause services to be obscured. -For more details, see https://github.com/elastic/apm-server/pull/11386[PR #11386]. - -- Event protobuf encoding for tail-based sampling changed to a more efficient encoding for event timestamp and duration. -For more details, see https://github.com/elastic/apm-server/pull/11386[PR #11386]. -// end::810-bc[] - -// tag::87-bc[] -[float] -[[breaking-changes-8.7]] -=== 8.7 - -The following breaking changes and deprecations are introduced in APM version 8.7.0: - -- `transaction.failure_count` has been removed. `transaction.success_count` type has changed to `aggregated_metric_double`. -For more details, see https://github.com/elastic/apm-server/pull/9791[PR #9791]. - -- `transaction.success_count` has been moved to `event.success_count`. -For more details, see https://github.com/elastic/apm-server/pull/9819[PR #9819]. - -- Stopped indexing transaction metrics to `metrics-apm.internal`. -For more details, see https://github.com/elastic/apm-server/pull/9846[PR #9846]. - -- Stopped indexing span destination metrics to `metrics-apm.internal`. -For more details, see https://github.com/elastic/apm-server/pull/9926[PR #9926]. 
- -`apmserver.aggregation.txmetrics.overflowed` metric has been renamed to `apmserver.aggregation.txmetrics.overflowed.total`. -For more details, see https://github.com/elastic/apm-server/pull/10330[PR #10330]. - -- Elasticsearch source mapping credentials now require access to the `.apm-source-map` index. -For more details, see https://github.com/elastic/apm-server/pull/9722[PR #9722]. - -- Changed APM Server default host to `127.0.0.1`. -For more details, see https://github.com/elastic/apm-server/pull/9877[PR #9877]. -// end::87-bc[] - -// tag::86-bc[] -[float] -[[breaking-changes-8.6]] -=== 8.6 - -The following breaking changes and deprecations are introduced in APM version 8.6.0: - -[float] -==== `apm-server.decoder.*` no longer recorded -The stack monitoring metrics, `apm-server.decoder.*`, are no longer recorded. -These metrics were not used by stack monitoring, so there should be no noticeable change. - -For more details, see https://github.com/elastic/apm-server/pull/9210[PR #9210]. - -[float] -==== `context.http.response.*_size` fields now enforce integer values -New field mappings enforce integer values for `context.http.response.*_size`. -The fields are mapped with `index: false` to minimize storage overhead. - -For more details, see https://github.com/elastic/apm-server/pull/9429[PR #9429]. - -[float] -==== `observer.id` and `observer.ephemeral_id` removed - -`observer.id` and `observer.ephemeral_id` are no longer added to APM documents. -The APM UI does not currently rely on these fields, so there should be no noticeable change. - -For more details, see https://github.com/elastic/apm-server/pull/9412[PR #9412]. - -[float] -==== `timeseries.instance` removed -`timeseries.instance` has been removed from transaction metrics docs. -The APM UI did not use this field, so there should be no noticeable change. - -For more details, see https://github.com/elastic/apm-server/pull/9565[PR #9565]. - -// end::86-bc[] - -[float] -[[breaking-changes-8.2]] -=== 8.2 - -// tag::82-bc[] -The following breaking changes are introduced in APM version 8.2.0: - -[float] -==== APM Server now emits events with `event.duration` - -APM Server no longer emits events with a `transaction.duration.us` or `span.duration.us` field. -Instead, events are emitted with an `event.duration`. -An ingest pipeline sets the legacy `.duration.us` field and removes the `event.duration`. - -This change will impact users who are not using APM Server's {es} output or the packaged ingest pipeline. -For details, see https://github.com/elastic/apm-server/pull/7261[PR #7261]. - -[float] -==== Removed `observer.version_major` - -The field `observer.version_major` is non-standard and existed only for the APM UI to filter out legacy docs (versions <7.0). -This check is no longer performed, so the field has been removed. - -For details, see https://github.com/elastic/apm-server/pull/7399[PR #7399]. - -[float] -==== APM Server no longer ships with System V init scripts or the go-daemon wrapper - -As of version 8.1.0, all Linux distributions supported by APM Server support systemd. -As a result, APM Server no longer ships with System V init scripts or the go-daemon wrapper; use systemd instead. - -For details, see https://github.com/elastic/apm-server/pull/7576[PR #7576]. - -[float] -==== Deprecated 32-bit architectures - -APM Server support for 32-bit architectures has been deprecated and will be removed in a future release. 
-// end::82-bc[] - -[float] -[[breaking-changes-8.1]] -=== 8.1 - -// tag::81-bc[] -There are no breaking changes in APM. -// end::81-bc[] - -[float] -[[breaking-changes-8.0]] -=== 8.0 - -// tag::80-bc[] -The following breaking changes are introduced in APM version 8.0. - -[float] -==== Indices are now managed by {fleet} - -All index management has been removed from APM Server; -{fleet} is now entirely responsible for setting up index templates, index lifecycle policies, -and index pipelines. - -As a part of this change, the following settings have been removed: - -* `apm-server.ilm.*` -* `apm-server.register.ingest.pipeline.*` -* `setup.*` - -[float] -==== Data streams by default - -APM Server now only writes to well-defined data streams; -writing to classic indices is no longer supported. - -As a part of this change, the following settings have been removed: - -* `apm-server.data_streams.enabled` -* `output.elasticsearch.index` -* `output.elasticsearch.indices` -* `output.elasticsearch.pipeline` -* `output.elasticsearch.pipelines` - -[float] -==== New {es} output - -APM Server has a new {es} output implementation; it is no longer necessary to manually -tune the output of APM Server. - -As a part of this change, the following settings have been removed: - -* `output.elasticsearch.bulk_max_size` -* `output.elasticsearch.worker` -* `queue.*` - -[float] -==== New source map upload endpoint - -The source map upload endpoint has been removed from APM Server. -Source maps should now be uploaded directly to {kib} instead. - -[float] -==== Legacy Jaeger endpoints have been removed - -The legacy Jaeger gRPC and HTTP endpoints have been removed from APM Server. - -As a part of this change, the following settings have been removed: - -* `apm-server.jaeger` - -[float] -==== Homebrew no longer supported - -APM Server no longer supports installation via Homebrew. - -[float] -==== All removed and changed settings - -Below is a list of all **removed settings** (in alphabetical order) for -users upgrading a standalone APM Server to {stack} version 8.0. - -[source,yml] ----- -apm-server.data_streams.enabled -apm-server.ilm.* -apm-server.jaeger -apm-server.register.ingest.pipeline.* -apm-server.sampling.keep_unsampled -output.elasticsearch.bulk_max_size -output.elasticsearch.index -output.elasticsearch.indices -output.elasticsearch.pipeline -output.elasticsearch.pipelines -output.elasticsearch.worker -queue.* -setup.* ----- - -Below is a list of **renamed settings** (in alphabetical order) for -users upgrading a standalone APM Server to {stack} version 8.0. - -[source,yml] ----- -previous setting --> new setting - -apm-server.api_key --> apm-server.auth.api_key -apm-server.instrumentation --> instrumentation -apm-server.rum.allowed_service --> apm-server.auth.anonymous.allow_service -apm-server.rum.event_rate --> apm-server.auth.anonymous.rate_limit -apm-server.secret_token --> apm-server.auth.secret_token ----- - -[float] -==== Supported {ecloud} settings - -Below is a list of all **supported settings** (in alphabetical order) for -users upgrading an {ecloud} standalone cluster to {stack} version 8.0. -Any previously supported settings not listed below will be removed when upgrading. 
- -[source,yml] ----- -apm-server.agent.config.cache.expiration -apm-server.aggregation.transactions.* -apm-server.auth.anonymous.allow_agent -apm-server.auth.anonymous.allow_service -apm-server.auth.anonymous.rate_limit.event_limit -apm-server.auth.anonymous.rate_limit.ip_limit -apm-server.auth.api_key.enabled -apm-server.auth.api_key.limit -apm-server.capture_personal_data -apm-server.default_service_environment -apm-server.max_event_size -apm-server.rum.allow_headers -apm-server.rum.allow_origins -apm-server.rum.enabled -apm-server.rum.exclude_from_grouping -apm-server.rum.library_pattern -apm-server.rum.source_mapping.enabled -apm-server.rum.source_mapping.cache.expiration -logging.level -logging.selectors -logging.metrics.enabled -logging.metrics.period -max_procs -output.elasticsearch.flush_bytes -output.elasticsearch.flush_interval ----- - -// end::80-bc[] diff --git a/docs/apm-data-security.asciidoc b/docs/apm-data-security.asciidoc deleted file mode 100644 index 071bc69e2b2..00000000000 --- a/docs/apm-data-security.asciidoc +++ /dev/null @@ -1,596 +0,0 @@ -[[apm-data-security]] -=== Data security - -When setting up Elastic APM, it's essential to review all captured data carefully to ensure -it doesn't contain sensitive information like passwords, credit card numbers, or health data. -In addition, you may wish to filter out other identifiable information, like IP addresses, user agent information, -or form field data. - -Depending on the type of data, we offer several different ways to filter, manipulate, -or obfuscate sensitive information during or before ingestion: - -* <> -* <> - -In addition to utilizing filters, you should regularly review the <> table to ensure -sensitive data is not being ingested. If it is, it's possible to remove or redact it. -See <> for more information. - -[float] -[[built-in-data-filters]] -==== Built-in data filters - -// tag::data-filters[] -Built-in data filters allow you to filter or turn off ingestion of the following types of data: - -[options="header"] -|==== -|Data type |Common sensitive data -|<> |Passwords, credit card numbers, authorization, etc. -|<> |Passwords, credit card numbers, etc. -|<> |Client IP address and user agent. -|<> |URLs visited, click events, user browser errors, resources used, etc. -|<> |Sensitive user or business information -|==== -// end::data-filters[] - -[float] -[[custom-data-filters]] -==== Custom filters - -// tag::custom-filters[] -Custom filters allow you to filter or redact other types of APM data on ingestion: - -|==== -|<> | Applied at ingestion time. -All agents and fields are supported. Data leaves the instrumented service. -There are no performance overhead implications on the instrumented service. - -|<> | Not supported by all agents. -Data is sanitized before leaving the instrumented service. -Potential overhead implications on the instrumented service -|==== -// end::custom-filters[] - -[float] -[[sensitive-fields]] -==== Sensitive fields - -You should review the following fields regularly to ensure sensitive data is not being captured: - -[options="header"] -|==== -| Field | Description | Remedy -| `client.ip` | The client IP address, as forwarded by proxy. | <> -| `http.request.body.original` | The body of the monitored HTTP request. | <> -| `http.request.headers` | The canonical headers of the monitored HTTP request. | <> -| `http.request.socket.remote_address` | The address of the last proxy or end-user (if no proxy). 
| <> -| `http.response.headers` | The canonical headers of the monitored HTTP response. | <> -| `process.args` | Process arguments. | <> -| `span.db.statement` | Database statement. | <> -| `stacktrace.vars` | A flat mapping of local variables captured in the stack frame | <> -| `url.query` | The query string of the request, e.g. `?pass=hunter2`. | <> -| `user.*` | Logged-in user information. | <> -| `user_agent.*` | Device and version making the network request. | <> -|==== - -// **************************************************************** - -[[filtering]] -==== Built-in data filters - -include::./apm-data-security.asciidoc[tag=data-filters] - -[discrete] -[[filters-http-header]] -==== HTTP headers - -By default, APM agents capture HTTP request and response headers (including cookies). -Most Elastic APM agents provide the ability to sanitize HTTP header fields, -including cookies and `application/x-www-form-urlencoded` data (POST form fields). -Query string and captured request bodies, like `application/json` data, are not sanitized. - -The default list of sanitized fields attempts to target common field names for data relating to -passwords, credit card numbers, authorization, etc., but can be customized to fit your data. -This sensitive data never leaves the instrumented service. - -This setting supports {kibana-ref}/agent-configuration.html[Central configuration], -which means the list of sanitized fields can be updated without needing to redeploy your services: - -* Go: {apm-go-ref-v}/configuration.html#config-sanitize-field-names[`ELASTIC_APM_SANITIZE_FIELD_NAMES`] -* Java: {apm-java-ref-v}/config-core.html#config-sanitize-field-names[`sanitize_field_names`] -* .NET: {apm-dotnet-ref-v}/config-core.html#config-sanitize-field-names[`sanitizeFieldNames`] -* Node.js: {apm-node-ref-v}/configuration.html#sanitize-field-names[`sanitizeFieldNames`] -// * PHP: {apm-php-ref-v}[``] -* Python: {apm-py-ref-v}/configuration.html#config-sanitize-field-names[`sanitize_field_names`] -* Ruby: {apm-ruby-ref-v}/configuration.html#config-sanitize-field-names[`sanitize_field_names`] - -Alternatively, you can completely disable the capturing of HTTP headers. -This setting also supports {kibana-ref}/agent-configuration.html[Central configuration]: - -* Go: {apm-go-ref-v}/configuration.html#config-capture-headers[`ELASTIC_APM_CAPTURE_HEADERS`] -* Java: {apm-java-ref-v}/config-core.html#config-capture-headers[`capture_headers`] -* .NET: {apm-dotnet-ref-v}/config-http.html#config-capture-headers[`CaptureHeaders`] -* Node.js: {apm-node-ref-v}/configuration.html#capture-headers[`captureHeaders`] -// * PHP: {apm-php-ref-v}[``] -* Python: {apm-py-ref-v}/configuration.html#config-capture-headers[`capture_headers`] -* Ruby: {apm-ruby-ref-v}/configuration.html#config-capture-headers[`capture_headers`] - -[discrete] -[[filters-http-body]] -==== HTTP bodies - -By default, the body of HTTP requests is not recorded. -Request bodies often contain sensitive data like passwords or credit card numbers, -so use care when enabling this feature. 
- -This setting supports {kibana-ref}/agent-configuration.html[Central configuration], -which means the list of sanitized fields can be updated without needing to redeploy your services: - -* Go: {apm-go-ref-v}/configuration.html#config-capture-body[`ELASTIC_APM_CAPTURE_BODY`] -* Java: {apm-java-ref-v}/config-core.html#config-capture-body[`capture_body`] -* .NET: {apm-dotnet-ref-v}/config-http.html#config-capture-body[`CaptureBody`] -* Node.js: {apm-node-ref-v}/configuration.html#capture-body[`captureBody`] -// * PHP: {apm-php-ref-v}[``] -* Python: {apm-py-ref-v}/configuration.html#config-capture-body[`capture_body`] -* Ruby: {apm-ruby-ref-v}/configuration.html#config-capture-body[`capture_body`] - -[discrete] -[[filters-personal-data]] -==== Personal data - -By default, the APM Server captures some personal data associated with trace events: - -* `client.ip`: The client's IP address. Typically derived from the HTTP headers of incoming requests. -`client.ip` is also used in conjunction with the {ref}/geoip-processor.html[`geoip` processor] to assign -geographical information to trace events. To learn more about how `client.ip` is derived, -see <>. -* `user_agent`: User agent data, including the client operating system, device name, vendor, and version. - -The capturing of this data can be turned off by setting -**Capture personal data** to `false`. - -[discrete] -[[filters-real-user-data]] -==== Real user monitoring data - -Protecting user data is important. -For that reason, individual RUM instrumentations can be disabled in the RUM agent with the -{apm-rum-ref-v}/configuration.html#disable-instrumentations[`disableInstrumentations`] configuration variable. -Disabled instrumentations produce no spans or transactions. - -[options="header"] -|==== -|Disable |Configuration value -|HTTP requests |`fetch` and `xmlhttprequest` -|Page load metrics including static resources |`page-load` -|JavaScript errors on the browser |`error` -|User click events including URLs visited, mouse clicks, and navigation events |`eventtarget` -|Single page application route changes |`history` -|==== - -[discrete] -[[filters-database-statements]] -==== Database statements - -For SQL databases, APM agents do not capture the parameters of prepared statements. -Note that Elastic APM currently does not make an effort to strip parameters of regular statements. -Not using prepared statements makes your code vulnerable to SQL injection attacks, -so be sure to use prepared statements. - -For non-SQL data stores, such as {es} or MongoDB, -Elastic APM captures the full statement for queries. -For inserts or updates, the full document is not stored. -To filter or obfuscate data in non-SQL database statements, -or to remove the statement entirely, -you can set up an ingest node pipeline. - -[discrete] -[[filters-agent-specific]] -==== Agent-specific options - -Certain agents offer additional filtering and obfuscating options: - -**Agent configuration options** - -* (Node.js) Remove errors raised by the server-side process: -disable with {apm-node-ref-v}/configuration.html#capture-exceptions[captureExceptions]. - -* (Java) Remove process arguments from transactions: -disabled by default with {apm-java-ref-v}/config-reporter.html#config-include-process-args[`include_process_args`]. 
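
Most of the settings above support central configuration, so they can also be changed programmatically.
As an illustration, the following minimal sketch uses the {kib} agent configuration API to turn off HTTP body capture for a single service.
The {kib} URL, credentials, and service name here are placeholders, and the exact payload shape may vary with your {kib} version:

[source,shell]
----
curl -X PUT "http://localhost:5601/api/apm/settings/agent-configuration" \
  -H "Content-Type: application/json" \
  -H "kbn-xsrf: true" \
  -u elastic:changeme \
  -d '{
        "service": { "name": "my-service", "environment": "production" },
        "settings": { "capture_body": "off" }
      }'
----

Connected agents poll for these settings, so the change takes effect without redeploying the service.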

// ****************************************************************

[[custom-filter]]
==== Custom filters

include::./apm-data-security.asciidoc[tag=custom-filters]

[discrete]
[[filters-ingest-pipeline]]
==== Create an ingest pipeline filter

Ingest node pipelines specify a series of processors that transform data in a specific way.
Transformation happens prior to indexing, so pipelines add no performance overhead to the monitored application.
Pipelines are a flexible and easy way to filter or obfuscate Elastic APM data.

[discrete]
[[filters-ingest-pipeline-tutorial]]
===== Tutorial: redact sensitive information

Say you decide to <> but quickly notice that sensitive information is being collected in the `http.request.body.original` field:

[source,json]
----
{
  "email": "test@abc.com",
  "password": "hunter2"
}
----

**Create a pipeline**

To obfuscate the passwords stored in the request body, you can use a series of {ref}/processors.html[ingest processors].
To start, create a pipeline with a simple description and an empty array of processors:

[source,json]
----
{
  "pipeline": {
    "description": "redact http.request.body.original.password",
    "processors": [] <1>
  }
}
----
<1> The processors defined below will go in this array

**Add a JSON processor**

Add your first processor to the processors array.
Because the agent captures the request body as a string, use the {ref}/json-processor.html[JSON processor] to convert the original field value into a structured JSON object.
Save this JSON object in a new field:

[source,json]
----
{
  "json": {
    "field": "http.request.body.original",
    "target_field": "http.request.body.original_json",
    "ignore_failure": true
  }
}
----

**Add a set processor**

If `body.original_json` is not `null`, i.e., it exists, we'll redact the `password` with the {ref}/set-processor.html[set processor], by setting the value of `body.original_json.password` to `"redacted"`:

[source,json]
----
{
  "set": {
    "field": "http.request.body.original_json.password",
    "value": "redacted",
    "if": "ctx?.http?.request?.body?.original_json != null"
  }
}
----

**Add a convert processor**

Use the {ref}/convert-processor.html[convert processor] to convert the JSON value of `body.original_json` to a string and set it as the `body.original` value:

[source,json]
----
{
  "convert": {
    "field": "http.request.body.original_json",
    "target_field": "http.request.body.original",
    "type": "string",
    "if": "ctx?.http?.request?.body?.original_json != null",
    "ignore_failure": true
  }
}
----

**Add a remove processor**

Finally, use the {ref}/remove-processor.html[remove processor] to remove the temporary `body.original_json` field:

[source,json]
----
{
  "remove": {
    "field": "http.request.body.original_json",
    "if": "ctx?.http?.request?.body?.original_json != null",
    "ignore_failure": true
  }
}
----

**Register the pipeline**

Now we'll put it all together.
Use the {ref}/put-pipeline-api.html[create or update pipeline API] to register the new pipeline in {es}.
Name the pipeline `apm_redacted_body_password`:

[source,console]
----
PUT _ingest/pipeline/apm_redacted_body_password
{
  "description": "redact http.request.body.original.password",
  "processors": [
    {
      "json": {
        "field": "http.request.body.original",
        "target_field": "http.request.body.original_json",
        "ignore_failure": true
      }
    },
    {
      "set": {
        "field": "http.request.body.original_json.password",
        "value": "redacted",
        "if": "ctx?.http?.request?.body?.original_json != null"
      }
    },
    {
      "convert": {
        "field": "http.request.body.original_json",
        "target_field": "http.request.body.original",
        "type": "string",
        "if": "ctx?.http?.request?.body?.original_json != null",
        "ignore_failure": true
      }
    },
    {
      "remove": {
        "field": "http.request.body.original_json",
        "if": "ctx?.http?.request?.body?.original_json != null",
        "ignore_failure": true
      }
    }
  ]
}
----

**Test the pipeline**

Prior to enabling this new pipeline, you can test it with the {ref}/simulate-pipeline-api.html[simulate pipeline API].
This API allows you to run multiple documents through a pipeline to ensure it is working correctly.

The request below simulates running three different documents through the pipeline:

[source,console]
----
POST _ingest/pipeline/apm_redacted_body_password/_simulate
{
  "docs": [
    {
      "_source": { <1>
        "http": {
          "request": {
            "body": {
              "original": """{"email": "test@abc.com", "password": "hunter2"}"""
            }
          }
        }
      }
    },
    {
      "_source": { <2>
        "some-other-field": true
      }
    },
    {
      "_source": { <3>
        "http": {
          "request": {
            "body": {
              "original": """["invalid json" """
            }
          }
        }
      }
    }
  ]
}
----
<1> This document features the same sensitive data from the original example above
<2> This document only contains an unrelated field
<3> This document contains invalid JSON

The API response should be similar to this:

[source,json]
----
{
  "docs" : [
    {
      "doc" : {
        "_source" : {
          "http" : {
            "request" : {
              "body" : {
                "original" : {
                  "password" : "redacted",
                  "email" : "test@abc.com"
                }
              }
            }
          }
        }
      }
    },
    {
      "doc" : {
        "_source" : {
          "some-other-field" : true
        }
      }
    },
    {
      "doc" : {
        "_source" : {
          "http" : {
            "request" : {
              "body" : {
                "original" : """["invalid json" """
              }
            }
          }
        }
      }
    }
  ]
}
----

As expected, only the first simulated document has a redacted password field.
All other documents are unaffected.

**Create an `@custom` pipeline**

The final step in this process is to call the newly created `apm_redacted_body_password` pipeline from the `@custom` pipeline of the data stream you wish to edit.

include::./ingest-pipelines.asciidoc[tag=ingest-pipeline-naming]

Use the {ref}/put-pipeline-api.html[create or update pipeline API] to register the new pipeline in {es}.
Name the pipeline `traces-apm@custom`:

[source,console]
----
PUT _ingest/pipeline/traces-apm@custom
{
  "processors": [
    {
      "pipeline": {
        "name": "apm_redacted_body_password" <1>
      }
    }
  ]
}
----
<1> The name of the pipeline we previously created

TIP: If you prefer using a GUI, you can instead open {kib} and navigate to **Stack Management** -> **Ingest Pipelines** -> **Create pipeline**.
Use the same naming convention explained previously to ensure your new pipeline matches the correct APM data stream.

That's it! Passwords will now be redacted from your APM HTTP body data.

To learn more about ingest pipelines, see <>.
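
As a final sanity check, you can fetch both pipelines to confirm they were registered (a quick sketch, assuming the pipeline names used in this tutorial):

[source,console]
----
GET _ingest/pipeline/apm_redacted_body_password,traces-apm@custom
----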

[discrete]
[[filters-in-agent]]
==== APM agent filters

Some APM agents offer a way to manipulate or drop APM events _before_ they are sent to the APM Server.
Please see the relevant agent's documentation for more information and examples:

// * Go: {apm-go-ref-v}/[]
// * Java: {apm-java-ref-v}/[]
* .NET: {apm-dotnet-ref-v}/public-api.html#filter-api[Filter API].
* Node.js: {apm-node-ref-v}/agent-api.html#apm-add-filter[`addFilter()`].
// * PHP: {apm-php-ref-v}[]
* Python: {apm-py-ref-v}/sanitizing-data.html[custom processors].
* Ruby: {apm-ruby-ref-v}/api.html#api-agent-add-filter[`add_filter()`].

// ****************************************************************

[[data-security-delete]]
==== Delete sensitive data

If you accidentally ingest sensitive data, follow these steps to remove or redact the offending data:

. Stop collecting the sensitive data.
Use the *remedy* column of the <> table to determine how to stop collecting the offending data.

. Delete or redact the ingested data. With data collection fixed, you can now delete or redact the offending data:
+
* <>
* <>

[float]
[[redact-field-data]]
===== Redact specific fields

To redact sensitive data in a specific field, use the {ref}/docs-update-by-query.html[update by query API].

For example, the following query removes the `client.ip` address from APM documents in the `logs-apm.error-default` data stream:

[source,console]
----
POST /logs-apm.error-default/_update_by_query
{
  "query": {
    "exists": {
      "field": "client.ip"
    }
  },
  "script": {
    "source": "ctx._source.client.ip = params.redacted",
    "params": {
      "redacted": "[redacted]"
    }
  }
}
----

Or, perhaps you only want to redact IP addresses from European users:

[source,console]
----
POST /logs-apm.error-default/_update_by_query
{
  "query": {
    "term": {
      "client.geo.continent_name": {
        "value": "Europe"
      }
    }
  },
  "script": {
    "source": "ctx._source.client.ip = params.redacted",
    "params": {
      "redacted": "[redacted]"
    }
  }
}
----

See {ref}/docs-update-by-query.html[update by query API] for more information and examples.

[float]
[[delete-doc-data]]
===== Delete {es} documents

WARNING: This will permanently delete your data.
You should test your queries with the {ref}/search-search.html[search API] prior to deleting data.

To delete an {es} document, you can use the {ref}/docs-delete-by-query.html[delete by query API].

For example, to delete all documents in the `apm-traces-*` data stream with a `user.email` value, run the following query:

[source,console]
----
POST /apm-traces-*/_delete_by_query
{
  "query": {
    "exists": {
      "field": "user.email"
    }
  }
}
----

See {ref}/docs-delete-by-query.html[delete by query API] for more information and examples.
diff --git a/docs/apm-distributed-tracing.asciidoc b/docs/apm-distributed-tracing.asciidoc
deleted file mode 100644
index 4cbec26ed6b..00000000000
--- a/docs/apm-distributed-tracing.asciidoc
+++ /dev/null
@@ -1,131 +0,0 @@
[[apm-distributed-tracing]]
=== Distributed tracing

A `trace` is a group of <> and <> with a common root.
Each `trace` tracks the entirety of a single request.
When a `trace` travels through multiple services, as is common in a microservice architecture, it is known as a distributed trace.

[float]
[[why-distributed-tracing]]
=== Why is distributed tracing important?
- -Distributed tracing enables you to analyze performance throughout your microservice architecture -by tracing the entirety of a request -- from the initial web request on your front-end service -all the way to database queries made on your back-end services. - -Tracking requests as they propagate through your services provides an end-to-end picture of -where your application is spending time, where errors are occurring, and where bottlenecks are forming. -Distributed tracing eliminates individual service's data silos and reveals what's happening outside of -service borders. - -For supported technologies, distributed tracing works out-of-the-box, with no additional configuration required. - -[float] -[[how-distributed-tracing]] -=== How distributed tracing works - -Distributed tracing works by injecting a custom `traceparent` HTTP header into outgoing requests. -This header includes information, like `trace-id`, which is used to identify the current trace, -and `parent-id`, which is used to identify the parent of the current span on incoming requests -or the current span on an outgoing request. - -When a service is working on a request, it checks for the existence of this HTTP header. -If it's missing, the service starts a new trace. -If it exists, the service ensures the current action is added as a child of the existing trace, -and continues to propagate the trace. - -[float] -[[trace-propagation]] -==== Trace propagation examples - -In this example, Elastic's Ruby agent communicates with Elastic's Java agent. -Both support the `traceparent` header, and trace data is successfully propagated. - -// lint ignore traceparent -image::./images/dt-trace-ex1.png[How traceparent propagation works] - -In this example, Elastic's Ruby agent communicates with OpenTelemetry's Java agent. -Both support the `traceparent` header, and trace data is successfully propagated. - -// lint ignore traceparent -image::./images/dt-trace-ex2.png[How traceparent propagation works] - -In this example, the trace meets a piece of middleware that doesn't propagate the `traceparent` header. -The distributed trace ends and any further communication will result in a new trace. - -// lint ignore traceparent -image::./images/dt-trace-ex3.png[How traceparent propagation works] - - -[float] -[[w3c-tracecontext-spec]] -==== W3C Trace Context specification - -All Elastic agents now support the official W3C Trace Context specification and `traceparent` header. -See the table below for the minimum required agent version: - -[options="header"] -|==== -|Agent name |Agent Version -|**Go Agent**| ≥`1.6` -|**Java Agent**| ≥`1.14` -|**.NET Agent**| ≥`1.3` -|**Node.js Agent**| ≥`3.4` -|**PHP Agent**| ≥`1.0` -|**Python Agent**| ≥`5.4` -|**Ruby Agent**| ≥`3.5` -|**RUM Agent**| ≥`5.0` -|==== - -NOTE: Older Elastic agents use a unique `elastic-apm-traceparent` header. -For backward-compatibility purposes, new versions of Elastic agents still support this header. - -[float] -[[visualize-distributed-tracing]] -=== Visualize distributed tracing - -The {apm-app}'s timeline visualization provides a visual deep-dive into each of your application's traces: - -[role="screenshot"] -image::./images/apm-distributed-tracing.png[Distributed tracing in the APM UI] - -[float] -[[manual-distributed-tracing]] -=== Manual distributed tracing - -Elastic agents automatically propagate distributed tracing context for supported technologies. 
-If your service communicates over a different, unsupported protocol, -you can manually propagate distributed tracing context from a sending service to a receiving service -with each agent's API. - -[float] -[[distributed-tracing-outgoing]] -==== Add the `traceparent` header to outgoing requests - -Sending services must add the `traceparent` header to outgoing requests. - --- -include::{tab-widget-dir}/distributed-trace-send-widget.asciidoc[] --- - -[float] -[[distributed-tracing-incoming]] -==== Parse the `traceparent` header on incoming requests - -Receiving services must parse the incoming `traceparent` header, -and start a new transaction or span as a child of the received context. - --- -include::{tab-widget-dir}/distributed-trace-receive-widget.asciidoc[] --- - -[float] -[[distributed-tracing-rum]] -=== Distributed tracing with RUM - -Some additional setup may be required to correlate requests correctly with the Real User Monitoring (RUM) agent. - -See the {apm-rum-ref}/distributed-tracing-guide.html[RUM distributed tracing guide] -for information on enabling cross-origin requests, setting up server configuration, -and working with dynamically-generated HTML. diff --git a/docs/apm-mutating-webhook.asciidoc b/docs/apm-mutating-webhook.asciidoc deleted file mode 100644 index 53759ba1012..00000000000 --- a/docs/apm-mutating-webhook.asciidoc +++ /dev/null @@ -1,9 +0,0 @@ -[[apm-mutating-admission-webhook]] -= APM Attacher - -preview::[] - -The {apm-attacher-ref}/apm-attacher.html[APM attacher] for Kubernetes simplifies the instrumentation and configuration of your application pods. -The attacher includes a webhook receiver that modifies pods so they are automatically instrumented by an Elastic APM agent. - -Ready to get started? See {apm-attacher-ref}/apm-get-started-webhook.html[Instrument and configure pods] to get started. diff --git a/docs/apm-overview.asciidoc b/docs/apm-overview.asciidoc deleted file mode 100644 index 3ffa3ebb380..00000000000 --- a/docs/apm-overview.asciidoc +++ /dev/null @@ -1,26 +0,0 @@ -[[apm-overview]] -== Free and open application performance monitoring - -++++ -What is APM? -++++ - -Elastic APM is an application performance monitoring system built on the {stack}. -It allows you to monitor software services and applications in real-time, by -collecting detailed performance information on response time for incoming requests, -database queries, calls to caches, external HTTP requests, and more. -This makes it easy to pinpoint and fix performance problems quickly. - -Elastic APM also automatically collects unhandled errors and exceptions. -Errors are grouped based primarily on the stack trace, -so you can identify new errors as they appear and keep an eye on how many times specific errors happen. - -Metrics are another vital source of information when debugging production systems. -Elastic APM agents automatically pick up basic host-level metrics and agent-specific metrics, -like JVM metrics in the Java Agent, and Go runtime metrics in the Go Agent. - -[float] -=== Give Elastic APM a try - -Use the <> to quickly spin up an APM deployment. -Want to host everything yourself instead? See <>. 
\ No newline at end of file diff --git a/docs/apm-quick-start.asciidoc b/docs/apm-quick-start.asciidoc deleted file mode 100644 index a36092daa35..00000000000 --- a/docs/apm-quick-start.asciidoc +++ /dev/null @@ -1,23 +0,0 @@ -[[apm-quick-start]] -== Quick start with {ecloud} - -The easiest way to get started with Elastic APM is by using our -{ess-product}[hosted {es} Service] on {ecloud}. -The {es} Service is available on AWS, GCP, and Azure. -The {es} Service provisions the following components of the {stack}: - -* *{es}* -- A highly scalable free and open full-text search and analytics engine. -* *{kib}* -- An analytics and visualization platform designed to work with {es}. -* *Integrations Server* -- A combined *APM Server* and *Fleet-managed {agent}*. -** *APM Server* -- An application that receives, processes, and validates performance data from your APM agents. -** *Fleet-managed {agent}* -- A server that runs Fleet Server and provides a control plane for easily configuring and updating APM and other integrations. - -Don't worry--in order to get started, -you don't need to understand how all of these pieces work together! -When you use our hosted {es} Service, -simply spin-up your instance and point your *APM agents* towards it. - -[float] -== What will I learn in this guide? - -include::{obs-repo-dir}/observability/traces-get-started.asciidoc[tag=apm-quick-start] diff --git a/docs/apm-response-codes.asciidoc b/docs/apm-response-codes.asciidoc deleted file mode 100644 index fdbbfc31e0c..00000000000 --- a/docs/apm-response-codes.asciidoc +++ /dev/null @@ -1,43 +0,0 @@ -[[common-response-codes]] -=== APM Server response codes - -[[bad-request]] -[float] -==== HTTP 400: Data decoding error / Data validation error - -The most likely cause for this error is using incompatible versions of {apm-agent} and APM Server. -See the <> to verify compatibility. - -[[event-too-large]] -[float] -==== HTTP 400: Event too large - -APM agents communicate with the APM server by sending events in an HTTP request. Each event is sent as its own line in the HTTP request body. If events are too large, you should consider increasing the <> -setting in the APM integration, and adjusting relevant settings in the agent. - -[[unauthorized]] -[float] -==== HTTP 401: Invalid token - -Either the <> in the request header doesn't match the secret token configured in the APM integration, -or the <> is invalid. - -[[forbidden]] -[float] -==== HTTP 403: Forbidden request - -Either you are sending requests to a <> endpoint without RUM enabled, or a request -is coming from an origin not specified in the APM integration settings. -See the <> setting for more information. - -[[request-timed-out]] -[float] -==== HTTP 503: Request timed out waiting to be processed - -This happens when APM Server exceeds the maximum number of requests that it can process concurrently. -To alleviate this problem, you can try to: reduce the sample rate and/or reduce the collected stack trace information. -See <> for more information. - -Another option is to increase processing power. -This can be done by either migrating your {agent} to a more powerful machine -or adding more APM Server instances. \ No newline at end of file diff --git a/docs/apm-rum.asciidoc b/docs/apm-rum.asciidoc deleted file mode 100644 index 7ca2368f49c..00000000000 --- a/docs/apm-rum.asciidoc +++ /dev/null @@ -1,12 +0,0 @@ -[[apm-rum]] -=== Real User Monitoring (RUM) -Real User Monitoring captures user interaction with clients such as web browsers. 
The {apm-rum-ref-v}[JavaScript Agent] is Elastic’s RUM Agent.
// To use it you need to {apm-server-ref-v}/configuration-rum.html[enable RUM support] in the APM Server.

Unlike Elastic APM backend agents, which monitor requests and responses, the RUM JavaScript agent monitors the real user experience and interaction within your client-side application.
The RUM JavaScript agent is also framework-agnostic, which means it can be used with any front-end JavaScript application.

You will be able to measure metrics such as "Time to First Byte", `domInteractive`, and `domComplete`, which help you discover performance issues within your client-side application as well as issues that relate to the latency of your server-side application.
diff --git a/docs/apm-server-down.asciidoc b/docs/apm-server-down.asciidoc
deleted file mode 100644
index 89c89999a7d..00000000000
--- a/docs/apm-server-down.asciidoc
+++ /dev/null
@@ -1,29 +0,0 @@
[[server-es-down]]
=== What happens when APM Server or {es} is down?

*If {es} is down*

APM Server does not have an internal queue to buffer requests, but instead leverages an HTTP request timeout to act as back-pressure.
If {es} goes down, the APM Server will eventually deny incoming requests.
Both the APM Server and {apm-agent}(s) will issue logs accordingly.

*If APM Server is down*

Some agents have internal queues or buffers that will temporarily store data if the APM Server goes down.
As a general rule of thumb, queues fill up quickly. Assume data will be lost if APM Server goes down.
Adjusting these queues/buffers can increase the agent's overhead, so use caution when updating default values.

* **Go agent** - Circular buffer with configurable size: {apm-go-ref}/configuration.html#config-api-buffer-size[`ELASTIC_APM_BUFFER_SIZE`].
// * **iOS agent** - ??
* **Java agent** - Internal buffer with configurable size: {apm-java-ref}/config-reporter.html#config-max-queue-size[`max_queue_size`].
* **Node.js agent** - No internal queue. Data is lost.
* **PHP agent** - No internal queue. Data is lost.
* **Python agent** - Internal {apm-py-ref}/tuning-and-overhead.html#tuning-queue[Transaction queue] with configurable size and time between flushes.
* **Ruby agent** - Internal queue with configurable size: {apm-ruby-ref}/configuration.html#config-api-buffer-size[`api_buffer_size`].
* **RUM agent** - No internal queue. Data is lost.
* **.NET agent** - No internal queue. Data is lost.
\ No newline at end of file
diff --git a/docs/aws-lambda-extension.asciidoc b/docs/aws-lambda-extension.asciidoc
deleted file mode 100644
index 53457f38e63..00000000000
--- a/docs/aws-lambda-extension.asciidoc
+++ /dev/null
@@ -1,14 +0,0 @@
[[monitoring-aws-lambda]]
= Monitoring AWS Lambda Functions

Elastic APM lets you monitor your AWS Lambda functions.
The natural integration of <> into your AWS Lambda functions provides insights into the functions' execution and runtime behavior as well as their relationships and dependencies to other services.

To get started with the setup of Elastic APM for your Lambda functions, check out the language-specific guides:

* {apm-node-ref}/lambda.html[Quick Start with APM on AWS Lambda - Node.js]
* {apm-py-ref}/lambda-support.html[Quick Start with APM on AWS Lambda - Python]
* {apm-java-ref}/aws-lambda.html[Quick Start with APM on AWS Lambda - Java]

Or, see the {apm-lambda-ref}/aws-lambda-arch.html[architecture guide] to learn more about how the extension works, performance impacts, and more.
diff --git a/docs/command-reference.asciidoc b/docs/command-reference.asciidoc deleted file mode 100644 index f3c030ffb4c..00000000000 --- a/docs/command-reference.asciidoc +++ /dev/null @@ -1,1065 +0,0 @@ -////////////////////////////////////////////////////////////////////////// -//// This content is shared by all Elastic Beats. Make sure you keep the -//// descriptions here generic enough to work for all Beats that include -//// this file. When using cross references, make sure that the cross -//// references resolve correctly for any files that include this one. -//// Use the appropriate variables defined in the index.asciidoc file to -//// resolve Beat names: beatname_uc and beatname_lc -//// Use the following include to pull this content into a doc file: -//// include::../../libbeat/docs/command-reference.asciidoc[] -////////////////////////////////////////////////////////////////////////// - - -// These attributes are used to resolve short descriptions -// tag::attributes[] - -:global-flags: Also see <>. - -:deploy-command-short-desc: Deploys the specified function to your serverless environment - -:apikey-command-short-desc: Manage API Keys for communication between APM agents and server. - -ifndef::serverless[] -ifndef::no_dashboards[] -:export-command-short-desc: Exports the configuration, index template, {ilm-init} policy, or a dashboard to stdout -endif::no_dashboards[] - -ifdef::no_dashboards[] -:export-command-short-desc: Exports the configuration, index template, or {ilm-init} policy to stdout -endif::no_dashboards[] -endif::serverless[] - -ifdef::serverless[] -:export-command-short-desc: Exports the configuration, index template, or {cloudformation-ref} template to stdout -endif::serverless[] - -:help-command-short-desc: Shows help for any command -:keystore-command-short-desc: Manages the <> -:modules-command-short-desc: Manages configured modules -:package-command-short-desc: Packages the configuration and executable into a zip file -:remove-command-short-desc: Removes the specified function from your serverless environment -:run-command-short-desc: Runs {beatname_uc}. This command is used by default if you start {beatname_uc} without specifying a command - -ifdef::has_ml_jobs[] -:setup-command-short-desc: Sets up the initial environment, including the index template, {ilm-init} policy and write alias, {kib} dashboards (when available), and {ml} jobs (when available) -endif::[] - -ifdef::no_dashboards[] -:setup-command-short-desc: Sets up the initial environment, including the ES index template, and {ilm-init} policy and write alias -endif::no_dashboards[] - -ifndef::has_ml_jobs,no_dashboards[] -:setup-command-short-desc: Sets up the initial environment, including the index template, {ilm-init} policy and write alias, and {kib} dashboards (when available) -endif::[] - -:update-command-short-desc: Updates the specified function -:test-command-short-desc: Tests the configuration -:version-command-short-desc: Shows information about the current version - -// end::attributes[] - -[[command-line-options]] -=== {beatname_uc} command reference - -++++ -Command reference -++++ - -IMPORTANT: These commands only apply to the APM Server binary installation method. - -ifndef::no_dashboards[] -{beatname_uc} provides a command-line interface for starting {beatname_uc} and -performing common tasks, like testing configuration files and loading dashboards. 
-endif::no_dashboards[] - -ifdef::no_dashboards[] -{beatname_uc} provides a command-line interface for starting {beatname_uc} and -performing common tasks, like testing configuration files. -endif::no_dashboards[] - -The command-line also supports <> -for controlling global behaviors. - -ifeval::["{beatname_lc}"!="winlogbeat"] -[TIP] -========================= -Use `sudo` to run the following commands if: - -* the config file is owned by `root`, or -* {beatname_uc} is configured to capture data that requires `root` access - -========================= -endif::[] - -Some of the features described here require an Elastic license. For -more information, see https://www.elastic.co/subscriptions and -{kibana-ref}/managing-licenses.html[License Management]. - - -[options="header"] -|======================= -|Commands | -ifeval::["{beatname_lc}"=="functionbeat"] -|<> | {deploy-command-short-desc}. -endif::[] -ifdef::apm-server[] -|<> |{apikey-command-short-desc}. -endif::[] -|<> |{export-command-short-desc}. -|<> |{help-command-short-desc}. -ifndef::serverless[] -|<> |{keystore-command-short-desc}. -endif::[] -ifeval::["{beatname_lc}"=="functionbeat"] -|<> |{package-command-short-desc}. -|<> |{remove-command-short-desc}. -endif::[] -ifdef::has_modules_command[] -|<> |{modules-command-short-desc}. -endif::[] -ifndef::serverless[] -|<> |{run-command-short-desc}. -endif::[] -ifndef::apm-server[] -|<> |{setup-command-short-desc}. -endif::apm-server[] -|<> |{test-command-short-desc}. -ifeval::["{beatname_lc}"=="functionbeat"] -|<> |{update-command-short-desc}. -endif::[] -|<> |{version-command-short-desc}. -|======================= - -Also see <>. - -ifdef::apm-server[] -[float] -[[apikey-command]] -==== `apikey` command - -experimental::[] - -deprecated::[8.6.0, Users should create API Keys through {kib} or the {es} REST API. See <>.] - -Communication between APM agents and APM Server now supports sending an -<>. -APM Server provides an `apikey` command that can create, verify, invalidate, -and show information about API Keys for agent/server communication. -Most operations require the `manage_own_api_key` cluster privilege, -and you must ensure that `apm-server.api_key` or `output.elasticsearch` are configured appropriately. - -*SYNOPSIS* - -["source","sh",subs="attributes"] ----- -{beatname_lc} apikey SUBCOMMAND [FLAGS] ----- - -*`SUBCOMMAND`* - -// tag::apikey-subcommands[] -*`create`*:: -Create an API Key with the specified privilege(s). No required flags. -+ -The user requesting to create an API Key needs to have APM privileges used by the APM Server. -A superuser, by default, has these privileges. For other users, -you can create them. See <> for required privileges. - -*`info`*:: -Query API Key(s). `--id` or `--name` required. - -*`invalidate`*:: -Invalidate API Key(s). `--id` or `--name` required. - -*`verify`*:: -Check if a credentials string has the given privilege(s). - `--credentials` required. -// end::apikey-subcommands[] - -*FLAGS* - -*`--agent-config`*:: -Required for agents to read configuration remotely. Valid with the `create` and `verify` subcommands. -When used with `create`, gives the `config_agent:read` privilege to the created key. -When used with `verify`, asks for the `config_agent:read` privilege. - -*`--credentials CREDS`*:: -Required for the `verify` subcommand. Specifies the credentials for which to check privileges. -Credentials are the base64 encoded representation of the API key's `id:api_key`. 
- -*`--expiration TIME`*:: -When used with `create`, specifies the expiration for the key, e.g., "1d" (default never). - -*`--id ID`*:: -ID of the API key. Valid with the `info` and `invalidate` subcommands. -When used with `info`, queries the specified ID. -When used with `invalidate`, deletes the specified ID. - -*`--ingest`*:: -Required for ingesting events. Valid with the `create` and `verify` subcommands. -When used with `create`, gives the `event:write` privilege to the created key. -When used with `verify`, asks for the `event:write` privilege. - -*`--json`*:: -Prints the output of the command as JSON. -Valid with all `apikey` subcommands. - -*`--name NAME`*:: -Name of the API key(s). Valid with the `create`, `info`, and `invalidate` subcommands. -When used with `create`, specifies the name of the API key to be created (default: "apm-key"). -When used with `info`, specifies the API key to query (multiple matches are possible). -When used with `invalidate`, specifies the API key to delete (multiple matches are possible). - -*`--sourcemap`*:: -Required for uploading source maps. Valid with the `create` and `verify` subcommands. -When used with `create`, gives the `sourcemap:write` privilege to the created key. -When used with `verify`, asks for the `sourcemap:write` privilege. - -*`--valid-only`*:: -When used with `info`, only returns valid API Keys (not expired or invalidated). - -{global-flags} - -*EXAMPLES* - -["source","sh",subs="attributes"] ------ -{beatname_lc} apikey create --ingest --agent-config --name example-001 -{beatname_lc} apikey info --name example-001 --valid-only -{beatname_lc} apikey invalidate --name example-001 ------ - -For more information, see <>. - -endif::[] - -ifeval::["{beatname_lc}"=="functionbeat"] -[[deploy-command]] -==== `deploy` command - -{deploy-command-short-desc}. Before deploying functions, make sure the user has -the credentials required by your cloud service provider. - -*SYNOPSIS* - -["source","sh",subs="attributes"] ----- -{beatname_lc} deploy FUNCTION_NAME [FLAGS] ----- - -*`FUNCTION_NAME`*:: -Specifies the name of the function to deploy. - -*FLAGS* - -*`-h, --help`*:: -Shows help for the `deploy` command. - -{global-flags} - -*EXAMPLES* - -["source","sh",subs="attributes"] ------ -{beatname_lc} deploy cloudwatch -{beatname_lc} deploy sqs ------ -endif::[] - -[float] -[[export-command]] -==== `export` command - -ifndef::serverless[] -ifndef::no_dashboards[] -{export-command-short-desc}. You can use this -command to quickly view your configuration, see the contents of the index -template and the {ilm-init} policy, or export a dashboard from {kib}. -endif::no_dashboards[] - -ifdef::no_dashboards[] -{export-command-short-desc}. You can use this -command to quickly view your configuration or see the contents of the index -template or the {ilm-init} policy. -endif::no_dashboards[] -endif::serverless[] - -ifdef::serverless[] -{export-command-short-desc}. You can use this -command to quickly view your configuration, see the contents of the index -template and the {ilm-init} policy, or export an CloudFormation template. -endif::serverless[] - -*SYNOPSIS* - -["source","sh",subs="attributes"] ----- -{beatname_lc} export SUBCOMMAND [FLAGS] ----- - -*`SUBCOMMAND`* - -*`config`*:: -Exports the current configuration to stdout. If you use the `-c` flag, this -command exports the configuration that's defined in the specified file. - -ifndef::no_dashboards[] -[[dashboard-subcommand]]*`dashboard`*:: -Exports a dashboard. 
You can use this option to store a dashboard on disk in a module and load it automatically.
For example, to export the dashboard to a JSON file, run:
+
["source","shell",subs="attributes"]
----
{beatname_lc} export dashboard --id="DASHBOARD_ID" > dashboard.json
----
+
To find the `DASHBOARD_ID`, look at the URL for the dashboard in {kib}.
By default, `export dashboard` writes the dashboard to stdout.
The example shows how to write the dashboard to a JSON file so that you can import it later.
The JSON file will contain the dashboard with all visualizations and searches.
You must load the index pattern separately for {beatname_uc}.
+
To load the dashboard, copy the generated `dashboard.json` file into the `kibana/6/dashboard` directory of {beatname_uc}, and run +{beatname_lc} setup --dashboards+ to import the dashboard.
+
If {kib} is not running on `localhost:5601`, you must also adjust the {beatname_uc} configuration under `setup.kibana`.
endif::no_dashboards[]

[[template-subcommand]]*`template`*::
Exports the index template to stdout.
You can specify the `--es.version` and `--index` flags to further define what gets exported.
Furthermore, you can export the template to a file instead of `stdout` by defining a directory via `--dir`.

[[ilm-policy-subcommand]]
*`ilm-policy`*::
Exports the {ilm} policy to stdout.
You can specify the `--es.version` and a `--dir` to which the policy should be exported as a file rather than exporting to `stdout`.

ifdef::serverless[]
[[function-subcommand]]*`function` FUNCTION_NAME*::
Exports an {cloudformation-ref} template to stdout.
endif::serverless[]

*FLAGS*

*`--es.version VERSION`*::
When used with <>, exports an index template that is compatible with the specified version.
When used with <>, exports the {ilm-init} policy if the specified ES version is enabled for {ilm-init}.

*`-h, --help`*::
Shows help for the `export` command.

*`--index BASE_NAME`*::
When used with <>, sets the base name to use for the index template.
If this flag is not specified, the default base name is +{beatname_lc}+.

*`--dir DIRNAME`*::
Define a directory to which the template and {ilm-init} policy should be exported as files instead of printing them to `stdout`.

ifndef::no_dashboards[]
*`--id DASHBOARD_ID`*::
When used with <>, specifies the dashboard ID.
endif::no_dashboards[]

{global-flags}

*EXAMPLES*

ifndef::serverless[]
ifndef::no_dashboards[]
["source","sh",subs="attributes"]
-----
{beatname_lc} export config
{beatname_lc} export template --es.version {version} --index myindexname
{beatname_lc} export dashboard --id="a7b35890-8baa-11e8-9676-ef67484126fb" > dashboard.json
-----
endif::no_dashboards[]

ifdef::no_dashboards[]
["source","sh",subs="attributes"]
-----
{beatname_lc} export config
{beatname_lc} export template --es.version {version} --index myindexname
-----
endif::no_dashboards[]
endif::serverless[]

ifdef::serverless[]
["source","sh",subs="attributes"]
-----
{beatname_lc} export config
{beatname_lc} export template --es.version {version} --index myindexname
{beatname_lc} export function cloudwatch
-----
endif::serverless[]

[float]
[[help-command]]
==== `help` command

{help-command-short-desc}.
ifndef::serverless[]
If no command is specified, shows help for the `run` command.
-endif::[] - -*SYNOPSIS* - -["source","sh",subs="attributes"] ----- -{beatname_lc} help COMMAND_NAME [FLAGS] ----- - - -*`COMMAND_NAME`*:: -Specifies the name of the command to show help for. - -*FLAGS* - -*`-h, --help`*:: Shows help for the `help` command. - -{global-flags} - -*EXAMPLE* - -["source","sh",subs="attributes"] ------ -{beatname_lc} help export ------ - -ifndef::serverless[] -[float] -[[keystore-command]] -==== `keystore` command - -{keystore-command-short-desc}. - -*SYNOPSIS* - -["source","sh",subs="attributes"] ----- -{beatname_lc} keystore SUBCOMMAND [FLAGS] ----- - -*`SUBCOMMAND`* - -*`add KEY`*:: -Adds the specified key to the keystore. Use the `--force` flag to overwrite an -existing key. Use the `--stdin` flag to pass the value through `stdin`. - -*`create`*:: -Creates a keystore to hold secrets. Use the `--force` flag to overwrite the -existing keystore. - -*`list`*:: -Lists the keys in the keystore. - -*`remove KEY`*:: -Removes the specified key from the keystore. - -*FLAGS* - -*`--force`*:: -Valid with the `add` and `create` subcommands. When used with `add`, overwrites -the specified key. When used with `create`, overwrites the keystore. - -*`--stdin`*:: -When used with `add`, uses the stdin as the source of the key's value. - -*`-h, --help`*:: -Shows help for the `keystore` command. - - -{global-flags} - -*EXAMPLES* - -["source","sh",subs="attributes"] ------ -{beatname_lc} keystore create -{beatname_lc} keystore add ES_PWD -{beatname_lc} keystore remove ES_PWD -{beatname_lc} keystore list ------ - -See <> for more examples. - -endif::[] - -ifeval::["{beatname_lc}"=="functionbeat"] -[float] -[[package-command]] -==== `package` command - -{package-command-short-desc}. - -*SYNOPSIS* - -["source","sh",subs="attributes"] ----- -{beatname_lc} package [FLAGS] ----- - -*FLAGS* - -*`-h, --help`*:: -Shows help for the `package` command. - -*`-o, --output`*:: -Specifies the full path pattern to use when creating the packages. - -{global-flags} - -*EXAMPLES* - -["source","sh",subs="attributes"] ------ -{beatname_lc} package --output /path/to/folder/package-{{.Provider}}.zip ------ - -[[remove-command]] -==== `remove` command - -{remove-command-short-desc}. Before removing functions, make sure the user has -the credentials required by your cloud service provider. - -*SYNOPSIS* - -["source","sh",subs="attributes"] ----- -{beatname_lc} remove FUNCTION_NAME [FLAGS] ----- - -*`FUNCTION_NAME`*:: -Specifies the name of the function to remove. - -*FLAGS* - -*`-h, --help`*:: -Shows help for the `remove` command. - -{global-flags} - -*EXAMPLES* - -["source","sh",subs="attributes"] ------ -{beatname_lc} remove cloudwatch -{beatname_lc} remove sqs ------ -endif::[] - -ifdef::has_modules_command[] -[[modules-command]] -==== `modules` command - -{modules-command-short-desc}. You can use this command to enable and disable -specific module configurations defined in the `modules.d` directory. The -changes you make with this command are persisted and used for subsequent -runs of {beatname_uc}. - -To see which modules are enabled and disabled, run the `list` subcommand. - -*SYNOPSIS* - -["source","sh",subs="attributes"] ----- -{beatname_lc} modules SUBCOMMAND [FLAGS] ----- - - -*`SUBCOMMAND`* - -*`disable MODULE_LIST`*:: -Disables the modules specified in the space-separated list. - -*`enable MODULE_LIST`*:: -Enables the modules specified in the space-separated list. - -*`list`*:: -Lists the modules that are currently enabled and disabled. 
- - -*FLAGS* - -*`-h, --help`*:: -Shows help for the `export` command. - - -{global-flags} - -*EXAMPLES* - -ifeval::["{beatname_lc}"=="filebeat"] -["source","sh",subs="attributes"] ------ -{beatname_lc} modules list -{beatname_lc} modules enable apache2 auditd mysql ------ -endif::[] - -ifeval::["{beatname_lc}"=="metricbeat"] -["source","sh",subs="attributes"] ------ -{beatname_lc} modules list -{beatname_lc} modules enable apache nginx system ------ -endif::[] -endif::[] - -ifndef::serverless[] -[float] -[[run-command]] -==== `run` command - -{run-command-short-desc}. - -*SYNOPSIS* - -["source","sh",subs="attributes"] ------ -{beatname_lc} run [FLAGS] ------ - -Or: - -["source","sh",subs="attributes"] ------ -{beatname_lc} [FLAGS] ------ - -*FLAGS* - -ifeval::["{beatname_lc}"=="packetbeat"] -*`-I, --I FILE`*:: -Reads packet data from the specified file instead of reading packets from the -network. This option is useful only for testing {beatname_uc}. -+ -["source","sh",subs="attributes"] ------ -{beatname_lc} run -I ~/pcaps/network_traffic.pcap ------ -endif::[] - -*`-N, --N`*:: Disables publishing for testing purposes. - -ifeval::["{beatname_lc}"=="packetbeat"] -*`-O, --O`*:: -Read packets one by one by pressing _Enter_ after each. This option is useful -only for testing {beatname_uc}. -endif::[] - -*`--cpuprofile FILE`*:: -Writes CPU profile data to the specified file. This option is useful for -troubleshooting {beatname_uc}. - -ifeval::["{beatname_lc}"=="packetbeat"] -*`-devices`*:: -Prints the list of devices that are available for sniffing and then exits. -endif::[] - -ifeval::["{beatname_lc}"=="packetbeat"] -*`-dump FILE`*:: -Writes all captured packets to the specified file. This option is useful for -troubleshooting {beatname_uc}. -endif::[] - -*`-h, --help`*:: -Shows help for the `run` command. - -*`--httpprof [HOST]:PORT`*:: -Starts an HTTP server for profiling. This option is useful for troubleshooting -and profiling {beatname_uc}. - -ifeval::["{beatname_lc}"=="packetbeat"] -*`-l N`*:: -Reads the pcap file `N` number of times. The default is 1. Use this option in -combination with the `-I` option. For an infinite loop, use _0_. The `-l` -option is useful only for testing {beatname_uc}. -endif::[] - -*`--memprofile FILE`*:: -Writes memory profile data to the specified output file. This option is useful -for troubleshooting {beatname_uc}. - -ifeval::["{beatname_lc}"=="filebeat"] -*`--modules MODULE_LIST`*:: -Specifies a comma-separated list of modules to run. For example: -+ -["source","sh",subs="attributes"] ------ -{beatname_lc} run --modules nginx,mysql,system ------ -+ -Rather than specifying the list of modules every time you run {beatname_uc}, -you can use the <> command to enable and disable -specific modules. Then when you run {beatname_uc}, it will run any modules -that are enabled. -endif::[] - -ifeval::["{beatname_lc}"=="filebeat"] -*`--once`*:: -When the `--once` flag is used, {beatname_uc} starts all configured harvesters -and inputs, and runs each input until the harvesters are closed. If you set the -`--once` flag, you should also set `close_eof` so the harvester is closed when -the end of the file is reached. By default harvesters are closed after -`close_inactive` is reached. -endif::[] - -*`--system.hostfs MOUNT_POINT`*:: - -Specifies the mount point of the host's file system for use in monitoring a host. - - -ifeval::["{beatname_lc}"=="packetbeat"] -*`-t`*:: -Reads packets from the pcap file as fast as possible without sleeping. 
Use this -option in combination with the `-I` option. The `-t` option is useful only for -testing Packetbeat. -endif::[] - -{global-flags} - -*EXAMPLE* - -["source","sh",subs="attributes"] ------ -{beatname_lc} run -e ------ - -Or: - -["source","sh",subs="attributes"] ------ -{beatname_lc} -e ------ -endif::[] - -ifndef::apm-server[] -[float] -[[setup-command]] -==== `setup` command - -{setup-command-short-desc} - -* The index template ensures that fields are mapped correctly in {es}. -If {ilm} is enabled it also ensures that the defined {ilm-init} policy -and write alias are connected to the indices matching the index template. -The {ilm-init} policy takes care of the lifecycle of an index, when to do a rollover, -when to move an index from the hot phase to the next phase, etc. - -ifndef::no_dashboards[] -* The {kib} dashboards make it easier for you to visualize {beatname_uc} data -in {kib}. -endif::no_dashboards[] - -ifdef::has_ml_jobs[] -* The {ml} jobs contain the configuration information and metadata -necessary to analyze data for anomalies. -endif::[] - -This command sets up the environment without actually running -{beatname_uc} and ingesting data. - -*SYNOPSIS* - -// tag::setup-command-tag[] -["source","sh",subs="attributes"] ----- -{beatname_lc} setup [FLAGS] ----- - - -*FLAGS* - -ifndef::no_dashboards[] -*`--dashboards`*:: -Sets up the {kib} dashboards (when available). This option loads the dashboards -from the {beatname_uc} package. For more options, such as loading customized -dashboards, see {beatsdevguide}/import-dashboards.html[Importing Existing Beat -Dashboards] in the _{beats} Developer Guide_. -endif::no_dashboards[] - -*`-h, --help`*:: -Shows help for the `setup` command. - -ifeval::["{beatname_lc}"=="filebeat"] -*`--modules MODULE_LIST`*:: -Specifies a comma-separated list of modules. Use this flag to avoid errors when -there are no modules defined in the +{beatname_lc}.yml+ file. - -*`--pipelines`*:: -Sets up ingest pipelines for configured filesets. {beatname_uc} looks for -enabled modules in the +{beatname_lc}.yml+ file. If you used the -<> command to enable modules in the `modules.d` -directory, also specify the `--modules` flag. -endif::[] - -*`--index-management`*:: -Sets up components related to {es} index management including -template, {ilm-init} policy, and write alias (if supported and configured). - -ifdef::apm-server[] -*`--pipelines`*:: -Registers the <> definitions set in `ingest/pipeline/definition.json`. -endif::apm-server[] - -*`--template`*:: -deprecated:[7.2] -Sets up the index template only. -It is recommended to use `--index-management` instead. - -*`--ilm-policy`*:: -deprecated:[7.2] -Sets up the {ilm} policy. -It is recommended to use `--index-management` instead. - -{global-flags} - -*EXAMPLES* - -ifeval::["{beatname_lc}"=="filebeat"] -["source","sh",subs="attributes"] ------ -{beatname_lc} setup --dashboards -{beatname_lc} setup --pipelines -{beatname_lc} setup --pipelines --modules system,nginx,mysql <1> -{beatname_lc} setup --index-management ------ -<1> If you used the <> command to enable modules in -the `modules.d` directory, also specify the `--modules` flag to indicate which -modules to load pipelines for. 
-endif::[] - -ifeval::["{beatname_lc}"!="filebeat"] - -ifndef::no_dashboards[] -["source","sh",subs="attributes"] ------ -{beatname_lc} setup --dashboards -{beatname_lc} setup --index-management ------ -endif::no_dashboards[] - -ifndef::apm-server[] -ifdef::no_dashboards[] -["source","sh",subs="attributes"] ------ -{beatname_lc} setup --index-management ------ -endif::no_dashboards[] -endif::apm-server[] - -ifdef::apm-server[] -["source","sh",subs="attributes"] ------ -{beatname_lc} setup --index-management -{beatname_lc} setup --pipelines ------ -endif::apm-server[] - -endif::[] -// end::setup-command-tag[] - -endif::apm-server[] - -[float] -[[test-command]] -==== `test` command - -{test-command-short-desc}. - -*SYNOPSIS* - -["source","sh",subs="attributes"] ----- -{beatname_lc} test SUBCOMMAND [FLAGS] ----- - -*`SUBCOMMAND`* - -*`config`*:: -Tests the configuration settings. - -ifeval::["{beatname_lc}"=="metricbeat"] -*`modules [MODULE_NAME] [METRICSET_NAME]`*:: -Tests module settings for all configured modules. When you run this command, -{beatname_uc} does a test run that applies the current settings, retrieves the -metrics, and shows them as output. To test the settings for a specific module, -specify `MODULE_NAME`. To test the settings for a specific metricset in the -module, also specify `METRICSET_NAME`. -endif::[] - -*`output`*:: -Tests that {beatname_uc} can connect to the output by using the -current settings. - -*FLAGS* - -*`-h, --help`*:: Shows help for the `test` command. - -{global-flags} - -ifeval::["{beatname_lc}"!="metricbeat"] -*EXAMPLE* - -["source","sh",subs="attributes"] ------ -{beatname_lc} test config ------ -endif::[] - -ifeval::["{beatname_lc}"=="metricbeat"] -*EXAMPLES* - -["source","sh",subs="attributes"] ------ -{beatname_lc} test config -{beatname_lc} test modules system cpu ------ -endif::[] - -ifeval::["{beatname_lc}"=="functionbeat"] -[[update-command]] -==== `update` command - -{update-command-short-desc}. Before updating functions, make sure the user has -the credentials required by your cloud service provider. - -*SYNOPSIS* - -["source","sh",subs="attributes"] ----- -{beatname_lc} update FUNCTION_NAME [FLAGS] ----- - -*`FUNCTION_NAME`*:: -Specifies the name of the function to update. - -*FLAGS* - -*`-h, --help`*:: -Shows help for the `update` command. - -{global-flags} - -*EXAMPLES* - -["source","sh",subs="attributes"] ------ -{beatname_lc} update cloudwatch -{beatname_lc} update sqs ------ -endif::[] - -[float] -[[version-command]] -==== `version` command - -{version-command-short-desc}. - -*SYNOPSIS* - -["source","sh",subs="attributes"] ----- -{beatname_lc} version [FLAGS] ----- - - -*FLAGS* - -*`-h, --help`*:: Shows help for the `version` command. - -{global-flags} - -*EXAMPLE* - -["source","sh",subs="attributes"] ------ -{beatname_lc} version ------ - - -[float] -[[global-flags]] -=== Global flags - -These global flags are available whenever you run {beatname_uc}. - -*`-E, --E "SETTING_NAME=VALUE"`*:: -Overrides a specific configuration setting. You can specify multiple overrides. -For example: -+ -["source","sh",subs="attributes"] ----------------------------------------------------------------------- -{beatname_lc} -E "name=mybeat" -E "output.elasticsearch.hosts=['http://myhost:9200']" ----------------------------------------------------------------------- -+ -This setting is applied to the currently running {beatname_uc} process. -The {beatname_uc} configuration file is not changed. 
- -ifeval::["{beatname_lc}"=="filebeat"] -*`-M, --M "VAR_NAME=VALUE"`*:: Overrides the default configuration for a -{beatname_uc} module. You can specify multiple variable overrides. For example: -+ -["source","sh",subs="attributes"] ----------------------------------------------------------------------- -{beatname_lc} -modules=nginx -M "nginx.access.var.paths=['/var/log/nginx/access.log*']" -M "nginx.access.var.pipeline=no_plugins" ----------------------------------------------------------------------- -endif::[] - -*`-c, --c FILE`*:: -Specifies the configuration file to use for {beatname_uc}. The file you specify -here is relative to `path.config`. If the `-c` flag is not specified, the -default config file, +{beatname_lc}.yml+, is used. - -*`-d, --d SELECTORS`*:: -Enables debugging for the specified selectors. For the selectors, you can -specify a comma-separated -list of components, or you can use `-d "*"` to enable debugging for all -components. For example, `-d "publisher"` displays all the publisher-related -messages. - -*`-e, --e`*:: -Logs to stderr and disables syslog/file output. - -*`-environment`*:: -For logging purposes, specifies the environment that {beatname_uc} is running in. -This setting is used to select a default log output when no log output is configured. -Supported values are: `systemd`, `container`, `macos_service`, and `windows_service`. -If `systemd` or `container` is specified, {beatname_uc} will log to stdout and stderr -by default. - -*`--path.config`*:: -Sets the path for configuration files. See the <> section for -details. - -*`--path.data`*:: -Sets the path for data files. See the <> section for details. - -*`--path.home`*:: -Sets the path for miscellaneous files. See the <> section for -details. - -*`--path.logs`*:: -Sets the path for log files. See the <> section for details. - -*`--strict.perms`*:: -Sets strict permission checking on configuration files. The default is `-strict.perms=true`. -ifndef::apm-server[] -See {beats-ref}/config-file-permissions.html[Config file ownership and permissions] -for more information. -endif::[] -ifdef::apm-server[] -See <> for more information. -endif::[] - -*`-v, --v`*:: -Logs INFO-level messages. diff --git a/docs/common-problems.asciidoc b/docs/common-problems.asciidoc deleted file mode 100644 index 527833638cd..00000000000 --- a/docs/common-problems.asciidoc +++ /dev/null @@ -1,200 +0,0 @@ -[[common-problems]] -=== Common problems - -This section describes common problems you might encounter when using a Fleet-managed APM Server. - -* <> -* <> -* <> -* <> -* <> -* <> - -[float] -[[no-data-indexed]] -=== No data is indexed - -If no data shows up in {es}, first make sure that your APM components are properly connected. - -include::{tab-widget-dir}/no-data-indexed-widget.asciidoc[] - -[[data-indexed-no-apm]] -[float] -=== Data is indexed but doesn't appear in the APM app - -The {apm-app} relies on index mappings to query and display data. -If your APM data isn't showing up in the {apm-app}, but is elsewhere in {kib}, like the Discover app, -you may have a missing index mapping. - -You can determine if a field was mapped correctly with the `_mapping` API. -For example, run the following command in the {kib} {kibana-ref}/console-kibana.html[console]. -This will display the field data type of the `service.name` field. - -[source,curl] ----- -GET *apm*/_mapping/field/service.name ----- - -If the `mapping.name.type` is `"text"`, your APM indices were not set up correctly. 
-
-[source,json]
----
-".ds-metrics-apm.transaction.1m-default-2023.04.12-000038": {
-  "mappings": {
-    "service.name": {
-      "full_name": "service.name",
-      "mapping": {
-        "name": {
-          "type": "text" <1>
-        }
-      }
-    }
-  }
-}
----
-<1> The `service.name` `mapping.name.type` would be `"keyword"` if this field had been set up correctly.
-
-To fix this problem, install the APM integration by following these steps:
-
---
-include::{docdir}/getting-started-apm-server.asciidoc[tag=install-apm-integration]
---
-
-This will reinstall the APM index templates and trigger a data stream index rollover.
-
-You can verify the correct index templates were installed by running the following command in the {kib} console:
-
-[source,curl]
----
-GET /_index_template/traces-apm
----
-
-[float]
-[[common-ssl-problems]]
-=== Common SSL-related problems
-
-* <<ssl-client-fails>>
-* <<cannot-validate-certificate>>
-* <<getsockopt-no-route-to-host>>
-* <<getsockopt-connection-refused>>
-* <<target-machine-refused-connection>>
-
-
-[float]
-[[ssl-client-fails]]
-==== SSL client fails to connect
-
-The target host might be unreachable or the certificate may not be valid.
-To fix this problem:
-
-. Make sure that the APM Server process on the target host is running and you can connect to it.
-Try to ping the target host to verify that you can reach it from the host running APM Server.
-Then use either `nc` or `telnet` to make sure that the port is available. For example:
-+
-[source,shell]
----
-ping <hostname or IP>
-telnet <hostname or IP> 5044
----
-
-. Verify that the certificate is valid and that the hostname and IP match.
-. Use OpenSSL to test connectivity to the target server and diagnose problems.
-See the https://www.openssl.org/docs/manmaster/man1/openssl-s_client.html[OpenSSL documentation] for more info.
-
-[float]
-[[cannot-validate-certificate]]
-==== x509: cannot validate certificate for <IP address> because it doesn't contain any IP SANs
-
-This happens because your certificate is only valid for the hostname present in the Subject field.
-To resolve this problem, try one of these solutions:
-
-* Create a DNS entry for the hostname, mapping it to the server's IP.
-* Create an entry in `/etc/hosts` for the hostname. Or, on Windows, add an entry to
-`C:\Windows\System32\drivers\etc\hosts`.
-* Re-create the server certificate and add a Subject Alternative Name (SAN) for the IP address of the server. This makes the
-server's certificate valid for both the hostname and the IP address.
-
-[float]
-[[getsockopt-no-route-to-host]]
-==== getsockopt: no route to host
-
-This is not an SSL problem. It's a networking problem. Make sure the two hosts can communicate.
-
-[float]
-[[getsockopt-connection-refused]]
-==== getsockopt: connection refused
-
-This is not an SSL problem. Make sure that {ls} is running and that there is no firewall blocking the traffic.
-
-[float]
-[[target-machine-refused-connection]]
-==== No connection could be made because the target machine actively refused it
-
-A firewall is refusing the connection. Check if a firewall is blocking the traffic on the client, the network, or the
-destination host.
-
-[[io-timeout]]
-[float]
-=== I/O Timeout
-
-I/O Timeouts can occur when your timeout settings across the stack are not configured correctly,
-especially when using a load balancer.
-
-You may see an error like the one below in the {apm-agent} logs, and/or a similar error on the APM Server side:
-
-[source,logs]
----
-[ElasticAPM] APM Server responded with an error:
-"read tcp 123.34.22.313:8200->123.34.22.40:41602: i/o timeout"
----
-
-To fix this, ensure timeouts are incrementing from the {apm-agent},
-through your load balancer, to the APM Server.
-
-By default, the agent timeouts are set at 10 seconds, and the server timeout is set at 3600 seconds.
-Your load balancer should be set somewhere between these numbers.
-
-For example:
-
-[source,txt]
----
-APM agent --> Load Balancer --> APM Server
-    10s            15s             3600s
----
-
-The APM Server timeout can be configured by updating the
-<<read_timeout>> setting.
-
-[[field-limit-exceeded]]
-[float]
-=== Field limit exceeded
-
-When adding too many distinct tag keys on a transaction or span,
-you risk creating a link:{ref}/mapping.html#mapping-limit-settings[mapping explosion].
-
-For example, avoid using user-specified data,
-like URL parameters, as tag keys.
-Likewise, using the current timestamp or a user ID as a tag key is not a good idea.
-However, tag *values* with a high cardinality are not a problem.
-Just keep the number of distinct tag keys to a minimum.
-
-The symptom of a mapping explosion is that transactions and spans stop being indexed after a certain time.
-Usually, the spans and transactions will be indexed again the next day because a new index is created each day.
-But as soon as the field limit is reached, indexing stops again.
-
-In the agent logs, you won't see any sign of failure, because the APM Server sends the data it receives from the agents to {es} asynchronously. However, the APM Server and {es} log a warning like this:
-
-[source,logs]
----
-{\"type\":\"illegal_argument_exception\",\"reason\":\"Limit of total fields [1000] in [INDEX_NAME] has been exceeded\"}
----
-
-[[tail-based-sampling-memory-disk-io]]
-[float]
-=== Tail-based sampling causing high system memory usage and high disk IO
-
-Tail-based sampling requires minimal memory to run, and there should not be a noticeable increase in RSS memory usage.
-However, since tail-based sampling writes data to disk,
-it is possible to see a significant increase in OS page cache memory usage due to disk IO.
-If you see a drop in throughput and excessive disk activity after enabling tail-based sampling,
-ensure that there is enough memory headroom in the system for the OS page cache to perform disk IO efficiently.
diff --git a/docs/config-ownership.asciidoc b/docs/config-ownership.asciidoc
deleted file mode 100644
index ebd3ccfcb96..00000000000
--- a/docs/config-ownership.asciidoc
+++ /dev/null
@@ -1,44 +0,0 @@
-[float]
-[[config-file-ownership]]
-==== Configuration file ownership
-
-On systems with POSIX file permissions,
-the {beatname_uc} configuration file is subject to ownership and file permission checks.
-These checks prevent unauthorized users from providing or modifying configurations that are run by {beatname_uc}.
-
-When installed via an RPM or DEB package,
-the configuration file at +/etc/{beatname_lc}/{beatname_lc}.yml+ will be owned by +{beatname_lc}+,
-and have file permissions of `0600` (`-rw-------`).
-
-{beatname_uc} will only start if the configuration file is owned by the user running the process,
-or if it is run as root with configuration ownership set to `root:root`.
-
-You may encounter the following errors if your configuration file fails these checks:
-
-["source", "systemd", subs="attributes"]
------
-Exiting: error loading config file: config file ("/etc/{beatname_lc}/{beatname_lc}.yml")
-must be owned by the user identifier (uid=1000) or root
------
-
-To correct this problem, you can change the ownership of the configuration file with:
-+chown {beatname_lc}:{beatname_lc} /etc/{beatname_lc}/{beatname_lc}.yml+.
-
-You can also make root the config owner, although this is not recommended:
-+sudo chown root:root /etc/{beatname_lc}/{beatname_lc}.yml+.
-
-["source", "systemd", subs="attributes"]
-----
-Exiting: error loading config file: config file ("/etc/{beatname_lc}/{beatname_lc}.yml")
-can only be writable by the owner but the permissions are "-rw-rw-r--"
-(to fix the permissions use: 'chmod go-w /etc/{beatname_lc}/{beatname_lc}.yml')
-----
-
-To correct this problem, use +chmod go-w /etc/{beatname_lc}/{beatname_lc}.yml+ to
-remove write privileges from anyone other than the owner.
-
-[float]
-===== Disabling strict permission checks
-
-You can disable strict permission checks from the command line by using
-`--strict.perms=false`, but we strongly encourage you to leave the checks enabled.
diff --git a/docs/configure/agent-config.asciidoc b/docs/configure/agent-config.asciidoc
deleted file mode 100644
index 6261a389827..00000000000
--- a/docs/configure/agent-config.asciidoc
+++ /dev/null
@@ -1,74 +0,0 @@
-[[configure-agent-config]]
-= Configure APM agent configuration
-
-++++
-APM agent configuration
-++++
-
-****
-image:./binary-yes-fm-yes.svg[supported deployment methods]
-
-APM agent configuration is supported by all APM Server deployment methods.
-****
-
-APM agent configuration allows you to fine-tune your APM agents from within the APM app.
-Changes are automatically propagated to your APM agents, so there's no need to redeploy your applications.
-
-To learn more about this feature, see {kibana-ref}/agent-configuration.html[APM agent configuration].
-
-Here's a sample configuration:
-
-[source,yaml]
----
-apm-server.agent.config.cache.expiration: 45s
-apm-server.agent.config.elasticsearch.api_key: TiNAGG4BaaMdaH1tRfuU:KnR6yE41RrSowb0kQ0HWoA
----
-
-[float]
-= APM agent configuration options
-
-The following options are only supported by APM Server binary users.
-You can specify these options in the `apm-server.agent.config` section of the
-+{beatname_lc}.yml+ config file:
-
-[float]
-[[agent-config-cache]]
-== `apm-server.agent.config.cache.expiration`
-
-When using APM agent configuration, information fetched from {es} will be cached in memory for some time.
-Specify the cache expiration time via this setting. Defaults to `30s` (30 seconds).
-
-[float]
-[[agent-config-elasticsearch]]
-== `apm-server.agent.config.elasticsearch`
-
-Takes the same options as <<elasticsearch-output>>.
-
-For APM Server binary users and Elastic Agent standalone-managed APM Server,
-APM agent configuration is automatically fetched from {es} using the `output.elasticsearch`
-configuration. If `output.elasticsearch` isn't set or doesn't have sufficient privileges,
-use these {es} options to provide {es} access.
-
-[float]
-== Common problems
-
-You may see either of the following HTTP 403 errors from APM Server when it attempts to fetch APM agent configuration:
-
-APM agent log:
-
-[source,log]
----
-"Your Elasticsearch configuration does not support agent config queries. Check your configurations at `output.elasticsearch` or `apm-server.agent.config.elasticsearch`."
----
-
-APM Server log:
-
-[source,log]
----
-rejecting fetch request: no valid elasticsearch config
----
-
-This occurs because the user or API key set in either `apm-server.agent.config.elasticsearch` or `output.elasticsearch`
-(if `apm-server.agent.config.elasticsearch` is not set) does not have adequate permissions to read APM agent configuration from {es}.
-
-To fix this error, ensure that {beatname_uc} has all the required privileges. See <> for more details.
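-
-For example, to fetch agent configuration with dedicated credentials rather than the ones in
-`output.elasticsearch`, you might use something like the following minimal sketch (the hosts
-value is illustrative, and the API key is the placeholder from the sample configuration above):
-
-[source,yaml]
----
-apm-server.agent.config.elasticsearch.hosts: ["https://myEShost:9200"]
-apm-server.agent.config.elasticsearch.api_key: TiNAGG4BaaMdaH1tRfuU:KnR6yE41RrSowb0kQ0HWoA
----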
diff --git a/docs/configure/anonymous-auth.asciidoc b/docs/configure/anonymous-auth.asciidoc deleted file mode 100644 index ec67a8571f3..00000000000 --- a/docs/configure/anonymous-auth.asciidoc +++ /dev/null @@ -1,147 +0,0 @@ -[[configuration-anonymous]] -= Configure anonymous authentication - -++++ -Anonymous authentication -++++ - -**** -image:./binary-yes-fm-yes.svg[supported deployment methods] - -Most options on this page are supported by all APM Server deployment methods. -**** - -Elastic APM agents can send unauthenticated (anonymous) events to the APM Server. -An event is considered to be anonymous if no authentication token can be extracted from the incoming request. -This is useful for agents that run on clients, like the Real User Monitoring (RUM) -agent running in a browser, or the iOS/Swift agent running in a user application. - -Enable anonymous authentication in the APM Server to allow the -ingestion of unauthenticated client-side APM data while still requiring authentication for server-side services. - -include::./tab-widgets/anon-auth-widget.asciidoc[] - - - -IMPORTANT: All anonymous access configuration is ignored if -<> is disabled. - -[float] -[[config-auth-anon-rum]] -= Real User Monitoring (RUM) - -If an <> or <> is configured, -then anonymous authentication must be enabled to collect RUM data. -For this reason, anonymous auth will be enabled automatically if <> -is set to `true`, and <> is not explicitly defined. - -See <> for additional RUM configuration options. - -[float] -[[config-auth-anon-mitigating]] -== Mitigating malicious requests - -There are a few configuration variables that can mitigate the impact of malicious requests to an -unauthenticated APM Server endpoint. - -Use the <> and <> configs to ensure that the -`agent.name` and `service.name` of each incoming request match a specified list. - -Additionally, the APM Server can rate-limit unauthenticated requests based on the client IP address -(`client.ip`) of the request with <>. -This allows you to specify the maximum number of requests allowed per unique IP address, per second. - -[float] -[[config-auth-anon-client-ip]] -== Deriving an incoming request's `client.ip` address - -The remote IP address of an incoming request might be different -from the end-user's actual IP address, for example, because of a proxy. For this reason, -the APM Server attempts to derive the IP address of an incoming request from HTTP headers. -The supported headers are parsed in the following order: - -1. `Forwarded` -2. `X-Real-Ip` -3. `X-Forwarded-For` - -If none of these headers are present, the remote address for the incoming request is used. - -[float] -[[config-auth-anon-client-ip-concerns]] -== Using a reverse proxy or load balancer - -HTTP headers are easily modified; -it's possible for anyone to spoof the derived `client.ip` value by changing or setting, -for example, the value of the `X-Forwarded-For` header. -For this reason, if any of your clients are not trusted, -we recommend setting up a reverse proxy or load balancer in front of the APM Server. - -Using a proxy allows you to clear any existing IP-forwarding HTTP headers, -and replace them with one set by the proxy. -This prevents malicious users from cycling spoofed IP addresses to bypass the -APM Server's rate limiting feature. - -[float] -[[config-auth-anon]] -= Configuration reference - -[float] -[[config-auth-anon-enabled]] -== Anonymous Agent access - -Enable or disable anonymous authentication. -Default: `false` (disabled). 
(bool)
-
-|====
-| APM Server binary | `apm-server.auth.anonymous.enabled`
-| Fleet-managed | `Anonymous Agent access`
-|====
-
-[float]
-[[config-auth-anon-allow-agent]]
-== Allowed anonymous agents
-A list of permitted {apm-agent} names for anonymous authentication.
-Names in this list must match the agent's `agent.name`.
-Default: `[rum-js, js-base]` (only RUM agent events are accepted). (array)
-
-|====
-| APM Server binary | `apm-server.auth.anonymous.allow_agent`
-| Fleet-managed | `Allowed Anonymous agents`
-|====
-
-[float]
-[[config-auth-anon-allow-service]]
-== Allowed services
-A list of permitted service names for anonymous authentication.
-Names in this list must match the agent's `service.name`.
-This can be used to limit the number of service-specific indices or data streams created.
-Default: Not set (any service name is accepted). (array)
-
-|====
-| APM Server binary | `apm-server.auth.anonymous.allow_service`
-| Fleet-managed | `Allowed Anonymous services`
-|====
-
-[float]
-[[config-auth-anon-ip-limit]]
-== IP limit
-The number of unique IP addresses to track in an LRU cache.
-IP addresses in the cache will be rate limited according to the <<config-auth-anon-event-limit,event limit>> setting.
-Consider increasing this default if your application has many concurrent clients.
-Default: `1000`. (int)
-
-|====
-| APM Server binary | `apm-server.auth.anonymous.rate_limit.ip_limit`
-| Fleet-managed | `Anonymous Rate limit (IP limit)`
-|====
-
-[float]
-[[config-auth-anon-event-limit]]
-== Event limit
-The maximum number of events allowed per second, per agent IP address.
-Default: `300`. (int)
-
-|====
-| APM Server binary | `apm-server.auth.anonymous.rate_limit.event_limit`
-| Fleet-managed | `Anonymous Event rate limit (event limit)`
-|====
diff --git a/docs/configure/auth.asciidoc b/docs/configure/auth.asciidoc
deleted file mode 100644
index 490c0e73108..00000000000
--- a/docs/configure/auth.asciidoc
+++ /dev/null
@@ -1,172 +0,0 @@
-[[apm-agent-auth]]
-= APM agent authorization
-
-****
-image:./binary-yes-fm-yes.svg[supported deployment methods]
-
-Most options in this section are supported by all APM Server deployment methods.
-****
-
-Agent authorization APM Server configuration options.
-
-include::./tab-widgets/auth-config-widget.asciidoc[]
-
-[float]
-[[api-key-auth-settings]]
-= API key authentication options
-
-These settings apply to API key communication between the APM Server and APM Agents.
-
-NOTE: These settings are different from the API key settings used for {es} output and monitoring.
-
-[float]
-== API key for agent authentication
-
-Enable API key authorization by setting `enabled` to `true`.
-By default, `enabled` is set to `false`, and API key support is disabled. (bool)
-
-|====
-| APM Server binary | `auth.api_key.enabled`
-| Fleet-managed | `API key for agent authentication`
-|====
-
-TIP: Not using Elastic APM agents?
-When enabled, third-party APM agents must include a valid API key in the following format:
-`Authorization: ApiKey <token>`. The key must be the base64 encoded representation of the API key's `id:api_key`.
-
-[float]
-== API key limit
-
-Each unique API key triggers one request to {es}.
-This setting restricts the number of unique API keys that are allowed per minute.
-The minimum value for this setting should be the number of API keys configured in your monitored services.
-The default `limit` is `100`. (int)
-
-|====
-| APM Server binary | `auth.api_key.limit`
-| Fleet-managed | `Number of keys`
-|====
-
-[float]
-== Secret token
-
-Authorization token for sending APM data.
-The same token must also be set in each {apm-agent}.
-This token is not used for RUM endpoints. (text)
-
-|====
-| APM Server binary | `auth.secret_token`
-| Fleet-managed | `Secret token`
-|====
-
-[float]
-= `auth.api_key.elasticsearch.*` configuration options
-
-****
-image:./binary-yes-fm-no.svg[supported deployment methods]
-
-The options below are only supported by the APM Server binary.
-
-All of the `auth.api_key.elasticsearch.*` configurations are optional.
-If none are set, configuration settings from the `apm-server.output` section will be reused.
-****
-
-[float]
-== `elasticsearch.hosts`
-
-API keys are fetched from {es}.
-This configuration needs to point to a secured {es} cluster that is able to serve API key requests.
-
-
-[float]
-== `elasticsearch.protocol`
-
-The name of the protocol {es} is reachable on.
-The options are: `http` or `https`. The default is `http`.
-If nothing is configured, configuration settings from the `output` section will be reused.
-
-[float]
-== `elasticsearch.path`
-
-An optional HTTP path prefix that is prepended to the HTTP API calls.
-If nothing is configured, configuration settings from the `output` section will be reused.
-
-[float]
-== `elasticsearch.proxy_url`
-
-The URL of the proxy to use when connecting to the {es} servers.
-The value may be either a complete URL or a "host[:port]", in which case the "http" scheme is assumed.
-If nothing is configured, configuration settings from the `output` section will be reused.
-
-[float]
-== `elasticsearch.timeout`
-
-The HTTP request timeout in seconds for the {es} request.
-If nothing is configured, configuration settings from the `output` section will be reused.
-
-[float]
-= `auth.api_key.elasticsearch.ssl.*` configuration options
-
-SSL is off by default. Set `elasticsearch.protocol` to `https` if you want to enable `https`.
-
-[float]
-== `elasticsearch.ssl.enabled`
-
-Enable custom SSL settings.
-Set to `false` to ignore custom SSL settings for secure communication.
-
-[float]
-== `elasticsearch.ssl.verification_mode`
-
-Configure SSL verification mode.
-If `none` is configured, all server hosts and certificates will be accepted.
-In this mode, SSL-based connections are susceptible to man-in-the-middle attacks.
-**Use only for testing**. Default is `full`.
-
-[float]
-== `elasticsearch.ssl.supported_protocols`
-
-List of supported/valid TLS versions.
-By default, all TLS versions from 1.0 to 1.2 are enabled.
-
-[float]
-== `elasticsearch.ssl.certificate_authorities`
-
-List of root certificates for HTTPS server verifications.
-
-[float]
-== `elasticsearch.ssl.certificate`
-
-The path to the certificate for SSL client authentication.
-
-[float]
-== `elasticsearch.ssl.key`
-
-The client certificate key used for client authentication.
-This option is required if `certificate` is specified.
-
-[float]
-== `elasticsearch.ssl.key_passphrase`
-
-An optional passphrase used to decrypt an encrypted key stored in the configured key file.
-
-[float]
-== `elasticsearch.ssl.cipher_suites`
-
-The list of cipher suites to use. The first entry has the highest priority.
-If this option is omitted, the Go crypto library’s default suites are used (recommended).
-
-[float]
-== `elasticsearch.ssl.curve_types`
-
-The list of curve types for ECDHE (Elliptic Curve Diffie-Hellman ephemeral key exchange).
-
-[float]
-== `elasticsearch.ssl.renegotiation`
-
-Configure what types of renegotiation are supported.
-Valid options are `never`, `once`, and `freely`. Default is `never`.
-
-* `never` - Disables renegotiation.
-* `once` - Allows a remote server to request renegotiation once per connection. -* `freely` - Allows a remote server to repeatedly request renegotiation. diff --git a/docs/configure/binary-no-fm-yes.svg b/docs/configure/binary-no-fm-yes.svg deleted file mode 100644 index b8b3120f2fc..00000000000 --- a/docs/configure/binary-no-fm-yes.svg +++ /dev/null @@ -1,13 +0,0 @@ - - - - - - - - - - - - - diff --git a/docs/configure/binary-yes-fm-no.svg b/docs/configure/binary-yes-fm-no.svg deleted file mode 100644 index db26e2fc39b..00000000000 --- a/docs/configure/binary-yes-fm-no.svg +++ /dev/null @@ -1,13 +0,0 @@ - - - - - - - - - - - - - diff --git a/docs/configure/binary-yes-fm-yes.svg b/docs/configure/binary-yes-fm-yes.svg deleted file mode 100644 index 07c0a2705f8..00000000000 --- a/docs/configure/binary-yes-fm-yes.svg +++ /dev/null @@ -1,12 +0,0 @@ - - - - - - - - - - - - diff --git a/docs/configure/env.asciidoc b/docs/configure/env.asciidoc deleted file mode 100644 index 86742ca7577..00000000000 --- a/docs/configure/env.asciidoc +++ /dev/null @@ -1,98 +0,0 @@ -[[config-env]] -= Use environment variables in the configuration - -**** -image:./binary-yes-fm-no.svg[supported deployment methods] - -This documentation is only relevant for APM Server binary users. -**** - -You can use environment variable references in the config file to -set values that need to be configurable during deployment. To do this, use: - -`${VAR}` - -Where `VAR` is the name of the environment variable. - -Each variable reference is replaced at startup by the value of the environment -variable. The replacement is case-sensitive and occurs before the YAML file is -parsed. References to undefined variables are replaced by empty strings unless -you specify a default value or custom error text. - -To specify a default value, use: - -`${VAR:default_value}` - -Where `default_value` is the value to use if the environment variable is -undefined. - -To specify custom error text, use: - -`${VAR:?error_text}` - -Where `error_text` is custom text that will be prepended to the error -message if the environment variable cannot be expanded. - -If you need to use a literal `${` in your configuration file then you can write -`$${` to escape the expansion. - -After changing the value of an environment variable, you need to restart -{beatname_uc} to pick up the new value. - -[NOTE] -================================== -You can also specify environment variables when you override a config -setting from the command line by using the `-E` option. For example: - -`-E name=${NAME}` - -================================== - -[float] -== Examples - -Here are some examples of configurations that use environment variables -and what each configuration looks like after replacement: - -[options="header"] -|================================== -|Config source |Environment setting |Config after replacement -|`name: ${NAME}` |`export NAME=elastic` |`name: elastic` -|`name: ${NAME}` |no setting |`name:` -|`name: ${NAME:beats}` |no setting |`name: beats` -|`name: ${NAME:beats}` |`export NAME=elastic` |`name: elastic` -|`name: ${NAME:?You need to set the NAME environment variable}` |no setting | None. Returns an error message that's prepended with the custom text. 
-|`name: ${NAME:?You need to set the NAME environment variable}` |`export NAME=elastic` | `name: elastic` -|================================== - -[float] -== Specify complex objects in environment variables - -You can specify complex objects, such as lists or dictionaries, in environment -variables by using a JSON-like syntax. - -As with JSON, dictionaries and lists are constructed using `{}` and `[]`. But -unlike JSON, the syntax allows for trailing commas and slightly different string -quotation rules. Strings can be unquoted, single-quoted, or double-quoted, as a -convenience for simple settings and to make it easier for you to mix quotation -usage in the shell. Arrays at the top-level do not require brackets (`[]`). - -For example, the following environment variable is set to a list: - -[source,yaml] -------------------------------------------------------------------------------- -ES_HOSTS="10.45.3.2:9220,10.45.3.1:9230" -------------------------------------------------------------------------------- - -You can reference this variable in the config file: - -[source,yaml] -------------------------------------------------------------------------------- -output.elasticsearch: - hosts: '${ES_HOSTS}' -------------------------------------------------------------------------------- - -When {beatname_uc} loads the config file, it resolves the environment variable and -replaces it with the specified list before reading the `hosts` setting. - -NOTE: Do not use double-quotes (`"`) to wrap regular expressions, or the backslash (`\`) will be interpreted as an escape character. diff --git a/docs/configure/general.asciidoc b/docs/configure/general.asciidoc deleted file mode 100644 index b8c4ae56c8d..00000000000 --- a/docs/configure/general.asciidoc +++ /dev/null @@ -1,183 +0,0 @@ -[[configuration-process]] -= General configuration options - -**** -image:./binary-yes-fm-yes.svg[supported deployment methods] - -Most options on this page are supported by all APM Server deployment methods. -**** - -General APM Server configuration options. - -include::./tab-widgets/general-config-widget.asciidoc[] - -[float] -[[configuration-apm-server]] -= Configuration options - -[[host]] -[float] -== Host -Defines the host and port the server is listening on. -Use `"unix:/path/to.sock"` to listen on a Unix domain socket. -Defaults to 'localhost:8200'. (text) - -|==== -| APM Server binary | `apm-server.host` -| Fleet-managed | `Host` -|==== - -[float] -== URL -The publicly reachable server URL. For deployments on Elastic Cloud or ECK, the default is unchangeable. - -|==== -| APM Server binary | N/A -| Fleet-managed | `URL` -|==== - -[[max_header_size]] -[float] -== Max header size -Maximum permitted size of a request's header accepted by the server to be processed (in Bytes). -Defaults to 1048576 Bytes (1 MB). (int) - -|==== -| APM Server binary | `apm-server.max_header_size` -| Fleet-managed | `Maximum size of a request's header` -|==== - -[[idle_timeout]] -[float] -== Idle timeout -Maximum amount of time to wait for the next incoming request before underlying connection is closed. -Defaults to `45s` (45 seconds). (text) - -|==== -| APM Server binary | `apm-server.idle_timeout` -| Fleet-managed | `Idle time before underlying connection is closed` -|==== - -[[read_timeout]] -[float] -== Read timeout -Maximum permitted duration for reading an entire request. -Defaults to `3600s` (3600 seconds). 
(text)
-
-|====
-| APM Server binary | `apm-server.read_timeout`
-| Fleet-managed | `Maximum duration for reading an entire request`
-|====
-
-[[write_timeout]]
-[float]
-== Write timeout
-Maximum permitted duration for writing a response.
-Defaults to `30s` (30 seconds). (text)
-
-|====
-| APM Server binary | `apm-server.write_timeout`
-| Fleet-managed | `Maximum duration for writing a response`
-|====
-
-[[shutdown_timeout]]
-[float]
-== Shutdown timeout
-Maximum duration in seconds before releasing resources when shutting down the server.
-Defaults to `30s` (30 seconds). (text)
-
-|====
-| APM Server binary | `apm-server.shutdown_timeout`
-| Fleet-managed | `Maximum duration before releasing resources when shutting down`
-|====
-
-[[max_event_size]]
-[float]
-== Max event size
-Maximum permitted size of an event accepted by the server to be processed (in Bytes).
-Defaults to `307200` Bytes. (int)
-
-|====
-| APM Server binary | `apm-server.max_event_size`
-| Fleet-managed | `Maximum size per event`
-|====
-
-[[max_connections]]
-[float]
-== Max connections
-Maximum number of TCP connections to accept simultaneously.
-Default value is 0, which means _unlimited_. (int)
-
-|====
-| APM Server binary | `apm-server.max_connections`
-| Fleet-managed | `Simultaneously accepted connections`
-|====
-
-[[custom_http_headers]]
-[float]
-== Custom HTTP response headers
-Custom HTTP headers to add to HTTP responses. Useful for security policy compliance. (text)
-
-|====
-| APM Server binary | `apm-server.response_headers`
-| Fleet-managed | `Custom HTTP response headers`
-|====
-
-[[capture_personal_data]]
-[float]
-== Capture personal data
-If true,
-APM Server captures the IP address of the instrumented service and its User Agent, if any.
-Enabled by default. (bool)
-
-|====
-| APM Server binary | `apm-server.capture_personal_data`
-| Fleet-managed | `Capture personal data`
-|====
-
-
-[[default_service_environment]]
-[float]
-== Default service environment
-Sets the default service environment to associate with data and requests received from agents that have no service environment defined. Default: none. (text)
-
-|====
-| APM Server binary | `apm-server.default_service_environment`
-| Fleet-managed | `Default Service Environment`
-|====
-
-[[expvar.enabled]]
-[float]
-== expvar support
-When set to `true`, APM Server exposes https://golang.org/pkg/expvar/[golang expvar] under `/debug/vars`.
-Disabled by default.
-
-|====
-| APM Server binary | `apm-server.expvar.enabled`
-| Fleet-managed | `Enable APM Server Golang expvar support`
-|====
-
-[[expvar.url]]
-[float]
-== expvar URL
-Configure the URL to expose expvar.
-Defaults to `debug/vars`.
-
-|====
-| APM Server binary | `apm-server.expvar.url`
-| Fleet-managed | N/A
-|====
-
-[[data_streams.namespace]]
-[float]
-== Data stream namespace
-
-Change the default namespace.
-This setting changes the name of the integration's data stream.
-
-For {fleet}-managed users, the namespace is inherited from the selected {agent} policy.
-
-|====
-| APM Server binary | `apm-server.data_streams.namespace`
-| Fleet-managed | `Namespace` (Integration settings > Advanced options)
-|====
diff --git a/docs/configure/index.asciidoc b/docs/configure/index.asciidoc
deleted file mode 100644
index 3b2b62d32b8..00000000000
--- a/docs/configure/index.asciidoc
+++ /dev/null
@@ -1,52 +0,0 @@
-[[configuring-howto-apm-server]]
-= Configure
-
-How you configure the APM Server depends on your deployment method.
-
-* **APM Server binary** users need to edit the `apm-server.yml` configuration file.
-The location of the file varies by platform. To locate the file, see <>.
-* **Fleet-managed** users configure the APM Server directly in {kib}.
-Each configuration page describes the specific location.
-* **Elastic Cloud** users should see {cloud}/ec-manage-apm-settings.html[Add APM user settings] for information on how to configure Elastic APM.
-
-The following topics describe how to configure APM Server:
-
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-
-include::general.asciidoc[leveloffset=+1]
-
-include::anonymous-auth.asciidoc[leveloffset=+1]
-
-include::auth.asciidoc[leveloffset=+1]
-
-include::agent-config.asciidoc[leveloffset=+1]
-
-include::instrumentation.asciidoc[leveloffset=+1]
-
-include::kibana.asciidoc[leveloffset=+1]
-
-include::logging.asciidoc[leveloffset=+1]
-
-include::output.asciidoc[leveloffset=+1]
-
-include::path.asciidoc[leveloffset=+1]
-
-include::rum.asciidoc[leveloffset=+1]
-
-include::tls.asciidoc[leveloffset=+1]
-
-include::sampling.asciidoc[leveloffset=+1]
-
-include::env.asciidoc[leveloffset=+1]
\ No newline at end of file
diff --git a/docs/configure/instrumentation.asciidoc b/docs/configure/instrumentation.asciidoc
deleted file mode 100644
index 2f381001e1d..00000000000
--- a/docs/configure/instrumentation.asciidoc
+++ /dev/null
@@ -1,62 +0,0 @@
-[[configuration-instrumentation]]
-= Configure APM instrumentation
-
-++++
-Instrumentation
-++++
-
-****
-image:./binary-yes-fm-no.svg[supported deployment methods]
-
-Instrumentation of APM Server is not yet supported for Fleet-managed APM.
-****
-
-APM Server uses the Elastic APM Go Agent to instrument its publishing pipeline.
-To gain insight into the performance of {beatname_uc}, you can enable this instrumentation and send trace data to APM Server.
-Currently, only the {es} output is instrumented.
-
-Example configuration with instrumentation enabled:
-
-["source","yaml"]
----
-instrumentation:
-  enabled: true
-  environment: production
-  hosts:
-    - "http://localhost:8200"
-  api_key: L5ER6FEvjkmlfalBealQ3f3fLqf03fazfOV
----
-
-[float]
-== Configuration options
-
-You can specify the following options in the `instrumentation` section of the +{beatname_lc}.yml+ config file:
-
-[float]
-=== `enabled`
-
-Set to `true` to enable instrumentation of {beatname_uc}.
-Defaults to `false`.
-
-[float]
-=== `environment`
-
-Set the environment in which {beatname_uc} is running, for example, `staging`, `production`, or `dev`.
-Environments can be filtered in the {kibana-ref}/xpack-apm.html[{apm-app}].
-
-[float]
-=== `hosts`
-
-The {apm-guide-ref}/getting-started-apm-server.html[APM Server] hosts to report instrumentation data to.
-Defaults to `http://localhost:8200`.
-
-[float]
-=== `api_key`
-
-{apm-guide-ref}/api-key.html[API key] used to secure communication with the APM Server(s).
-If `api_key` is set, then `secret_token` is ignored.
-
-[float]
-=== `secret_token`
-
-{apm-guide-ref}/secret-token.html[Secret token] used to secure communication with the APM Server(s).
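-
-As an alternative to the API key shown in the example above, communication can be
-secured with a secret token instead. A minimal sketch (the token value is a placeholder):
-
-["source","yaml"]
----
-instrumentation:
-  enabled: true
-  environment: production
-  hosts:
-    - "http://localhost:8200"
-  secret_token: changeme
----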
diff --git a/docs/configure/kibana.asciidoc b/docs/configure/kibana.asciidoc
deleted file mode 100644
index fb070965ad3..00000000000
--- a/docs/configure/kibana.asciidoc
+++ /dev/null
@@ -1,116 +0,0 @@
-[[setup-kibana-endpoint]]
-= Configure the {kib} endpoint
-
-++++
-{kib} endpoint
-++++
-
-****
-
-image:./binary-yes-fm-no.svg[supported deployment methods]
-
-You must configure the {kib} endpoint when running the APM Server binary with a non-{es} output.
-Configuring the {kib} endpoint allows the APM Server to communicate with {kib} and ensure that the APM integration was properly set up. It is also required for APM agent configuration when using
-an output other than {es}.
-
-For all other use cases, starting in version 8.7.0, APM agent configuration is fetched directly from {es}.
-Configuring and enabling the {kib} endpoint is only used as a fallback.
-See <> instead.
-****
-
-Here's a sample configuration:
-
-[source,yaml]
----
-apm-server.kibana.enabled: true
-apm-server.kibana.host: "http://localhost:5601"
----
-
-[float]
-== {kib} endpoint configuration options
-
-You can specify the following options in the `apm-server.kibana` section of the
-+{beatname_lc}.yml+ config file. These options are not required for a Fleet-managed APM Server.
-
-[float]
-[[kibana-enabled]]
-=== `apm-server.kibana.enabled`
-
-Defaults to `false`. Must be `true` to use APM Agent configuration.
-
-[float]
-[[kibana-host]]
-=== `apm-server.kibana.host`
-
-The {kib} host that APM Server will communicate with. The default is
-`127.0.0.1:5601`. The value of `host` can be a `URL` or `IP:PORT`. For example: `http://192.15.3.2`, `192.15.3.2:5601` or `http://192.15.3.2:6701/path`. If no
-port is specified, `5601` is used.
-
-NOTE: When a node is defined as an `IP:PORT`, the _scheme_ and _path_ are taken
-from the <<kibana-protocol-option>> and
-<<kibana-path-option>> config options.
-
-IPv6 addresses must be defined using the following format:
-`https://[2001:db8::1]:5601`.
-
-[float]
-[[kibana-protocol-option]]
-=== `apm-server.kibana.protocol`
-
-The name of the protocol {kib} is reachable on. The options are: `http` or
-`https`. The default is `http`. However, if you specify a URL for host, the
-value of `protocol` is overridden by whatever scheme you specify in the URL.
-
-Example config:
-
-[source,yaml]
----
-apm-server.kibana.host: "192.0.2.255:5601"
-apm-server.kibana.protocol: "http"
-apm-server.kibana.path: /kibana
----
-
-
-[float]
-=== `apm-server.kibana.username`
-
-The basic authentication username for connecting to {kib}.
-
-[float]
-=== `apm-server.kibana.password`
-
-The basic authentication password for connecting to {kib}.
-
-[float]
-=== `apm-server.kibana.api_key`
-
-Authentication with an API key. Formatted as `id:api_key`.
-
-[float]
-[[kibana-path-option]]
-=== `apm-server.kibana.path`
-
-An HTTP path prefix that is prepended to the HTTP API calls. This is useful for
-the cases where {kib} listens behind an HTTP reverse proxy that exports the API
-under a custom prefix.
-
-[float]
-=== `apm-server.kibana.ssl.enabled`
-
-Enables {beatname_uc} to use SSL settings when connecting to {kib} via HTTPS.
-If you configure {beatname_uc} to connect over HTTPS, this setting defaults to
-`true` and {beatname_uc} uses the default SSL settings.
-
-Example configuration:
-
-[source,yaml]
----
-apm-server.kibana.host: "https://192.0.2.255:5601"
-apm-server.kibana.ssl.enabled: true
-apm-server.kibana.ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
-apm-server.kibana.ssl.certificate: "/etc/pki/client/cert.pem"
-apm-server.kibana.ssl.key: "/etc/pki/client/cert.key"
----
-
-For information on the additional SSL configuration options,
-see <>.
diff --git a/docs/configure/logging.asciidoc b/docs/configure/logging.asciidoc
deleted file mode 100644
index 9e11b5fbf40..00000000000
--- a/docs/configure/logging.asciidoc
+++ /dev/null
@@ -1,247 +0,0 @@
-[[configuration-logging]]
-= Configure logging
-
-++++
-Logging
-++++
-
-****
-
-image:./binary-yes-fm-no.svg[supported deployment methods]
-
-These configuration options are only relevant to APM Server binary users.
-Fleet-managed users should see {fleet-guide}/monitor-elastic-agent.html[View {agent} logs]
-to learn how to view logs and change the logging level of {agent}.
-****
-
-The `logging` section of the +{beatname_lc}.yml+ config file contains options
-for configuring the logging output.
-
-The logging system can write logs to the syslog or rotate log files. If logging
-is not explicitly configured, the file output is used.
-
-["source","yaml",subs="attributes"]
----
-logging.level: info
-logging.to_files: true
-logging.files:
-  path: /var/log/{beatname_lc}
-  name: {beatname_lc}
-  keepfiles: 7
-  permissions: 0640
----
-
-
-
-TIP: In addition to setting logging options in the config file, you can modify
-the logging output configuration from the command line. See
-<>.
-
-WARNING: When {beatname_uc} is running on a Linux system with systemd, it uses
-the `-e` command line option by default, which makes it write all logging output
-to stderr so it can be captured by journald. Other outputs are disabled. See
-<> to learn more and find out how to change this.
-
-[float]
-== Configuration options
-
-You can specify the following options in the `logging` section of the
-+{beatname_lc}.yml+ config file:
-
-ifndef::serverless[]
-[float]
-=== `logging.to_stderr`
-
-When true, writes all logging output to standard error output. This is
-equivalent to using the `-e` command line option.
-
-[float]
-=== `logging.to_syslog`
-
-When true, writes all logging output to the syslog.
-
-NOTE: This option is not supported on Windows.
-
-[float]
-=== `logging.to_eventlog`
-
-When true, writes all logging output to the Windows Event Log.
-
-[float]
-=== `logging.to_files`
-
-When true, writes all logging output to files. The log files are automatically
-rotated when the log file size limit is reached.
-
-NOTE: {beatname_uc} only creates a log file if there is logging output. For
-example, if you set the log <<level,level>> to `error` and there are no
-errors, there will be no log file in the directory specified for logs.
-endif::serverless[]
-
-[float]
-[[level]]
-=== `logging.level`
-
-Minimum log level. One of `debug`, `info`, `warning`, or `error`. The default
-log level is `info`.
-
-`debug`:: Logs debug messages, including a detailed printout of all events
-flushed. Also logs informational messages, warnings, errors, and
-critical errors. When the log level is `debug`, you can specify a list of
-<<selectors>> to display debug messages for specific components. If
-no selectors are specified, the `*` selector is used to display debug messages
-for all components.
-
-`info`:: Logs informational messages, including the number of events that are
-published. Also logs any warnings, errors, or critical errors.
-
-`warning`:: Logs warnings, errors, and critical errors.
-
-`error`:: Logs errors and critical errors.
-
-[float]
-[[selectors]]
-=== `logging.selectors`
-
-The list of debugging-only selector tags used by different {beatname_uc} components.
-Use `*` to enable debug output for all components. Use `publisher` to display
-debug messages related to event publishing.
-
-[TIP]
-=====
-The list of available selectors may change between releases, so avoid creating
-tests that depend on specific selectors.
-
-To see which selectors are available, run {beatname_uc} in debug mode
-(set `logging.level: debug` in the configuration). The selector name appears
-after the log level and is enclosed in brackets.
-=====
-
-To configure multiple selectors, use the following {beats-ref}/config-file-format.html[YAML list syntax]:
-
-["source","yaml",subs="attributes"]
----
-logging.selectors: [ harvester, input ]
----
-
-ifndef::serverless[]
-To override selectors at the command line, use the `-d` global flag (`-d` also
-sets the debug log level). For more information, see <>.
-endif::serverless[]
-
-[float]
-=== `logging.metrics.enabled`
-
-By default, {beatname_uc} periodically logs its internal metrics that have
-changed in the last period. For each metric that changed, the delta from the
-value at the beginning of the period is logged. Also, the total values for all
-non-zero internal metrics are logged on shutdown. Set this to false to disable
-this behavior. The default is true.
-
-Here is an example log line:
-
-[source,shell]
----------------------------------------------------------------------------------------------------------------------------------------------------
-2017-12-17T19:17:42.667-0500 INFO [metrics] log/log.go:110 Non-zero metrics in the last 30s: beat.info.uptime.ms=30004 beat.memstats.gc_next=5046416
----------------------------------------------------------------------------------------------------------------------------------------------------
-
-Note that we currently offer no backwards-compatibility guarantees for the internal
-metrics, and for this reason they are also not documented.
-
-[float]
-=== `logging.metrics.period`
-
-The period after which to log the internal metrics. The default is `30s`.
-
-ifndef::serverless[]
-[float]
-=== `logging.files.path`
-
-The directory that log files are written to. The default is the logs path. See
-the <> section for details.
-
-[float]
-=== `logging.files.name`
-
-The name of the file that logs are written to. The default is '{beatname_lc}'.
-
-[float]
-=== `logging.files.rotateeverybytes`
-
-The maximum size of a log file. If the limit is reached, a new log file is
-generated. The default size limit is 10485760 (10 MB).
-
-[float]
-=== `logging.files.keepfiles`
-
-The number of most recent rotated log files to keep on disk. Older files are
-deleted during log rotation. The default value is 7. The `keepfiles` option has
-to be in the range of 2 to 1024 files.
-
-[float]
-=== `logging.files.permissions`
-
-The permissions mask to apply when rotating log files. The default value is
-0600. The `permissions` option must be a valid Unix-style file permissions mask
-expressed in octal notation. In Go, numbers in octal notation must start with
-'0'.
-
-The most permissive mask allowed is 0640. If a higher permissions mask is
-specified via this setting, it will be subject to an umask of 0027.
-
-Examples:
-
-* 0640: give read and write access to the file owner, and read access to members of the group associated with the file.
-* 0600: give read and write access to the file owner, and no access to all others.
-
-[float]
-=== `logging.files.interval`
-
-Enable log file rotation on time intervals in addition to size-based rotation.
-Intervals must be at least `1s`. Values of `1m`, `1h`, `24h`, `7*24h`, `30*24h`, and `365*24h`
-are boundary-aligned with minutes, hours, days, weeks, months, and years as
-reported by the local system clock. All other intervals are calculated from the
-Unix epoch. Defaults to disabled.
-endif::serverless[]
-
-[float]
-=== `logging.files.rotateonstartup`
-
-If the log file already exists on startup, immediately rotate it and start
-writing to a new file instead of appending to the existing one. Defaults to
-true.
-
-ifndef::serverless[]
-[float]
-=== `logging.files.redirect_stderr` experimental[]
-
-When true, diagnostic messages printed to {beatname_uc}'s standard error output
-will also be logged to the log file. This can be helpful in situations where
-{beatname_uc} terminates unexpectedly because an error has been detected by
-Go's runtime but diagnostic information is not present in the log file.
-This feature is only available when logging to files (`logging.to_files` is true).
-Disabled by default.
-endif::serverless[]
-
-[float]
-== Logging format
-
-The logging format is generally the same for each logging output. The one
-exception is with the syslog output where the timestamp is not included in the
-message because syslog adds its own timestamp.
-
-Each log message consists of the following parts:
-
-* Timestamp in ISO8601 format
-* Level
-* Logger name contained in brackets (Optional)
-* File name and line number of the caller
-* Message
-* Structured data encoded in JSON (Optional)
-
-Below are some samples:
-
-`2017-12-17T18:54:16.241-0500 INFO logp/core_test.go:13 unnamed global logger`
-
-`2017-12-17T18:54:16.242-0500 INFO [example] logp/core_test.go:16 some message`
-
-`2017-12-17T18:54:16.242-0500 INFO [example] logp/core_test.go:19 some message {"x": 1}`
diff --git a/docs/configure/output.asciidoc b/docs/configure/output.asciidoc
deleted file mode 100644
index a6ea0c8d213..00000000000
--- a/docs/configure/output.asciidoc
+++ /dev/null
@@ -1,30 +0,0 @@
-[[configuring-output]]
-= Configure the output
-
-++++
-Output
-++++
-
-Output configuration options.
-
-// You configure {beatname_uc} to write to a specific output by setting options
-// in the Outputs section of the +{beatname_lc}.yml+ config file. Only a single
-// output may be defined.
-
-// The following topics describe how to configure each supported output. If you've
-// secured the {stack}, also read <> for more about
-// security-related configuration options.
-
-include::outputs/outputs-list.asciidoc[tag=outputs-list]
-
-[[sourcemap-output]]
-
-[float]
-== Source maps
-
-Source maps can be uploaded through all outputs but must eventually be stored in {es}.
-When using outputs other than {es}, `source_mapping.elasticsearch` must be set for source maps to be applied.
-Be sure to update `source_mapping.index_pattern` if source maps are stored in a non-default location.
-See <> for more details.
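-
-For example, when events are shipped to {ls}, a separate {es} connection can be
-configured for fetching source maps. A minimal sketch, assuming the
-`apm-server.rum.source_mapping.elasticsearch` path for the `source_mapping.elasticsearch`
-setting named above (hosts and the API key are placeholder values):
-
-[source,yaml]
----
-output.logstash:
-  hosts: ["localhost:5044"]
-
-apm-server.rum.source_mapping.elasticsearch:
-  hosts: ["https://myEShost:9200"]
-  api_key: TiNAGG4BaaMdaH1tRfuU:KnR6yE41RrSowb0kQ0HWoA
----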
-
-include::outputs/outputs-list.asciidoc[tag=outputs-include]
diff --git a/docs/configure/outputs/codec.asciidoc b/docs/configure/outputs/codec.asciidoc
deleted file mode 100644
index b6045b798b0..00000000000
--- a/docs/configure/outputs/codec.asciidoc
+++ /dev/null
@@ -1,31 +0,0 @@
-[[configuration-output-codec]]
-== Change the output codec
-
-For outputs that do not require a specific encoding, you can change the encoding
-by using the codec configuration. You can specify either the `json` or `format`
-codec. By default, the `json` codec is used.
-
-*`json.pretty`*: If `pretty` is set to true, events will be nicely formatted. The default is false.
-
-*`json.escape_html`*: If `escape_html` is set to true, HTML symbols will be escaped in strings. The default is false.
-
-Example configuration that uses the `json` codec with pretty printing enabled to write events to the console:
-
-[source,yaml]
-------------------------------------------------------------------------------
-output.console:
-  codec.json:
-    pretty: true
-    escape_html: false
-------------------------------------------------------------------------------
-
-*`format.string`*: Configurable format string used to create a custom formatted message.
-
-Example configuration that uses the `format` codec to print the event's timestamp and message field to the console:
-
-[source,yaml]
-------------------------------------------------------------------------------
-output.console:
-  codec.format:
-    string: '%{[@timestamp]} %{[message]}'
------------------------------------------------------------------------------- 
diff --git a/docs/configure/outputs/console.asciidoc b/docs/configure/outputs/console.asciidoc
deleted file mode 100644
index c50c4825d58..00000000000
--- a/docs/configure/outputs/console.asciidoc
+++ /dev/null
@@ -1,68 +0,0 @@
-[[console-output]]
-== Configure the Console output
-
-++++
-Console
-++++
-
-****
-image:./binary-yes-fm-no.svg[supported deployment methods]
-
-The Console output is not yet supported by {fleet}-managed APM Server.
-****
-
-The Console output writes events in JSON format to stdout.
-
-WARNING: The Console output should be used only for debugging issues as it can produce a large amount of logging data.
-
-To use this output, edit the {beatname_uc} configuration file to disable the {es}
-output by commenting it out, and enable the console output by adding `output.console`.
-
-Example configuration:
-
-[source,yaml]
-------------------------------------------------------------------------------
-output.console:
-  pretty: true
-------------------------------------------------------------------------------
-
-ifdef::apm-server[]
-[float]
-=== {kib} configuration
-
-include::../../shared-kibana-endpoint.asciidoc[tag=shared-kibana-config]
-endif::[]
-
-=== Configuration options
-
-You can specify the following `output.console` options in the +{beatname_lc}.yml+ config file:
-
-==== `enabled`
-
-The enabled config is a boolean setting to enable or disable the output. If set
-to false, the output is disabled.
-
-The default value is `true`.
-
-==== `pretty`
-
-If `pretty` is set to true, events written to stdout will be nicely formatted. The default is false.
-
-==== `codec`
-
-Output codec configuration. If the `codec` section is missing, events will be JSON encoded using the `pretty` option.
-
-See <<configuration-output-codec>> for more information.
-
-==== `bulk_max_size`
-
-The maximum number of events to buffer internally during publishing. The default is 2048.
- -Specifying a larger batch size may add some latency and buffering during publishing. However, for Console output, this -setting does not affect how events are published. - -Setting `bulk_max_size` to values less than or equal to 0 disables the -splitting of batches. When splitting is disabled, the queue decides on the -number of events to be contained in a batch. - -include::codec.asciidoc[leveloffset=+1] \ No newline at end of file diff --git a/docs/configure/outputs/elasticsearch.asciidoc b/docs/configure/outputs/elasticsearch.asciidoc deleted file mode 100644 index d0dba86f424..00000000000 --- a/docs/configure/outputs/elasticsearch.asciidoc +++ /dev/null @@ -1,474 +0,0 @@ -[[elasticsearch-output]] -== Configure the {es} output - -++++ -{es} -++++ - -**** -image:./binary-yes-fm-no.svg[supported deployment methods] - -This documentation only applies to APM Server binary users. -Fleet-managed users should see {fleet-guide}/elasticsearch-output.html[Configure the {es} output]. -**** - -The {es} output sends events directly to {es} using the {es} HTTP API. - -Example configuration: - -["source","yaml",subs="attributes"] ----- -output.elasticsearch: - hosts: ["https://myEShost:9200"] <1> ----- -<1> To enable SSL, add `https` to all URLs defined under __hosts__. - -When sending data to a secured cluster through the `elasticsearch` -output, {beatname_uc} can use any of the following authentication methods: - -* Basic authentication credentials (username and password). -* Token-based (API key) authentication. -* Public Key Infrastructure (PKI) certificates. - -*Basic authentication:* - -["source","yaml",subs="attributes,callouts"] ----- -output.elasticsearch: - hosts: ["https://myEShost:9200"] - username: "{beat_default_index_prefix}_writer" - password: "{pwd}" ----- - -*API key authentication:* - -["source","yaml",subs="attributes,callouts"] ----- -output.elasticsearch: - hosts: ["https://myEShost:9200"] - api_key: "ZCV7VnwBgnX0T19fN8Qe:KnR6yE41RrSowb0kQ0HWoA" ----- - -*PKI certificate authentication:* - -["source","yaml",subs="attributes,callouts"] ----- -output.elasticsearch: - hosts: ["https://myEShost:9200"] - ssl.certificate: "/etc/pki/client/cert.pem" - ssl.key: "/etc/pki/client/cert.key" ----- - -See <> for details on each authentication method. - -=== Compatibility - -This output works with all compatible versions of {es}. See the -https://www.elastic.co/support/matrix#matrix_compatibility[Elastic Support -Matrix]. - -=== Configuration options - -You can specify the following options in the `elasticsearch` section of the +{beatname_lc}.yml+ config file: - -==== `enabled` - -The enabled config is a boolean setting to enable or disable the output. If set -to `false`, the output is disabled. - -The default value is `true`. - - -[[hosts-option]] -==== `hosts` - -The list of {es} nodes to connect to. The events are distributed to -these nodes in round robin order. If one node becomes unreachable, the event is -automatically sent to another node. Each {es} node can be defined as a `URL` or `IP:PORT`. -For example: `http://192.15.3.2`, `https://es.found.io:9230` or `192.24.3.2:9300`. -If no port is specified, `9200` is used. - -NOTE: When a node is defined as an `IP:PORT`, the _scheme_ and _path_ are taken from the -<> and <> config options. 
- -[source,yaml] ------------------------------------------------------------------------------- -output.elasticsearch: - hosts: ["10.45.3.2:9220", "10.45.3.1:9230"] <1> - protocol: https - path: /elasticsearch ------------------------------------------------------------------------------- - -In the previous example, the {es} nodes are available at `https://10.45.3.2:9220/elasticsearch` and -`https://10.45.3.1:9230/elasticsearch`. - -==== `compression_level` - -The gzip compression level. Setting this value to `0` disables compression. -The compression level must be in the range of `1` (best speed) to `9` (best compression). - -Increasing the compression level will reduce the network usage but will increase the CPU usage. - -The default value is `0`. - -==== `escape_html` - -Configure escaping of HTML in strings. Set to `true` to enable escaping. - -The default value is `false`. - -==== `api_key` - -Instead of using a username and password, you can use API keys to secure communication -with {es}. The value must be the ID of the API key and the API key joined by a colon: `id:api_key`. - -See <> for more information. - -==== `username` - -The basic authentication username for connecting to {es}. - -This user needs the privileges required to publish events to {es}. -To create a user like this, see <>. - -==== `password` - -The basic authentication password for connecting to {es}. - -==== `parameters` - -Dictionary of HTTP parameters to pass within the URL with index operations. - -[[protocol-option]] -==== `protocol` - -The name of the protocol {es} is reachable on. The options are: -`http` or `https`. The default is `http`. However, if you specify a URL for -<>, the value of `protocol` is overridden by whatever scheme you -specify in the URL. - -[[path-option]] -==== `path` - -An HTTP path prefix that is prepended to the HTTP API calls. This is useful for -the cases where {es} listens behind an HTTP reverse proxy that exports -the API under a custom prefix. - -==== `headers` - -Custom HTTP headers to add to each request created by the {es} output. -Example: - -[source,yaml] ------------------------------------------------------------------------------- -output.elasticsearch.headers: - X-My-Header: Header contents ------------------------------------------------------------------------------- - -It is possible to specify multiple header values for the same header -name by separating them with a comma. - -==== `proxy_url` - -The URL of the proxy to use when connecting to the {es} servers. The -value may be either a complete URL or a "host[:port]", in which case the "http" -scheme is assumed. If a value is not specified through the configuration file -then proxy environment variables are used. See the -https://golang.org/pkg/net/http/#ProxyFromEnvironment[Go documentation] -for more information about the environment variables. - -// output.elasticsearch.index has been removed from APM Server -ifndef::apm-server[] - -[[index-option-es]] -==== `index` - -The index name to write events to when you're using daily indices. The default is -+"{beatname_lc}-%{[{beat_version_key}]}-%{+yyyy.MM.dd}"+, for example, -+"{beatname_lc}-{version}-{localdate}"+. If you change this setting, you also -need to configure the `setup.template.name` and `setup.template.pattern` options -(see <>). - -ifndef::no_dashboards[] -If you are using the pre-built {kib} -dashboards, you also need to set the `setup.dashboards.index` option (see -<>). 
-endif::no_dashboards[] - -ifndef::no_ilm[] -When <> is enabled, the default `index` is -+"{beatname_lc}-%{[{beat_version_key}]}-%{+yyyy.MM.dd}-%{index_num}"+, for example, -+"{beatname_lc}-{version}-{localdate}-000001"+. Custom `index` settings are ignored -when {ilm-init} is enabled. If you’re sending events to a cluster that supports index -lifecycle management, see <> to learn how to change the index name. -endif::no_ilm[] - -You can set the index dynamically by using a format string to access any event -field. For example, this configuration uses a custom field, `fields.log_type`, -to set the index: - -["source","yaml",subs="attributes"] ------------------------------------------------------------------------------- -output.elasticsearch: - hosts: ["http://localhost:9200"] - index: "%{[fields.log_type]}-%{[{beat_version_key}]}-%{+yyyy.MM.dd}" <1> ------------------------------------------------------------------------------- - -<1> We recommend including +{beat_version_key}+ in the name to avoid mapping issues -when you upgrade. - -With this configuration, all events with `log_type: normal` are sent to an -index named +normal-{version}-{localdate}+, and all events with -`log_type: critical` are sent to an index named -+critical-{version}-{localdate}+. - -See the <> setting for other ways to set the index -dynamically. -endif::apm-server[] - -// output.elasticsearch.indices has been removed from APM Server -ifndef::apm-server[] - -[[indices-option-es]] -==== `indices` - -An array of index selector rules. Each rule specifies the index to use for -events that match the rule. During publishing, {beatname_uc} uses the first -matching rule in the array. Rules can contain conditionals, format string-based -fields, and name mappings. If the `indices` setting is missing or no rule -matches, the <> setting is used. - -ifndef::no_ilm[] -Similar to `index`, defining custom `indices` will disable <>. -endif::no_ilm[] - -Rule settings: - -*`index`*:: The index format string to use. If this string contains field -references, such as `%{[fields.name]}`, the fields must exist, or the rule fails. - -*`mappings`*:: A dictionary that takes the value returned by `index` and maps it -to a new name. - -*`default`*:: The default string value to use if `mappings` does not find a -match. - -*`when`*:: A condition that must succeed in order to execute the current rule. -ifndef::no-processors[] -All the <> supported by processors are also supported -here. -endif::no-processors[] - -The following example sets the index based on whether the `message` field -contains the specified string: - -["source","yaml",subs="attributes"] ------------------------------------------------------------------------------- -output.elasticsearch: - hosts: ["http://localhost:9200"] - indices: - - index: "warning-%{[{beat_version_key}]}-%{+yyyy.MM.dd}" - when.contains: - message: "WARN" - - index: "error-%{[{beat_version_key}]}-%{+yyyy.MM.dd}" - when.contains: - message: "ERR" ------------------------------------------------------------------------------- - - -This configuration results in indices named +warning-{version}-{localdate}+ -and +error-{version}-{localdate}+ (plus the default index if no matches are -found). 
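Rules are evaluated in order, and a rule whose `index` format string references a field that is missing from an event fails for that event, so publishing falls through to the next rule. A minimal sketch of that fallback pattern (the `fields.log_type` field and the index names here are illustrative, not defaults):

["source","yaml",subs="attributes"]
------------------------------------------------------------------------------
output.elasticsearch:
  hosts: ["http://localhost:9200"]
  indices:
    - index: "%{[fields.log_type]}-%{[{beat_version_key}]}" # fails when fields.log_type is missing
    - index: "other-%{[{beat_version_key}]}"                # catch-all for the remaining events
------------------------------------------------------------------------------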
- -The following example sets the index by taking the name returned by the `index` -format string and mapping it to a new name that's used for the index: - -["source","yaml"] ------------------------------------------------------------------------------- -output.elasticsearch: - hosts: ["http://localhost:9200"] - indices: - - index: "%{[fields.log_type]}" - mappings: - critical: "sev1" - normal: "sev2" - default: "sev3" ------------------------------------------------------------------------------- - - -This configuration results in indices named `sev1`, `sev2`, and `sev3`. - -The `mappings` setting simplifies the configuration, but is limited to string -values. You cannot specify format strings within the mapping pairs. -endif::apm-server[] - -ifndef::no_ilm[] -[[ilm-es]] -==== `ilm` - -Configuration options for {ilm}. - -See <> for more information. -endif::no_ilm[] - -ifndef::no-pipeline[] -[[pipeline-option-es]] -==== `pipeline` - -A format string value that specifies the ingest node pipeline to write events to. - -["source","yaml"] ------------------------------------------------------------------------------- -output.elasticsearch: - hosts: ["http://localhost:9200"] - pipeline: my_pipeline_id ------------------------------------------------------------------------------- - -For more information, see <>. - -You can set the ingest node pipeline dynamically by using a format string to -access any event field. For example, this configuration uses a custom field, -`fields.log_type`, to set the pipeline for each event: - -["source","yaml",subs="attributes"] ------------------------------------------------------------------------------- -output.elasticsearch: - hosts: ["http://localhost:9200"] - pipeline: "%{[fields.log_type]}_pipeline" ------------------------------------------------------------------------------- - -With this configuration, all events with `log_type: normal` are sent to a pipeline -named `normal_pipeline`, and all events with `log_type: critical` are sent to a -pipeline named `critical_pipeline`. - -TIP: To learn how to add custom fields to events, see the -<> option. - -See the <> setting for other ways to set the -ingest node pipeline dynamically. - -[[pipelines-option-es]] -==== `pipelines` - -An array of pipeline selector rules. Each rule specifies the ingest node -pipeline to use for events that match the rule. During publishing, {beatname_uc} -uses the first matching rule in the array. Rules can contain conditionals, -format string-based fields, and name mappings. If the `pipelines` setting is -missing or no rule matches, the <> setting is -used. - -Rule settings: - -*`pipeline`*:: The pipeline format string to use. If this string contains field -references, such as `%{[fields.name]}`, the fields must exist, or the rule -fails. - -*`mappings`*:: A dictionary that takes the value returned by `pipeline` and maps -it to a new name. - -*`default`*:: The default string value to use if `mappings` does not find a -match. - -*`when`*:: A condition that must succeed in order to execute the current rule. -ifndef::no-processors[] -All the <> supported by processors are also supported -here. 
-endif::no-processors[] - -The following example sends events to a specific pipeline based on whether the -`message` field contains the specified string: - -["source","yaml"] ------------------------------------------------------------------------------- -output.elasticsearch: - hosts: ["http://localhost:9200"] - pipelines: - - pipeline: "warning_pipeline" - when.contains: - message: "WARN" - - pipeline: "error_pipeline" - when.contains: - message: "ERR" ------------------------------------------------------------------------------- - - -The following example sets the pipeline by taking the name returned by the -`pipeline` format string and mapping it to a new name that's used for the -pipeline: - -["source","yaml"] ------------------------------------------------------------------------------- -output.elasticsearch: - hosts: ["http://localhost:9200"] - pipelines: - - pipeline: "%{[fields.log_type]}" - mappings: - critical: "sev1_pipeline" - normal: "sev2_pipeline" - default: "sev3_pipeline" ------------------------------------------------------------------------------- - - -With this configuration, all events with `log_type: critical` are sent to -`sev1_pipeline`, all events with `log_type: normal` are sent to a -`sev2_pipeline`, and all other events are sent to `sev3_pipeline`. - -For more information about ingest node pipelines, see -<>. - -endif::[] - -==== `max_retries` - -ifdef::ignores_max_retries[] -{beatname_uc} ignores the `max_retries` setting and retries indefinitely. -endif::[] - -ifndef::ignores_max_retries[] -The number of times to retry publishing an event after a publishing failure. -After the specified number of retries, the events are typically dropped. - -Set `max_retries` to a value less than 0 to retry until all events are published. - -The default is 3. -endif::[] - -==== `flush_bytes` - -The bulk request size threshold, in bytes, before flushing to {es}. -The value must have a suffix, e.g. `"2MB"`. The default is `1MB`. - -==== `flush_interval` - -The maximum duration to accumulate events for a bulk request before being flushed to {es}. -The value must have a duration suffix, e.g. `"5s"`. The default is `1s`. - -==== `backoff.init` - -The number of seconds to wait before trying to reconnect to {es} after -a network error. After waiting `backoff.init` seconds, {beatname_uc} tries to -reconnect. If the attempt fails, the backoff timer is increased exponentially up -to `backoff.max`. After a successful connection, the backoff timer is reset. The -default is `1s`. - - -==== `backoff.max` - -The maximum number of seconds to wait before attempting to connect to -{es} after a network error. The default is `60s`. - -==== `timeout` - -The HTTP request timeout in seconds for the {es} request. The default is 90. - -==== `ssl` - -Configuration options for SSL parameters like the certificate authority to use -for HTTPS-based connections. If the `ssl` section is missing, the host CAs are used for HTTPS connections to -{es}. - -See the <> guide -or <> for more information. - -// Elasticsearch security -include::{docdir}/https.asciidoc[] diff --git a/docs/configure/outputs/kafka.asciidoc b/docs/configure/outputs/kafka.asciidoc deleted file mode 100644 index 72c32eeedaf..00000000000 --- a/docs/configure/outputs/kafka.asciidoc +++ /dev/null @@ -1,331 +0,0 @@ -[[kafka-output]] -== Configure the Kafka output - -++++ -Kafka -++++ - -**** -image:./binary-yes-fm-no.svg[supported deployment methods] - -The Kafka output is not yet supported by {fleet}-managed APM Server. 
-**** - -The Kafka output sends events to Apache Kafka. - -To use this output, edit the {beatname_uc} configuration file to disable the {es} -output by commenting it out, and enable the Kafka output by uncommenting the -Kafka section. - -Example configuration: - -[source,yaml] ------------------------------------------------------------------------------- -output.kafka: - # initial brokers for reading cluster metadata - hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092"] - - # message topic selection + partitioning - topic: '%{[fields.log_topic]}' - partition.round_robin: - reachable_only: false - - required_acks: 1 - compression: gzip - max_message_bytes: 1000000 ------------------------------------------------------------------------------- - -NOTE: Events bigger than <> will be dropped. To avoid this problem, make sure {beatname_uc} does not generate events bigger than <>. - -ifdef::apm-server[] -[float] -=== {kib} configuration - -include::../../shared-kibana-endpoint.asciidoc[tag=shared-kibana-config] -endif::[] - -[[kafka-compatibility]] -=== Compatibility - -This output works with all Kafka versions between 0.11 and 2.2.2. Older versions -might work as well, but are not supported. - -=== Configuration options - -You can specify the following options in the `kafka` section of the +{beatname_lc}.yml+ config file: - -==== `enabled` - -The `enabled` config is a boolean setting to enable or disable the output. If set -to false, the output is disabled. - -ifndef::apm-server[] -The default value is `true`. -endif::[] -ifdef::apm-server[] -The default value is `false`. -endif::[] - -==== `hosts` - -The list of Kafka broker addresses from where to fetch the cluster metadata. -The cluster metadata contains the actual Kafka brokers that events are published to. - -==== `version` - -The Kafka version that {beatname_lc} is assumed to run against. Defaults to 1.0.0. - -Event timestamps are added if version 0.10.0.0 or later is configured. - -Valid values are all Kafka releases between `0.8.2.0` and `2.0.0`. - -See <> for information on supported versions. - -==== `username` - -The username for connecting to Kafka. If a username is configured, the password -must be configured as well. - -==== `password` - -The password for connecting to Kafka. - -==== `sasl.mechanism` - -beta[] - -The SASL mechanism to use when connecting to Kafka. It can be one of: - -* `PLAIN` for SASL/PLAIN. -* `SCRAM-SHA-256` for SCRAM-SHA-256. -* `SCRAM-SHA-512` for SCRAM-SHA-512. - -If `sasl.mechanism` is not set, `PLAIN` is used if `username` and `password` -are provided. Otherwise, SASL authentication is disabled. - - -[[topic-option-kafka]] -==== `topic` - -The Kafka topic used for produced events. - -You can set the topic dynamically by using a format string to access any -event field. For example, this configuration uses a custom field, -`fields.log_topic`, to set the topic for each event: - -[source,yaml] ------ -topic: '%{[fields.log_topic]}' ------ - -See the <> setting for other ways to set the -topic dynamically. - -[[topics-option-kafka]] -==== `topics` - -An array of topic selector rules. Each rule specifies the `topic` to use for -events that match the rule. During publishing, {beatname_uc} sets the `topic` -for each event based on the first matching rule in the array. Rules -can contain conditionals, format string-based fields, and name mappings. If the -`topics` setting is missing or no rule matches, the -<> field is used. - -Rule settings: - -*`topic`*:: The topic format string to use. If this string contains field -references, such as `%{[fields.name]}`, the fields must exist, or the rule -fails. - -*`mappings`*:: A dictionary that takes the value returned by `topic` and maps it -to a new name. - -*`default`*:: The default string value to use if `mappings` does not find a -match. - -*`when`*:: A condition that must succeed in order to execute the current rule. -ifndef::no-processors[] -All the <> supported by processors are also supported -here. -endif::no-processors[] - -The following example sets the topic based on whether the message field contains -the specified string: - -["source","yaml",subs="attributes"] ------------------------------------------------------------------------------- -output.kafka: - hosts: ["localhost:9092"] - topic: "logs-%{[agent.version]}" - topics: - - topic: "critical-%{[agent.version]}" - when.contains: - message: "CRITICAL" - - topic: "error-%{[agent.version]}" - when.contains: - message: "ERR" ------------------------------------------------------------------------------- - - -This configuration results in topics named +critical-{version}+, -+error-{version}+, and +logs-{version}+. - -==== `key` - -An optional format string specifying the Kafka event key. If configured, the -event key is extracted from the event using this format string. - -See the Kafka documentation for the implications of a particular choice of key; -by default, the key is chosen by the Kafka cluster. - -==== `partition` - -Kafka output broker event partitioning strategy. Must be one of `random`, -`round_robin`, or `hash`. By default the `hash` partitioner is used. - -*`random.group_events`*: Sets the number of events to be published to the same - partition, before the partitioner selects a new partition at random. The - default value is 1, meaning a new partition is picked randomly after each event. - -*`round_robin.group_events`*: Sets the number of events to be published to the - same partition, before the partitioner selects the next partition. The default - value is 1, meaning the next partition is selected after each event. - -*`hash.hash`*: The list of fields used to compute the partitioning hash value. - If no field is configured, the event's `key` value is used. - -*`hash.random`*: Randomly distribute events if no hash or key value can be computed. - -All partitioners will try to publish events to all partitions by default. If a -partition's leader becomes unreachable for the beat, the output might block. All -partitioners support setting `reachable_only` to override this -behavior. If `reachable_only` is set to `true`, events will be published to -available partitions only. - -NOTE: Publishing to a subset of available partitions potentially increases resource usage because events may become unevenly distributed. - -==== `client_id` - -The configurable client ID used for logging, debugging, and auditing purposes. The default is "beats". - -==== `worker` - -The number of concurrent load-balanced Kafka output workers. - -==== `codec` - -Output codec configuration. If the `codec` section is missing, events will be JSON encoded. - -See <> for more information. - -==== `metadata` - -Kafka metadata update settings. The metadata contains information about -brokers, topics, partitions, and active leaders to use for publishing. - -*`refresh_frequency`*:: Metadata refresh interval. Defaults to 10 minutes. - -*`full`*:: The strategy to use when fetching metadata. When this option is `true`, the client maintains -a full set of metadata for all the available topics; when it is set to `false`, the client only refreshes the -metadata for the configured topics. The default is false. - -*`retry.max`*:: Total number of metadata update retries when the cluster is in the middle of a leader election. The default is 3. - -*`retry.backoff`*:: Waiting time between retries during leader elections. Default is `250ms`. - -==== `max_retries` - -ifdef::ignores_max_retries[] -{beatname_uc} ignores the `max_retries` setting and retries indefinitely. -endif::[] - -ifndef::ignores_max_retries[] -The number of times to retry publishing an event after a publishing failure. -After the specified number of retries, the events are typically dropped. - -Set `max_retries` to a value less than 0 to retry until all events are published. - -The default is 3. -endif::[] - -==== `backoff.init` - -The number of seconds to wait before trying to republish to Kafka -after a network error. After waiting `backoff.init` seconds, {beatname_uc} -tries to republish. If the attempt fails, the backoff timer is increased -exponentially up to `backoff.max`. After a successful publish, the backoff -timer is reset. The default is `1s`. - -==== `backoff.max` - -The maximum number of seconds to wait before attempting to republish to -Kafka after a network error. The default is `60s`. - -==== `bulk_max_size` - -The maximum number of events to bulk in a single Kafka request. The default is 2048. - -==== `bulk_flush_frequency` - -The duration to wait before sending a bulk Kafka request. `0` means no delay. The default is 0. - -==== `timeout` - -The number of seconds to wait for responses from the Kafka brokers before timing -out. The default is 30 (seconds). - -==== `broker_timeout` - -The maximum duration a broker waits for the number of required ACKs. The default is `10s`. - -==== `channel_buffer_size` - -The number of messages buffered in the output pipeline per Kafka broker. The default is 256. - -==== `keep_alive` - -The keep-alive period for an active network connection. If `0s`, keep-alives are disabled. The default is `0s`. - -==== `compression` - -Sets the output compression codec. Must be one of `none`, `snappy`, `lz4`, or `gzip`. The default is `gzip`. - -[IMPORTANT] -.Known issue with Azure Event Hub for Kafka -==== -When targeting Azure Event Hub for Kafka, set `compression` to `none` as the provided codecs are not supported. -==== - -==== `compression_level` - -Sets the compression level used by gzip. Setting this value to 0 disables compression. -The compression level must be in the range of 1 (best speed) to 9 (best compression). - -Increasing the compression level will reduce the network usage but will increase the CPU usage. - -The default value is 4. - -[[kafka-max_message_bytes]] -==== `max_message_bytes` - -The maximum permitted size of JSON-encoded messages. Bigger messages will be dropped. The default value is 1000000 (bytes). This value should be equal to or less than the broker's `message.max.bytes`. - -==== `required_acks` - -The ACK reliability level required from the broker. 0=no response, 1=wait for local commit, -1=wait for all replicas to commit. The default is 1. - -Note: If set to 0, no ACKs are returned by Kafka. Messages might be lost silently on error. - -==== `enable_krb5_fast` - -beta[] - -Enable Kerberos FAST authentication. This may conflict with some Active Directory installations.
It is separate from the standard Kerberos settings because this flag only applies to the Kafka output. The default is `false`. - -==== `ssl` - -Configuration options for SSL parameters like the root CA for Kafka connections. - The Kafka host keystore should be created with the -`-keyalg RSA` argument to ensure it uses a cipher supported by -https://github.com/Shopify/sarama/wiki/Frequently-Asked-Questions#why-cant-sarama-connect-to-my-kafka-cluster-using-ssl[{filebeat}'s Kafka library]. -See <> for more information. diff --git a/docs/configure/outputs/logstash.asciidoc b/docs/configure/outputs/logstash.asciidoc deleted file mode 100644 index 59290980cd8..00000000000 --- a/docs/configure/outputs/logstash.asciidoc +++ /dev/null @@ -1,341 +0,0 @@ -[[logstash-output]] -== Configure the {ls} output - -++++ -{ls} -++++ - -**** -image:./binary-yes-fm-no.svg[supported deployment methods] - -The {ls} output is not yet supported by {fleet}-managed APM Server. -**** - -{ls} allows for additional processing and routing of APM events. -The {ls} output sends events directly to {ls} using the lumberjack -protocol, which runs over TCP. - -[float] -== Send events to {ls} - -To send events to {ls}, you must: - -. <> -. <> -. <> - -[float] -[[ls-output-config]] -=== {ls} output configuration - -To enable the {ls} output in APM Server, -edit the `apm-server.yml` file to: - -. Disable the {es} output by commenting it out and -. Enable the {ls} output by uncommenting the {ls} section and setting `enabled` to `true`: -+ -[source,yaml] ----- -output.logstash: - enabled: true - hosts: ["localhost:5044"] <1> ----- -<1> The `hosts` option specifies the {ls} server and the port (`5044`) where {ls} is configured to listen for incoming -APM Server connections. - -[float] -[[ls-kib-config]] -=== {kib} endpoint configuration - -include::../../shared-kibana-endpoint.asciidoc[tag=shared-kibana-config] - -[float] -[[ls-config-pipeline]] -=== {ls} configuration pipeline - -Finally, you must create a {ls} configuration pipeline that listens for incoming -APM Server connections and indexes received events into {es}. - -. Use the {logstash-ref}/plugins-inputs-elastic_agent.html[Elastic Agent input plugin] to configure -{ls} to receive events from the APM Server. A minimal `input` config might look like this: -+ -[source,conf] ----- -input { - elastic_agent { - port => 5044 - } -} ----- - -. Use the {logstash-ref}/plugins-outputs-elasticsearch.html[{es} output plugin] to send -events to {es} for indexing. A minimal `output` config might look like this: -+ -[source,conf] ----- -output { - elasticsearch { - data_stream => "true" <1> - cloud_id => "YOUR_CLOUD_ID_HERE" <2> - cloud_auth => "YOUR_CLOUD_AUTH_HERE" <2> - } -} ----- -<1> Enables indexing into {es} data streams. -<2> This example assumes you're sending data to {ecloud}. If you're using a self-hosted version of {es}, use `hosts` instead. See {logstash-ref}/plugins-outputs-elasticsearch.html[{es} output plugin] for more information. 
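If you want to confirm that a pipeline configuration parses before starting {ls}, you can ask {ls} to validate it and exit — a quick sanity check along these lines (the config file name is hypothetical, and `bin/logstash` assumes an archive installation):

["source","sh"]
----
bin/logstash -f apm-server-pipeline.conf --config.test_and_exit
----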
- -Here's what your basic {ls} configuration file will look like when we put everything together: - -[source,conf] ----- -input { - elastic_agent { - port => 5044 - } -} - -output { - elasticsearch { - data_stream => "true" - cloud_id => "YOUR_CLOUD_ID_HERE" - cloud_auth => "YOUR_CLOUD_AUTH_HERE" - } -} ----- - -[float] -== Accessing the @metadata field - -Every event sent to {ls} contains a special field called -{logstash-ref}/event-dependent-configuration.html#metadata[`@metadata`] that you can -use in {ls} for conditionals, filtering, indexing, and more. -APM Server sends the following `@metadata` to {ls}: - -["source","json",subs="attributes"] ----- -{ - ... - "@metadata": { - "beat": "apm-server", <1> - "version": "{version}" <2> - } -} ----- -<1> To change the default `apm-server` value, set the -<> option in the APM Server config file. -<2> The current version of APM Server. - -In addition to `@metadata`, APM Server provides other potentially useful fields, like the -`data_stream` field, which can be used to conditionally operate on -{apm-guide-ref}/data-model.html[event types], namespaces, or datasets. - -As an example, you might want to use {ls} to route all `metrics` events to the same custom metrics data stream, -rather than to service-specific data streams: - -["source","conf",subs="attributes"] ----- -output { - if [@metadata][beat] == "apm-server" { <1> - if [data_stream][type] == "metrics" { <2> - elasticsearch { - index => "%{[data_stream][type]}-custom-%{[data_stream][namespace]}" <3> - action => "create" <4> - cloud_id => "${CLOUD_ID}" <5> - cloud_auth => "${CLOUD_AUTH}" <5> - } - } else { - elasticsearch { - data_stream => "true" <6> - cloud_id => "${CLOUD_ID}" - cloud_auth => "${CLOUD_AUTH}" - } - } - } -} ----- -<1> Only apply this output if the data is being sent from the APM Server. -<2> Determine whether the event type is `metrics`. -<3> If the event type is `metrics`, output to a custom data stream (`metrics-custom-` followed by the namespace). -<4> You must explicitly set `action` to `create` when using {ls} to output an index to a data stream. -<5> In this example, `cloud_id` and `cloud_auth` are stored as {logstash-ref}/environment-variables.html[environment variables]. -<6> For all other event types, index data directly into the predefined APM data streams. - -=== Compatibility - -This output works with all compatible versions of {ls}. See the -https://www.elastic.co/support/matrix#matrix_compatibility[Elastic Support -Matrix]. - -=== Configuration options - -You can specify the following options in the `logstash` section of the -+{beatname_lc}.yml+ config file: - -==== `enabled` - -The enabled config is a boolean setting to enable or disable the output. If set -to false, the output is disabled. - -The default value is `false`. - -[[hosts]] -==== `hosts` - -The list of known {ls} servers to connect to. If load balancing is disabled, but -multiple hosts are configured, one host is selected randomly (there is no precedence). -If one host becomes unreachable, another one is selected randomly. - -All entries in this list can contain a port number. The default port number 5044 is used if none is given. - -==== `compression_level` - -The gzip compression level. Setting this value to 0 disables compression. -The compression level must be in the range of 1 (best speed) to 9 (best compression). - -Increasing the compression level will reduce the network usage but will increase the CPU usage. - -The default value is 3. - -==== `escape_html` - -Configure escaping of HTML in strings. Set to `true` to enable escaping. - -The default value is `false`. - -==== `worker` - -The number of workers per configured host publishing events to {ls}. This -is best used with load balancing mode enabled. For example, if you have 2 hosts and -3 workers, 6 workers in total are started (3 for each host). - -[[loadbalance]] -==== `loadbalance` - -If set to true and multiple {ls} hosts are configured, the output plugin -load balances published events onto all {ls} hosts. If set to false, -the output plugin sends all events to only one host (determined at random) and -will switch to another host if the selected one becomes unresponsive. The default value is false. - -["source","yaml",subs="attributes"] ------------------------------------------------------------------------------- -output.logstash: - hosts: ["localhost:5044", "localhost:5045"] - loadbalance: true - index: {beatname_lc} ------------------------------------------------------------------------------- - -==== `ttl` - -The time to live for a connection to {ls}, after which the connection is re-established. -Useful when {ls} hosts represent load balancers. Since the connections to {ls} hosts -are sticky, operating behind load balancers can lead to uneven load distribution between the instances. -Specifying a TTL on the connection makes it possible to achieve equal connection distribution between the -instances. A TTL of 0 disables this feature. - -The default value is 0. - -NOTE: The "ttl" option is not yet supported on an asynchronous {ls} client (one with the "pipelining" option set). - -==== `pipelining` - -Configures the number of batches to be sent asynchronously to {ls} while waiting -for ACK from {ls}. The output only becomes blocking once the number of `pipelining` -batches have been written. Pipelining is disabled if a value of 0 is -configured. The default value is 2. - -==== `proxy_url` - -The URL of the SOCKS5 proxy to use when connecting to the {ls} servers. The -value must be a URL with a scheme of `socks5://`. The protocol used to -communicate with {ls} is not based on HTTP, so a web proxy cannot be used. - -If the SOCKS5 proxy server requires client authentication, then a username and -password can be embedded in the URL as shown in the example. - -When using a proxy, hostnames are resolved on the proxy server instead of on the -client. You can change this behavior by setting the -<> option. - -["source","yaml",subs="attributes"] ------------------------------------------------------------------------------- -output.logstash: - hosts: ["remote-host:5044"] - proxy_url: socks5://user:password@socks5-proxy:2233 ------------------------------------------------------------------------------- - -[[logstash-proxy-use-local-resolver]] -==== `proxy_use_local_resolver` - -The `proxy_use_local_resolver` option determines if {ls} hostnames are -resolved locally when using a proxy. The default value is false, which means -that when a proxy is used the name resolution occurs on the proxy server. - -[[logstash-index]] -==== `index` - -The index root name to write events to. The default is `apm-server`. For -example +"{beat_default_index_prefix}"+ generates +"[{beat_default_index_prefix}-]{version}-YYYY.MM.DD"+ -indices (for example, +"{beat_default_index_prefix}-{version}-2017.04.26"+). - -NOTE: This parameter's value will be assigned to the `metadata.beat` field. It -can then be accessed in {ls}'s output section as `%{[@metadata][beat]}`. - -==== `ssl` - -Configuration options for SSL parameters like the root CA for {ls} connections. See -<> for more information. To use SSL, you must also configure the -{logstash-ref}/plugins-inputs-beats.html[{beats} input plugin for {ls}] to use SSL/TLS. - -==== `timeout` - -The number of seconds to wait for responses from the {ls} server before timing out. The default is 30 (seconds). - -==== `max_retries` - -The number of times to retry publishing an event after a publishing failure. -After the specified number of retries, the events are typically dropped. - -Set `max_retries` to a value less than 0 to retry until all events are published. - -The default is 3. - -==== `bulk_max_size` - -The maximum number of events to bulk in a single {ls} request. The default is 2048. - -If the Beat sends single events, the events are collected into batches. If the Beat publishes -a large batch of events (larger than the value specified by `bulk_max_size`), the batch is -split. - -Specifying a larger batch size can improve performance by lowering the overhead of sending events. -However, big batch sizes can also increase processing times, which might result in -API errors, killed connections, timed-out publishing requests, and, ultimately, lower -throughput. - -Setting `bulk_max_size` to values less than or equal to 0 disables the -splitting of batches. When splitting is disabled, the queue decides on the -number of events to be contained in a batch. - -==== `slow_start` - -If enabled, only a subset of events in a batch is transferred per transaction. -The number of events to be sent increases up to `bulk_max_size` if no error is encountered. -On error, the number of events per transaction is reduced again. - -The default is `false`. - -==== `backoff.init` - -The number of seconds to wait before trying to reconnect to {ls} after -a network error. After waiting `backoff.init` seconds, {beatname_uc} tries to -reconnect. If the attempt fails, the backoff timer is increased exponentially up -to `backoff.max`. After a successful connection, the backoff timer is reset. The -default is `1s`. - -==== `backoff.max` - -The maximum number of seconds to wait before attempting to connect to -{ls} after a network error. The default is `60s`. - -// Logstash security -include::{docdir}/shared-ssl-logstash-config.asciidoc[] diff --git a/docs/configure/outputs/output-cloud.asciidoc b/docs/configure/outputs/output-cloud.asciidoc deleted file mode 100644 index 5b5e0ff12ea..00000000000 --- a/docs/configure/outputs/output-cloud.asciidoc +++ /dev/null @@ -1,55 +0,0 @@ -[[configure-cloud-id]] -== Configure the output for {ess} on {ecloud} - -[subs="attributes"] -++++ -{ess} -++++ - -**** -image:./binary-yes-fm-no.svg[supported deployment methods] - -This documentation only applies to APM Server binary users. -**** - -ifdef::apm-server[] -NOTE: This page refers to using a separate instance of APM Server with an existing -{ess-product}[{ess} deployment]. -If you want to use APM on {ess}, see: -{cloud}/ec-create-deployment.html[Create your deployment] and -{cloud}/ec-manage-apm-settings.html[Add APM user settings]. -endif::apm-server[] - -{beatname_uc} comes with two settings that simplify the output configuration -when used together with {ess-product}[{ess}]. When defined, -these settings overwrite settings from other parts of the configuration.
- -Example: - -["source","yaml",subs="attributes"] ------------------------------------------------------------------------------- -cloud.id: "staging:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRjZWM2ZjI2MWE3NGJmMjRjZTMzYmI4ODExYjg0Mjk0ZiRjNmMyY2E2ZDA0MjI0OWFmMGNjN2Q3YTllOTYyNTc0Mw==" -cloud.auth: "elastic:{pwd}" ------------------------------------------------------------------------------- - -These settings can be also specified at the command line, like this: - - -["source","sh",subs="attributes"] ------------------------------------------------------------------------------- -{beatname_lc} -e -E cloud.id="" -E cloud.auth="" ------------------------------------------------------------------------------- - - -=== `cloud.id` - -The Cloud ID, which can be found in the {ess} web console, is used by -{beatname_uc} to resolve the {es} and {kib} URLs. This setting -overwrites the `output.elasticsearch.hosts` and `setup.kibana.host` settings. - -=== `cloud.auth` - -When specified, the `cloud.auth` overwrites the `output.elasticsearch.username` and -`output.elasticsearch.password` settings. Because the {kib} settings inherit -the username and password from the {es} output, this can also be used -to set the `setup.kibana.username` and `setup.kibana.password` options. diff --git a/docs/configure/outputs/outputs-list.asciidoc b/docs/configure/outputs/outputs-list.asciidoc deleted file mode 100644 index b0b925c0de5..00000000000 --- a/docs/configure/outputs/outputs-list.asciidoc +++ /dev/null @@ -1,23 +0,0 @@ -//# tag::outputs-list[] -* <> -* <> -* <> -* <> -* <> -* <> -//# end::outputs-list[] - -//# tag::outputs-include[] -include::output-cloud.asciidoc[] - -include::elasticsearch.asciidoc[] - -include::logstash.asciidoc[] - -include::kafka.asciidoc[] - -include::redis.asciidoc[] - -include::console.asciidoc[] - -//# end::outputs-include[] diff --git a/docs/configure/outputs/redis.asciidoc b/docs/configure/outputs/redis.asciidoc deleted file mode 100644 index 595de2445d0..00000000000 --- a/docs/configure/outputs/redis.asciidoc +++ /dev/null @@ -1,247 +0,0 @@ -[[redis-output]] -== Configure the Redis output - -++++ -Redis -++++ - -**** -image:./binary-yes-fm-no.svg[supported deployment methods] - -The Redis output is not yet supported by {fleet}-managed APM Server. -**** - -The Redis output inserts the events into a Redis list or a Redis channel. -This output plugin is compatible with -the https://www.elastic.co/guide/en/logstash/current/plugins-inputs-redis.html[Redis input plugin] for {ls}. - -To use this output, edit the {beatname_uc} configuration file to disable the {es} -output by commenting it out, and enable the Redis output by adding `output.redis`. - -Example configuration: - -["source","yaml",subs="attributes"] ------------------------------------------------------------------------------- -output.redis: - hosts: ["localhost"] - password: "my_password" - key: "{beatname_lc}" - db: 0 - timeout: 5 ------------------------------------------------------------------------------- - -ifdef::apm-server[] -[float] -=== {kib} configuration - -include::../../shared-kibana-endpoint.asciidoc[tag=shared-kibana-config] -endif::[] - -=== Compatibility - -This output is expected to work with all Redis versions between 3.2.4 and 5.0.8. Other versions might work as well, -but are not supported. - -=== Configuration options - -You can specify the following `output.redis` options in the +{beatname_lc}.yml+ config file: - -==== `enabled` - -The enabled config is a boolean setting to enable or disable the output. 
If set -to false, the output is disabled. - -The default value is `true`. - -==== `hosts` - -The list of Redis servers to connect to. If load balancing is enabled, the events are -distributed to the servers in the list. If one server becomes unreachable, the events are -distributed to the reachable servers only. You can define each Redis server by specifying -`HOST` or `HOST:PORT`. For example: `"192.15.3.2"` or `"test.redis.io:12345"`. If you -don't specify a port number, the value configured by `port` is used. -Configure each Redis server with an `IP:PORT` pair or with a `URL`. For -example: `redis://localhost:6379` or `rediss://localhost:6379`. -URLs can include a server-specific password. For example: `redis://:password@localhost:6379`. -The `redis` scheme will disable the `ssl` settings for the host, while `rediss` -will enforce TLS. If `rediss` is specified and no `ssl` settings are -configured, the output uses the system certificate store. - -==== `index` - -The index name added to the events metadata for use by {ls}. The default is "{beatname_lc}". - -[[key-option-redis]] -==== `key` - -The name of the Redis list or channel the events are published to. If not -configured, the value of the `index` setting is used. - -You can set the key dynamically by using a format string to access any event -field. For example, this configuration uses a custom field, `fields.list`, to -set the Redis list key. If `fields.list` is missing, `fallback` is used: - -["source","yaml"] ------------------------------------------------------------------------------- -output.redis: - hosts: ["localhost"] - key: "%{[fields.list]:fallback}" ------------------------------------------------------------------------------- - -See the <> setting for other ways to set the key -dynamically. - -[[keys-option-redis]] -==== `keys` - -An array of key selector rules. Each rule specifies the `key` to use for events -that match the rule. During publishing, {beatname_uc} uses the first matching -rule in the array. Rules can contain conditionals, format string-based fields, -and name mappings. If the `keys` setting is missing or no rule matches, the -<> setting is used. - -Rule settings: - -*`index`*:: The key format string to use. If this string contains field -references, such as `%{[fields.name]}`, the fields must exist, or the rule -fails. - -*`mappings`*:: A dictionary that takes the value returned by `key` and maps it to -a new name. - -*`default`*:: The default string value to use if `mappings` does not find a match. - -*`when`*:: A condition that must succeed in order to execute the current rule. -ifndef::no-processors[] -All the <> supported by processors are also supported -here. -endif::no-processors[] - -Example `keys` settings: - -["source","yaml"] ------------------------------------------------------------------------------- -output.redis: - hosts: ["localhost"] - key: "default_list" - keys: - - key: "info_list" # send to info_list if `message` field contains INFO - when.contains: - message: "INFO" - - key: "debug_list" # send to debug_list if `message` field contains DEBUG - when.contains: - message: "DEBUG" - - key: "%{[fields.list]}" - mappings: - http: "frontend_list" - nginx: "frontend_list" - mysql: "backend_list" ------------------------------------------------------------------------------- - -==== `password` - -The password to authenticate with. The default is no authentication. - -==== `db` - -The Redis database number where the events are published. The default is 0. 
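As an illustration of the host formats described above, the following sketch mixes a plain `redis` URL with a `rediss` URL that enforces TLS and embeds a server-specific password (the host names and password are placeholders):

["source","yaml"]
------------------------------------------------------------------------------
output.redis:
  hosts: ["redis://cache-1.example.com:6379", "rediss://:my_password@cache-2.example.com:6379"]
  key: "apm-server"
  db: 0
------------------------------------------------------------------------------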
- -==== `datatype` - -The Redis data type to use for publishing events. If the data type is `list`, the -Redis `RPUSH` command is used and all events are added to the list with the key defined under `key`. -If the data type is `channel`, the Redis `PUBLISH` command is used, which means all events -are pushed to Redis's pub/sub mechanism. The name of the channel is the one defined under `key`. -The default value is `list`. - -==== `codec` - -Output codec configuration. If the `codec` section is missing, events will be JSON encoded. - -See <> for more information. - -==== `worker` - -The number of workers to use for each host configured to publish events to Redis. Use this setting along with the -`loadbalance` option. For example, if you have 2 hosts and 3 workers, in total 6 workers are started (3 for each host). - -==== `loadbalance` - -If set to true and multiple hosts or workers are configured, the output plugin load balances published events onto all -Redis hosts. If set to false, the output plugin sends all events to only one host (determined at random) and will switch -to another host if the currently selected one becomes unreachable. The default value is true. - -==== `timeout` - -The Redis connection timeout in seconds. The default is 5 seconds. - -==== `backoff.init` - -The number of seconds to wait before trying to reconnect to Redis after -a network error. After waiting `backoff.init` seconds, {beatname_uc} tries to -reconnect. If the attempt fails, the backoff timer is increased exponentially up -to `backoff.max`. After a successful connection, the backoff timer is reset. The -default is `1s`. - -==== `backoff.max` - -The maximum number of seconds to wait before attempting to connect to -Redis after a network error. The default is `60s`. - -==== `max_retries` - -ifdef::ignores_max_retries[] -{beatname_uc} ignores the `max_retries` setting and retries indefinitely. -endif::[] - -ifndef::ignores_max_retries[] -The number of times to retry publishing an event after a publishing failure. -After the specified number of retries, the events are typically dropped. - -Set `max_retries` to a value less than 0 to retry until all events are published. - -The default is 3. -endif::[] - - -==== `bulk_max_size` - -The maximum number of events to bulk in a single Redis request or pipeline. The default is 2048. - -If the Beat sends single events, the events are collected into batches. If the -Beat publishes a large batch of events (larger than the value specified by -`bulk_max_size`), the batch is split. - -Specifying a larger batch size can improve performance by lowering the overhead -of sending events. However, big batch sizes can also increase processing times, -which might result in API errors, killed connections, timed-out publishing -requests, and, ultimately, lower throughput. - -Setting `bulk_max_size` to values less than or equal to 0 disables the -splitting of batches. When splitting is disabled, the queue decides on the -number of events to be contained in a batch. - -==== `ssl` - -Configuration options for SSL parameters like the root CA for Redis connections -guarded by SSL proxies (for example https://www.stunnel.org[stunnel]). See -<> for more information. - -==== `proxy_url` - -The URL of the SOCKS5 proxy to use when connecting to the Redis servers. The -value must be a URL with a scheme of `socks5://`. You cannot use a web proxy -because the protocol used to communicate with Redis is not based on HTTP.
- -If the SOCKS5 proxy server requires client authentication, you can embed -a username and password in the URL. - -When using a proxy, hostnames are resolved on the proxy server instead of on the -client. You can change this behavior by setting the -<> option. - -[[redis-proxy-use-local-resolver]] -==== `proxy_use_local_resolver` - -This option determines whether Redis hostnames are resolved locally when using a proxy. -The default value is false, which means that name resolution occurs on the proxy server. diff --git a/docs/configure/path.asciidoc b/docs/configure/path.asciidoc deleted file mode 100644 index 27c720ab6ee..00000000000 --- a/docs/configure/path.asciidoc +++ /dev/null @@ -1,120 +0,0 @@ -[[configuration-path]] -= Configure project paths - -++++ -Project paths -++++ - -**** -image:./binary-yes-fm-no.svg[supported deployment methods] - -This documentation is only relevant for APM Server binary users. -Fleet-managed paths are defined in <>. -**** - -The `path` section of the +{beatname_lc}.yml+ config file contains configuration -options that define where {beatname_uc} looks for its files. For example, {beatname_uc} -looks for the {es} template file in the configuration path and writes -log files in the logs path. -ifdef::has_registry[] -{beatname_uc} looks for its registry files in the data path. -endif::[] - -Please see the <> section for more details. - -Here is an example configuration: - -[source,yaml] ------------------------------------------------------------------------------- -path.home: /usr/share/beat -path.config: /etc/beat -path.data: /var/lib/beat -path.logs: /var/log/ ------------------------------------------------------------------------------- - -Note that it is possible to override these options by using command line flags. - -[float] -== Configuration options - -You can specify the following options in the `path` section of the +{beatname_lc}.yml+ config file: - -[float] -=== `home` - -The home path for the {beatname_uc} installation. This is the default base path for all -other path settings and for miscellaneous files that come with the distribution (for example, the -sample dashboards). If not set by a CLI flag or in the configuration file, the default -for the home path is the location of the {beatname_uc} binary. - -Example: - -[source,yaml] ------------------------------------------------------------------------------- -path.home: /usr/share/beats ------------------------------------------------------------------------------- - -[float] -=== `config` - -The configuration path for the {beatname_uc} installation. This is the default base path -for configuration files, including the main YAML configuration file and the -{es} template file. If not set by a CLI flag or in the configuration file, the default for the -configuration path is the home path. - -Example: - -[source,yaml] ------------------------------------------------------------------------------- -path.config: /usr/share/beats/config ------------------------------------------------------------------------------- - -[float] -=== `data` - -The data path for the {beatname_uc} installation. This is the default base path for all -the files in which {beatname_uc} needs to store its data. If not set by a CLI -flag or in the configuration file, the default for the data path is a `data` -subdirectory inside the home path. 
- - -Example: - -[source,yaml] ------------------------------------------------------------------------------- -path.data: /var/lib/beats ------------------------------------------------------------------------------- - -TIP: When running multiple {beatname_uc} instances on the same host, make sure they -each have a distinct `path.data` value. - -[float] -=== `logs` - -The logs path for a {beatname_uc} installation. This is the default location for {beatname_uc}'s -log files. If not set by a CLI flag or in the configuration file, the default -for the logs path is a `logs` subdirectory inside the home path. - -Example: - -[source,yaml] ------------------------------------------------------------------------------- -path.logs: /var/log/beats ------------------------------------------------------------------------------- - -[float] -=== `system.hostfs` - -Specifies the mount point of the host's file system for use in monitoring a host. -This can either be set in the config, or with the `--system.hostfs` CLI flag. This is used for cgroup self-monitoring. -ifeval::["{beatname_lc}"=="metricbeat"] -This is also used by the system module to read files from `/proc` and `/sys`. -endif::[] - - -Example: - -[source,yaml] ------------------------------------------------------------------------------- -system.hostfs: /mount/rootfs ------------------------------------------------------------------------------- diff --git a/docs/configure/rum.asciidoc b/docs/configure/rum.asciidoc deleted file mode 100644 index 2bc25ec9621..00000000000 --- a/docs/configure/rum.asciidoc +++ /dev/null @@ -1,206 +0,0 @@ -[[configuration-rum]] -= Configure Real User Monitoring (RUM) - -++++ -Real User Monitoring (RUM) -++++ - -**** -image:./binary-yes-fm-yes.svg[supported deployment methods] - -Most options in this section are supported by all APM Server deployment methods. -**** - -The {apm-rum-ref-v}/index.html[Real User Monitoring (RUM) agent] captures user interactions with clients such as web browsers. -These interactions are sent as events to the APM Server. -Because the RUM agent runs on the client side, the connection between agent and server is unauthenticated. -As a security precaution, RUM is therefore disabled by default. - -include::./tab-widgets/rum-config-widget.asciidoc[] - -In addition, if APM Server is deployed in an origin different than the page’s origin, -you will need to configure {apm-rum-ref-v}/configuring-cors.html[Cross-Origin Resource Sharing (CORS)] in the Agent. - -[float] -[[enable-rum-support]] -= Configuration reference - -[[rum-enable]] -[float] -== Enable RUM -To enable RUM support, set to `true`. -By default this is disabled. (bool) - -|==== -| APM Server binary | `apm-server.rum.enabled` -| Fleet-managed | `Enable RUM` -|==== - -[NOTE] -==== -If an <> or <> is configured, -enabling RUM support will automatically enable <>. -Anonymous authentication is required as the RUM agent runs in the browser. -==== - -[float] -[[rum-allow-origins]] -== Allowed Origins -A list of permitted origins for RUM support. -User-agents send an Origin header that will be validated against this list. -This is done automatically by modern browsers as part of the https://www.w3.org/TR/cors/[CORS specification]. -An origin is made of a protocol scheme, host and port, without the URL path. - -Default: `['*']` (allows everything). 
(text) - -|==== -| APM Server binary | `apm-server.rum.allow_origins` -| Fleet-managed | `Allowed Origins` -|==== - -[float] -[[rum-allow-headers]] -== Access-Control-Allow-Headers -HTTP requests made from the RUM agent to the APM Server are limited in the HTTP headers they are allowed to have. -If any other headers are added, the request will be rejected by the browser due to Cross-Origin Resource Sharing (CORS) restrictions. -Use this setting to allow additional headers. -The default list of allowed headers includes "Content-Type", "Content-Encoding", and "Accept"; -custom values configured here are appended to the default list and used as the value for the `Access-Control-Allow-Headers` header. - -Default: `[]`. (text) - -|==== -| APM Server binary | `apm-server.rum.allow_headers` -| Fleet-managed | `Access-Control-Allow-Headers` -|==== - -[float] -[[rum-response-headers]] -== Custom HTTP response headers -Custom HTTP headers to add to RUM responses. -This can be useful for security policy compliance. - -Values set for the same key will be concatenated. - -Default: none. (text) - -|==== -| APM Server binary | `apm-server.rum.response_headers` -| Fleet-managed | `Custom HTTP response headers` -|==== - -[float] -[[rum-library-pattern]] -== Library Frame Pattern -RegExp to be matched against a stack trace frame's `file_name` and `abs_path` attributes. -If the RegExp matches, the stack trace frame is considered to be a library frame. -When source mapping is applied, the `error.culprit` is set to reflect the _function_ and the _filename_ -of the first non library frame. -This aims to provide an entry point for identifying issues. - -Default: `"node_modules|bower_components|~"`. (text) - -|==== -| APM Server binary | `apm-server.rum.library_pattern` -| Fleet-managed | `Library Frame Pattern` -|==== - -[float] -== Exclude from grouping -RegExp to be matched against a stack trace frame's `file_name`. -If the RegExp matches, the stack trace frame is excluded from being used for calculating error groups. - -Default: `"^/webpack"` (excludes stack trace frames that have a filename starting with `/webpack`). (text) - -|==== -| APM Server binary | `apm-server.rum.exclude_from_grouping` -| Fleet-managed | `Exclude from grouping` -|==== - - -[float] -[[rum-source-map]] -= Source map configuration options - -**** -image:./binary-yes-fm-no.svg[supported deployment methods] - -Source maps are supported by all APM Server deployment methods, however, -the options in this section are only supported by the APM Server binary. -**** - -[[config-sourcemapping-enabled]] -[float] -== `source_mapping.enabled` -Used to enable/disable <> for RUM events. -When enabled, the APM Server needs additional privileges to read source maps. -See <> for more details. - -Default: `true` - -[[config-sourcemapping-elasticsearch]] -[float] -== `source_mapping.elasticsearch` -Configure the {es} source map retrieval location, taking the same options as <>. -This must be set when using an output other than {es}, and that output is writing to {es}. -Otherwise leave this section empty. - -[[rum-sourcemap-cache]] -[float] -== `source_mapping.cache.expiration` -If a source map has been uploaded to the APM Server, -<> is automatically applied to documents sent to the RUM endpoint. -Source maps are fetched from {es} and then kept in an in-memory cache for the configured time. -Values configured without a time unit are treated as seconds. 
- -Default: `5m` (5 minutes) - -[float] -== `source_mapping.index_pattern` -Previous versions of APM Server stored source maps in `apm-%{[observer.version]}-sourcemap` indices. -Search source maps stored in an older version with this setting. - -Default: `"apm-*-sourcemap*"` - -[float] -[[rum-deprecated]] -= Deprecated configuration options - -[float] -[[event_rate.limit]] -== `event_rate.limit` - -deprecated::[7.15.0, Replaced by <>.] - -The maximum number of events allowed per second, per agent IP address. - -Default: `300` - -[float] -== `event_rate.lru_size` - -deprecated::[7.15.0, Replaced by <>.] - -The number of unique IP addresses to track in an LRU cache. -IP addresses in the cache will be rate limited according to the <> setting. -Consider increasing this default if your site has many concurrent clients. - -Default: `1000` - -[float] -[[rum-allow-service-names]] -== `allow_service_names` - -deprecated::[7.15.0, Replaced by <>.] -A list of permitted service names for RUM support. -Names in this list must match the agent's `service.name`. -This can be set to restrict RUM events to those with one of a set of known service names, -in order to limit the number of service-specific indices or data streams created. - -Default: Not set (any service name is accepted) - -[float] -= Ingest pipelines - -The default APM Server pipeline includes processors that enrich RUM data prior to indexing in {es}. -See <> for details on how to locate, edit, or disable this preprocessing. \ No newline at end of file diff --git a/docs/configure/sampling.asciidoc b/docs/configure/sampling.asciidoc deleted file mode 100644 index 9a4e78fe83b..00000000000 --- a/docs/configure/sampling.asciidoc +++ /dev/null @@ -1,138 +0,0 @@ -[[tail-based-samling-config]] -= Tail-based sampling - -**** -image:./binary-yes-fm-yes.svg[supported deployment methods] - -Most options on this page are supported by all APM Server deployment methods. -**** - -Tail-based sampling configuration options. - -include::./tab-widgets/sampling-config-widget.asciidoc[] - -[float] -[[configuration-tbs]] -= Top-level tail-based sampling settings - -See <> to learn more. - -:input-type: ref -// tag::tbs-top[] - -[float] -[id="sampling-tail-enabled-{input-type}"] -== Enable tail-based sampling -Set to `true` to enable tail based sampling. -Disabled by default. (bool) - -|==== -| APM Server binary | `sampling.tail.enabled` -| Fleet-managed | `Enable tail-based sampling` -|==== - -[float] -[id="sampling-tail-interval-{input-type}"] -== Interval -Synchronization interval for multiple APM Servers. -Should be in the order of tens of seconds or low minutes. -Default: `1m` (1 minute). (duration) - -|==== -| APM Server binary | `sampling.tail.interval` -| Fleet-managed | `Interval` -|==== - -[float] -[id="sampling-tail-policies-{input-type}"] -== Policies -Criteria used to match a root transaction to a sample rate. - -Policies map trace events to a sample rate. -Each policy must specify a sample rate. -Trace events are matched to policies in the order specified. -All policy conditions must be true for a trace event to match. -Each policy list should conclude with a policy that only specifies a sample rate. -This final policy is used to catch remaining trace events that don't match a stricter policy. 
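For the APM Server binary, a policy list following this pattern might look like the sketch below; the service name, environment, and rates are illustrative, not recommendations:

["source","yaml"]
----
sampling.tail:
  enabled: true
  policies:
    - service.name: my-frontend      # hypothetical service
      trace.outcome: failure
      sample_rate: 1.0               # keep every failed trace from this service
    - service.environment: production
      sample_rate: 0.1
    - sample_rate: 0.01              # final catch-all policy
----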
-(`[]policy`)
-
-|====
-| APM Server binary | `sampling.tail.policies`
-| Fleet-managed | `Policies`
-|====
-
-[float]
-[id="sampling-tail-storage_limit-{input-type}"]
-== Storage limit
-The amount of storage space allocated for trace events matching tail sampling policies.
-Caution: Setting this limit higher than the disk space actually available may cause APM Server to become unhealthy.
-
-When the storage limit is reached, APM Server logs "configured storage limit reached",
-and incoming events bypass sampling and are always indexed.
-
-Default: `3GB`. (text)
-
-|====
-| APM Server binary | `sampling.tail.storage_limit`
-| Fleet-managed | `Storage limit`
-|====
-
-// end::tbs-top[]
-
-[float]
-[[configuration-tbs-policy]]
-= Policy-level tail-based sampling settings
-
-See <> to learn more.
-
-// tag::tbs-policy[]
-
-[float]
-[id="sampling-tail-sample-rate-{input-type}"]
-== Sample rate
-
-**`sample_rate`**
-
-The sample rate to apply to trace events matching this policy.
-Required in each policy.
-
-The sample rate must be greater than or equal to `0` and less than or equal to `1`.
-For example, a `sample_rate` of `0.01` means that 1% of trace events matching the policy will be sampled.
-A `sample_rate` of `1` means that 100% of trace events matching the policy will be sampled. (float)
-
-[float]
-[id="sampling-tail-trace-name-{input-type}"]
-== Trace name
-
-**`trace.name`**
-
-The trace name for events to match a policy.
-A match occurs when the configured `trace.name` matches the `transaction.name` of the root transaction of a trace.
-A root transaction is any transaction without a `parent.id`. (string)
-
-[float]
-[id="sampling-tail-trace-outcome-{input-type}"]
-== Trace outcome
-
-**`trace.outcome`**
-
-The trace outcome for events to match a policy.
-A match occurs when the configured `trace.outcome` matches a trace's `event.outcome` field.
-Trace outcome can be `success`, `failure`, or `unknown`. (string)
-
-[float]
-[id="sampling-tail-service-name-{input-type}"]
-== Service name
-
-**`service.name`**
-
-The service name for events to match a policy. (string)
-
-[float]
-[id="sampling-tail-service-environment-{input-type}"]
-== Service environment
-
-**`service.environment`**
-
-The service environment for events to match a policy. (string)
-
-// end::tbs-policy[]
-:!input-type:
diff --git a/docs/configure/shared/input-apm.asciidoc b/docs/configure/shared/input-apm.asciidoc
deleted file mode 100644
index 2f3b13904ba..00000000000
--- a/docs/configure/shared/input-apm.asciidoc
+++ /dev/null
@@ -1,8 +0,0 @@
-
-// tag::fleet-managed-settings[]
-Configure and customize Fleet-managed APM settings directly in {kib}:
-
-. Open {kib} and navigate to **{fleet}**.
-. Under the **Agent policies** tab, select the policy you would like to configure.
-. Find the Elastic APM integration and select **Actions** > **Edit integration**.
-// end::fleet-managed-settings[]
diff --git a/docs/configure/tab-widgets/anon-auth-widget.asciidoc b/docs/configure/tab-widgets/anon-auth-widget.asciidoc
deleted file mode 100644
index 16746820e42..00000000000
--- a/docs/configure/tab-widgets/anon-auth-widget.asciidoc
+++ /dev/null
@@ -1,40 +0,0 @@
-++++
-++++ - -include::anon-auth.asciidoc[tag=binary] - -++++ -
-++++ \ No newline at end of file diff --git a/docs/configure/tab-widgets/anon-auth.asciidoc b/docs/configure/tab-widgets/anon-auth.asciidoc deleted file mode 100644 index f8f1a29c117..00000000000 --- a/docs/configure/tab-widgets/anon-auth.asciidoc +++ /dev/null @@ -1,18 +0,0 @@ -// tag::binary[] -Example configuration: - -["source","yaml"] ----- -apm-server.auth.anonymous.enabled: true -apm-server.auth.anonymous.allow_agent: [rum-js] -apm-server.auth.anonymous.allow_service: [my_service_name] -apm-server.auth.anonymous.rate_limit.event_limit: 300 -apm-server.auth.anonymous.rate_limit.ip_limit: 1000 ----- -// end::binary[] - -// tag::fleet-managed[] -include::../shared/input-apm.asciidoc[tag=fleet-managed-settings] -+ -. Look for these settings under **Agent authorization**. -// end::fleet-managed[] diff --git a/docs/configure/tab-widgets/auth-config-widget.asciidoc b/docs/configure/tab-widgets/auth-config-widget.asciidoc deleted file mode 100644 index d81425414e7..00000000000 --- a/docs/configure/tab-widgets/auth-config-widget.asciidoc +++ /dev/null @@ -1,40 +0,0 @@ -++++ -
-++++ - -include::auth-config.asciidoc[tag=binary] - -++++ -
-++++ \ No newline at end of file diff --git a/docs/configure/tab-widgets/auth-config.asciidoc b/docs/configure/tab-widgets/auth-config.asciidoc deleted file mode 100644 index fd256c3124b..00000000000 --- a/docs/configure/tab-widgets/auth-config.asciidoc +++ /dev/null @@ -1,23 +0,0 @@ -// tag::binary[] -**Example config file:** - -[source,yaml] ----- -apm-server: - host: "localhost:8200" - rum: - enabled: true - -output: - elasticsearch: - hosts: ElasticsearchAddress:9200 - -max_procs: 4 ----- -// end::binary[] - -// tag::fleet-managed[] -include::../shared/input-apm.asciidoc[tag=fleet-managed-settings] -+ -. Look for these settings under **Agent authorization**. -// end::fleet-managed[] diff --git a/docs/configure/tab-widgets/general-config-widget.asciidoc b/docs/configure/tab-widgets/general-config-widget.asciidoc deleted file mode 100644 index c543b4e77e4..00000000000 --- a/docs/configure/tab-widgets/general-config-widget.asciidoc +++ /dev/null @@ -1,40 +0,0 @@ -++++ -
-++++ - -include::general-config.asciidoc[tag=binary] - -++++ -
-++++ \ No newline at end of file diff --git a/docs/configure/tab-widgets/general-config.asciidoc b/docs/configure/tab-widgets/general-config.asciidoc deleted file mode 100644 index 8c34c7eca81..00000000000 --- a/docs/configure/tab-widgets/general-config.asciidoc +++ /dev/null @@ -1,19 +0,0 @@ -// tag::binary[] -**Example config file:** - -[source,yaml] ----- -apm-server: - host: "localhost:8200" - rum: - enabled: true - -max_procs: 4 ----- -// end::binary[] - -// tag::fleet-managed[] -include::../shared/input-apm.asciidoc[tag=fleet-managed-settings] -+ -. Look for these settings under **General**. -// end::fleet-managed[] diff --git a/docs/configure/tab-widgets/rum-config-widget.asciidoc b/docs/configure/tab-widgets/rum-config-widget.asciidoc deleted file mode 100644 index 192121fc5b7..00000000000 --- a/docs/configure/tab-widgets/rum-config-widget.asciidoc +++ /dev/null @@ -1,40 +0,0 @@ -++++ -
-++++ - -include::rum-config.asciidoc[tag=binary] - -++++ -
-++++ \ No newline at end of file diff --git a/docs/configure/tab-widgets/rum-config.asciidoc b/docs/configure/tab-widgets/rum-config.asciidoc deleted file mode 100644 index 9e194624aca..00000000000 --- a/docs/configure/tab-widgets/rum-config.asciidoc +++ /dev/null @@ -1,28 +0,0 @@ -// tag::binary[] -To enable RUM support, set `apm-server.rum.enabled` to `true` in your APM Server configuration file. - -Example config: - -["source","yaml"] ----- -apm-server.rum.enabled: true -apm-server.auth.anonymous.rate_limit.event_limit: 300 -apm-server.auth.anonymous.rate_limit.ip_limit: 1000 -apm-server.auth.anonymous.allow_service: [your_service_name] -apm-server.rum.allow_origins: ['*'] -apm-server.rum.allow_headers: ["header1", "header2"] -apm-server.rum.library_pattern: "node_modules|bower_components|~" -apm-server.rum.exclude_from_grouping: "^/webpack" -apm-server.rum.source_mapping.enabled: true -apm-server.rum.source_mapping.cache.expiration: 5m -apm-server.rum.source_mapping.elasticsearch.api_key: TiNAGG4BaaMdaH1tRfuU:KnR6yE41RrSowb0kQ0HWoA ----- -// end::binary[] - -// tag::fleet-managed[] -To enable RUM, set <> to `true`. - -include::../shared/input-apm.asciidoc[tag=fleet-managed-settings] -+ -. Look for these options under **Real User Monitoring**. -// end::fleet-managed[] diff --git a/docs/configure/tab-widgets/sampling-config-widget.asciidoc b/docs/configure/tab-widgets/sampling-config-widget.asciidoc deleted file mode 100644 index 902636efb3d..00000000000 --- a/docs/configure/tab-widgets/sampling-config-widget.asciidoc +++ /dev/null @@ -1,40 +0,0 @@ -++++ -
-++++ - -include::sampling-config.asciidoc[tag=binary] - -++++ -
-++++ \ No newline at end of file diff --git a/docs/configure/tab-widgets/sampling-config.asciidoc b/docs/configure/tab-widgets/sampling-config.asciidoc deleted file mode 100644 index 2b1a70d0fd4..00000000000 --- a/docs/configure/tab-widgets/sampling-config.asciidoc +++ /dev/null @@ -1,23 +0,0 @@ -// tag::binary[] -**Example config file:** - -[source,yaml] ----- -apm-server: - host: "localhost:8200" - rum: - enabled: true - -output: - elasticsearch: - hosts: ElasticsearchAddress:9200 - -max_procs: 4 ----- -// end::binary[] - -// tag::fleet-managed[] -include::../shared/input-apm.asciidoc[tag=fleet-managed-settings] -+ -. Look for these options under **Tail-based sampling**. -// end::fleet-managed[] diff --git a/docs/configure/tls.asciidoc b/docs/configure/tls.asciidoc deleted file mode 100644 index 217ef7f7f02..00000000000 --- a/docs/configure/tls.asciidoc +++ /dev/null @@ -1,15 +0,0 @@ -[[configuration-ssl-landing]] -= SSL/TLS settings - -SSL/TLS is available for: - -* <> (APM Agents) -* <> that support SSL, like {es}, {ls}, or Kafka. - -Additional information on getting started with SSL/TLS is available in <>. - -// :leveloffset: +2 -include::{docdir}/shared-ssl-config.asciidoc[] -// :leveloffset: -2 - -include::{docdir}/ssl-input-settings.asciidoc[leveloffset=-1] \ No newline at end of file diff --git a/docs/cross-cluster-search.asciidoc b/docs/cross-cluster-search.asciidoc deleted file mode 100644 index 8ae95da9b53..00000000000 --- a/docs/cross-cluster-search.asciidoc +++ /dev/null @@ -1,46 +0,0 @@ -[[cross-cluster-search]] -=== Cross-cluster search - -Elastic APM utilizes {es}'s cross-cluster search functionality. -Cross-cluster search lets you run a single search request against one or more -{ref}/modules-remote-clusters.html[remote clusters] -- -making it easy to search APM data across multiple sources. -This means you can also have deployments per data type, making sizing and scaling more predictable, -and allowing for better performance while managing multiple observability use cases. - -[float] -[[set-up-cross-cluster-search]] -==== Set up cross-cluster search - -*Step 1. Set up remote clusters.* - -If you're using the Hosted {ess}, see {cloud}/ec-enable-ccs.html[Enable cross-cluster search]. - -// lint ignore elasticsearch -You can add remote clusters directly in {kib}, under *Management* > *Elasticsearch* > *Remote clusters*. -All you need is a name for the remote cluster and the seed node(s). -Remember the names of your remote clusters, you'll need them in step two. -See {ref}/ccr-getting-started.html[managing remote clusters] for detailed information on the setup process. - -Alternatively, you can {ref}/modules-remote-clusters.html#configuring-remote-clusters[configure remote clusters] -in {es}'s `elasticsearch.yml` file. - -*Step 2. Edit the default {apm-app} {data-sources}.* - -{apm-app} {data-sources} determine which clusters and indices to display data from. -{data-sources-cap} follow this convention: `:`. - -To display data from all remote clusters and the local cluster, -duplicate and prepend the defaults with `*:`. -For example, the default {data-source} for Error indices is `logs-apm*,apm*`. -To add all remote clusters, change this to `*:logs-apm*,*:apm*,logs-apm*,apm*` - -You can also specify certain clusters to display data from, for example, -`cluster-one:logs-apm*,cluster-one:apm*,logs-apm*,apm*`. 
-
-There are two ways to edit the default {data-source}:
-
-* In the {apm-app} -- Navigate to *APM* > *Settings* > *Indices*, and change all `xpack.apm.indices.*` values to
-include remote clusters.
-* In `kibana.yml` -- Update the {kibana-ref}/apm-settings-kb.html[`xpack.apm.indices.*`] configuration values to
-include remote clusters.
diff --git a/docs/custom-index-template.asciidoc b/docs/custom-index-template.asciidoc
deleted file mode 100644
index aa18b1f16dc..00000000000
--- a/docs/custom-index-template.asciidoc
+++ /dev/null
@@ -1,109 +0,0 @@
-//////////////////////////////////////////////////////////////////////////
-// This content is reused in the Legacy ILM documentation
-// ids look like this
-// [id="name-name{append-legacy}"]
-//////////////////////////////////////////////////////////////////////////
-
-[[custom-index-template]]
-=== View the {es} index template
-
-:append-legacy:
-// tag::index-template-integration[]
-
-Index templates are used to configure the backing indices of data streams as they are created.
-These index templates are composed of multiple component templates--reusable building blocks
-that configure index mappings, settings, and aliases.
-
-The default APM index templates can be viewed in {kib}.
-Navigate to **{stack-manage-app}** → **Index Management** → **Index Templates**, and search for `apm`.
-Select any of the APM index templates to view their relevant component templates.
-
-[discrete]
-[id="index-template-view{append-legacy}"]
-=== Edit the {es} index template
-
-WARNING: Custom index mappings may conflict with the mappings defined by the APM integration
-and may break the APM integration and {apm-app} in {kib}.
-Do not change or customize any default mappings.
-
-When you install the APM integration, {fleet} creates a default `@custom` component template for each data stream.
-You can edit this `@custom` component template to customize your {es} indices.
-
-First, determine which <> you'd like to edit.
-Then, open {kib} and navigate to **{stack-manage-app}** → **Index Management** → **Component Templates**.
-
-Custom component templates are named following this pattern: `<name_of_data_stream>@custom`.
-Search for the name of the data stream, like `traces-apm`, and select its custom component template.
-In this example, that'd be `traces-apm@custom`.
-Then click **Manage** → **Edit**.
-
-Add any custom metadata, index settings, or mappings.
-
-[discrete]
-[[custom-index-template-index-settings]]
-==== Index settings
-
-In the **Index settings** step, you can specify custom {ref}/index-modules.html#index-modules-settings[index settings].
-For example, you could:
-
-* Customize the index lifecycle policy applied to a data stream.
-See <> for a walk-through.
-
-* Change the number of {ref}/scalability.html[shards] per index.
-Specify the number of primary shards:
-+
-[source,json]
-----
-{
-  "settings": {
-    "number_of_shards": "4"
-  }
-}
-----
-
-* Change the number of {ref}/docs-replication.html[replicas] per index.
-Specify the number of replica shards:
-+
-[source,json]
-----
-{
-  "index": {
-    "number_of_replicas": "2"
-  }
-}
-----
-
-[discrete]
-[[custom-index-template-mappings]]
-==== Mappings
-
-{ref}/mapping.html[Mapping] is the process of defining how a document, and the fields it contains, are stored and indexed.
-In the *Mappings* step, you can add custom field mappings.
-For example, you could:
-
-* Add custom field mappings that you can index on and search.
-In the *Mapped fields* tab, add a new field including the {ref}/mapping-types.html[field type]: -+ -image::images/custom-index-template-mapped-fields.png[Editing a component template to add a new mapped field] - -* Add a {ref}/runtime.html[runtime field] that is evaluated at query time. -In the *Runtime fields* tab, click *Create runtime field* and provide a field name, -type, and optionally a script: -+ -image::images/custom-index-template-runtime-fields.png[Editing a component template to add a new runtime field] - -[discrete] -[[custom-index-template-rollover]] -=== Roll over the data stream - -Changes to component templates are not applied retroactively to existing indices. -For changes to take effect, you must create a new write index for the data stream. -This can be done with the {es} {ref}/indices-rollover-index.html[Rollover API]. -For example, to roll over the `traces-apm-default` data stream, run: - -[source,console] ----- -POST /traces-apm-default/_rollover/ ----- - -// end::index-template-integration[] diff --git a/docs/data-ingestion.asciidoc b/docs/data-ingestion.asciidoc deleted file mode 100644 index cbe7c07cf56..00000000000 --- a/docs/data-ingestion.asciidoc +++ /dev/null @@ -1,72 +0,0 @@ -[[tune-data-ingestion]] -=== Tune data ingestion - -This section explains how to adapt data ingestion according to your needs. - -[float] -[[tune-apm-server]] -=== Tune APM Server - -* <> -* <> -* <> - -[[add-apm-server-instances]] -[float] -==== Add APM Server or {agent} instances - -If the APM Server cannot process data quickly enough, -you will see request timeouts. -One way to solve this problem is to increase processing power. - -Increase processing power by either migrating to a more powerful machine -or adding more APM Server/Elastic Agent instances. -Having several instances will also increase <>. - -[[reduce-payload-size]] -[float] -==== Reduce the payload size - -Large payloads may result in request timeouts. -You can reduce the payload size by decreasing the flush interval in the agents. -This will cause agents to send smaller and more frequent requests. - -Optionally you can also <> or <>. - -Read more in the {apm-agents-ref}/index.html[agents documentation]. - -[[adjust-event-rate]] -[float] -==== Adjust anonymous auth rate limit - -Agents make use of long running requests and flush as many events over a single request as possible. -Thus, the rate limiter for anonymous authentication is bound to the number of _events_ sent per second, per IP. - -If the event rate limit is hit while events on an established request are sent, the request is not immediately terminated. The intake of events is only throttled to anonymous event rate limit, which means that events are queued and processed slower. Only when the allowed buffer queue is also full, does the request get terminated with a `429 - rate limit exceeded` HTTP response. If an agent tries to establish a new request, but the rate limit is already hit, a `429` will be sent immediately. - -Increasing the default value for the following configuration variable will help avoid `rate limit exceeded` errors: - -|==== -| APM Server binary | <> -| Fleet-managed | `Anonymous Event rate limit (event limit)` -|==== - -[float] -[[apm-tune-elasticsearch]] -=== Tune {es} - -The {es} Reference provides insight on tuning {es}. 
- -{ref}/tune-for-indexing-speed.html[Tune for indexing speed] provides information on: - -* Refresh interval -* Disabling swapping -* Optimizing file system cache -* Considerations regarding faster hardware -* Setting the indexing buffer size - -{ref}/tune-for-disk-usage.html[Tune for disk usage] provides information on: - -* Disabling unneeded features -* Shard size -* Shrink index diff --git a/docs/data-model.asciidoc b/docs/data-model.asciidoc deleted file mode 100644 index 20e0af115df..00000000000 --- a/docs/data-model.asciidoc +++ /dev/null @@ -1,677 +0,0 @@ -:span-name-type-sheet: https://docs.google.com/spreadsheets/d/1SmWeX5AeqUcayrArUauS_CxGgsjwRgMYH4ZY8yQsMhQ/edit#gid=644582948 -:span-spec: https://github.com/elastic/apm/blob/main/tests/agents/json-specs/span_types.json - -[[data-model]] -== Data Model - -Elastic APM agents capture different types of information from within their instrumented applications. -These are known as events, and can be `spans`, `transactions`, `errors`, or `metrics`. - -* <> -* <> -* <> -* <> - -Events can contain additional <> which further enriches your data. - -[[data-model-spans]] -=== Spans - -*Spans* contain information about the execution of a specific code path. -They measure from the start to the end of an activity, -and they can have a parent/child relationship with other spans. - -Agents automatically instrument a variety of libraries to capture these spans from within your application, -but you can also use the Agent API for custom instrumentation of specific code paths. - -Among other things, spans can contain: - -* A `transaction.id` attribute that refers to its parent <>. -* A `parent.id` attribute that refers to its parent span or transaction. -* Its start time and duration. -* A `name`, `type`, `subtype`, and `action`—see the {span-name-type-sheet}[span name/type alignment] -sheet for span name patterns and examples by {apm-agent}. -In addition, some APM agents test against a public {span-spec}[span type/subtype spec]. -* An optional `stack trace`. Stack traces consist of stack frames, -which represent a function call on the call stack. -They include attributes like function name, file name and path, line number, etc. - -TIP: Most agents limit keyword fields, like `span.id`, to 1024 characters, -and non-keyword fields, like `span.start.us`, to 10,000 characters. - -[float] -[[data-model-dropped-spans]] -==== Dropped spans - -For performance reasons, APM agents can choose to sample or omit spans purposefully. -This can be useful in preventing edge cases, like long-running transactions with over 100 spans, -that would otherwise overload both the Agent and the APM Server. -When this occurs, the {apm-app} will display the number of spans dropped. 
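As a quick illustration ahead of the full per-agent list that follows, a minimal sketch for the Go agent, which reads this limit from the `ELASTIC_APM_TRANSACTION_MAX_SPANS` environment variable (the value `500` here is an arbitrary example):

["source","sh"]
----
# Sketch: cap the number of spans recorded per transaction for a Go service
# instrumented with the Elastic APM Go agent. 500 is an example value;
# spans beyond this limit are dropped and counted as dropped spans.
export ELASTIC_APM_TRANSACTION_MAX_SPANS=500
----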
- -To configure the number of spans recorded per transaction, see the relevant Agent documentation: - -* Go: {apm-go-ref-v}/configuration.html#config-transaction-max-spans[`ELASTIC_APM_TRANSACTION_MAX_SPANS`] -* iOS: _Not yet supported_ -* Java: {apm-java-ref-v}/config-core.html#config-transaction-max-spans[`transaction_max_spans`] -* .NET: {apm-dotnet-ref-v}/config-core.html#config-transaction-max-spans[`TransactionMaxSpans`] -* Node.js: {apm-node-ref-v}/configuration.html#transaction-max-spans[`transactionMaxSpans`] -* PHP: {apm-php-ref-v}/configuration-reference.html#config-transaction-max-spans[`transaction_max_spans`] -* Python: {apm-py-ref-v}/configuration.html#config-transaction-max-spans[`transaction_max_spans`] -* Ruby: {apm-ruby-ref-v}/configuration.html#config-transaction-max-spans[`transaction_max_spans`] - -[float] -[[data-model-missing-spans]] -==== Missing spans - -Agents stream spans to the APM Server separately from their transactions. -Because of this, unforeseen errors may cause spans to go missing. -Agents know how many spans a transaction should have; -if the number of expected spans does not equal the number of spans received by the APM Server, -the {apm-app} will calculate the difference and display a message. - -[float] -==== Data streams - -Spans are stored with transactions in the following data streams: - -include::./data-streams.asciidoc[tag=traces-data-streams] - -See <> to learn more. - -[float] -==== Example span document - -This example shows what span documents can look like when indexed in {es}. - -[%collapsible] -.Expand {es} document -==== -[source,json] ----- -include::./data/elasticsearch/generated/spans.json[] ----- -==== - -[[data-model-transactions]] -=== Transactions - -*Transactions* are a special kind of <> that have additional attributes associated with them. -They describe an event captured by an Elastic {apm-agent} instrumenting a service. -You can think of transactions as the highest level of work you’re measuring within a service. -As an example, a transaction might be a: - -* Request to your server -* Batch job -* Background job -* Custom transaction type - -Agents decide whether to sample transactions or not, -and provide settings to control sampling behavior. -If sampled, the <> of a transaction are sent and stored as separate documents. -Within one transaction there can be 0, 1, or many spans captured. - -A transaction contains: - -* The timestamp of the event -* A unique id, type, and name -* Data about the environment in which the event is recorded: -** Service - environment, framework, language, etc. -** Host - architecture, hostname, IP, etc. -** Process - args, PID, PPID, etc. -** URL - full, domain, port, query, etc. -** <> - (if supplied) email, ID, username, etc. -* Other relevant information depending on the agent. Example: The JavaScript RUM agent captures transaction marks, -which are points in time relative to the start of the transaction with some label. - -In addition, agents provide options for users to capture custom <>. -Metadata can be indexed - <>, or not-indexed - <>. - -Transactions are grouped by their `type` and `name` in the APM UI's -{kibana-ref}/transactions.html[Transaction overview]. -If you're using a supported framework, APM agents will automatically handle the naming for you. -If you're not, or if you wish to override the default, -all agents have API methods to manually set the `type` and `name`. - -* `type` should be a keyword of specific relevance in the service's domain, -e.g. 
`request`, `backgroundjob`, etc. -* `name` should be a generic designation of a transaction in the scope of a single service, -e.g. `GET /users/:id`, `UsersController#show`, etc. - -TIP: Most agents limit keyword fields (e.g. `labels`) to 1024 characters, -non-keyword fields (e.g. `span.db.statement`) to 10,000 characters. - -[float] -==== Data streams - -Transactions are stored with spans in the following data streams: - -include::./data-streams.asciidoc[tag=traces-data-streams] - -See <> to learn more. - -[float] -==== Example transaction document - -This example shows what transaction documents can look like when indexed in {es}. - -[%collapsible] -.Expand {es} document -==== -[source,json] ----- -include::./data/elasticsearch/generated/transactions.json[] ----- -==== - -[[data-model-errors]] -=== Errors - -An error event contains at least -information about the original `exception` that occurred -or about a `log` created when the exception occurred. -For simplicity, errors are represented by a unique ID. - -An Error contains: - -* Both the captured `exception` and the captured `log` of an error can contain a `stack trace`, -which is helpful for debugging. -* The `culprit` of an error indicates where it originated. -* An error might relate to the <> during which it happened, -via the `transaction.id`. -* Data about the environment in which the event is recorded: -** Service - environment, framework, language, etc. -** Host - architecture, hostname, IP, etc. -** Process - args, PID, PPID, etc. -** URL - full, domain, port, query, etc. -** <> - (if supplied) email, ID, username, etc. - -In addition, agents provide options for users to capture custom <>. -Metadata can be indexed - <>, or not-indexed - <>. - -TIP: Most agents limit keyword fields (e.g. `error.id`) to 1024 characters, -non-keyword fields (e.g. `error.exception.message`) to 10,000 characters. - -Errors are stored in error indices. - -[float] -==== Data streams - -Errors are stored in the following data streams: - -include::./data-streams.asciidoc[tag=logs-data-streams] - -See <> to learn more. - -[float] -==== Example error document - -This example shows what error documents can look like when indexed in {es}. - -[%collapsible] -.Expand {es} document -==== -[source,json] ----- -include::./data/elasticsearch/generated/errors.json[] ----- -==== - -[[data-model-metrics]] -=== Metrics - -**Metrics** measure the state of a system by gathering information on a regular interval. There are two types of APM metrics: - -* **System metrics**: Basic infrastructure and application metrics. -* **Calculated metrics**: Aggregated trace event metrics used to power visualizations in the {apm-app}. - -[float] -==== System metrics - -APM agents automatically pick up basic host-level metrics, -including system and process-level CPU and memory metrics. -Agent specific metrics are also available, -like {apm-java-ref-v}/metrics.html[JVM metrics] in the Java Agent, -and {apm-go-ref-v}/metrics.html[Go runtime] metrics in the Go Agent. - -Infrastructure and application metrics are important sources of information when debugging production systems, -which is why we've made it easy to filter metrics for specific hosts or containers in the {kib} {kibana-ref}/metrics.html[metrics overview]. - -TIP: Most agents limit keyword fields to 1024 characters, -non-keyword fields (e.g. `system.memory.total`) to 10,000 characters. - -Metrics are stored in metric indices. 
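As a minimal sketch of what this looks like in practice, you can pull a raw system metric document from {es} with a {dev-tools-app} console request; the `metrics-apm*` index pattern and the `system.memory.total` field referenced above are assumptions about your deployment:

[source,console]
----
GET metrics-apm*/_search
{
  "size": 1,
  "query": {
    "exists": { "field": "system.memory.total" }
  }
}
----

The `exists` filter simply restricts the result to documents that carry a host-level memory field.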
-
-For a full list of tracked metrics, see the relevant agent documentation:
-
-* {apm-go-ref-v}/metrics.html[Go]
-* {apm-java-ref-v}/metrics.html[Java]
-* {apm-node-ref-v}/metrics.html[Node.js]
-* {apm-py-ref-v}/metrics.html[Python]
-* {apm-ruby-ref-v}/metrics.html[Ruby]
-
-[float]
-===== Example system metric document
-
-This example shows what system metric documents can look like when indexed in {es}.
-
-[%collapsible]
-.Expand {es} document
-====
-
-This example contains JVM metrics produced by the {apm-java-agent},
-including two related metrics: `jvm.gc.time` and `jvm.gc.count`. These are accompanied by various fields describing
-the environment in which the metrics were captured: service name, host name, Kubernetes pod UID, container ID, process ID, and more.
-These fields make it possible to search and aggregate across various dimensions, such as by service, host, and Kubernetes pod.
-
-[source,json]
-----
-include::./data/elasticsearch/metricset.json[]
-----
-====
-
-[float]
-==== Calculated metrics
-
-APM agents and APM Server calculate metrics from trace events to power visualizations in the {apm-app}.
-
-Calculated metrics are an implementation detail, and while we aim for stability in these data models,
-the dimensions and concrete limits for aggregations are subject to change within minor version updates.
-
-These metrics are described below.
-
-[float]
-===== Breakdown metrics
-
-To power the {apm-app-ref}/transactions.html[Time spent by span type] graph,
-agents collect summarized metrics about the timings of spans and transactions,
-broken down by span type.
-
-*`span.self_time.count`* and *`span.self_time.sum.us`*::
-+
---
-These metrics measure the "self-time" for a span type, and optional subtype,
-within a transaction group. Together these metrics can be used to calculate
-the average duration and percentage of time spent on each type of operation
-within a transaction group.
-
-These metric documents can be identified by searching for `metricset.name: span_breakdown`.
-
-You can filter and group by these dimensions:
-
-* `transaction.name`: The name of the enclosing transaction group, for example `GET /`
-* `transaction.type`: The type of the enclosing transaction, for example `request`
-* `span.type`: The type of the span, for example `app`, `template`, or `db`
-* `span.subtype`: The sub-type of the span, for example `mysql` (optional)
---
-
-[float]
-===== Example breakdown metric document
-
-This example shows what breakdown metric documents can look like when indexed in {es}.
-
-[%collapsible]
-.Expand {es} document
-====
-
-[source,json]
-----
-include::./data/elasticsearch/span_breakdown.json[]
-----
-====
-
-[float]
-===== Transaction metrics
-
-To power {kibana-ref}/xpack-apm.html[{apm-app}] visualizations,
-APM Server aggregates transaction events into latency distribution metrics.
-
-*`transaction.duration.summary`* and *`transaction.duration.histogram`*::
-+
---
-These metrics represent the latency summary and latency distribution of transaction groups,
-used to power transaction-oriented visualizations and analytics in Elastic APM.
-
-These metric documents can be identified by searching for `metricset.name: transaction`.
-
-You can filter and group by these dimensions (some of which are optional, for example `container.id`):
-
-* `agent.name`: The name of the {apm-agent} that instrumented the transaction, for example `java`
-* `cloud.account.id`: The cloud account id of the service that served the transaction
-* `cloud.account.name`: The cloud account name of the service that served the transaction
-* `cloud.availability_zone`: The cloud availability zone hosting the service instance that served the transaction
-* `cloud.machine.type`: The cloud machine type or instance type of the service that served the transaction
-* `cloud.project.id`: The cloud project identifier of the service that served the transaction
-* `cloud.project.name`: The cloud project name of the service that served the transaction
-* `cloud.provider`: The cloud provider hosting the service instance that served the transaction
-* `cloud.region`: The cloud region hosting the service instance that served the transaction
-* `cloud.service.name`: The cloud service name of the service that served the transaction
-* `container.id`: The container ID of the service that served the transaction
-* `event.outcome`: The outcome of the transaction, for example `success`
-* `faas.coldstart`: Whether the _serverless_ service that served the transaction had a cold start
-* `faas.id`: The unique identifier of the invoked serverless function
-* `faas.name`: The name of the lambda function
-* `faas.trigger.type`: The trigger type by which the lambda function of the service that served the transaction was executed
-* `faas.version`: The version of the lambda function
-* `host.hostname`: The detected hostname of the service that served the transaction
-* `host.name`: The user-defined name of the host or the detected hostname of the service that served the transaction
-* `host.os.platform`: The platform name of the service that served the transaction, for example `linux`
-* `kubernetes.pod.name`: The name of the Kubernetes pod running the service that served the transaction
-* `labels`: Key-value object containing string labels set globally by the APM agents.
-* `metricset.interval`: A string with the aggregation interval the metricset represents.
-* `numeric_labels`: Key-value object containing numeric labels set globally by the APM agents.
-* `service.environment`: The environment of the service that served the transaction
-* `service.language.name`: The language name of the service that served the transaction, for example `Go`
-* `service.language.version`: The language version of the service that served the transaction
-* `service.name`: The name of the service that served the transaction
-* `service.node.name`: The name of the service instance that served the transaction
-* `service.runtime.name`: The runtime name of the service that served the transaction, for example `jRuby`
-* `service.runtime.version`: The runtime version that served the transaction
-* `service.version`: The version of the service that served the transaction
-* `transaction.name`: The name of the transaction, for example `GET /`
-* `transaction.result`: The result of the transaction, for example `HTTP 2xx`
-* `transaction.root`: A boolean flag indicating whether the transaction is the root of a trace
-* `transaction.type`: The type of the transaction, for example `request`
---
-
-The `@timestamp` field of these documents holds the start of the aggregation interval.
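To illustrate how these dimensions can be used, a hedged console sketch that groups transaction metric documents by `transaction.name`; the field names come from the list above, while the `metrics-apm*` index pattern is an assumption:

[source,console]
----
GET metrics-apm*/_search
{
  "size": 0,
  "query": { "term": { "metricset.name": "transaction" } },
  "aggs": {
    "per_transaction_group": {
      "terms": { "field": "transaction.name" }
    }
  }
}
----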
- -[float] -===== Example transaction document - -This example shows what transaction documents can look like when indexed in {es}. - -[%collapsible] -.Expand {es} document -==== - -[source,json] ----- -include::./data/elasticsearch/transaction_metric.json[] ----- -==== - -[float] -===== Service-transaction metrics - -To power {kibana-ref}/xpack-apm.html[{apm-app}] visualizations, -APM Server aggregates transaction events into service-transaction metrics. -Service-transaction metrics are similar to transaction metrics, but they usually -have a much lower cardinality as they have significantly fewer dimensions. -The UI uses them when fewer details of the transactions are needed. - -*`transaction.duration.summary`* and *`transaction.duration.histogram`*:: -+ --- -These metrics represent the latency summary and latency distribution of service transaction groups, -used to power service-oriented visualizations and analytics in Elastic APM. - -These metric documents can be identified by searching for `metricset.name: service_transaction`. - -You can filter and group by these dimensions: - -* `agent.name`: The name of the {apm-agent} that instrumented the operation, for example `java` -* `labels`: Key-value object containing string labels set globally by the APM agents. -* `metricset.interval`: A string with the aggregation interval the metricset represents. -* `numeric_labels`: Key-value object containing numeric labels set globally by the APM agents. -* `service.environment`: The environment of the service that made the request -* `service.language.name`: The language name of the service that served the transaction, for example `Go` -* `service.name`: The name of the service that made the request -* `transaction.type`: The type of the enclosing transaction, for example `request` --- - -The `@timestamp` field of these documents holds the start of the aggregation interval. - -[float] -===== Example service-transaction document - -This example shows what service-transaction documents can look like when indexed in {es}. - -[%collapsible] -.Expand {es} document -==== - -[source,json] ----- -include::./data/elasticsearch/service_transaction_metric.json[] ----- -==== - -[float] -===== Service-destination metrics - -To power {kibana-ref}/xpack-apm.html[{apm-app}] visualizations, -APM Server aggregates span events into service-destination metrics. - -*`span.destination.service.response_time.count`* and *`span.destination.service.response_time.sum.us`*:: -+ --- -These metrics measure the count and total duration of requests from one service to another service. -These are used to calculate the throughput and latency of requests to backend services such as databases in -{kibana-ref}/service-maps.html[Service maps]. - -These metric documents can be identified by searching for `metricset.name: service_destination`. - -You can filter and group by these dimensions: - -* `agent.name`: The name of the {apm-agent} that instrumented the operation, for example `java` -* `event.outcome`: The outcome of the operation, for example `success` -* `labels`: Key-value object containing string labels set globally by the APM agents. -* `metricset.interval`: A string with the aggregation interval the metricset represents. -* `numeric_labels`: Key-value object containing numeric labels set globally by the APM agents. 
-* `service.environment`: The environment of the service that made the request -* `service.language.name`: The language name of the service that served the transaction, for example `Go` -* `service.name`: The name of the service that made the request -* `service.target.name`: The target service name, for example `customer_db` -* `service.target.type`: The target service type, for example `mysql` -* `span.destination.service.resource`: The destination service resource, for example `mysql` -* `span.name`: The name of the operation, for example `SELECT FROM table_name`. --- - -The `@timestamp` field of these documents holds the start of the aggregation interval. - -[float] -===== Example service-destination document - -This example shows what service-destination documents can look like when indexed in {es}. - -[%collapsible] -.Expand {es} document -==== - -[source,json] ----- -include::./data/elasticsearch/service_destination_metric.json[] ----- -==== - -[float] -===== Service-summary metrics - -To power {kibana-ref}/xpack-apm.html[{apm-app}] visualizations, -APM Server aggregates transaction, error, log, and metric events into service-summary metrics. - -These metric documents can be identified by searching for `metricset.name: service_summary`. - -You can filter and group by these dimensions: - -* `agent.name`: The name of the {apm-agent} that instrumented the operation, for example `java` -* `labels`: Key-value object containing string labels set globally by the APM agents. -* `metricset.interval`: A string with the aggregation interval the metricset represents. -* `numeric_labels`: Key-value object containing numeric labels set globally by the APM agents. -* `service.environment`: The environment of the service that made the request -* `service.language.name`: The language name of the service that served the transaction, for example `Go` -* `service.name`: The name of the service that made the request - -The `@timestamp` field of these documents holds the start of the aggregation interval. - -[float] -===== Example service-summary document - -This example shows what service-summary documents can look like when indexed in {es}. - -[%collapsible] -.Expand {es} document -==== - -[source,json] ----- -include::./data/elasticsearch/service_summary_metric.json[] ----- -==== - -[float] -==== Data streams - -Metrics are stored in the following data streams: - -include::./data-streams.asciidoc[tag=metrics-data-streams] - -See <> to learn more. - -[float] -==== Aggregated metrics: limits and overflows - -For all aggregated metrics, namely transaction, service-transaction, service-destination, and service-summary metrics, -there are limits on the number of unique groups tracked at any given time. - -[float] -===== Limits - -Note that all the below limits may change in the future with further improvements. - -* For all the following metrics, they share a limit of 1000 services per GB of APM Server. -** For transaction metrics, there is an additional limit of 5000 total transaction groups per GB of APM Server, -and each service may only consume up to 10% of the transaction groups, -which is 500 transaction groups per service per GB of APM Server. -** For service-transaction metrics, there is an additional limit of 1000 total service transaction groups per GB of APM Server, -and each service may only consume up to 10% of the service transaction groups, -which is 100 service transaction groups per service per GB of APM Server. 
-** For service-destination metrics, there is an additional limit on total service destination groups:
-10000 groups for a 1 GB APM Server, increasing by 5000 groups per additional GB of APM Server.
-Each service may only consume up to 10% of the service destination groups,
-which is 1000 groups for a 1 GB APM Server, increasing by 500 per additional GB of APM Server.
-** For service-summary metrics, there is no additional limit.
-
-In the above, a service is defined as a combination of `service.name`, `service.environment`, `service.language.name`, and `agent.name`.
-
-[float]
-===== Overflows
-
-When a dimension has a high cardinality and exceeds the limit, the metrics will be aggregated
-under a dedicated overflow bucket.
-
-For example, when `transaction.name` has a lot of unique values and reaches the limit
-of unique transaction groups tracked, any transactions with new `transaction.name` will be aggregated under
-`transaction.name`: `_other`.
-
-Another example of how the transaction group limit may be reached is if `transaction.name` contains just a few unique values,
-but the service is deployed on a lot of different hosts. As `host.name` is part of the aggregation key for transaction metrics,
-the max transaction group limit is reached for a service that has 100 instances, 10 different transaction names, and
-4 transaction results per transaction name when connected to an APM Server with 8 GB of RAM.
-Once this limit is reached, any new combinations of `transaction.name`, `transaction.result`, and
-`host.name` for that service will be aggregated under `transaction.name`: `_other`.
-
-This issue can be resolved by increasing the memory available to APM Server, or by ensuring that the dimensions do not use values
-that are based on parameters that can change. For example, user IDs, product IDs, order numbers, query parameters, etc.,
-should be stripped away from the dimensions.
-
-// This heading is linked to from the APM UI section in Kibana
-[[data-model-metadata]]
-=== Metadata
-
-Metadata can enrich your events and make application performance monitoring even more useful.
-Let's explore the different types of metadata that Elastic APM offers.
-
-[float]
-[[data-model-labels]]
-==== Labels
-
-Labels add *indexed* information to transactions, spans, and errors.
-Indexed means the data is searchable and aggregatable in {es}.
-Add additional key-value pairs to define multiple labels.
-
-* Indexed: Yes
-* {es} type: {ref}/object.html[object]
-* {es} field: `labels`
-* Applies to: <> | <> | <>
-
-Label values can be a string, boolean, or number, although some agents only support string values at this time.
-Because labels for a given key, regardless of agent used, are stored in the same place in {es},
-all label values of a given key must have the same data type.
-Multiple data types per key will throw an exception, for example: `{foo: bar}` and `{foo: 42}` is not allowed.
-
-IMPORTANT: Avoid defining too many user-specified labels.
-Defining too many unique fields in an index is a condition that can lead to a
-{ref}/mapping.html#mapping-limit-settings[mapping explosion].
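To illustrate that labels are indexed and therefore queryable, a minimal console sketch; the `organization_id` label key is hypothetical, and the `traces-apm*` index pattern is an assumption:

[source,console]
----
GET traces-apm*/_search
{
  "query": {
    "term": { "labels.organization_id": "12345" }
  }
}
----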
- -[float] -===== Agent API reference - -* Go: {apm-go-ref-v}/api.html#context-set-label[`SetLabel`] -* Java: {apm-java-ref-v}/public-api.html#api-transaction-add-tag[`setLabel`] -* .NET: {apm-dotnet-ref-v}/public-api.html#api-transaction-tags[`Labels`] -* Node.js: {apm-node-ref-v}/agent-api.html#apm-set-label[`setLabel`] | {apm-node-ref-v}/agent-api.html#apm-add-labels[`addLabels`] -* PHP: {apm-php-ref}/public-api.html#api-transaction-interface-set-label[`Transaction` `setLabel`] | {apm-php-ref}/public-api.html#api-span-interface-set-label[`Span` `setLabel`] -* Python: {apm-py-ref-v}/api.html#api-label[`elasticapm.label()`] -* Ruby: {apm-ruby-ref-v}/api.html#api-agent-set-label[`set_label`] -* Rum: {apm-rum-ref-v}/agent-api.html#apm-add-labels[`addLabels`] - -[float] -[[data-model-custom]] -==== Custom context - -Custom context adds *non-indexed*, -custom contextual information to transactions and errors. -Non-indexed means the data is not searchable or aggregatable in {es}, -and you cannot build dashboards on top of the data. -This also means you don't have to worry about {ref}/mapping.html#mapping-limit-settings[mapping explosions], -as these fields are not added to the mapping. - -Non-indexed information is useful for providing contextual information to help you -quickly debug performance issues or errors. - -* Indexed: No -* {es} type: {ref}/object.html[object] -* {es} fields: `transaction.custom` | `error.custom` -* Applies to: <> | <> - -IMPORTANT: Setting a circular object, a large object, or a non JSON serializable object can lead to errors. - -[float] -===== Agent API reference - -* Go: {apm-go-ref-v}/api.html#context-set-custom[`SetCustom`] -* iOS: _coming soon_ -* Java: {apm-java-ref-v}/public-api.html#api-transaction-add-custom-context[`addCustomContext`] -* .NET: _coming soon_ -* Node.js: {apm-node-ref-v}/agent-api.html#apm-set-custom-context[`setCustomContext`] -* PHP: _coming soon_ -* Python: {apm-py-ref-v}/api.html#api-set-custom-context[`set_custom_context`] -* Ruby: {apm-ruby-ref-v}/api.html#api-agent-set-custom-context[`set_custom_context`] -* Rum: {apm-rum-ref-v}/agent-api.html#apm-set-custom-context[`setCustomContext`] - -[float] -[[data-model-user]] -==== User context - -User context adds *indexed* user information to transactions and errors. -Indexed means the data is searchable and aggregatable in {es}. - -* Indexed: Yes -* {es} type: {ref}/keyword.html[keyword] -* {es} fields: `user.email` | `user.name` | `user.id` -* Applies to: <> | <> - -[float] -===== Agent API reference - -* Go: {apm-go-ref-v}/api.html#context-set-username[`SetUsername`] | {apm-go-ref-v}/api.html#context-set-user-id[`SetUserID`] | -{apm-go-ref-v}/api.html#context-set-user-email[`SetUserEmail`] -* iOS: _coming soon_ -* Java: {apm-java-ref-v}/public-api.html#api-transaction-set-user[`setUser`] -* .NET _coming soon_ -* Node.js: {apm-node-ref-v}/agent-api.html#apm-set-user-context[`setUserContext`] -* PHP: _coming soon_ -* Python: {apm-py-ref-v}/api.html#api-set-user-context[`set_user_context`] -* Ruby: {apm-ruby-ref-v}/api.html#api-agent-set-user[`set_user`] -* Rum: {apm-rum-ref-v}/agent-api.html#apm-set-user-context[`setUserContext`] diff --git a/docs/data-streams.asciidoc b/docs/data-streams.asciidoc deleted file mode 100644 index acacae9f4de..00000000000 --- a/docs/data-streams.asciidoc +++ /dev/null @@ -1,96 +0,0 @@ -[[apm-data-streams]] -=== Data streams - -**** -{agent} uses data streams to store append-only time series data across multiple indices. 
-Data streams are well-suited for logs, metrics, traces, and other continuously generated data,
-and offer a host of benefits over other indexing strategies:
-
-* Reduced number of fields per index
-* More granular data control
-* Flexible naming scheme
-* Fewer ingest permissions required
-
-See the {fleet-guide}/data-streams.html[{fleet} and {agent} Guide] to learn more.
-****
-
-[discrete]
-[[apm-data-streams-naming-scheme]]
-=== Data stream naming scheme
-
-// tag::data-streams[]
-APM data follows the `<type>-<dataset>-<namespace>` naming scheme.
-The `type` and `dataset` are predefined by the APM integration,
-but the `namespace` is your opportunity to customize how different types of data are stored in {es}.
-There is no recommendation for what to use as your namespace--it is intentionally flexible.
-For example, you might create namespaces for each of your environments,
-like `dev`, `prod`, `production`, etc.
-Or, you might create namespaces that correspond to strategic business units within your organization.
-// end::data-streams[]
-
-[discrete]
-[[apm-data-streams-list]]
-=== APM data streams
-
-By type, the APM data streams are:
-
-Traces::
-Traces are comprised of {apm-guide-ref}/data-model.html[spans and transactions].
-Traces are stored in the following data streams:
-+
-// tag::traces-data-streams[]
-- Application traces: `traces-apm-<namespace>`
-- RUM and iOS agent application traces: `traces-apm.rum-<namespace>`
-// end::traces-data-streams[]
-
-
-Metrics::
-Metrics include application-based metrics, aggregation metrics, and basic system metrics.
-Metrics are stored in the following data streams:
-+
-// tag::metrics-data-streams[]
-- APM internal metrics: `metrics-apm.internal-<namespace>`
-- APM transaction metrics: `metrics-apm.transaction.<metricset.interval>-<namespace>`
-- APM service destination metrics: `metrics-apm.service_destination.<metricset.interval>-<namespace>`
-- APM service transaction metrics: `metrics-apm.service_transaction.<metricset.interval>-<namespace>`
-- APM service summary metrics: `metrics-apm.service_summary.<metricset.interval>-<namespace>`
-- Application metrics: `metrics-apm.app.<service.name>-<namespace>`
-// end::metrics-data-streams[]
-+
-Application metrics include the instrumented service's name--defined in each {apm-agent}'s
-configuration--in the data stream name.
-Service names therefore must follow certain index naming rules.
-+
-[%collapsible]
-.Service name rules
-====
-* Service names are case-insensitive and must be unique.
-For example, you cannot have a service named `Foo` and another named `foo`.
-* Special characters will be removed from service names and replaced with underscores (`_`).
-Special characters include:
-+
-[source,text]
-----
-'\\', '/', '*', '?', '"', '<', '>', '|', ' ', ',', '#', ':', '-'
-----
-====
-
-
-Logs::
-Logs include application error events and application logs.
-Logs are stored in the following data streams:
-+
-// tag::logs-data-streams[]
-- APM error/exception logging: `logs-apm.error-<namespace>`
-- APM app logging: `logs-apm.app.<service.name>-<namespace>`
-// end::logs-data-streams[]
-
-[discrete]
-[[apm-data-streams-next]]
-=== What's next?
-
-* Data streams define not only how data is stored in {es}, but also how data is retained over time.
-See <> to learn how to create your own data retention policies.
-
-* See <> for information on APM storage and processing costs,
-processing and performance, and other index management features.
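As a concrete illustration of the naming scheme described earlier on this page, the data streams for a hypothetical `prod` namespace would include:

[source,text]
----
traces-apm-prod              (application traces)
metrics-apm.internal-prod    (APM internal metrics)
logs-apm.error-prod          (APM error/exception logging)
----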
diff --git a/docs/debugging.asciidoc b/docs/debugging.asciidoc deleted file mode 100644 index 65d18bcec77..00000000000 --- a/docs/debugging.asciidoc +++ /dev/null @@ -1,54 +0,0 @@ -////////////////////////////////////////////////////////////////////////// -//// This content is shared by all Elastic Beats. Make sure you keep the -//// descriptions here generic enough to work for all Beats that include -//// this file. When using cross references, make sure that the cross -//// references resolve correctly for any files that include this one. -//// Use the appropriate variables defined in the index.asciidoc file to -//// resolve Beat names: beatname_uc and beatname_lc. -//// Use the following include to pull this content into a doc file: -//// include::../../libbeat/docs/debugging.asciidoc[] -////////////////////////////////////////////////////////////////////////// - -[[enable-apm-server-debugging]] -=== Enable APM Server binary debugging - -++++ -APM Server binary debugging -++++ - -NOTE: Fleet-managed users should see {fleet-guide}/monitor-elastic-agent.html[View {agent} logs] -to learn how to view logs and change the logging level of {agent}. - -By default, {beatname_uc} sends all its output to syslog. When you run {beatname_uc} in -the foreground, you can use the `-e` command line flag to redirect the output to -standard error instead. For example: - -["source","sh",subs="attributes"] ------------------------------------------------ -{beatname_lc} -e ------------------------------------------------ - -The default configuration file is {beatname_lc}.yml (the location of the file varies by -platform). You can use a different configuration file by specifying the `-c` flag. For example: - -["source","sh",subs="attributes"] ------------------------------------------------------------- -{beatname_lc} -e -c my{beatname_lc}config.yml ------------------------------------------------------------- - -You can increase the verbosity of debug messages by enabling one or more debug -selectors. For example, to view publisher-related messages, start {beatname_uc} -with the `publisher` selector: - -["source","sh",subs="attributes"] ------------------------------------------------------------- -{beatname_lc} -e -d "publisher" ------------------------------------------------------------- - -If you want all the debugging output (fair warning, it's quite a lot), you can -use `*`, like this: - -["source","sh",subs="attributes"] ------------------------------------------------------------- -{beatname_lc} -e -d "*" ------------------------------------------------------------- \ No newline at end of file diff --git a/docs/diagrams/apm-architecture-central.asciidoc b/docs/diagrams/apm-architecture-central.asciidoc deleted file mode 100644 index 9af897823b4..00000000000 --- a/docs/diagrams/apm-architecture-central.asciidoc +++ /dev/null @@ -1,189 +0,0 @@ -++++ -
-++++ \ No newline at end of file diff --git a/docs/diagrams/apm-architecture-edge.asciidoc b/docs/diagrams/apm-architecture-edge.asciidoc deleted file mode 100644 index 2713fd10a90..00000000000 --- a/docs/diagrams/apm-architecture-edge.asciidoc +++ /dev/null @@ -1,172 +0,0 @@ -++++ -
-++++ \ No newline at end of file diff --git a/docs/diagrams/apm-decision-tree.asciidoc b/docs/diagrams/apm-decision-tree.asciidoc deleted file mode 100644 index f169b8d9340..00000000000 --- a/docs/diagrams/apm-decision-tree.asciidoc +++ /dev/null @@ -1,51 +0,0 @@ -++++ -
-++++ \ No newline at end of file diff --git a/docs/diagrams/apm-otel-architecture.asciidoc b/docs/diagrams/apm-otel-architecture.asciidoc deleted file mode 100644 index 42924b17d8c..00000000000 --- a/docs/diagrams/apm-otel-architecture.asciidoc +++ /dev/null @@ -1,259 +0,0 @@ -++++ -
-// Diagram placeholder (inline SVG removed). Recoverable labels: Elastic Observability;
-// Kibana Observability apps; Elasticsearch; Elastic Agent; APM Integration; Hosted on
-// Elastic Cloud; Edge machines; Protocol; API/SDK; Elastic APM agent; OpenTelemetry
-// API/SDK with Elastic APM agents (available in Java, .NET, Node.js, and Python);
-// OpenTelemetry Agents; OTLP Collector; OpenTelemetry Collectors.
-++++ \ No newline at end of file
diff --git a/docs/exploring-es-data.asciidoc b/docs/exploring-es-data.asciidoc
deleted file mode 100644
index 47e80df1225..00000000000
--- a/docs/exploring-es-data.asciidoc
+++ /dev/null
@@ -1,47 +0,0 @@
-[[exploring-es-data]]
-= Explore data in {es}
-
-* <>
-
-[float]
-[[elasticsearch-query-examples]]
-== {es} query examples
-
-Elastic APM data is stored in <>.
-
-The following examples enable you to interact with {es}'s REST API.
-One possible way to do this is using {kib}'s
-{kibana-ref}/console-kibana.html[{dev-tools-app} console].
-
-Data streams, templates, and index-level operations can also be managed via {kib}'s
-{kibana-ref}/managing-indices.html[Index management] panel.
-
-To see an overview of existing data streams, run:
-["source","sh"]
-----
-GET /_data_stream/*apm*
-----
-// CONSOLE
-
-To query a specific event type, for example, application traces:
-["source","sh",subs="attributes"]
-----
-GET traces-apm*/_search
-----
-// CONSOLE
-
-If you are interested in the _settings_ and _mappings_ of the Elastic APM indices,
-first, run a query to find template names:
-
-["source","sh"]
-----
-GET _cat/templates/*apm*
-----
-// CONSOLE
-
-Then, retrieve the specific template you are interested in:
-["source","sh"]
-----
-GET /_template/your-template-name
-----
-// CONSOLE
diff --git a/docs/feature-roles.asciidoc b/docs/feature-roles.asciidoc
deleted file mode 100644
index d99e70f1cf1..00000000000
--- a/docs/feature-roles.asciidoc
+++ /dev/null
@@ -1,398 +0,0 @@
-[[secure-comms-stack]]
-== Secure communication with the {stack}
-
-++++
-With the {stack}
-++++
-
-NOTE: This documentation only applies to the APM Server binary.
-
-Use role-based access control or API keys to grant APM Server users access to secured resources.
-
-* <>
-* <>.
-
-After privileged users have been created, use authentication to connect to a secured Elastic cluster.
-
-* <>
-* <>
-
-For secure communication between APM Server and APM Agents, see <>.
-
-A reference of all available <> is also available.
-
-[float]
-[[security-overview]]
-=== Security Overview
-
-APM Server exposes an HTTP endpoint, and as with anything that opens ports on your servers,
-you should be careful about who can connect to it.
-Firewall rules are recommended to ensure only authorized systems can connect.
-
-[float]
-[[feature-roles]]
-=== Feature roles
-
-You can use role-based access control to grant users access to secured
-resources. The roles that you set up depend on your organization's security
-requirements and the minimum privileges required to use specific features.
-
-Typically, you need to create the following separate roles:
-
-* <>: To publish events collected by {beatname_uc}.
-* <>: One for sending monitoring
-information, and another for viewing it.
-* <>: To create and manage API keys.
-* <>: To view
-APM Agent central configurations.
-* <>: To read RUM source maps.
-
-{es-security-features} provides {ref}/built-in-roles.html[built-in roles] that grant a
-subset of the privileges needed by APM users.
-When possible, assign users the built-in roles to minimize the effect of future changes on your security strategy.
-If no built-in role is available, you can assign users the privileges needed to accomplish a specific task.
-In general, there are three types of privileges you'll work with:
-
-* **{es} cluster privileges**: Manage the actions a user can perform against your cluster.
-* **{es} index privileges**: Control access to the data in specific indices in your cluster.
-* **{kib} space privileges**: Grant users write or read access to features and apps within {kib}. - -//// -*********************************** *********************************** -*********************************** *********************************** -//// - -[[privileges-to-publish-events]] -=== Grant privileges and roles needed for writing events - -++++ -Create a _writer_ user -++++ - -APM users that publish events to {es} need privileges to write to APM data streams. - -[float] -==== General writer role - -To grant an APM user the required privileges for writing events to {es}: - -. Create a *general writer role*, called something like `apm_writer`, -that has the following privileges: -+ -[options="header"] -|==== -|Type | Privilege | Purpose - -|Index -|`auto_configure` on `traces-apm*`, `logs-apm*`, and `metrics-apm*` indices -|Permits auto-creation of indices and data streams - -|Index -|`create_doc` on `traces-apm*`, `logs-apm*`, and `metrics-apm*` indices -|Write events into {es} -|==== - -. Assign the *general writer role* to users who need to publish APM data. - -. If <> is enabled, create a separate <>. - -//// -*********************************** *********************************** -*********************************** *********************************** -//// - -[[privileges-to-publish-monitoring]] -=== Grant privileges and roles needed for monitoring - -++++ -Create a _monitoring_ user -++++ - -{es-security-features} provides built-in users and roles for publishing and viewing monitoring data. -The privileges and roles needed to publish monitoring data -depend on the method used to collect that data. - -* <> -** <> -** <> -* <> - -[float] -[[privileges-to-publish-monitoring-write]] -==== Publish monitoring data - -[IMPORTANT] -==== -**{ecloud} users:** This section does not apply to our -https://www.elastic.co/cloud/elasticsearch-service[hosted {ess}]. -Monitoring on {ecloud} is enabled by clicking the *Enable* button in the *Monitoring* panel. -==== - -[float] -[[privileges-to-publish-monitoring-internal]] -===== Internal collection - -If you're using <> to -collect metrics about {beatname_uc}, {security-features} provides -the +{beat_monitoring_user}+ {ref}/built-in-users.html[built-in user] and -+{beat_monitoring_user}+ {ref}/built-in-roles.html[built-in role] to send -monitoring information. You can use the built-in user, if it's available in your -environment, or create a user who has the built-in role assigned, -or create a user and manually assign the privileges needed to send monitoring -information. - -If you use the built-in +{beat_monitoring_user}+ user, -make sure you set the password before using it. - -If you don't use the +{beat_monitoring_user}+ user: - --- -. Create a *monitoring role*, called something like -+{beat_default_index_prefix}_monitoring_writer+, that has the following privileges: -+ -[options="header"] -|==== -|Type | Privilege | Purpose - -|Index -|`create_index` on `.monitoring-beats-*` indices -|Create monitoring indices in {es} - -|Index -|`create_doc` on `.monitoring-beats-*` indices -|Write monitoring events into {es} -|==== -+ -. Assign the *monitoring role* to users who need to write monitoring data to {es}. --- - -[float] -[[privileges-to-publish-monitoring-metricbeat]] -===== {metricbeat} collection - -NOTE: When using {metricbeat} to collect metrics, -no roles or users need to be created with APM Server. -See <> -for complete details on setting up {metricbeat} collection. 
- -If you're <> to collect -metrics about {beatname_uc}, {security-features} provides the `remote_monitoring_user` -{ref}/built-in-users.html[built-in user], and the `remote_monitoring_collector` -and `remote_monitoring_agent` {ref}/built-in-roles.html[built-in roles] for -collecting and sending monitoring information. You can use the built-in user, if -it's available in your environment, or create a user who has the privileges -needed to collect and send monitoring information. - -If you use the built-in `remote_monitoring_user` user, -make sure you set the password before using it. - -If you don't use the `remote_monitoring_user` user: - --- -. Create a *monitoring user* on the production cluster who will collect and send monitoring -information. Assign the following roles to the *monitoring user*: -+ -[options="header"] -|==== -|Role | Purpose - -|`remote_monitoring_collector` -|Collect monitoring metrics from {beatname_uc} - -|`remote_monitoring_agent` -|Send monitoring data to the monitoring cluster -|==== --- - -[float] -[[privileges-to-publish-monitoring-view]] -==== View monitoring data - -To grant users the required privileges for viewing monitoring data: - -. Create a *monitoring role*, called something like -+{beat_default_index_prefix}_monitoring_viewer+, that has the following privileges: -+ -[options="header"] -|==== -|Type | Privilege | Purpose - -| Spaces -|`Read` on Stack monitoring -|Read-only access to the {stack-monitor-app} feature in {kib}. - -| Spaces -|`Read` on Dashboards -|Read-only access to the Dashboards feature in {kib}. -|==== -+ -. Assign the *monitoring role*, along with the following built-in roles, to users who -need to view monitoring data for {beatname_uc}: -+ -[options="header"] -|==== -|Role | Purpose - -|`monitoring_user` -|Grants access to monitoring indices for {beatname_uc} -|==== - -//// -*********************************** *********************************** -*********************************** *********************************** -//// - -[[privileges-api-key]] -=== Grant privileges and roles needed for API key management - -++++ -Create an _API key_ user -++++ - -You can configure <> to authorize requests to APM Server. -To create an APM Server user with the required privileges for creating and managing API keys: - -. Create an **API key role**, called something like `apm_api_key`, -that has the following `cluster` level privileges: -+ -[options="header"] -|==== -| Privilege | Purpose - -|`manage_own_api_key` -|Allow {beatname_uc} to create, retrieve, and invalidate API keys -|==== - -. Depending on what the **API key role** will be used for, -also assign the appropriate `apm` application-level privileges: -+ -* To **receive Agent configuration**, assign `config_agent:read`. -* To **ingest agent data**, assign `event:write`. -* To **upload source maps**, assign `sourcemap:write`. - -. Assign the **API key role** to users that need to create and manage API keys. -Users with this role can only create API keys that have the same or lower access rights. 
-
-[float]
-[[privileges-api-key-example]]
-=== Example API key role
-
-The following example assigns the required cluster privileges,
-and the `apm` application privileges needed to ingest agent data, to a role named `apm_api_key`:
-
-[source,kibana]
-----
-PUT _security/role/apm_api_key <1>
-{
-  "cluster": [
-    "manage_own_api_key" <2>
-  ],
-  "applications": [
-    {
-      "application": "apm",
-      "privileges": [
-        "event:write" <3>
-      ],
-      "resources": [
-        "*"
-      ]
-    }
-  ]
-}
-----
-<1> `apm_api_key` is the name of the role we're assigning these privileges to. Any name can be used.
-<2> Required cluster privileges.
-<3> Required for API keys that will be used to ingest agent events.
-
-
-////
-*********************************** ***********************************
-*********************************** ***********************************
-////
-
-[[privileges-agent-central-config]]
-=== Grant privileges and roles needed for APM Agent central configuration
-
-++++
-Create a _central config_ user
-++++
-
-[[privileges-agent-central-config-server]]
-==== APM Server agent central configuration management
-
-APM Server acts as a proxy between your APM agents and the {apm-app}.
-The {apm-app} communicates any changed settings to APM Server so that your agents only need to poll the Server
-to determine which central configuration settings have changed.
-
-To grant an APM Server user the required privileges for managing central configuration in {es} without {kib},
-assign the user the following privileges:
-
-[options="header"]
-|====
-|Type | Privilege | Purpose
-
-| Index
-|`read` on `.apm-agent-configuration` index
-|Allow {beatname_uc} to manage central configurations in {es}
-|====
-
-The above privileges should be sufficient for APM agent central configuration to work properly
-as long as {beatname_uc} communicates with {es} successfully.
-If it fails, it may fall back to reading agent central configuration via {kib} if configured,
-which requires the following privileges:
-
-[options="header"]
-|====
-|Type | Privilege | Purpose
-
-| Spaces
-|`Read` on {beat_kib_app}
-|Allow {beatname_uc} to manage central configurations via the {beat_kib_app}
-|====
-
-TIP: Looking for privileges and roles needed to use central configuration from the {apm-app} or {apm-app} API?
-See {kibana-ref}/apm-app-central-config-user.html[{apm-app} central configuration user].
-
-[[privileges-rum-source-map]]
-=== Grant privileges and roles needed for reading source maps
-
-++++
-Create a _source map_ user
-++++
-
-[[privileges-rum-source-mapping]]
-==== APM Server RUM source mapping
-
-If <> is enabled, additional privileges are required to read source maps.
-
-To grant an APM Server user the required privileges for reading RUM source maps from {es} directly without {kib},
-assign the user the following privileges:
-
-[options="header"]
-|====
-|Type | Privilege | Purpose
-
-|Index
-|`read` on `.apm-source-map` index
-|Allow {beatname_uc} to read RUM source maps from {es}
-|====
-
-The above privileges should be sufficient for RUM source mapping to work properly
-as long as {beatname_uc} communicates with {es} successfully.
-If it fails, it may fall back to reading source maps via {kib} if configured,
-which requires additional {kib} privileges.
-See {kibana-ref}/rum-sourcemap-api.html[RUM source map API] for more details.
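-
-For reference, the two read privileges above (agent central configuration and RUM source maps) could be
-combined into a single role via the {es} security API. This is a minimal sketch; the role name is illustrative:
-
-[source,kibana]
-----
-PUT _security/role/apm_server_reader
-{
-  "indices": [
-    {
-      "names": [".apm-agent-configuration", ".apm-source-map"], <1>
-      "privileges": ["read"]
-    }
-  ]
-}
-----
-<1> Grants read access to both the agent central configuration and RUM source map indices.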
-
-////
-*********************************** ***********************************
-*********************************** ***********************************
-////
-
-// [[privileges-create-api-keys]]
-// === Grant privileges and roles needed to create APM Server API keys
-
-// ++++
-// Create an _APM API key_ user
-// ++++
-
-// CONTENT
diff --git a/docs/features.asciidoc b/docs/features.asciidoc
deleted file mode 100644
index 4c58f4c7d3c..00000000000
--- a/docs/features.asciidoc
+++ /dev/null
@@ -1,34 +0,0 @@
-[[features]]
-== Elastic APM features
-
-++++
-Features
-++++
-
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-* <>
-
-include::./apm-data-security.asciidoc[]
-
-include::./apm-distributed-tracing.asciidoc[]
-
-include::./apm-rum.asciidoc[]
-
-include::./sampling.asciidoc[]
-
-include::./log-correlation.asciidoc[]
-
-include::./cross-cluster-search.asciidoc[]
-
-include::./span-compression.asciidoc[]
-
-include::./aws-lambda-extension.asciidoc[leveloffset=+2]
-
-include::./apm-mutating-webhook.asciidoc[leveloffset=+2]
diff --git a/docs/getting-started-apm-server.asciidoc b/docs/getting-started-apm-server.asciidoc
deleted file mode 100644
index a1f673c222e..00000000000
--- a/docs/getting-started-apm-server.asciidoc
+++ /dev/null
@@ -1,540 +0,0 @@
-[[getting-started-apm-server]]
-== Self manage APM Server
-
-++++
-Self manage APM Server
-++++
-
-TIP: The easiest way to get started with Elastic APM is by using our
-{ess-product}[hosted {es} Service] on {ecloud}.
-The {es} Service is available on AWS, GCP, and Azure.
-See <> to get started in minutes.
-
-// TODO: MOVE THIS
-IMPORTANT: Starting in version 8.0.0, {fleet} uses the APM integration to set up and manage APM index templates,
-{ilm-init} policies, and ingest pipelines. APM Server will only send data to {es} _after_ the APM integration has been installed.
-
-The APM Server receives performance data from your APM agents,
-validates and processes it, and then transforms the data into {es} documents.
-If you're on this page, then you've chosen to self-manage the Elastic Stack,
-and you now must decide how to run and configure the APM Server.
-There are two options, and the components required are different for each:
-
-* **<>**
-* **<>**
-// * **<>**
-
-[float]
-[[setup-apm-server-binary]]
-=== APM Server binary
-
-Install, configure, and run the APM Server binary wherever you need it.
-
-image::./images/bin-ov.png[APM Server binary overview]
-
-**Pros**:
-
-- Simplest self-managed option
-- No additional component knowledge required
-- YAML configuration simplifies automation
-
-**Supported outputs**:
-
-- {es}
-- {ess}
-- {ls}
-- Kafka
-- Redis
-- File
-- Console
-
-**Required components**:
-
-- APM agents
-- APM Server
-- {stack}
-
-**Configuration method**: YAML
-
-[float]
-[[setup-fleet-managed-apm]]
-=== Fleet-managed APM Server
-
-Fleet is a web-based UI in {kib} that is used to centrally manage {agent}s.
-In this deployment model, use {agent} to spin up APM Server instances that can be centrally managed in a custom-curated user interface.
-
-NOTE: Fleet-managed APM Server does not have full feature parity with the APM Server binary method of running Elastic APM.
-
-image::./images/fm-ov.png[APM Server fleet overview]
-
-// (outputs, stable APIs)
-// not the best option for a simple test setup or if only interested in centrally running APM Server
-
-**Pros**:
-
-- Conveniently manage one, some, or many different integrations from one central {fleet} UI.
- -**Supported outputs**: - -- {es} -- {ess} - -**Required components**: - -- APM agents -- APM Server -- {agent} -- Fleet Server -- {stack} - -**Configuration method**: {kib} UI - -// [float] -// [[setup-apm-server-ea]] -// === Standalone Elastic Agent-managed APM Server -// // I really don't know how to sell this option -// Instead of installing and configuring the APM Server binary, let {agent} orchestrate it for you. -// Install {agent} and manually configure the agent locally on the system where it's installed. -// You are responsible for managing and upgrading {agent}. This approach is recommended for advanced users only. - -// **Pros**: - -// - Easily add integrations for other data sources -// useful if EA already in place for other integrations, and customers want to customize setup rather than using Fleet for configuration -// // TODO: -// // maybe get some more hints on this one from the EA team to align with highlighting the same pros & cons. - -// **Available on Elastic Cloud**: ❌ - -// This supports all of the same outputs as binary -// see https://github.com/elastic/apm-server/issues/10467 -// **Supported outputs**: - -// **Configuration method**: YAML - -// image::./images/ea-ov.png[APM Server ea overview] - -// @simitt's notes for how to include EA-managed in the decision tree: -// **** -// If we generally describe Standalone Elastic Agent managed APM Server then we should also add it to this diagram: -// Do you want to use other integrations? -// -> yes: Would you like to use the comfort of Fleet UI based management? -> yes: Fleet managed APM Server; -> no: Standalone Elastic Agent managed APM Server -// -> no: What is your prefered way of configuration? -> yaml: APM Server binary; -> Kibana UI: Fleet managed APM Server -// **** - -// Components required: - -// [options="header"] -// |==== -// | Installation method | APM Server | Elastic Agent | Fleet Server -// | APM Server binary | ✔️ | | -// // | Standalone Elastic Agent-managed APM Server | ✔️ | ✔️ | -// | Fleet-managed APM Server | ✔️ | ✔️ | ✔️ -// |==== - -[float] -=== Help me decide - -Use the decision tree below to help determine which method of configuring and running the APM Server is best for your use case. - -[subs=attributes+] -include::{docdir}/diagrams/apm-decision-tree.asciidoc[APM Server decision tree] - - -=== APM Server binary - -This guide will explain how to set up and configure the APM Server binary. - -[float] -==== Prerequisites - -// tag::prereq[] -First, see the https://www.elastic.co/support/matrix[Elastic Support Matrix] for information about supported operating systems and product compatibility. - -You'll need: - -* *{es}* for storing and indexing data. -* *{kib}* for visualizing with the APM UI. - -We recommend you use the same version of {es}, {kib}, and APM Server. -See {stack-ref}/installing-elastic-stack.html[Installing the {stack}] -for more information about installing these products. -// end::prereq[] - -image::images/apm-architecture-diy.png[Install Elastic APM yourself] - -// ******************************************************* -// STEP 1 -// ******************************************************* - -[[installing]] -==== Step 1: Install - -NOTE: *Before you begin*: If you haven't installed the {stack}, do that now. -See {stack-ref}/installing-elastic-stack.html[Learn how to install the -{stack} on your own hardware]. - -To download and install {beatname_uc}, use the commands below that work with your system. 
-If you use `apt` or `yum`, you can <> -to update to the newest version more easily. - -ifeval::["{release-state}"!="unreleased"] -See our https://www.elastic.co/downloads/apm[download page] -for other installation options, such as 32-bit images. -endif::[] - -[[deb]] -*deb:* - -ifeval::["{release-state}"=="unreleased"] - -Version {version} of APM Server has not yet been released. - -endif::[] - -ifeval::["{release-state}"!="unreleased"] - -["source","sh",subs="attributes"] ----------------------------------------------------------------------- -curl -L -O {downloads}/apm-server-{version}-amd64.deb -sudo dpkg -i apm-server-{version}-amd64.deb ----------------------------------------------------------------------- - -endif::[] - -[[rpm]] -*RPM:* - -ifeval::["{release-state}"=="unreleased"] - -Version {version} of APM Server has not yet been released. - -endif::[] - -ifeval::["{release-state}"!="unreleased"] - -["source","sh",subs="attributes"] ----------------------------------------------------------------------- -curl -L -O {downloads}/apm-server-{version}-x86_64.rpm -sudo rpm -vi apm-server-{version}-x86_64.rpm ----------------------------------------------------------------------- - -endif::[] - -[[linux]] -*Other Linux:* - -ifeval::["{release-state}"=="unreleased"] - -Version {version} of APM Server has not yet been released. - -endif::[] - -ifeval::["{release-state}"!="unreleased"] - -["source","sh",subs="attributes"] ------------------------------------------------- -curl -L -O {downloads}/apm-server-{version}-linux-x86_64.tar.gz -tar xzvf apm-server-{version}-linux-x86_64.tar.gz ------------------------------------------------- -endif::[] - -[[mac]] -*Mac:* - -ifeval::["{release-state}"=="unreleased"] - -Version {version} of APM Server has not yet been released. - -endif::[] - -ifeval::["{release-state}"!="unreleased"] - -["source","sh",subs="attributes"] ------------------------------------------------- -curl -L -O {downloads}/apm-server-{version}-darwin-x86_64.tar.gz -tar xzvf apm-server-{version}-darwin-x86_64.tar.gz ------------------------------------------------- - -endif::[] - -[[installing-on-windows]] -*Windows:* - -ifeval::["{release-state}"=="unreleased"] - -Version {version} of APM Server has not yet been released. - -endif::[] - -ifeval::["{release-state}"!="unreleased"] - -. Download the APM Server Windows zip file from the -https://www.elastic.co/downloads/apm/apm-server[downloads page]. - -. Extract the contents of the zip file into `C:\Program Files`. - -. Rename the `apm-server--windows` directory to `APM-Server`. - -. Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select *Run As Administrator*). -If you are running Windows XP, you may need to download and install PowerShell. - -. From the PowerShell prompt, run the following commands to install APM Server as a Windows service: -+ -[source,shell] ----------------------------------------------------------------------- -PS > cd 'C:\Program Files\APM-Server' -PS C:\Program Files\APM-Server> .\install-service.ps1 ----------------------------------------------------------------------- - -NOTE: If script execution is disabled on your system, -you need to set the execution policy for the current session to allow the script to run. -For example: `PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service.ps1`. - -endif::[] - -[[docker]] -*Docker:* - -See <> for deploying Docker containers. 
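-
-Whichever installation method you choose, it can help to confirm that the binary runs before moving on.
-For example, with the tar.gz layout (paths differ for the deb and RPM packages), print the version:
-
-[source,sh]
-----
-./apm-server version
-----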
- -// ******************************************************* -// STEP 2 -// ******************************************************* - -[[apm-server-configuration]] -==== Step 2: Set up and configure - -// This content is reused in the upgrading guide -// tag::why-apm-integration[] -Starting in version 8.0.0, {fleet} uses the APM integration to set up and manage APM index templates, -{ilm-init} policies, and ingest pipelines. APM Server will only send data to {es} _after_ the APM integration has been installed. -// end::why-apm-integration[] - -[float] -===== Install the APM integration - -// This content is reused in the upgrading guide -// tag::install-apm-integration[] -[%collapsible%open] -.**If you have an internet connection** -==== -An internet connection is required to install the APM integration via the {fleet} UI in {kib}. - -// lint ignore elastic-agent -. Open {kib} and select **Add integrations** > **Elastic APM**. -. Click **APM integration**. -. Click **Add Elastic APM**. -. Click **Save and continue**. -. Click **Add Elastic Agent later**. You do not need to run an {agent} to complete the setup. -==== - -// tag::install-apm-integration-no-internet[] -[%collapsible] -.**If you don't have an internet connection** -==== -If your environment has network traffic restrictions, there are other ways to install the APM integration. -See {fleet-guide}/air-gapped.html[Air-gapped environments] for more information. - -Option 1: Update `kibana.yml`:: -+ -Update `kibana.yml` to include the following, then restart {kib}. -+ -[source,yaml] ----- -xpack.fleet.packages: -- name: apm - version: latest ----- -+ -See {kibana-ref}/settings.html[Configure Kibana] to learn more about how to edit the Kibana configuration file. - -Option 2: Use the {fleet} API:: -+ -Use the {fleet} API to install the APM integration. To be successful, this needs to be run against the {kib} -API, not the {es} API. -+ -["source","yaml",subs="attributes"] ----- -POST kbn:/api/fleet/epm/packages/apm/{version} -{ "force": true } ----- -+ -See {kibana-ref}/api.html[Kibana API] to learn more about how to use the Kibana APIs. -==== -// end::install-apm-integration-no-internet[] -// end::install-apm-integration[] - -[float] -===== Configure APM - -Configure APM by editing the `apm-server.yml` configuration file. -The location of this file varies by platform--see the <> for help locating it. - -A minimal configuration file might look like this: - -[source,yaml] ----- -apm-server: - host: "localhost:8200" <1> -output.elasticsearch: - hosts: ["localhost:9200"] <2> - username: "elastic" <3> - password: "changeme" ----- -<1> The `host:port` APM Server listens on. -<2> The {es} `host:port` to connect to. -<3> This example uses basic authentication. -The user provided here needs the privileges required to publish events to {es}. -To create a dedicated user for this role, see <>. - -All available configuration options are outlined in -{apm-guide-ref}/configuring-howto-apm-server.html[configuring APM Server]. - -// ******************************************************* -// STEP 3 -// ******************************************************* - -[[apm-server-starting]] -==== Step 3: Start - -In a production environment, you would put APM Server on its own machines, -similar to how you run {es}. -You _can_ run it on the same machines as {es}, but this is not recommended, -as the processes will be competing for resources. 
- -To start APM Server, run: - -[source,bash] ----------------------------------- -./apm-server -e ----------------------------------- - -NOTE: The `-e` <> enables logging to stderr and disables syslog/file output. -Remove this flag if you've enabled logging in the configuration file. -For Linux systems, see <>. - -You should see APM Server start up. -It will try to connect to {es} on localhost port `9200` and expose an API to agents on port `8200`. -You can change the defaults in `apm-server.yml` or by supplying a different address on the command line: - -[source,bash] ----------------------------------- -./apm-server -e -E output.elasticsearch.hosts=ElasticsearchAddress:9200 -E apm-server.host=localhost:8200 ----------------------------------- - -[float] -[[running-deb-rpm]] -===== Debian Package / RPM - -For Debian package and RPM installations, we recommend the `apm-server` process runs as a non-root user. -Therefore, these installation methods create an `apm-server` user which you can use to start the process. -In addition, {beatname_uc} will only start if the configuration file is -<>. - -To start the APM Server in this case, run: - -[source,bash] ----------------------------------- -sudo -u apm-server apm-server [] ----------------------------------- - -By default, APM Server loads its configuration file from `/etc/apm-server/apm-server.yml`. -See the <> for a full directory layout. - -// ******************************************************* -// STEP 4 -// ******************************************************* - -[[next-steps]] -==== Step 4: Next steps - -// Use a tagged region to pull APM Agent information from the APM Overview -If you haven't already, you can now install APM Agents in your services! - -* {apm-go-ref-v}/introduction.html[Go agent] -* {apm-ios-ref-v}/intro.html[iOS agent] -* {apm-java-ref-v}/intro.html[Java agent] -* {apm-dotnet-ref-v}/intro.html[.NET agent] -* {apm-node-ref-v}/intro.html[Node.js agent] -* {apm-php-ref-v}/intro.html[PHP agent] -* {apm-py-ref-v}/getting-started.html[Python agent] -* {apm-ruby-ref-v}/introduction.html[Ruby agent] -* {apm-rum-ref-v}/intro.html[JavaScript Real User Monitoring (RUM) agent] - -Once you have at least one {apm-agent} sending data to APM Server, -you can start visualizing your data in the {kibana-ref}/xpack-apm.html[{apm-app}]. - -If you're migrating from Jaeger, see <>. - -// Shared APM & YUM -include::{docdir}/repositories.asciidoc[] - -// Shared docker -include::{docdir}/shared-docker.asciidoc[] - - -=== Fleet-managed APM Server - -This guide will explain how to set up and configure a Fleet-managed APM Server. - -[float] -==== Prerequisites - -You need {es} for storing and searching your data, and {kib} for visualizing and managing it. -When setting these components up, you need: - -include::{ingest-docs-root}/docs/en/ingest-management/tab-widgets/prereq.asciidoc[tag=self-managed] - -==== Step 1: Set up Fleet - -Use {fleet} in {kib} to get APM data into the {stack}. -The first time you use {fleet}, you'll need to set it up and add a -{fleet-server}: - -include::{ingest-docs-root}/docs/en/ingest-management/tab-widgets/add-fleet-server/content.asciidoc[tag=self-managed] - -For more information, refer to {fleet-guide}/fleet-server.html[{fleet-server}]. - -==== Step 2: Add and configure the APM integration - -include::{obs-repo-dir}/observability/tab-widgets/add-apm-integration/content.asciidoc[tag=self-managed] - -**** -An internet connection is required to install the APM integration via the Fleet UI in Kibana. 
-
---
-include::{docdir}/getting-started-apm-server.asciidoc[tag=install-apm-integration-no-internet]
---
-****
-
-==== Step 3: Install APM agents
-
-APM agents are written in the same language as your service.
-To monitor a new service, you must install the agent and configure it with a service name,
-APM Server host, and secret token.
-
-* **Service name**: The APM integration maps an instrumented service's name (defined in each {apm-agent}'s configuration)
-to the {es} index where its data is stored.
-Service names are case-insensitive and must be unique.
-For example, you cannot have a service named `Foo` and another named `foo`.
-Special characters will be removed from service names and replaced with underscores (`_`).
-
-* **APM Server URL**: The host and port that APM Server listens for events on.
-This should match the host and port defined when setting up the APM integration.
-
-* **Secret token**: Authentication method for {apm-agent} and APM Server communication.
-This should match the secret token defined when setting up the APM integration.
-
-TIP: You can edit your APM integration settings if you need to change the APM Server URL
-or secret token to match your APM agents.
-
-include::{tab-widget-dir}/install-agents-widget.asciidoc[]
-
-==== Step 4: View your data
-
-Back in {kib}, under {observability}, select APM.
-You should see application performance monitoring data flowing into the {stack}!
-
-[role="screenshot"]
-image::./images/kibana-apm-sample-data.png[{apm-app} with data]
diff --git a/docs/high-availability.asciidoc b/docs/high-availability.asciidoc
deleted file mode 100644
index 07f14db747f..00000000000
--- a/docs/high-availability.asciidoc
+++ /dev/null
@@ -1,20 +0,0 @@
-[[high-availability]]
-=== High availability
-
-To achieve high availability,
-you can place multiple instances of APM Server behind a regular HTTP load balancer,
-for example HAProxy or Nginx.
-
-The endpoint `/` always returns an `HTTP 200`.
-You can configure your load balancer to send HTTP requests to this endpoint
-to determine if an APM Server is running.
-See <> for more information on that endpoint.
-
-In case of temporary issues, like an unavailable {es} or a sudden high workload,
-APM Server does not have an internal queue to buffer requests,
-but instead leverages an HTTP request timeout to act as back-pressure.
-
-If {es} goes down, the APM Server will eventually deny incoming requests.
-Both the APM Server and {apm-agent}(s) will issue logs accordingly.
-
-TIP: Fleet-managed APM Server users might also be interested in {fleet-guide}/fleet-agent-proxy-support.html[Fleet/Agent proxy support].
\ No newline at end of file
diff --git a/docs/how-to.asciidoc b/docs/how-to.asciidoc
deleted file mode 100644
index f9e553802ec..00000000000
--- a/docs/how-to.asciidoc
+++ /dev/null
@@ -1,17 +0,0 @@
-[[how-to-guides]]
-== How-to guides
-
-Learn how to perform common APM configuration and management tasks.
-
-* <>
-* <>
-* <>
-* <>
-
-include::./source-map-how-to.asciidoc[]
-
-include::./jaeger-integration.asciidoc[]
-
-include::./ingest-pipelines.asciidoc[]
-
-include::./custom-index-template.asciidoc[]
diff --git a/docs/https.asciidoc b/docs/https.asciidoc
deleted file mode 100644
index e335e57b957..00000000000
--- a/docs/https.asciidoc
+++ /dev/null
@@ -1,148 +0,0 @@
-//////////////////////////////////////////////////////////////////////////
-//// This content is shared by all Elastic Beats.
Make sure you keep the -//// descriptions here generic enough to work for all Beats that include -//// this file. When using cross references, make sure that the cross -//// references resolve correctly for any files that include this one. -//// Use the appropriate variables defined in the index.asciidoc file to -//// resolve Beat names: beatname_uc and beatname_lc. -//// Use the following include to pull this content into a doc file: -//// include::../../libbeat/docs/https.asciidoc[] -//// This content is structured to be included as a whole file. -////////////////////////////////////////////////////////////////////////// - -[float] -[[securing-communication-elasticsearch]] -== Secure communication with {es} - -When sending data to a secured cluster through the `elasticsearch` -output, {beatname_uc} can use any of the following authentication methods: - -* Basic authentication credentials (username and password). -* Token-based API authentication. -* A client certificate. - -Authentication is specified in the {beatname_uc} configuration file: - -* To use *basic authentication*, specify the `username` and `password` settings under `output.elasticsearch`. -For example: -+ --- -["source","yaml",subs="attributes,callouts"] ----------------------------------------------------------------------- -output.elasticsearch: - hosts: ["https://myEShost:9200"] - username: "{beat_default_index_prefix}_writer" <1> - password: "{pwd}" ----------------------------------------------------------------------- -<1> This user needs the privileges required to publish events to {es}. -To create a user like this, see <>. --- - -* To use token-based *API key authentication*, specify the `api_key` under `output.elasticsearch`. -For example: -+ --- -["source","yaml",subs="attributes,callouts"] ----------------------------------------------------------------------- -output.elasticsearch: - hosts: ["https://myEShost:9200"] - api_key: "KnR6yE41RrSowb0kQ0HWoA" <1> ----------------------------------------------------------------------- -<1> This API key must have the privileges required to publish events to {es}. -To create an API key like this, see <>. --- - -[[beats-tls]] -* To use *Public Key Infrastructure (PKI) certificates* to authenticate users, -specify the `certificate` and `key` settings under `output.elasticsearch`. -For example: -+ --- -["source","yaml",subs="attributes,callouts"] ----------------------------------------------------------------------- -output.elasticsearch: - hosts: ["https://myEShost:9200"] - ssl.certificate: "/etc/pki/client/cert.pem" <1> - ssl.key: "/etc/pki/client/cert.key" <2> ----------------------------------------------------------------------- -<1> The path to the certificate for SSL client authentication -<2> The client certificate key --- -+ -These settings assume that the -distinguished name (DN) in the certificate is mapped to the appropriate roles in -the `role_mapping.yml` file on each node in the {es} cluster. For more -information, see {ref}/mapping-roles.html#mapping-roles-file[Using role -mapping files]. -+ -By default, {beatname_uc} uses the list of trusted certificate authorities (CA) from the -operating system where {beatname_uc} is running. If the certificate authority that signed your node certificates -is not in the host system's trusted certificate authorities list, you need -to add the path to the `.pem` file that contains your CA's certificate to the -{beatname_uc} configuration. 
This will configure {beatname_uc} to use a specific list of
-CA certificates instead of the default list from the OS.
-+
-Here is an example configuration:
-+
---
-["source","yaml",subs="attributes,callouts"]
-----------------------------------------------------------------------
-output.elasticsearch:
-  hosts: ["https://myEShost:9200"]
-  ssl.certificate_authorities: <1>
-    - /etc/pki/my_root_ca.pem
-    - /etc/pki/my_other_ca.pem
-  ssl.certificate: "/etc/pki/client.pem" <2>
-  ssl.key: "/etc/pki/key.pem" <3>
-----------------------------------------------------------------------
-<1> Specify the path to the local `.pem` file that contains your Certificate
-Authority's certificate. This is needed if you use your own CA to sign your node certificates.
-<2> The path to the certificate for SSL client authentication
-<3> The client certificate key
---
-+
-NOTE: For any given connection, the SSL/TLS certificates must have a subject
-that matches the value specified for `hosts`, or the SSL handshake fails.
-For example, if you specify `hosts: ["foobar:9200"]`, the certificate MUST
-include `foobar` in the subject (`CN=foobar`) or as a subject alternative name
-(SAN). Make sure the hostname resolves to the correct IP address. If no DNS is available, then
-you can associate the IP address with your hostname in `/etc/hosts`
-(on Unix) or `C:\Windows\System32\drivers\etc\hosts` (on Windows).
-
-ifndef::no_dashboards[]
-[role="xpack"]
-[float]
-[[securing-communication-kibana]]
-=== Secure communication with the {kib} endpoint
-
-If you've configured the <>,
-you can also specify credentials for authenticating with {kib} under `setup.kibana`.
-If no credentials are specified, {kib} will use the configured authentication method
-in the {es} output.
-
-For example, specify a unique username and password to connect to {kib} like this:
-
---
-["source","yaml",subs="attributes,callouts"]
-----
-setup.kibana:
-  host: "mykibanahost:5601"
-  username: "{beat_default_index_prefix}_kib_setup" <1>
-  password: "{pwd}"
-----
-<1> This user needs privileges required to set up dashboards
-endif::no_dashboards[]
---
-
-[role="xpack"]
-[float]
-[[securing-communication-learn-more]]
-=== Learn more about secure communication
-
-More information on sending data to a secured cluster is available in the configuration reference:
-
-* <>
-* <>
-ifndef::no_dashboards[]
-* <>
-endif::no_dashboards[]
diff --git a/docs/ilm-how-to.asciidoc b/docs/ilm-how-to.asciidoc
deleted file mode 100644
index e960c40c88a..00000000000
--- a/docs/ilm-how-to.asciidoc
+++ /dev/null
@@ -1,240 +0,0 @@
-//////////////////////////////////////////////////////////////////////////
-// This content is reused in the Legacy ILM documentation
-//////////////////////////////////////////////////////////////////////////
-
-[[ilm-how-to]]
-=== {ilm-cap}
-
-:append-legacy:
-// tag::ilm-integration[]
-
-Index lifecycle policies allow you to automate the
-lifecycle of your APM indices as they grow and age.
-A default policy is applied to each APM data stream,
-but can be customized depending on your business needs.
-
-See {ref}/index-lifecycle-management.html[{ilm-init}: Manage the index lifecycle] to learn more.
-
-[discrete]
-[id="index-lifecycle-policies-default{append-legacy}"]
-=== Default policies
-
-The table below describes the default index lifecycle policy applied to each APM data stream.
-Each policy includes a rollover and delete definition:
-
-* **Rollover**: Using rollover indices prevents a single index from growing too large and optimizes indexing and search performance. Rollover, i.e. writing to a new index, occurs after either an age or size threshold is met.
-* **Delete**: The delete phase permanently removes the index after a time threshold is met.
-
-[cols="1,1,1,1",options="header"]
-|===
-|Data stream
-|Rollover after
-|Delete after
-|Notes
-
-| `traces-apm`
-| 30 days / 50 GB
-| 10 days
-| Raw trace event data
-
-| `traces-apm.rum`
-| 30 days / 50 GB
-| 90 days
-| Raw RUM trace event data, used in the UI
-
-| `logs-apm.error`
-| 30 days / 50 GB
-| 10 days
-| Error event data
-
-| `logs-apm.app`
-| 30 days / 50 GB
-| 10 days
-| Logs event data
-
-| `metrics-apm.app`
-| 30 days / 50 GB
-| 90 days
-| Custom application-specific metrics
-
-| `metrics-apm.internal`
-| 30 days / 50 GB
-| 90 days
-| Common system metrics and language-specific metrics (for example, CPU and memory usage)
-
-| `metrics-apm.service_destination_1m`
-| 7 days / 50 GB
-| 90 days
-| Aggregated transaction metrics powering the APM UI
-
-| `metrics-apm.service_destination_10m`
-| 14 days / 50 GB
-| 180 days
-| Aggregated transaction metrics powering the APM UI
-
-| `metrics-apm.service_destination_60m`
-| 30 days / 50 GB
-| 390 days
-| Aggregated transaction metrics powering the APM UI
-
-| `metrics-apm.service_summary_1m`
-| 7 days / 50 GB
-| 90 days
-| Aggregated transaction metrics powering the APM UI
-
-| `metrics-apm.service_summary_10m`
-| 14 days / 50 GB
-| 180 days
-| Aggregated transaction metrics powering the APM UI
-
-| `metrics-apm.service_summary_60m`
-| 30 days / 50 GB
-| 390 days
-| Aggregated transaction metrics powering the APM UI
-
-| `metrics-apm.service_transaction_1m`
-| 7 days / 50 GB
-| 90 days
-| Aggregated transaction metrics powering the APM UI
-
-| `metrics-apm.service_transaction_10m`
-| 14 days / 50 GB
-| 180 days
-| Aggregated transaction metrics powering the APM UI
-
-| `metrics-apm.service_transaction_60m`
-| 30 days / 50 GB
-| 390 days
-| Aggregated transaction metrics powering the APM UI
-
-| `metrics-apm.transaction_1m`
-| 7 days / 50 GB
-| 90 days
-| Aggregated transaction metrics powering the APM UI
-
-| `metrics-apm.transaction_10m`
-| 14 days / 50 GB
-| 180 days
-| Aggregated transaction metrics powering the APM UI
-
-| `metrics-apm.transaction_60m`
-| 30 days / 50 GB
-| 390 days
-| Aggregated transaction metrics powering the APM UI
-
-|===
-
-The APM index lifecycle policies can be viewed in {kib}.
-Navigate to *{stack-manage-app}* / *Index Lifecycle Management*, and search for `apm`.
-
-TIP: Default {ilm-init} policies can change between minor versions.
-This is not considered a breaking change as index management should continually improve and adapt to new features.
-
-[discrete]
-[id="data-streams-custom-policy{append-legacy}"]
-=== Configure a custom index lifecycle policy
-
-When the APM integration is installed, {fleet} creates a default `*@custom` component template for each data stream.
-The easiest way to configure a custom index lifecycle policy per data stream is to edit this template.
-
-This tutorial explains how to apply a custom index lifecycle policy to the `traces-apm` data stream.
-
-[discrete]
-[id="data-streams-custom-one{append-legacy}"]
-=== Step 1: View data streams
-
-The **Data Streams** view in {kib} shows you the data streams,
-index templates, and index lifecycle policies associated with a given integration.
-
-. Navigate to **{stack-manage-app}** > **Index Management** > **Data Streams**.
-. Search for `traces-apm` to see all data streams associated with APM trace data.
-. In this example, I only have one data stream because I'm only using the `default` namespace.
-You may have more if your setup includes multiple namespaces.
-+
-[role="screenshot"]
-image::images/data-stream-overview.png[Data streams info]
-
-[discrete]
-[id="data-streams-custom-two{append-legacy}"]
-=== Step 2: Create an index lifecycle policy
-
-. Navigate to **{stack-manage-app}** > **Index Lifecycle Policies**.
-. Click **Create policy**.
-
-Name your new policy; for this tutorial, I've chosen `custom-traces-apm-policy`.
-Customize the policy to your liking, and when you're done, click **Save policy**.
-
-[discrete]
-[id="data-streams-custom-three{append-legacy}"]
-=== Step 3: Apply the index lifecycle policy
-
-To apply your new index lifecycle policy to the `traces-apm-*` data stream,
-edit the `@custom` component template.
-
-. Click on the **Component Template** tab and search for `traces-apm`.
-. Select the `traces-apm@custom` template and click **Manage** > **Edit**.
-. Under **Index settings**, set the {ilm-init} policy name created in the previous step:
-+
-[source,json]
-----
-{
-  "lifecycle": {
-    "name": "custom-traces-apm-policy"
-  }
-}
-----
-. Continue to **Review** and ensure your request looks similar to the image below.
-If it does, click **Create component template**.
-+
-[role="screenshot"]
-image::images/create-component-template.png[Create component template]
-
-[discrete]
-[id="data-streams-custom-four{append-legacy}"]
-=== Step 4: Roll over the data stream (optional)
-
-To confirm that the data stream is now using the new index template and {ilm-init} policy,
-you can either repeat <>, or navigate to **{dev-tools-app}** and run the following:
-
-[source,bash]
-----
-GET /_data_stream/traces-apm-default <1>
-----
-<1> The name of the data stream we've been hacking on, appended with your namespace
-
-The result should include the following:
-
-[source,json]
-----
-{
-  "data_streams" : [
-    {
-      ...
-      "template" : "traces-apm-default", <1>
-      "ilm_policy" : "custom-traces-apm-policy", <2>
-      ...
-    }
-  ]
-}
-----
-<1> The index template used by the data stream
-<2> The name of the custom {ilm-init} policy created in step two and applied in step three
-
-New {ilm-init} policies only take effect when new indices are created,
-so you must either wait for a rollover to occur (usually after 30 days or when the index size reaches 50 GB),
-or force a rollover using the {ref}/indices-rollover-index.html[{es} rollover API]:
-
-[source,bash]
-----
-POST /traces-apm-default/_rollover/
-----
-
-[discrete]
-[id="data-streams-custom-policy-namespace{append-legacy}"]
-=== Namespace-level index lifecycle policies
-
-It is also possible to create more granular index lifecycle policies that apply to individual namespaces.
-This process is similar to the above tutorial, but includes cloning and modifying the existing index template to use
-a new `*@custom` component template.
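-
-As a rough sketch of that idea, with every name below illustrative and the cloned index template elided:
-create a policy for the namespace as in step two, point a dedicated component template at it, and reference
-that component template from an index template that matches only the namespace's data stream (for example, `traces-apm-production`):
-
-[source,bash]
-----
-PUT _component_template/traces-apm-production@custom
-{
-  "template": {
-    "settings": {
-      "index.lifecycle.name": "custom-traces-apm-production-policy"
-    }
-  }
-}
-----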
- -// end::ilm-integration[] diff --git a/docs/images/agent-settings-migration.png b/docs/images/agent-settings-migration.png deleted file mode 100644 index a1f1c12c124..00000000000 Binary files a/docs/images/agent-settings-migration.png and /dev/null differ diff --git a/docs/images/api-key-copy.png b/docs/images/api-key-copy.png deleted file mode 100644 index d47fc7cd2de..00000000000 Binary files a/docs/images/api-key-copy.png and /dev/null differ diff --git a/docs/images/apm-architecture-cloud.png b/docs/images/apm-architecture-cloud.png deleted file mode 100644 index 6bc7001fb9f..00000000000 Binary files a/docs/images/apm-architecture-cloud.png and /dev/null differ diff --git a/docs/images/apm-architecture-diy.png b/docs/images/apm-architecture-diy.png deleted file mode 100644 index d4e96466081..00000000000 Binary files a/docs/images/apm-architecture-diy.png and /dev/null differ diff --git a/docs/images/apm-distributed-tracing.png b/docs/images/apm-distributed-tracing.png deleted file mode 100644 index 7d51e273f9d..00000000000 Binary files a/docs/images/apm-distributed-tracing.png and /dev/null differ diff --git a/docs/images/apm-ui-api-key.png b/docs/images/apm-ui-api-key.png deleted file mode 100644 index eae7ab18296..00000000000 Binary files a/docs/images/apm-ui-api-key.png and /dev/null differ diff --git a/docs/images/assets.png b/docs/images/assets.png deleted file mode 100644 index d3a8e6ea61a..00000000000 Binary files a/docs/images/assets.png and /dev/null differ diff --git a/docs/images/bin-ov.png b/docs/images/bin-ov.png deleted file mode 100644 index 7702dd7d765..00000000000 Binary files a/docs/images/bin-ov.png and /dev/null differ diff --git a/docs/images/config-layer.png b/docs/images/config-layer.png deleted file mode 100644 index ec6c045d347..00000000000 Binary files a/docs/images/config-layer.png and /dev/null differ diff --git a/docs/images/create-component-template.png b/docs/images/create-component-template.png deleted file mode 100644 index cd9c18a19a4..00000000000 Binary files a/docs/images/create-component-template.png and /dev/null differ diff --git a/docs/images/custom-index-template-mapped-fields.png b/docs/images/custom-index-template-mapped-fields.png deleted file mode 100644 index cfd92d21744..00000000000 Binary files a/docs/images/custom-index-template-mapped-fields.png and /dev/null differ diff --git a/docs/images/custom-index-template-runtime-fields.png b/docs/images/custom-index-template-runtime-fields.png deleted file mode 100644 index a792bcc164b..00000000000 Binary files a/docs/images/custom-index-template-runtime-fields.png and /dev/null differ diff --git a/docs/images/data-flow.png b/docs/images/data-flow.png deleted file mode 100644 index 294ff7597d4..00000000000 Binary files a/docs/images/data-flow.png and /dev/null differ diff --git a/docs/images/data-stream-overview.png b/docs/images/data-stream-overview.png deleted file mode 100644 index 503661862de..00000000000 Binary files a/docs/images/data-stream-overview.png and /dev/null differ diff --git a/docs/images/dt-sampling-example-1.png b/docs/images/dt-sampling-example-1.png deleted file mode 100644 index a3def0c7bfa..00000000000 Binary files a/docs/images/dt-sampling-example-1.png and /dev/null differ diff --git a/docs/images/dt-sampling-example-2.png b/docs/images/dt-sampling-example-2.png deleted file mode 100644 index d7f87bcd891..00000000000 Binary files a/docs/images/dt-sampling-example-2.png and /dev/null differ diff --git a/docs/images/dt-sampling-example-3.png 
b/docs/images/dt-sampling-example-3.png deleted file mode 100644 index a0045705a0c..00000000000 Binary files a/docs/images/dt-sampling-example-3.png and /dev/null differ diff --git a/docs/images/dt-trace-ex1.png b/docs/images/dt-trace-ex1.png deleted file mode 100644 index ca97955ee8b..00000000000 Binary files a/docs/images/dt-trace-ex1.png and /dev/null differ diff --git a/docs/images/dt-trace-ex2.png b/docs/images/dt-trace-ex2.png deleted file mode 100644 index 3df0827f586..00000000000 Binary files a/docs/images/dt-trace-ex2.png and /dev/null differ diff --git a/docs/images/dt-trace-ex3.png b/docs/images/dt-trace-ex3.png deleted file mode 100644 index 1bb666b030a..00000000000 Binary files a/docs/images/dt-trace-ex3.png and /dev/null differ diff --git a/docs/images/ecommerce-dashboard.png b/docs/images/ecommerce-dashboard.png deleted file mode 100644 index f68dc3cc568..00000000000 Binary files a/docs/images/ecommerce-dashboard.png and /dev/null differ diff --git a/docs/images/fm-ov.png b/docs/images/fm-ov.png deleted file mode 100644 index 7aace8b2873..00000000000 Binary files a/docs/images/fm-ov.png and /dev/null differ diff --git a/docs/images/kibana-apm-sample-data.png b/docs/images/kibana-apm-sample-data.png deleted file mode 100644 index 7aeb5f1ac37..00000000000 Binary files a/docs/images/kibana-apm-sample-data.png and /dev/null differ diff --git a/docs/images/layers.png b/docs/images/layers.png deleted file mode 100644 index a8c508a1c74..00000000000 Binary files a/docs/images/layers.png and /dev/null differ diff --git a/docs/images/scale-apm.png b/docs/images/scale-apm.png deleted file mode 100644 index 5792ba4680a..00000000000 Binary files a/docs/images/scale-apm.png and /dev/null differ diff --git a/docs/images/schema-agent.png b/docs/images/schema-agent.png deleted file mode 100644 index 8e65de97cfb..00000000000 Binary files a/docs/images/schema-agent.png and /dev/null differ diff --git a/docs/images/server-api-key-create.png b/docs/images/server-api-key-create.png deleted file mode 100644 index d21c440b19a..00000000000 Binary files a/docs/images/server-api-key-create.png and /dev/null differ diff --git a/docs/images/source-map-after.png b/docs/images/source-map-after.png deleted file mode 100644 index feec9e7c231..00000000000 Binary files a/docs/images/source-map-after.png and /dev/null differ diff --git a/docs/images/source-map-before.png b/docs/images/source-map-before.png deleted file mode 100644 index a92baef141e..00000000000 Binary files a/docs/images/source-map-before.png and /dev/null differ diff --git a/docs/index.asciidoc b/docs/index.asciidoc deleted file mode 100644 index 0513156cb20..00000000000 --- a/docs/index.asciidoc +++ /dev/null @@ -1,4 +0,0 @@ -// This file exists to keep the current build working. -// Delete this file when the APM Server Reference is no longer built in main and 7.16 - -include::legacy/index.asciidoc[] diff --git a/docs/ingest-pipelines.asciidoc b/docs/ingest-pipelines.asciidoc deleted file mode 100644 index 05cfcac9d26..00000000000 --- a/docs/ingest-pipelines.asciidoc +++ /dev/null @@ -1,69 +0,0 @@ -////////////////////////////////////////////////////////////////////////// -// This content is reused in the Legacy ingest pipeline -////////////////////////////////////////////////////////////////////////// - -[[ingest-pipelines]] -=== Parse data using ingest pipelines - -:append-legacy: -// tag::ingest-pipelines[] - -Ingest pipelines preprocess and enrich APM documents before indexing them. 
-For example, a pipeline might define one processor that removes a field,
-one that transforms a field, and another that renames a field.
-
-The default APM pipelines are defined in index templates that {fleet} loads into {es}.
-{es} then uses the index pattern in these index templates to match pipelines to APM data streams.
-
-[discrete]
-[id="custom-ingest-pipelines{append-legacy}"]
-=== Custom ingest pipelines
-
-The Elastic APM integration supports custom ingest pipelines.
-A custom pipeline allows you to transform data to better match your specific use case.
-This can be useful, for example, to ensure data security by removing or obfuscating sensitive information.
-
-Each data stream ships with a default pipeline.
-This default pipeline calls an initially non-existent and non-versioned "`@custom`" ingest pipeline.
-If left uncreated, this pipeline has no effect on your data. However, once created,
-it can be used for custom data processing, adding fields, sanitizing data, and more.
-
-Ingest pipelines can also be used to direct application metrics (`metrics-apm.app.*`) to a data stream with a different dataset, e.g. to combine metrics for two applications.
-Sending other APM data to alternate data streams, like traces (`traces-apm.*`), logs (`logs-apm.*`), and internal metrics (`metrics-apm.internal*`) is not currently supported.
-
-[discrete]
-[id="custom-ingest-pipeline-naming{append-legacy}"]
-=== `@custom` ingest pipeline naming convention
-
-// tag::ingest-pipeline-naming[]
-`@custom` pipelines are specific to each data stream and follow the same naming convention: `-@custom`.
-As a reminder, the default APM data streams are:
-
-include::./data-streams.asciidoc[tag=traces-data-streams]
-include::./data-streams.asciidoc[tag=metrics-data-streams]
-include::./data-streams.asciidoc[tag=logs-data-streams]
-
-To match a custom ingest pipeline with a data stream, follow the `-@custom` template,
-or replace `-namespace` with `@custom` in the table above.
-For example, to target application traces, you'd create a pipeline named `traces-apm@custom`.
-// end::ingest-pipeline-naming[]
-
-The `@custom` pipeline can directly contain processors or you can use the
-pipeline processor to call other pipelines that can be shared across multiple data streams or integrations.
-The `@custom` pipeline will persist across all version upgrades.
-
-[discrete]
-[id="custom-ingest-pipeline-create{append-legacy}"]
-=== Create a `@custom` ingest pipeline
-
-The process for creating a custom ingest pipeline is as follows:
-
-* Create a pipeline with processors specific to your use case
-* Add the newly created pipeline to an `@custom` pipeline that matches an APM data stream
-
-If you prefer more guidance, see one of these tutorials:
-
-* <> — An APM-specific tutorial where you learn how to obfuscate passwords stored in the `http.request.body.original` field.
-* {fleet-guide}/data-streams-pipeline-tutorial.html[Transform data with custom ingest pipelines] — A basic Elastic integration tutorial where you learn how to add a custom field to incoming data.
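-
-As a minimal sketch of the two steps above combined into a single request (the processor and field choice are
-illustrative), a `traces-apm@custom` pipeline that drops a potentially sensitive field could be created like this:
-
-[source,bash]
-----
-PUT _ingest/pipeline/traces-apm@custom
-{
-  "description": "Strip a potentially sensitive field from trace documents",
-  "processors": [
-    {
-      "remove": {
-        "field": "http.request.body.original",
-        "ignore_missing": true
-      }
-    }
-  ]
-}
-----
-
-Because the default APM pipelines already call `traces-apm@custom`, no additional wiring is needed;
-the processor applies to new trace documents as they are ingested.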
-
-// end::ingest-pipelines[]
\ No newline at end of file
diff --git a/docs/integrations-index.asciidoc b/docs/integrations-index.asciidoc
deleted file mode 100644
index 9a5596a8ea7..00000000000
--- a/docs/integrations-index.asciidoc
+++ /dev/null
@@ -1,91 +0,0 @@
-include::./version.asciidoc[]
-include::{asciidoc-dir}/../../shared/attributes.asciidoc[]
-
-:apm-integration-docs:
-:obs-repo-dir: {observability-docs-root}/docs/en
-:tab-widget-dir: {docdir}/tab-widgets
-
-:github_repo_link: https://github.com/elastic/apm-server/blob/v{version}
-ifeval::["{version}" == "8.0.0"]
-:github_repo_link: https://github.com/elastic/apm-server/blob/main
-endif::[]
-
-
-// OTHER ATTRS
-// TODO: Check that these are still relevant
-:version: {apm_server_version}
-:beatname_lc: apm-server
-:beatname_uc: APM Server
-:beatname_pkg: {beatname_lc}
-:beat_kib_app: APM app
-:beat_monitoring_user: apm_system
-:beat_monitoring_user_version: 6.5.0
-:beat_monitoring_version: 6.5
-:beat_default_index_prefix: apm
-:access_role: {beat_default_index_prefix}_user
-:beat_version_key: observer.version
-:dockerimage: docker.elastic.co/apm/{beatname_lc}:{version}
-:dockergithub: https://github.com/elastic/apm-server-docker/tree/{doc-branch}
-:dockerconfig: https://raw.githubusercontent.com/elastic/apm-server/{doc-branch}/apm-server.docker.yml
-:discuss_forum: apm
-:github_repo_name: apm-server
-:sample_date_0: 2019.10.20
-:sample_date_1: 2019.10.21
-:sample_date_2: 2019.10.22
-:repo: apm-server
-:no_kibana:
-:no_ilm:
-:no-pipeline:
-:no-processors:
-:no-indices-rules:
-:no_dashboards:
-:apm-server:
-:deb_os:
-:rpm_os:
-:mac_os:
-:docker_platform:
-:win_os:
-:linux_os:
-
-:downloads: https://artifacts.elastic.co/downloads/apm-server
-
-// END OTHER ATTRS
-
-[[apm-user-guide]]
-= APM User Guide
-
-include::apm-overview.asciidoc[]
-
-include::apm-quick-start.asciidoc[]
-
-include::{docdir}/getting-started-apm-server.asciidoc[]
-
-include::data-model.asciidoc[]
-
-include::features.asciidoc[]
-
-include::how-to.asciidoc[]
-
-include::open-telemetry.asciidoc[]
-
-include::manage-storage.asciidoc[]
-
-include::configure/index.asciidoc[leveloffset=+1]
-
-include::{docdir}/setting-up-and-running.asciidoc[]
-
-include::secure-comms.asciidoc[]
-
-include::monitor-apm-server.asciidoc[]
-
-include::api.asciidoc[]
-
-include::troubleshoot-apm.asciidoc[]
-
-include::upgrading.asciidoc[]
-
-include::release-notes.asciidoc[leveloffset=+1]
-
-include::known-issues.asciidoc[leveloffset=+1]
-
-include::{docdir}/redirects.asciidoc[]
diff --git a/docs/jaeger-integration.asciidoc b/docs/jaeger-integration.asciidoc
deleted file mode 100644
index 1af29f18cf8..00000000000
--- a/docs/jaeger-integration.asciidoc
+++ /dev/null
@@ -1,112 +0,0 @@
-[[jaeger-integration]]
-=== Jaeger integration
-
-++++
-Integrate with Jaeger
-++++
-
-Elastic APM integrates with https://www.jaegertracing.io/[Jaeger], an open-source, distributed tracing system.
-This integration allows users with an existing Jaeger setup to switch from the default Jaeger backend to the {stack}.
-Best of all, no instrumentation changes are needed in your application code.
-
-[float]
-[[jaeger-architecture]]
-=== Supported architecture
-
-Jaeger architecture supports different data formats and transport protocols
-that define how data can be sent to a collector. Elastic APM, as a Jaeger collector,
-supports communication with *Jaeger agents* via gRPC.
-
-* The APM integration serves Jaeger gRPC over the same host and port as the Elastic {apm-agent} protocol.
-
-* The APM integration gRPC endpoint supports TLS. If SSL is configured,
-SSL settings will automatically be applied to the APM integration's Jaeger gRPC endpoint.
-
-* The gRPC endpoint supports probabilistic sampling.
-Sampling decisions can be configured <> with {apm-agent} central configuration, or <> in each Jaeger client.
-
-See the https://www.jaegertracing.io/docs/1.27/architecture[Jaeger docs]
-for more information on Jaeger architecture.
-
-[float]
-[[get-started-jaeger]]
-=== Get started
-
-Connect your preexisting Jaeger setup to Elastic APM in three steps:
-
-* <>
-* <>
-* <>
-
-IMPORTANT: There are <> to this integration.
-
-[float]
-[[configure-agent-client-jaeger]]
-==== Configure Jaeger agents
-
-The APM integration serves Jaeger gRPC over the same host and port as the Elastic {apm-agent} protocol.
-
-include::{tab-widget-dir}/jaeger-widget.asciidoc[]
-
-[float]
-[[configure-sampling-jaeger]]
-==== Configure sampling
-
-The APM integration supports probabilistic sampling, which can be used to reduce the amount of data that your agents collect and send.
-Probabilistic sampling makes a random sampling decision based on the configured sampling value.
-For example, a value of `.2` means that 20% of traces will be sampled.
-
-There are two different ways to configure the sampling rate of your Jaeger agents:
-
-* <>
-* <>
-
-[float]
-[[configure-sampling-central-jaeger]]
-===== {apm-agent} central configuration (default)
-
-Central sampling, with {apm-agent} central configuration,
-allows Jaeger clients to poll APM Server for the sampling rate.
-This means sample rates can be configured on the fly, on a per-service and per-environment basis.
-See {kibana-ref}/agent-configuration.html[Central configuration] to learn more.
-
-[float]
-[[configure-sampling-local-jaeger]]
-===== Local sampling in each Jaeger client
-
-If you don't have access to the {apm-app},
-you'll need to change the Jaeger client's `sampler.type` and `sampler.param`.
-This enables you to set the sampling configuration locally in each Jaeger client.
-See the official https://www.jaegertracing.io/docs/1.27/sampling/[Jaeger sampling documentation]
-for more information. A minimal example is sketched at the end of this page.
-
-[float]
-[[configure-start-jaeger]]
-==== Start sending data
-
-That's it! Data sent from Jaeger clients to the APM Server can now be viewed in the {apm-app}.
-
-[float]
-[[caveats-jaeger]]
-=== Caveats
-
-There are some limitations and differences between Elastic APM and Jaeger that you should be aware of.
-
-*Jaeger integration limitations:*
-
-* Because Jaeger has its own trace context header, and does not currently support W3C trace context headers,
-it is not possible to mix and match the use of Elastic's APM agents and Jaeger's clients.
-* Elastic APM only supports probabilistic sampling.
-
-*Differences between APM Agents and Jaeger Clients:*
-
-* Jaeger clients only send trace data.
-APM agents support a larger number of features, like
-multiple types of metrics and application breakdown charts.
-When using Jaeger, features like these will not be available in the {apm-app}.
-* Elastic APM's <> is different from Jaeger's.
-For Jaeger trace data to work with Elastic's data model, we rely on spans being tagged with the appropriate
-https://github.com/opentracing/specification/blob/master/semantic_conventions.md[`span.kind`].
-** Server Jaeger spans are mapped to Elastic APM <>.
-** Client Jaeger spans are mapped to Elastic APM <> -- unless the span is the root, in which case it is mapped to an Elastic APM <>.
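As a minimal illustration of the local sampling approach above: most Jaeger clients can also read the sampler settings from environment variables. The exact mechanism varies by client language, so treat this as a sketch and check your client's documentation:

[source,bash]
----
# Make local, probabilistic sampling decisions for 20% of traces
export JAEGER_SAMPLER_TYPE=probabilistic
export JAEGER_SAMPLER_PARAM=0.2
----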
diff --git a/docs/keystore.asciidoc b/docs/keystore.asciidoc deleted file mode 100644 index 06822a4af1a..00000000000 --- a/docs/keystore.asciidoc +++ /dev/null @@ -1,109 +0,0 @@ -[[keystore]] -=== Secrets keystore for secure settings - -++++ -Secrets keystore -++++ - -IMPORTANT: The APM Server keystore only applies to the APM Server binary installation method. - -When you configure APM Server, you might need to specify sensitive settings, -such as passwords. Rather than relying on file system permissions to protect -these values, you can use the APM Server keystore to securely store secret -values for use in configuration settings. - -After adding a key and its secret value to the keystore, you can use the key in -place of the secret value when you configure sensitive settings. - -The syntax for referencing keys is identical to the syntax for environment -variables: - -`${KEY}` - -Where KEY is the name of the key. - -For example, imagine that the keystore contains a key called `ES_PWD` with the -value `yourelasticsearchpassword`: - -* In the configuration file, use `output.elasticsearch.password: "${ES_PWD}"` -* On the command line, use: `-E "output.elasticsearch.password=\${ES_PWD}"` - -When APM Server unpacks the configuration, it resolves keys before resolving -environment variables and other variables. - -Notice that the APM Server keystore differs from the {es} keystore. -Whereas the {es} keystore lets you store `elasticsearch.yml` values by -name, the APM Server keystore lets you specify arbitrary names that you can -reference in the APM Server configuration. - -To create and manage keys, use the `keystore` command. -See the <> for the full command syntax, -including optional flags. - -NOTE: The `keystore` command must be run by the same user who will run -APM Server. - -[discrete] -[[creating-keystore]] -=== Create a keystore - -To create a secrets keystore, use: - -[source,sh] ------ -apm-server keystore create ------ - -APM Server creates the keystore in the directory defined by the `path.data` -configuration setting. - -[discrete] -[[add-keys-to-keystore]] -=== Add keys - -To store sensitive values, such as authentication credentials for {es}, -use the `keystore add` command: - -[source,sh] ------ -apm-server keystore add ES_PWD ------ - -When prompted, enter a value for the key. - -To overwrite an existing key's value, use the `--force` flag: - -[source,sh] ------ -apm-server keystore add ES_PWD --force ------ - -To pass the value through stdin, use the `--stdin` flag. 
You can also use -`--force`: - -[source,sh] ------ -cat /file/containing/setting/value | apm-server keystore add ES_PWD --stdin --force ------ - -[discrete] -[[list-settings]] -=== List keys - -To list the keys defined in the keystore, use: - -[source,sh] ------ -apm-server keystore list ------ - -[discrete] -[[remove-settings]] -=== Remove keys - -To remove a key from the keystore, use: - -[source,sh] ------ -apm-server keystore remove ES_PWD ------ diff --git a/docs/known-issues.asciidoc b/docs/known-issues.asciidoc deleted file mode 100644 index 69c06df05d3..00000000000 --- a/docs/known-issues.asciidoc +++ /dev/null @@ -1,48 +0,0 @@ -[[known-issues]] -= Known issues - -APM has the following known issues: - -*Ingesting new JVM metrics in 8.9 and 8.10 breaks upgrade to 8.11 and stops ingestion* + -_APM Server versions: 8.11.0, 8.11.1_ + -_Elastic APM Java Agent versions: 1.39.0+_ - -// Describe the conditions in which this issue occurs -If you're using the Elastic APM Java Agent v1.39.0+ to send new JVM metrics to APM Server v8.9.x and v8.10.x, -// Describe the behavior of the issue -upgrading to 8.11.0 or 8.11.1 will silently fail and stop ingesting APM metrics. -// Describe why it happens -// This happens because... - -// Include exact error messages linked to this issue -// so users searching for the error message end up here. -After upgrading, you will see the following errors: - -* APM Server error logs: -+ -[source,txt] ----- -failed to index document in 'metrics-apm.internal-default' (fail_processor_exception): Document produced by APM Server v8.11.1, which is newer than the installed APM integration (v8.10.3-preview-1695284222). The APM integration must be upgraded. ----- - -* Fleet error on integration package upgrade: -+ -[source,txt] ----- -Failed installing package [apm] due to error: [ResponseError: mapper_parsing_exception - Root causes: - mapper_parsing_exception: Field [jvm.memory.non_heap.pool.committed] attempted to shadow a time_series_metric] ----- - -// Link to fix? -A fix was released in 8.11.2: https://github.com/elastic/kibana/pull/171712[elastic/kibana#171712]. - - -// TEMPLATE - -//// -*Brief description* + -_Versions: XX.XX.XX, YY.YY.YY, ZZ.ZZ.ZZ_ - -Detailed description. -//// diff --git a/docs/legacy/tab-widgets/install-agents-widget.asciidoc b/docs/legacy/tab-widgets/install-agents-widget.asciidoc deleted file mode 100644 index 5042f469611..00000000000 --- a/docs/legacy/tab-widgets/install-agents-widget.asciidoc +++ /dev/null @@ -1 +0,0 @@ -include::../../tab-widgets/install-agents-widget.asciidoc[] \ No newline at end of file diff --git a/docs/log-correlation.asciidoc b/docs/log-correlation.asciidoc deleted file mode 100644 index c091e7e833b..00000000000 --- a/docs/log-correlation.asciidoc +++ /dev/null @@ -1,183 +0,0 @@ -[[log-correlation]] -=== Logging integration - -Many applications use logging frameworks to help record, format, and append an application's logs. -Elastic APM now offers a way to make your application logs even more useful, -by integrating with the most popular logging frameworks in their respective languages. -This means you can easily inject trace information into your logs, -allowing you to explore logs in the {observability-guide}/monitor-logs.html[{logs-app}], -then jump straight into the corresponding APM traces -- all while preserving the trace context. - -To get started: - -. <> -. <> -. <> - -[float] -[[enable-log-correlation]] -==== Enable Log correlation - -Some Agents require you to first enable log correlation in the Agent. 
-
-This is done with a configuration variable, and is different for each Agent.
-See the relevant https://www.elastic.co/guide/en/apm/agent/index.html[Agent documentation] for further information.
-
-// Not enough of the Agent docs are ready yet.
-// Commenting these out and will replace when ready.
-// * *Java*: {apm-java-ref-v}/config-logging.html#config-enable-log-correlation[`enable_log_correlation`]
-// * *.NET*: {apm-dotnet-ref-v}/[]
-// * *Node.js*: {apm-node-ref-v}/[]
-// * *Python*: {apm-py-ref-v}/[]
-// * *Ruby*: {apm-ruby-ref-v}/[]
-// * *Rum*: {apm-rum-ref-v}/[]
-
-[float]
-[[add-apm-identifiers-to-logs]]
-==== Add APM identifiers to your logs
-
-Once log correlation is enabled,
-you must ensure your logs contain APM identifiers.
-
-In some supported frameworks, this is already done for you.
-In other scenarios, like for unstructured logs,
-you'll need to add APM identifiers to your logs in an easy-to-parse manner.
-
-Log correlation relies on these fields:
-
-- Service level: {ecs-ref}/ecs-service.html[`service.name`], {ecs-ref}/ecs-service.html[`service.version`], and {ecs-ref}/ecs-service.html[`service.environment`]
-- Trace level: {ecs-ref}/ecs-tracing.html[`trace.id`] and {ecs-ref}/ecs-tracing.html[`transaction.id`]
-- Container level: {ecs-ref}/ecs-container.html[`container.id`] when {ecs-ref}/ecs-service.html[`service.name`] is not available
-
-The process for adding these fields will differ based on the Agent you're using, the logging framework,
-and the type and structure of your logs.
-
-See the relevant https://www.elastic.co/guide/en/apm/agent/index.html[Agent documentation] to learn more.
-
-// Not enough of the Agent docs have been backported yet.
-// Commenting these out and will replace when ready.
-// * *Go*: {apm-go-ref-v}/supported-tech.html#supported-tech-logging[Logging frameworks]
-// * *Java*: {apm-java-ref-v}/[] NOT merged yet https://github.com/elastic/apm-agent-java/pull/854
-// * *.NET*: {apm-dotnet-ref-v}/[]
-// * *Node.js*: {apm-node-ref-v}/[]
-// * *Python*: {apm-py-ref-v}/[]
-// * *Ruby*: {apm-ruby-ref-v}/[] Not backported yet https://www.elastic.co/guide/en/apm/agent/ruby/master/log-correlation.html
-// * *Rum*: {apm-rum-ref-v}/[]
-
-[float]
-[[ingest-logs-in-es]]
-==== Ingest your logs into {es}
-
-Once your logs contain the appropriate identifiers (fields), you need to ingest them into {es}.
-Luckily, we've got a tool for that -- {filebeat} is Elastic's log shipper.
-The {filebeat-ref}/filebeat-installation-configuration.html[{filebeat} quick start]
-guide will walk you through the setup process.
-
-Because logging frameworks and formats vary greatly between different programming languages,
-there is no one-size-fits-all approach for ingesting your logs into {es}.
-The following tips should get you going in the right direction:
-
-**Download {filebeat}**
-
-There are many ways to download and get started with {filebeat}.
-Read the {filebeat-ref}/filebeat-installation-configuration.html[{filebeat} quick start] guide to determine which is best for you.
-
-**Configure {filebeat}**
-
-Modify the {filebeat-ref}/configuring-howto-filebeat.html[`filebeat.yml`] configuration file to your needs.
-Here are some recommendations:
-
-* Set `filebeat.inputs` to point to the source of your logs
-* Point {filebeat} to the same {stack} that is receiving your APM data
- * If you're using Elastic Cloud, set `cloud.id` and `cloud.auth`.
- * If you're using a manual setup, use `output.elasticsearch.hosts`.
-
-[source,yml]
-----
-filebeat.inputs:
-- type: log <1>
-  paths: <2>
-  - /var/log/*.log
-cloud.id: "staging:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRjZWMNjN2Q3YTllOTYyNTc0Mw==" <3>
-cloud.auth: "elastic:YOUR_PASSWORD" <4>
-----
-<1> Configures the `log` input
-<2> Path(s) that must be crawled to fetch the log lines
-<3> Used to resolve the {es} and {kib} URLs for {ecloud}
-<4> Authorization token for {ecloud}
-
-**JSON logs**
-
-For JSON logs you can use the {filebeat-ref}/filebeat-input-log.html[`log` input] to read lines from log files.
-Here's what a sample configuration might look like:
-
-[source,yml]
-----
-filebeat.inputs:
-- type: log
-  json.keys_under_root: true <1>
-  json.add_error_key: true <2>
-  json.message_key: message <3>
-----
-<1> `true` copies JSON keys to the top level in the output document
-<2> Tells {filebeat} to add an `error.message` and `error.type: json` key in case of JSON unmarshalling errors
-<3> Specifies the JSON key on which to apply line filtering and multiline settings
-
-**Parsing unstructured logs**
-
-Consider the following log that is decorated with the `transaction.id` and `trace.id` fields:
-
-[source,log]
-----
-2019-09-18 21:29:49,525 - django.server - ERROR - "GET / HTTP/1.1" 500 27 | elasticapm transaction.id=fcfbbe447b9b6b5a trace.id=f965f4cc5b59bdc62ae349004eece70c span.id=None
-----
-
-All that's needed now is an {filebeat-ref}/configuring-ingest-node.html[ingest node processor] to preprocess your logs and
-extract these structured fields before they are indexed in {es}.
-To do this, you'd need to create a pipeline that uses {es}'s {ref}/grok-processor.html[Grok Processor].
-Here's an example:
-
-[source, json]
-----
-PUT _ingest/pipeline/log-correlation
-{
-  "description": "Parses the log correlation IDs out of the raw plain-text log",
-  "processors": [
-    {
-      "grok": {
-        "field": "message", <1>
-        "patterns": ["%{GREEDYDATA:message} | elasticapm transaction.id=%{DATA:transaction.id} trace.id=%{DATA:trace.id} span.id=%{DATA:span.id}"] <2>
-      }
-    }
-  ]
-}
-----
-<1> The field to use for grok expression parsing
-<2> An ordered list of grok expressions to match and extract named captures with:
-`%{DATA:transaction.id}` captures the value of `transaction.id`,
-`%{DATA:trace.id}` captures the value of `trace.id`, and
-`%{DATA:span.id}` captures the value of `span.id`.
-
-NOTE: Depending on how you've added APM data to your logs,
-you may need to tweak this grok pattern for it to work with your setup.
-In addition, it's possible to extract more structure out of your logs.
-Make sure to follow the {ecs-ref}/ecs-field-reference.html[Elastic Common Schema]
-when defining which fields you are storing in {es}.
-
-Then, configure {filebeat} to use the processor in `filebeat.yml`:
-
-[source,yml]
-----
-output.elasticsearch:
-  pipeline: "log-correlation"
-----
-
-If your logs contain messages that span multiple lines of text (common in Java stack traces),
-you'll also need to configure {filebeat-ref}/multiline-examples.html[multiline settings].
-
-The following example shows how to configure {filebeat} to handle a multiline message where the first line of the message begins with a bracket ([).
-
-[source,yml]
-----
-multiline.pattern: '^\['
-multiline.negate: true
-multiline.match: after
-----
diff --git a/docs/manage-storage.asciidoc b/docs/manage-storage.asciidoc deleted file mode 100644 index 8ceada70096..00000000000 --- a/docs/manage-storage.asciidoc +++ /dev/null @@ -1,210 +0,0 @@
-[[manage-storage]]
-== Manage storage
-
-{agent} uses <> to store time series data across multiple indices.
-Each data stream ships with a customizable <> that automates data retention as your indices grow and age.
-
-The <> attempts to define a "typical" storage reference for Elastic APM,
-and there are additional settings you can tweak to <>,
-or to <>.
-
-In addition, the APM UI makes it easy to visualize your APM data usage with
-{kibana-ref}/storage-explorer.html[storage explorer].
-Storage explorer allows you to analyze the storage footprint of each of your services to see
-which are producing large amounts of data--so you can better reduce the data you're collecting
-or forecast and prepare for future storage needs.
-
-include::./data-streams.asciidoc[]
-
-include::./ilm-how-to.asciidoc[]
-
-[[storage-guide]]
-=== Storage and sizing guide
-
-APM processing and storage costs are largely dominated by transactions, spans, and stack frames.
-
-* <> describe an event captured by an Elastic {apm-agent} instrumenting a service.
-They are the highest level of work being measured within a service.
-* <> belong to transactions. They measure from the start to the end of an activity,
-and contain information about a specific code path that has been executed.
-* *Stack frames* belong to spans. Stack frames represent a function call on the call stack,
-and include attributes like function name, file name and path, line number, etc.
-Stack frames can heavily influence the size of a span.
-
-[float]
-==== Typical transactions
-
-Due to the high variability of APM data, it's difficult to classify a transaction as typical.
-Regardless, this guide will attempt to classify transactions as _Small_, _Medium_, or _Large_,
-and make recommendations based on those classifications.
-
-The size of a transaction depends on the language, agent settings, and what services the agent instruments.
-For instance, an agent auto-instrumenting a service with a popular tech stack
-(web framework, database, caching library, etc.) is more likely to generate bigger transactions.
-
-In addition, all agents support manual instrumentation.
-How much or how little you use these APIs will also impact what a typical transaction looks like.
-
-If your sampling rate is very small, transactions will be the dominant storage cost.
-
-Here's a speculative reference:
-
-[options="header"]
-|=======================================================================
-|Transaction size |Number of spans |Number of stack frames
-|_Small_ |5-10 |5-10
-|_Medium_ |15-20 |15-20
-|_Large_ |30-40 |30-40
-|=======================================================================
-
-There will always be transaction outliers with hundreds of spans or stack frames, but those are very rare.
-Small transactions are the most common.
-
-[float]
-==== Typical storage
-
-Consider the following typical storage reference.
-These numbers do not account for {es} compression.
-
-* 1 unsampled transaction is **~1 KB**
-* 1 span with 10 stack frames is **~4 KB**
-* 1 span with 50 stack frames is **~20 KB**
-* 1 transaction with 10 spans, each with 10 stack frames is **~50 KB**
-* 1 transaction with 25 spans, each with 25 stack frames is **250-300 KB**
-* 100 transactions with 10 spans, each with 10 stack frames, sampled at 90% is **600 KB**
-
-APM data compresses quite well, so the storage cost in {es} will be considerably less:
-
-* Indexing 100 unsampled transactions per second for 1 hour results in 360,000 documents. These documents use around **50 MB** of disk space.
-* Indexing 10 transactions per second for 1 hour, each transaction with 10 spans, each span with 10 stack frames, results in 396,000 documents. These documents use around **200 MB** of disk space.
-* Indexing 25 transactions per second for 1 hour, each transaction with 25 spans, each span with 25 stack frames, results in 2,340,000 documents. These documents use around **1.2 GB** of disk space.
-
-NOTE: These examples were indexing the same data over and over with minimal variation. Because of that, the observed compression ratios of 80-90% are somewhat optimistic.
-
-[[reduce-apm-storage]]
-=== Reduce storage
-
-The amount of storage for APM data depends on several factors:
-the number of services you are instrumenting, how much traffic the services see, agent and server settings,
-and the length of time you store your data.
-
-Here are some ways you can reduce either the amount of APM data you're ingesting
-or the amount of data you're retaining.
-
-[float]
-[[reduce-sample-rate]]
-==== Reduce the sample rate
-
-Distributed tracing can generate a substantial amount of data.
-More data can mean higher costs and more noise.
-Sampling aims to lower the amount of data ingested and the effort required to analyze that data.
-
-See <> to learn more.
-
-[float]
-==== Enable span compression
-
-In some cases, APM agents may collect large amounts of very similar or identical spans in a transaction.
-These repeated, similar spans often don't provide added benefit, especially if they are of very short duration.
-Span compression takes these similar spans and compresses them into a single span--
-retaining important information but reducing processing and storage overhead.
-
-See <> to learn more.
-
-[float]
-[[reduce-stacktrace]]
-==== Reduce collected stack trace information
-
-Elastic APM agents collect `stacktrace` information under certain circumstances.
-This can be very helpful in identifying issues in your code,
-but it also comes with an overhead at collection time and increases the storage usage.
-
-Stack trace collection settings are managed in each agent.
-
-[float]
-==== Delete data
-
-You might want to only keep data for a defined time period.
-This might mean deleting old documents periodically,
-deleting data collected for specific services or customers,
-or deleting specific indices.
-
-Depending on your use case, you can delete data:
-
-* periodically with <>
-* <>
-* with the <>
-
-If you want to delete data for security or privacy reasons, see <>.
-
-[float]
-[[delete-data-with-ilm]]
-===== Delete data with {ilm} ({ilm-init})
-
-Index lifecycle management enables you to automate how you want to manage your indices over time.
-You can base actions on factors such as shard size and performance requirements.
-See <> to learn more.
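For illustration, an {ilm-init} policy that rolls indices over and then deletes them 30 days after rollover might look like the sketch below. The policy name and thresholds are hypothetical, and APM data streams already ship with managed default policies, so prefer customizing those over creating new policies from scratch:

["source","console"]
----
PUT _ilm/policy/example-apm-30d-retention
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_age": "7d",
            "max_primary_shard_size": "50gb"
          }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
----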
-
-[float]
-[[delete-data-query]]
-===== Delete data matching a query
-
-You can delete all APM documents matching a specific query with the {ref}/docs-delete-by-query.html[Delete By Query API].
-For example, to delete all documents with a given `service.name`, use the following request:
-
-["source","console"]
-----
-POST /.ds-*-apm*/_delete_by_query
-{
-  "query": {
-    "term": {
-      "service.name": {
-        "value": "old-service-name"
-      }
-    }
-  }
-}
-----
-
-[float]
-[[delete-data-in-kibana]]
-===== Delete data with {kib} Index Management
-
-{kib}'s {ref}/index-mgmt.html[Index Management] allows you to manage your cluster's
-indices, data streams, index templates, and much more.
-
-In {kib}, navigate to **Stack Management** > **Index Management** > **Data Streams**.
-Select the data streams you want to delete, and click **Delete data streams**.
-
-[float]
-[[update-data]]
-==== Update existing data
-
-You might want to update documents that are already indexed.
-For example, if your service name was set incorrectly.
-
-To do this, you can use the {ref}/docs-update-by-query.html[Update By Query API].
-To rename a service, send the following request:
-
-["source","sh"]
-------------------------------------------------------------
-POST /.ds-*-apm*/_update_by_query?expand_wildcards=all
-{
-  "query": {
-    "term": {
-      "service.name": {
-        "value": "current-service-name"
-      }
-    }
-  },
-  "script": {
-    "source": "ctx._source.service.name = 'new-service-name'",
-    "lang": "painless"
-  }
-}
------------------------------------------------------------- 
-// CONSOLE
-
-TIP: Remember to also change the service name in the {apm-agents-ref}/index.html[{apm-agent} configuration].
-
-include::{docdir}/exploring-es-data.asciidoc[leveloffset=+2]
diff --git a/docs/metadata-api.asciidoc b/docs/metadata-api.asciidoc deleted file mode 100644 index 45c69529ed8..00000000000 --- a/docs/metadata-api.asciidoc +++ /dev/null @@ -1,66 +0,0 @@
-[[metadata-api]]
-=== Metadata
-
-Every new connection to the APM Server starts with a `metadata` stanza.
-This provides general metadata concerning the other objects in the stream.
-
-Rather than send this metadata information from the agent multiple times,
-the APM Server hangs on to this information and applies it to other objects in the stream as necessary.
-
-TIP: Metadata is stored under `context` when viewing documents in {es}.
-
-* <>
-* <>
-
-[[kubernetes-data]]
-[float]
-==== Kubernetes data
-
-APM agents automatically read Kubernetes data and send it to the APM Server.
-In most instances, agents are able to read this data from inside the container.
-If this is not the case, or if you wish to override this data, you can set environment variables for the agents to read.
-These environment variables are set via the Kubernetes https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#use-pod-fields-as-values-for-environment-variables[Downward API].
-Here's how you would add the environment variables to your Kubernetes pod spec: - -[source,yaml] ----- - - name: KUBERNETES_NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - - name: KUBERNETES_POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: KUBERNETES_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - - name: KUBERNETES_POD_UID - valueFrom: - fieldRef: - fieldPath: metadata.uid ----- - -The table below maps these environment variables to the APM metadata event field: - -[options="header"] -|===== -|Environment variable |Metadata field name -| `KUBERNETES_NODE_NAME` |system.kubernetes.node.name -| `KUBERNETES_POD_NAME` |system.kubernetes.pod.name -| `KUBERNETES_NAMESPACE` |system.kubernetes.namespace -| `KUBERNETES_POD_UID` |system.kubernetes.pod.uid -|===== - -[[metadata-schema]] -[float] -==== Metadata Schema - -APM Server uses JSON Schema to validate requests. The specification for metadata is defined on -{github_repo_link}/docs/spec/v2/metadata.json[GitHub] and included below: - -[source,json] ----- -include::../spec/v2/metadata.json[] ----- \ No newline at end of file diff --git a/docs/metricset-api.asciidoc b/docs/metricset-api.asciidoc deleted file mode 100644 index 4a42e2e4b3a..00000000000 --- a/docs/metricset-api.asciidoc +++ /dev/null @@ -1,16 +0,0 @@ -[[metricset-api]] -=== Metrics - -Metrics contain application metric data captured by an {apm-agent}. - -[[metricset-schema]] -[float] -==== Metric Schema - -APM Server uses JSON Schema to validate requests. The specification for metrics is defined on -{github_repo_link}/docs/spec/v2/metricset.json[GitHub] and included below: - -[source,json] ----- -include::../spec/v2/metricset.json[] ----- diff --git a/docs/monitor-apm-server.asciidoc b/docs/monitor-apm-server.asciidoc deleted file mode 100644 index 34b75242ad5..00000000000 --- a/docs/monitor-apm-server.asciidoc +++ /dev/null @@ -1,31 +0,0 @@ -[[monitor-apm]] -== Monitor APM Server - -++++ -Monitor -++++ - -Use the {stack} {monitor-features} to gain insight into the real-time health and performance of APM Server. -Stack monitoring exposes key metrics, like intake response count, intake error rate, output event rate, -output failed event rate, and more. - -Select your deployment method to get started: - -* <> -* <> -* <> - -[float] -[[monitor-apm-cloud]] -=== {ecloud} - -{ecloud} manages the installation and configuration of a monitoring agent for you -- so -all you have to do is flip a switch and watch the data pour in. - -* **{ess}** user? See {cloud}/ec-enable-logging-and-monitoring.html[ESS: Enable logging and monitoring]. -* **{ece}** user? See {ece-ref}/ece-enable-logging-and-monitoring.html[ECE: Enable logging and monitoring]. - - -include::./monitor.asciidoc[] - -include::{docdir}/monitoring/monitoring-beats.asciidoc[leveloffset=+2] diff --git a/docs/monitor.asciidoc b/docs/monitor.asciidoc deleted file mode 100644 index fa74864e8b9..00000000000 --- a/docs/monitor.asciidoc +++ /dev/null @@ -1,193 +0,0 @@ -[[monitor-apm-self-install]] -=== Monitor a Fleet-managed APM Server - -++++ -Fleet-managed -++++ - -NOTE: This guide assumes you are already ingesting APM data into the {stack}. - -In 8.0 and later, you can use {metricbeat} to collect data about APM Server and ship it to a monitoring cluster. -To collect and ship monitoring data: - -. <> -. 
<> - -[float] -[[configure-ea-monitoring-data]] -==== Configure {agent} to send monitoring data - -**** -Before you can monitor APM, -you must have monitoring data for the {es} production cluster. -To learn how, see {ref}/configuring-metricbeat.html[Collect {es} monitoring data with {metricbeat}]. -Alternatively, open the **{stack-monitor-app}** app in {kib} and follow the in-product guide. -**** - -. Enable monitoring of {agent} by adding the following settings to your `elastic-agent.yml` configuration file: -+ --- -[source,yaml] ----- -agent.monitoring: - http: - enabled: true <1> - host: localhost <2> - port: 6791 <3> ----- -<1> Enable monitoring -<2> The host to expose logs/metrics on -<3> The port to expose logs/metrics on --- - -. Enroll {agent} -+ -After editing `elastic-agent.yml`, you must re-enroll {agent} for the changes to take effect. -+ --- -include::{ingest-docs-root}/docs/en/ingest-management/commands.asciidoc[tag=enroll] --- - -See the {fleet-guide}/elastic-agent-cmd-options.html[{agent} command reference] for more information on the enroll command. - -[float] -[[install-config-metricbeat]] -==== Install and configure {metricbeat} to collect monitoring data - -. Install {metricbeat} on the same server as {agent}. To learn how, see -{metricbeat-ref}/metricbeat-installation-configuration.html[Get started with {metricbeat}]. -If you already have {metricbeat} installed, skip this step. - -. Enable the `beat-xpack` module in {metricbeat}. -+ --- -For example, to enable the default configuration in the `modules.d` directory, -run the following command, using the correct command syntax for your OS: - -["source","sh",subs="attributes,callouts"] ----- -metricbeat modules enable beat-xpack ----- - -For more information, see -{metricbeat-ref}/configuration-metricbeat.html[Configure modules] and -{metricbeat-ref}/metricbeat-module-beat.html[beat module]. --- - -. Configure the `beat-xpack` module in {metricbeat}. -+ --- -When complete, your `modules.d/beat-xpack.yml` file should look similar to this: - -[source,yaml] ----- -- module: beat - xpack.enabled: true - period: 10s - hosts: ["http://localhost:6791"] - basepath: "/processes/apm-server-default" - username: remote_monitoring_user - password: your_password ----- - -.. Do not change the `module` name or `xpack.enabled` boolean; -these are required for stack monitoring. We recommend accepting the default `period` for now. - -.. Set the `hosts` to match the host:port configured in your `elastic-agent.yml` file. -In this example, that's `http://localhost:6791`. -+ -To monitor multiple APM Server instances running in multiple {agent}s, specify a list of hosts, for example: -+ -[source,yaml] ----- -hosts: ["http://localhost:5066","http://localhost:5067","http://localhost:5068"] ----- -+ -If you configured {agent} to use encrypted communications, you must access -it via HTTPS. For example, use a `hosts` setting like `https://localhost:5066`. - -.. APM Server metrics are exposed at `/processes/apm-server-default`. Add this location as the `basepath`. - -.. Set the `username` and `password` settings as required by your -environment. If Elastic {security-features} are enabled, you must provide a username -and password so that {metricbeat} can collect metrics successfully: - -... Create a user on the {es} cluster that has the -`remote_monitoring_collector` {ref}/built-in-roles.html[built-in role]. -Alternatively, if it's available in your environment, use the -`remote_monitoring_user` {ref}/built-in-users.html[built-in user]. - -... 
Add the `username` and `password` settings to the beat module configuration
-file.
---
-
-. Optional: Disable the system module in {metricbeat}.
-+
---
-By default, the {metricbeat-ref}/metricbeat-module-system.html[system module] is
-enabled. The information it collects, however, is not shown on the
-*{stack-monitor-app}* page in {kib}. Unless you want to use that information for
-other purposes, run the following command:
-
-["source","sh",subs="attributes,callouts"]
-----
-metricbeat modules disable system
-----
---
-
-. Identify where to send the monitoring data. +
-+
---
-TIP: In production environments, you should send your deployment logs and metrics to a dedicated
-monitoring deployment (referred to as the _monitoring cluster_).
-Monitoring indexes logs and metrics into {es} and these indexes consume storage, memory,
-and CPU cycles like any other index.
-By using a separate monitoring deployment, you avoid affecting your other production deployments and can
-view the logs and metrics even when a production deployment is unavailable.
-
-For example, specify the {es} output information in the {metricbeat}
-configuration file (`metricbeat.yml`):
-
-[source,yaml]
-----
-output.elasticsearch:
-  # Array of hosts to connect to.
-  hosts: ["http://es-mon-1:9200", "http://es-mon-2:9200"] <1>
-
-  # Optional protocol and basic auth credentials.
-  #protocol: "https"
-  #api_key: "id:api_key" <2>
-  #username: "elastic"
-  #password: "changeme"
-----
-<1> In this example, the data is stored on a monitoring cluster with nodes
-`es-mon-1` and `es-mon-2`.
-<2> Specify one of `api_key` or `username`/`password`.
-
-If you configured the monitoring cluster to use encrypted communications, you
-must access it via HTTPS. For example, use a `hosts` setting like
-`https://es-mon-1:9200`.
-
-IMPORTANT: The {es} {monitor-features} use ingest pipelines, therefore the
-cluster that stores the monitoring data must have at least one ingest node.
-
-If the {es} {security-features} are enabled on the monitoring cluster, you
-must provide a valid user ID and password so that {metricbeat} can send metrics
-successfully:
-
-.. Create a user on the monitoring cluster that has the
-`remote_monitoring_agent` {ref}/built-in-roles.html[built-in role].
-Alternatively, if it's available in your environment, use the
-`remote_monitoring_user` {ref}/built-in-users.html[built-in user].
-
-.. Add the `username` and `password` settings to the {es} output information in
-the {metricbeat} configuration file.
-
-For more information about these configuration options, see
-{metricbeat-ref}/elasticsearch-output.html[Configure the {es} output].
---
-
-. {metricbeat-ref}/metricbeat-starting.html[Start {metricbeat}] to begin
-collecting APM monitoring data.
-
-. {kibana-ref}/monitoring-data.html[View the monitoring data in {kib}].
diff --git a/docs/monitoring/monitoring-beats.asciidoc b/docs/monitoring/monitoring-beats.asciidoc deleted file mode 100644 index 690e3bd8e5b..00000000000 --- a/docs/monitoring/monitoring-beats.asciidoc +++ /dev/null @@ -1,28 +0,0 @@
-[[monitoring]]
-= Monitor the APM Server binary
-
-++++
-APM Server binary
-++++
-
-There are several methods to monitor the APM Server binary.
-Make sure monitoring is enabled on your {es} cluster,
-then configure one of these methods to collect {beatname_uc} metrics:
-
-* <> - Internal
-collectors send monitoring data directly to your monitoring cluster.
-ifndef::serverless[] -* <> - -{metricbeat} collects monitoring data from your {beatname_uc} instance -and sends it directly to your monitoring cluster. -* <> - Local collection sends -select monitoring data directly to the standard indices of your monitoring -cluster. -endif::[] - -include::monitoring-internal-collection.asciidoc[] -include::monitoring-local-collection.asciidoc[] - -ifndef::serverless[] -include::monitoring-metricbeat.asciidoc[] -endif::[] diff --git a/docs/monitoring/monitoring-internal-collection.asciidoc b/docs/monitoring/monitoring-internal-collection.asciidoc deleted file mode 100644 index 430fe49c31e..00000000000 --- a/docs/monitoring/monitoring-internal-collection.asciidoc +++ /dev/null @@ -1,125 +0,0 @@ -////////////////////////////////////////////////////////////////////////// -//// This content is shared by all Elastic Beats. Make sure you keep the -//// descriptions here generic enough to work for all Beats that include -//// this file. When using cross references, make sure that the cross -//// references resolve correctly for any files that include this one. -//// Use the appropriate variables defined in the index.asciidoc file to -//// resolve Beat names: beatname_uc and beatname_lc. -//// Use the following include to pull this content into a doc file: -//// include::../../libbeat/docs/monitoring/monitoring-internal-collection.asciidoc[] -////////////////////////////////////////////////////////////////////////// - -[role="xpack"] -[[monitoring-internal-collection]] -== Use internal collection to send monitoring data -++++ -Use internal collection -++++ - -Use internal collectors to send {beats} monitoring data directly to your -monitoring cluster. -ifndef::serverless[] -Or as an alternative to internal collection, use -<>. The benefit of using internal collection -instead of {metricbeat} is that you have fewer pieces of software to install -and maintain. -endif::[] - -//Commenting out this link temporarily until the general monitoring docs can be -//updated. -//To learn about monitoring in general, see -//{ref}/monitor-elasticsearch-cluster.html[Monitor a cluster]. - -. Create an API key or user that has appropriate authority to send system-level monitoring -data to {es}. For example, you can use the built-in +{beat_monitoring_user}+ user or -assign the built-in +{beat_monitoring_user}+ role to another user. For more -information on the required privileges, see <>. -For more information on how to use API keys, see <>. - -. Add the `monitoring` settings in the {beatname_uc} configuration file. If you -configured the {es} output and want to send {beatname_uc} monitoring events to -the same {es} cluster, specify the following minimal configuration: -+ -["source","yml",subs="attributes"] --------------------- -monitoring: - enabled: true - elasticsearch: - api_key: id:api_key <1> - username: {beat_monitoring_user} - password: somepassword --------------------- -<1> Specify one of `api_key` or `username`/`password`. -+ -If you want to send monitoring events to an https://cloud.elastic.co/[{ecloud}] -monitoring cluster, you can use two simpler settings. When defined, these settings -overwrite settings from other parts in the configuration. 
For example: -+ -[source,yaml] --------------------- -monitoring: - enabled: true - cloud.id: 'staging:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRjZWM2ZjI2MWE3NGJmMjRjZTMzYmI4ODExYjg0Mjk0ZiRjNmMyY2E2ZDA0MjI0OWFmMGNjN2Q3YTllOTYyNTc0Mw==' - cloud.auth: 'elastic:{pwd}' --------------------- -+ -If you -ifndef::no-output-logstash[] -configured a different output, such as {ls} or you -endif::[] -want to send {beatname_uc} monitoring events to a separate {es} cluster -(referred to as the _monitoring cluster_), you must specify additional -configuration options. For example: -+ -["source","yml",subs="attributes"] --------------------- -monitoring: - enabled: true - cluster_uuid: PRODUCTION_ES_CLUSTER_UUID <1> - elasticsearch: - hosts: ["https://example.com:9200", "https://example2.com:9200"] <2> - api_key: id:api_key <3> - username: {beat_monitoring_user} - password: somepassword --------------------- -<1> This setting identifies the {es} cluster under which the -monitoring data for this {beatname_uc} instance will appear in the -{stack-monitor-app} UI. To get a cluster's `cluster_uuid`, -call the `GET /` API against that cluster. -<2> This setting identifies the hosts and port numbers of {es} nodes -that are part of the monitoring cluster. -<3> Specify one of `api_key` or `username`/`password`. -+ -If you want to use PKI authentication to send monitoring events to -{es}, you must specify a different set of configuration options. For -example: -+ -[source,yaml] --------------------- -monitoring: - enabled: true - cluster_uuid: PRODUCTION_ES_CLUSTER_UUID - elasticsearch: - hosts: ["https://example.com:9200", "https://example2.com:9200"] - username: "" - ssl.certificate_authorities: ["/etc/pki/root/ca.pem"] - ssl.certificate: "/etc/pki/client/cert.pem" - ssl.key: "/etc/pki/client/cert.key" --------------------- -+ -You must specify the `username` as `""` explicitly so that -the username from the client certificate (`CN`) is used. See -<> for more information about SSL settings. - -ifndef::serverless[] -. Start {beatname_uc}. -endif::[] - -ifdef::serverless[] -. Deploy {beatname_uc}. -endif::[] - -. {kibana-ref}/monitoring-data.html[View the monitoring data in {kib}]. - - -include::shared-monitor-config.asciidoc[] diff --git a/docs/monitoring/monitoring-local-collection.asciidoc b/docs/monitoring/monitoring-local-collection.asciidoc deleted file mode 100644 index e4b41b2e9a6..00000000000 --- a/docs/monitoring/monitoring-local-collection.asciidoc +++ /dev/null @@ -1,34 +0,0 @@ -[[monitoring-local-collection]] -= Use the select metrics emitted directly to your monitoring cluster -++++ -Use local collection -++++ - -In 8.11 and later, we emit a selected set of metrics directly to the monitoring -cluster. -The benefit of using local collection instead of internal collection is that -the metrics are sent directly to your main monitoring index, making it easier -to view shared data. - -[[select-metrics]] -== The select metrics - -We only ship a select list of metrics, to avoid overwhelming your monitoring cluster. -If you need the entire set of metrics and traces we can expose, you should use -<> instead of local -collection. 
-
-Here is the full list of metrics we currently expose:
-
-* http.server.request.count
-* http.server.request.duration
-* http.server.response.valid.count
-* http.server.response.errors.count
-* http.server.errors.timeout
-* http.server.errors.ratelimit
-* grpc.server.request.count
-* grpc.server.request.duration
-* grpc.server.response.valid.count
-* grpc.server.response.errors.count
-* grpc.server.errors.timeout
-* grpc.server.errors.ratelimit
diff --git a/docs/monitoring/monitoring-metricbeat.asciidoc b/docs/monitoring/monitoring-metricbeat.asciidoc deleted file mode 100644 index 1f6b15a9403..00000000000 --- a/docs/monitoring/monitoring-metricbeat.asciidoc +++ /dev/null @@ -1,288 +0,0 @@
-[role="xpack"]
-[[monitoring-metricbeat-collection]]
-== Use {metricbeat} to send monitoring data
-[subs="attributes"]
-++++
-Use {metricbeat} collection
-++++
-
-In 7.3 and later, you can use {metricbeat} to collect data about {beatname_uc}
-and ship it to the monitoring cluster. The benefit of using {metricbeat} instead
-of internal collection is that the monitoring agent remains active even if the
-{beatname_uc} instance dies.
-
-ifeval::["{beatname_lc}"=="metricbeat"]
-Because you'll be using {metricbeat} to _monitor_ {beatname_uc}, you'll need to
-run two instances of {beatname_uc}: a main instance that collects metrics from
-the system and services running on the server, and a second instance that
-collects metrics from {beatname_uc} only. Using a separate instance as a
-monitoring agent allows you to send monitoring data to a dedicated monitoring
-cluster. If the main agent goes down, the monitoring agent remains active.
-
-If you're running {beatname_uc} as a service, this approach requires extra work
-because you need to run two instances of the same installed service
-concurrently. If you don't want to run two instances concurrently, use
-<> instead of using
-{metricbeat}.
-endif::[]
-
-//Commenting out this link temporarily until the general monitoring docs can be
-//updated.
-//To learn about monitoring in general, see
-//{ref}/monitor-elasticsearch-cluster.html[Monitor a cluster].
-
-//NOTE: The tagged regions are re-used in the Stack Overview.
-
-To collect and ship monitoring data:
-
-. <>
-
-. <>
-
-[float]
-[[configure-shipper]]
-=== Configure the shipper you want to monitor
-
-. Enable the HTTP endpoint to allow external collection of monitoring data:
-+
---
-// tag::enable-http-endpoint[]
-Add the following setting in the {beatname_uc} configuration file
-(+{beatname_lc}.yml+):
-
-[source,yaml]
-----------------------------------
-http.enabled: true
-----------------------------------
-
-By default, metrics are exposed on port 5066. If you need to monitor multiple
-{beats} shippers running on the same server, set `http.port` to expose metrics
-for each shipper on a different port number:
-
-[source,yaml]
-----------------------------------
-http.port: 5067
-----------------------------------
-// end::enable-http-endpoint[]
---
-
-. Disable the default collection of {beatname_uc} monitoring metrics. +
-+
---
-// tag::disable-beat-collection[]
-Add the following setting in the {beatname_uc} configuration file
-(+{beatname_lc}.yml+):
-
-[source,yaml]
-----------------------------------
-monitoring.enabled: false
-----------------------------------
-// end::disable-beat-collection[]
-
-For more information, see
-<>.
---
-
-. Configure host (optional).
+
-+
---
-// tag::set-http-host[]
-If you intend to get metrics using {metricbeat} installed on another server, you need to bind {beatname_uc} to the host's IP:
-
-[source,yaml]
-----------------------------------
-http.host: xxx.xxx.xxx.xxx
-----------------------------------
-// end::set-http-host[]
---
-
-. Configure cluster UUID (optional). +
-+
---
-// tag::set-cluster-uuid[]
-To see the {beats} monitoring section in {kib} if you have a cluster, you need to associate {beatname_uc} with the cluster UUID:
-[source,yaml]
-----------------------------------
-monitoring.cluster_uuid: "cluster-uuid"
-----------------------------------
-// end::set-cluster-uuid[]
---
-
-ifndef::serverless[]
-. Start {beatname_uc}.
-endif::[]
-
-[float]
-[[configure-metricbeat]]
-=== Install and configure {metricbeat} to collect monitoring data
-
-ifeval::["{beatname_lc}"!="metricbeat"]
-. Install {metricbeat} on the same server as {beatname_uc}. To learn how, see
-{metricbeat-ref}/metricbeat-installation-configuration.html[Get started with {metricbeat}].
-If you already have {metricbeat} installed on the server, skip this step.
-endif::[]
-ifeval::["{beatname_lc}"=="metricbeat"]
-. The next step depends on how you want to run {metricbeat}:
-* If you're running as a service and want to run a separate monitoring instance,
-take the steps required for your environment to run two instances of
-{metricbeat} as a service. The steps for doing this vary by platform and are
-beyond the scope of this documentation.
-* If you're running the binary directly in the foreground and want to run a
-separate monitoring instance, install {metricbeat} to a different path. If
-necessary, set `path.config`, `path.data`, and `path.log` to point to the
-correct directories. See <> for the default locations.
-endif::[]
-
-. Enable the `beat-xpack` module in {metricbeat}. +
-+
---
-// tag::enable-beat-module[]
-For example, to enable the default configuration in the `modules.d` directory,
-run the following command, using the correct command syntax for your OS:
-
-["source","sh",subs="attributes,callouts"]
-----------------------------------------------------------------------
-metricbeat modules enable beat-xpack
-----------------------------------------------------------------------
-
-For more information, see
-{metricbeat-ref}/configuration-metricbeat.html[Configure modules] and
-{metricbeat-ref}/metricbeat-module-beat.html[beat module].
-// end::enable-beat-module[]
---
-
-. Configure the `beat-xpack` module in {metricbeat}. +
-+
---
-// tag::configure-beat-module[]
-The `modules.d/beat-xpack.yml` file contains the following settings:
-
-[source,yaml]
-----------------------------------
-- module: beat
-  metricsets:
-    - stats
-    - state
-  period: 10s
-  hosts: ["http://localhost:5066"]
-  #username: "user"
-  #password: "secret"
-  xpack.enabled: true
-----------------------------------
-
-Set the `hosts`, `username`, and `password` settings as required by your
-environment. For other module settings, it's recommended that you accept the
-defaults.
-
-By default, the module collects {beatname_uc} monitoring data from
-`localhost:5066`. If you exposed the metrics on a different host or port when
-you enabled the HTTP endpoint, update the `hosts` setting.
-
-To monitor multiple
-ifndef::apm-server[]
-{beats} agents,
-endif::[]
-ifdef::apm-server[]
-APM Server instances,
-endif::[]
-specify a list of hosts, for example:
-
-[source,yaml]
-----------------------------------
-hosts: ["http://localhost:5066","http://localhost:5067","http://localhost:5068"]
-----------------------------------
-
-If you configured {beatname_uc} to use encrypted communications, you must access
-it via HTTPS. For example, use a `hosts` setting like `https://localhost:5066`.
-// end::configure-beat-module[]
-
-// tag::remote-monitoring-user[]
-If the Elastic {security-features} are enabled, you must also provide a user
-ID and password so that {metricbeat} can collect metrics successfully:
-
-.. Create a user on the {es} cluster that has the
-`remote_monitoring_collector` {ref}/built-in-roles.html[built-in role].
-Alternatively, if it's available in your environment, use the
-`remote_monitoring_user` {ref}/built-in-users.html[built-in user].
-
-.. Add the `username` and `password` settings to the beat module configuration
-file.
-// end::remote-monitoring-user[]
---
-
-. Optional: Disable the system module in {metricbeat}.
-+
---
-// tag::disable-system-module[]
-By default, the {metricbeat-ref}/metricbeat-module-system.html[system module] is
-enabled. The information it collects, however, is not shown on the
-*{stack-monitor-app}* page in {kib}. Unless you want to use that information for
-other purposes, run the following command:
-
-["source","sh",subs="attributes,callouts"]
-----------------------------------------------------------------------
-metricbeat modules disable system
-----------------------------------------------------------------------
-// end::disable-system-module[]
---
-
-. Identify where to send the monitoring data. +
-+
---
-TIP: In production environments, we strongly recommend using a separate cluster
-(referred to as the _monitoring cluster_) to store the data. Using a separate
-monitoring cluster prevents production cluster outages from impacting your
-ability to access your monitoring data. It also prevents monitoring activities
-from impacting the performance of your production cluster.
-
-For example, specify the {es} output information in the {metricbeat}
-configuration file (`metricbeat.yml`):
-
-[source,yaml]
-----------------------------------
-output.elasticsearch:
-  # Array of hosts to connect to.
-  hosts: ["http://es-mon-1:9200", "http://es-mon-2:9200"] <1>
-
-  # Optional protocol and basic auth credentials.
-  #protocol: "https"
-  #api_key: "id:api_key" <2>
-  #username: "elastic"
-  #password: "changeme"
-----------------------------------
-<1> In this example, the data is stored on a monitoring cluster with nodes
-`es-mon-1` and `es-mon-2`.
-<2> Specify one of `api_key` or `username`/`password`.
-
-If you configured the monitoring cluster to use encrypted communications, you
-must access it via HTTPS. For example, use a `hosts` setting like
-`https://es-mon-1:9200`.
-
-IMPORTANT: The {es} {monitor-features} use ingest pipelines, therefore the
-cluster that stores the monitoring data must have at least one ingest node.
-
-If the {es} {security-features} are enabled on the monitoring cluster, you
-must provide a valid user ID and password so that {metricbeat} can send metrics
-successfully:
-
-.. Create a user on the monitoring cluster that has the
-`remote_monitoring_agent` {ref}/built-in-roles.html[built-in role].
-Alternatively, if it's available in your environment, use the -`remote_monitoring_user` {ref}/built-in-users.html[built-in user]. -+ -TIP: If you're using {ilm}, the remote monitoring user -requires additional privileges to create and read indices. For more -information, see <>. - -.. Add the `username` and `password` settings to the {es} output information in -the {metricbeat} configuration file. - -For more information about these configuration options, see -{metricbeat-ref}/elasticsearch-output.html[Configure the {es} output]. --- - -. {metricbeat-ref}/metricbeat-starting.html[Start {metricbeat}] to begin -collecting monitoring data. - -. {kibana-ref}/monitoring-data.html[View the monitoring data in {kib}]. diff --git a/docs/monitoring/shared-monitor-config.asciidoc b/docs/monitoring/shared-monitor-config.asciidoc deleted file mode 100644 index 71825450dc3..00000000000 --- a/docs/monitoring/shared-monitor-config.asciidoc +++ /dev/null @@ -1,147 +0,0 @@ -////////////////////////////////////////////////////////////////////////// -//// This content is shared by all Elastic Beats. Make sure you keep the -//// descriptions here generic enough to work for all Beats that include -//// this file. When using cross references, make sure that the cross -//// references resolve correctly for any files that include this one. -//// Use the appropriate variables defined in the index.asciidoc file to -//// resolve Beat names: beatname_uc and beatname_lc. -//// Use the following include to pull this content into a doc file: -//// include::../../libbeat/docs/monitoring/shared-monitor-config.asciidoc[] -//// Make sure this content appears below a level 2 heading. -////////////////////////////////////////////////////////////////////////// - -[float] -[[configuration-monitor]] -=== Settings for internal collection - -Use the following settings to configure internal collection when you are not -using {metricbeat} to collect monitoring data. - -You specify these settings in the X-Pack monitoring section of the -+{beatname_lc}.yml+ config file: - -[float] -==== `monitoring.enabled` - -The `monitoring.enabled` config is a boolean setting to enable or disable {monitoring}. -If set to `true`, monitoring is enabled. - -The default value is `false`. - -[float] -==== `monitoring.elasticsearch` - -The {es} instances that you want to ship your {beatname_uc} metrics to. This -configuration option contains the following fields: - -[float] -===== `api_key` - -The detail of the API key to be used to send monitoring information to {es}. -See <> for more information. - -[float] -===== `bulk_max_size` - -The maximum number of metrics to bulk in a single {es} bulk API index request. -The default is `50`. For more information, see <>. - -[float] -===== `backoff.init` - -The number of seconds to wait before trying to reconnect to {es} after -a network error. After waiting `backoff.init` seconds, {beatname_uc} tries to -reconnect. If the attempt fails, the backoff timer is increased exponentially up -to `backoff.max`. After a successful connection, the backoff timer is reset. The -default is `1s`. - -[float] -===== `backoff.max` - -The maximum number of seconds to wait before attempting to connect to -{es} after a network error. The default is `60s`. - -[float] -===== `compression_level` - -The gzip compression level. Setting this value to `0` disables compression. The -compression level must be in the range of `1` (best speed) to `9` (best -compression). The default value is `0`. 
Increasing the compression level
-reduces the network usage but increases the CPU usage.
-
-[float]
-===== `headers`
-
-Custom HTTP headers to add to each request. For more information, see
-<>.
-
-[float]
-===== `hosts`
-
-The list of {es} nodes to connect to. Monitoring metrics are distributed to
-these nodes in round-robin order. For more information, see
-<>.
-
-[float]
-===== `max_retries`
-
-The number of times to retry sending the monitoring metrics after a failure.
-After the specified number of retries, the metrics are typically dropped. The
-default value is `3`. For more information, see <>.
-
-[float]
-===== `parameters`
-
-Dictionary of HTTP parameters to pass within the URL with index operations.
-
-[float]
-===== `password`
-
-The password that {beatname_uc} uses to authenticate with the {es} instances for
-shipping monitoring data.
-
-[float]
-===== `metrics.period`
-
-The time interval (in seconds) when metrics are sent to the {es} cluster. A new
-snapshot of {beatname_uc} metrics is generated and scheduled for publishing each
-period. The default value is `10s`.
-
-[float]
-===== `state.period`
-
-The time interval (in seconds) when state information is sent to the {es} cluster. A new
-snapshot of {beatname_uc} state is generated and scheduled for publishing each
-period. The default value is `60s`.
-
-[float]
-===== `protocol`
-
-The name of the protocol to use when connecting to the {es} cluster. The options
-are: `http` or `https`. The default is `http`. If you specify a URL for `hosts`,
-however, the value of protocol is overridden by the scheme you specify in the URL.
-
-[float]
-===== `proxy_url`
-
-The URL of the proxy to use when connecting to the {es} cluster. For more
-information, see <>.
-
-[float]
-===== `timeout`
-
-The HTTP request timeout in seconds for the {es} request. The default is `90`.
-
-[float]
-===== `ssl`
-
-Configuration options for Transport Layer Security (TLS) or Secure Sockets Layer
-(SSL) parameters like the certificate authority (CA) to use for HTTPS-based
-connections. If the `ssl` section is missing, the host CAs are used for
-HTTPS connections to {es}. For more information, see <>.
-
-[float]
-===== `username`
-
-The user ID that {beatname_uc} uses to authenticate with the {es} instances for
-shipping monitoring data.
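Putting several of these settings together, a tuned internal-collection section of +{beatname_lc}.yml+ might look like the following sketch. All values are illustrative, not recommendations:

[source,yaml]
--------------------
monitoring:
  enabled: true
  elasticsearch:
    hosts: ["https://es-mon-1:9200"]  # monitoring cluster nodes (round-robin)
    username: remote_monitoring_user
    password: somepassword
    compression_level: 5    # trade CPU for lower network usage
    bulk_max_size: 50       # metrics per bulk index request
    backoff.init: 1s        # first reconnect delay after a network error
    backoff.max: 60s        # cap for the exponential backoff
    metrics.period: 10s     # how often metrics snapshots are shipped
    state.period: 60s       # how often state snapshots are shipped
--------------------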
diff --git a/docs/open-telemetry.asciidoc b/docs/open-telemetry.asciidoc deleted file mode 100644 index e8a613d9c10..00000000000 --- a/docs/open-telemetry.asciidoc +++ /dev/null @@ -1,86 +0,0 @@ -[[open-telemetry]] -== OpenTelemetry integration - -:ot-what: https://opentelemetry.io/docs/concepts/what-is-opentelemetry/ -:ot-spec: https://github.com/open-telemetry/opentelemetry-specification/blob/master/README.md -:ot-grpc: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/otlp.md#otlpgrpc -:ot-http: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/otlp.md#otlphttp -:ot-contrib: https://github.com/open-telemetry/opentelemetry-collector-contrib -:ot-resource: {ot-contrib}/tree/main/processor/resourceprocessor -:ot-attr: {ot-contrib}/blob/main/processor/attributesprocessor -:ot-repo: https://github.com/open-telemetry/opentelemetry-collector -:ot-pipelines: https://opentelemetry.io/docs/collector/configuration/#service -:ot-extension: {ot-repo}/blob/master/extension/README.md -:ot-scaling: {ot-repo}/blob/master/docs/performance.md - -:ot-collector: https://opentelemetry.io/docs/collector/getting-started/ -:ot-dockerhub: https://hub.docker.com/r/otel/opentelemetry-collector-contrib - -{ot-what}[OpenTelemetry] is a set of APIs, SDKs, tooling, and integrations that enable the capture and management of -telemetry data from your services and applications. For more information about the -OpenTelemetry project, see the {ot-spec}[spec]. - -[float] -== OpenTelemetry and the {stack} - -[subs=attributes+] -include::./diagrams/apm-otel-architecture.asciidoc[Architecture of Elastic APM with OpenTelemetry] - -Elastic integrates with OpenTelemetry, allowing you to reuse your existing instrumentation -to easily send observability data to the {stack}. -There are four ways to integrate OpenTelemetry with the {stack}: - -**OpenTelemetry API/SDK with Elastic APM agents** - -To unlock the full power of the {stack}, use the OpenTelemetry API/SDKs with Elastic APM agents, -currently supported by the Java, Python, .NET, and Node.js agents. -These Elastic APM agents translate OpenTelemetry API calls to Elastic APM API calls. -This allows you to reuse your existing instrumentation to create Elastic APM transactions and spans-- -avoiding vendor lock-in and having to redo manual instrumentation. - -<>. - -**OpenTelemetry agent** - -The {stack} natively supports the OpenTelemetry protocol (OTLP). -This means trace data and metrics collected from your applications and infrastructure by an -OpenTelemetry agent can be sent directly to the {stack}. - -<>. - -**OpenTelemetry collector** - -The {stack} natively supports the OpenTelemetry protocol (OTLP). -This means trace data and metrics collected from your applications and infrastructure by an -OpenTelemetry collector can be sent directly to the {stack}. - -<>. - -**Lambda collector exporter** - -AWS Lambda functions can be instrumented with OpenTelemetry and monitored with Elastic {observability}. - -<>. - -include::./otel-with-elastic.asciidoc[] - -include::./otel-direct.asciidoc[] - -include::./otel-other.asciidoc[] - -include::./otel-metrics.asciidoc[] - -include::./otel-limitations.asciidoc[] - -include::./otel-attrs.asciidoc[] - -// **** -// The text below is used in the Quick start guide -// tag::otel-get-started[] -Elastic integrates with OpenTelemetry, allowing you to reuse your existing instrumentation -to easily send observability data to the {stack}. 
- -For more information on how to combine Elastic and OpenTelemetry, -see {apm-guide-ref}/open-telemetry.html[OpenTelemetry integration]. -// end::otel-get-started[] -// **** \ No newline at end of file diff --git a/docs/otel-attrs.asciidoc b/docs/otel-attrs.asciidoc deleted file mode 100644 index 60e4803d6d1..00000000000 --- a/docs/otel-attrs.asciidoc +++ /dev/null @@ -1,42 +0,0 @@ -[[open-telemetry-resource-attributes]] -=== Resource attributes - -A resource attribute is a key/value pair containing information about the entity producing telemetry. -Resource attributes are mapped to Elastic Common Schema (ECS) fields like `service.*`, `cloud.*`, `process.*`, etc. -These fields describe the service and the environment that the service runs in. - -The examples below set the Elastic (ECS) `service.environment` field for the resource, i.e. service, that is producing trace events. -Note that Elastic maps the OpenTelemetry `deployment.environment` field to -the ECS `service.environment` field on ingestion. - -**OpenTelemetry agent** - -Use the `OTEL_RESOURCE_ATTRIBUTES` environment variable to pass resource attributes at process invocation. - -[source,bash] ----- -export OTEL_RESOURCE_ATTRIBUTES=deployment.environment=production ----- - -**OpenTelemetry collector** - -Use the {ot-resource}[resource processor] to set or apply changes to resource attributes. - -[source,yaml] ----- -... -processors: - resource: - attributes: - - key: deployment.environment - action: insert - value: production -... ----- - -[TIP] --- -Need to add event attributes instead? -Use attributes--not to be confused with resource attributes--to add data to span, log, or metric events. -Attributes can be added as a part of the OpenTelemetry instrumentation process or with the {ot-attr}[attributes processor]. --- \ No newline at end of file diff --git a/docs/otel-direct.asciidoc b/docs/otel-direct.asciidoc deleted file mode 100644 index bfc77cf1a1e..00000000000 --- a/docs/otel-direct.asciidoc +++ /dev/null @@ -1,148 +0,0 @@ -[[open-telemetry-direct]] -=== OpenTelemetry native support - -++++ -OpenTelemetry native support -++++ - -The {stack} natively supports the OpenTelemetry protocol (OTLP). -This means trace data and metrics collected from your applications and infrastructure can -be sent directly to the {stack}. - -* Send data to the {stack} from an <> -* Send data to the {stack} from an <> - -[float] -[[connect-open-telemetry-collector]] -==== Send data from an OpenTelemetry collector - -Connect your OpenTelemetry collector instances to Elastic {observability} using the OTLP exporter: - -[source,yaml] ----- -receivers: <1> - # ... - otlp: - -processors: <2> - # ... 
-  memory_limiter:
-    check_interval: 1s
-    limit_mib: 2000
-  batch:
-
-exporters:
-  logging:
-    loglevel: warn <3>
-  otlp/elastic: <4>
-    # Elastic APM server https endpoint without the "https://" prefix
-    endpoint: "${ELASTIC_APM_SERVER_ENDPOINT}" <5> <7>
-    headers:
-      # Elastic APM Server secret token
-      Authorization: "Bearer ${ELASTIC_APM_SECRET_TOKEN}" <6> <7>
-
-service:
-  pipelines:
-    traces:
-      receivers: [otlp]
-      exporters: [logging, otlp/elastic]
-    metrics:
-      receivers: [otlp]
-      exporters: [logging, otlp/elastic]
-    logs: <8>
-      receivers: [otlp]
-      exporters: [logging, otlp/elastic]
-----
-<1> The receivers, like the
-https://github.com/open-telemetry/opentelemetry-collector/tree/main/receiver/otlpreceiver[OTLP receiver], that forward data emitted by APM agents, or the https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/hostmetricsreceiver[host metrics receiver].
-<2> We recommend using the https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/batchprocessor/README.md[Batch processor] and the https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/memorylimiterprocessor/README.md[memory limiter processor]. For more information, see https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/README.md#recommended-processors[recommended processors].
-<3> The https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/loggingexporter[logging exporter] is helpful for troubleshooting and supports various logging levels, like `debug`, `info`, `warn`, and `error`.
-<4> Elastic {observability} endpoint configuration.
-APM Server supports a ProtoBuf payload via both the OTLP protocol over gRPC transport {ot-grpc}[(OTLP/gRPC)]
-and the OTLP protocol over HTTP transport {ot-http}[(OTLP/HTTP)].
-To learn more about these exporters, see the OpenTelemetry Collector documentation:
-https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlphttpexporter[OTLP/HTTP Exporter] or
-https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlpexporter[OTLP/gRPC exporter].
-<5> Hostname and port of the APM Server endpoint. For example, `elastic-apm-server:8200`.
-<6> Credential for Elastic APM <> (`Authorization: "Bearer a_secret_token"`) or <> (`Authorization: "ApiKey an_api_key"`).
-<7> Environment-specific configuration parameters can be conveniently passed in as https://opentelemetry.io/docs/collector/configuration/#configuration-environment-variables[environment variables] (e.g. `ELASTIC_APM_SERVER_ENDPOINT` and `ELASTIC_APM_SECRET_TOKEN`).
-<8> preview:[] To send OpenTelemetry logs to {stack} version 8.0+, declare a `logs` pipeline.
-
-You're now ready to export traces and metrics from your services and applications.
-
-TIP: When using the OpenTelemetry collector, you should always prefer sending data to an Elastic APM Server via the https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlphttpexporter[`OTLP` exporter].
-Other methods, like using the https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/elasticsearchexporter[`elasticsearch` exporter] to send data directly to {es}, will get data into the {stack},
-but will bypass all of the validation and data processing that the APM Server performs.
-In addition, your data will not be viewable in the {kib} {observability} apps if you use the `elasticsearch` exporter.
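-
-To try the configuration above end to end, one option is to run the collector's
-contrib distribution in Docker. This is an illustrative sketch only: the config
-file name is hypothetical, the endpoint and token are placeholders, and the
-`/etc/otelcol-contrib/config.yaml` path assumes a recent contrib image:
-
-[source,bash]
-----
-# Save the configuration above as otel-collector.yaml (hypothetical name),
-# then mount it over the image's default config file.
-docker run --rm \
-  -e ELASTIC_APM_SERVER_ENDPOINT="elastic-apm-server:8200" \
-  -e ELASTIC_APM_SECRET_TOKEN="a_secret_token" \
-  -v "$(pwd)/otel-collector.yaml:/etc/otelcol-contrib/config.yaml" \
-  otel/opentelemetry-collector-contrib
----- 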
-
-[float]
-[[instrument-apps-otel]]
-==== Send data from an OpenTelemetry agent
-
-To export traces and metrics to APM Server, instrument your services and applications
-with the OpenTelemetry API, SDK, or both. For example, if you are a Java developer, you need to instrument your Java app with the
-https://github.com/open-telemetry/opentelemetry-java-instrumentation[OpenTelemetry agent for Java].
-See the https://opentelemetry.io/docs/instrumentation/[OpenTelemetry Instrumentation guides] to download the
-OpenTelemetry Agent or SDK for your language.
-
-Define environment variables to configure the OpenTelemetry agent and enable communication with Elastic APM.
-For example, if you are instrumenting a Java app, define the following environment variables:
-
-[source,bash]
-----
-export OTEL_RESOURCE_ATTRIBUTES=service.name=checkoutService,service.version=1.1,deployment.environment=production
-export OTEL_EXPORTER_OTLP_ENDPOINT=https://apm_server_url:8200
-export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer an_apm_secret_token"
-export OTEL_METRICS_EXPORTER="otlp"
-export OTEL_LOGS_EXPORTER="otlp" <1>
-java -javaagent:/path/to/opentelemetry-javaagent-all.jar \
-     -classpath lib/*:classes/ \
-     com.mycompany.checkout.CheckoutServiceServer
-----
-<1> preview:[] The OpenTelemetry logs intake via APM Server is currently in technical preview.
-
-|===
-
-| `OTEL_RESOURCE_ATTRIBUTES` | Fields that describe the service and the environment that the service runs in. See
-<> for more information.
-
-| `OTEL_EXPORTER_OTLP_ENDPOINT` | APM Server URL. The host and port that APM Server listens for events on.
-
-| `OTEL_EXPORTER_OTLP_HEADERS` | Authorization header that includes the Elastic APM secret token or API key: `"Authorization=Bearer an_apm_secret_token"` or `"Authorization=ApiKey an_api_key"`.
-
-For information on how to format an API key, see
-{apm-guide-ref}/api-key.html[API keys].
-
-Please note the required space between `Bearer` and `an_apm_secret_token`, and `ApiKey` and `an_api_key`.
-
-| `OTEL_METRICS_EXPORTER` | Metrics exporter to use. See https://opentelemetry.io/docs/specs/otel/configuration/sdk-environment-variables/#exporter-selection[exporter selection] for more information.
-
-| `OTEL_LOGS_EXPORTER` | Logs exporter to use. See https://opentelemetry.io/docs/specs/otel/configuration/sdk-environment-variables/#exporter-selection[exporter selection] for more information.
-
-|===
-
-You are now ready to collect traces and <> before <>
-and <> in {kib}.
-
-[float]
-[[open-telemetry-proxy-apm]]
-==== Proxy requests to APM Server
-
-APM Server supports both the {ot-grpc}[(OTLP/gRPC)] and {ot-http}[(OTLP/HTTP)] protocols on the same port as Elastic APM agent requests. For ease of setup, we recommend using OTLP/HTTP when proxying or load balancing requests to the APM Server.
-
-If you use the OTLP/gRPC protocol, requests to the APM Server must use either HTTP/2 over TLS or HTTP/2 Cleartext (H2C). No matter which protocol is used, OTLP/gRPC requests will have the header: `"Content-Type: application/grpc"`.
-
-When using a layer 7 (L7) proxy like AWS ALB, requests must be proxied in a way that ensures requests to the APM Server follow the rules outlined above. For example, with ALB you can create rules to select an alternative backend protocol based on the headers of requests coming into ALB. In this example, you'd select the gRPC protocol when the `"Content-Type: application/grpc"` header exists on a request.
-
-For more information on how to configure an AWS ALB to support gRPC, see this AWS blog post:
-https://aws.amazon.com/blogs/aws/new-application-load-balancer-support-for-end-to-end-http-2-and-grpc/[Application Load Balancer Support for End-to-End HTTP/2 and gRPC].
-
-For more information on how APM Server services gRPC requests, see
-https://github.com/elastic/apm-server/blob/main/dev_docs/otel.md#muxing-grpc-and-http11[Muxing gRPC and HTTP/1.1].
-
-[float]
-[[open-telemetry-direct-next]]
-==== Next steps
-
-* <>
-* Learn about the <>
diff --git a/docs/otel-limitations.asciidoc b/docs/otel-limitations.asciidoc
deleted file mode 100644
index 28d2ebeb959..00000000000
--- a/docs/otel-limitations.asciidoc
+++ /dev/null
@@ -1,58 +0,0 @@
-[[open-telemetry-known-limitations]]
-=== Limitations
-
-[float]
-[[open-telemetry-traces-limitations]]
-==== OpenTelemetry traces
-
-* Traces of applications using `messaging` semantics might be wrongly displayed as `transactions` in the APM UI, while they should be considered `spans` (see issue https://github.com/elastic/apm-server/issues/7001[#7001]).
-* Inability to see stack traces in spans.
-* Inability to view "Time Spent by Span Type" in APM views (see issue https://github.com/elastic/apm-server/issues/5747[#5747]).
-
-[float]
-[[open-telemetry-metrics-limitations]]
-==== OpenTelemetry metrics
-
-* Inability to see host metrics in Elastic Metrics Infrastructure view when using the OpenTelemetry Collector host metrics receiver (see issue https://github.com/elastic/apm-server/issues/5310[#5310]).
-
-[float]
-[[open-telemetry-logs-intake]]
-==== OpenTelemetry logs
-
-* preview:[] The OpenTelemetry logs intake via APM Server is in technical preview.
-* The application logs data stream (`app_logs`) has dynamic mapping disabled. This means the automatic detection and mapping of new fields is disabled (see issue https://github.com/elastic/apm-server/issues/9093[#9093]).
-
-[float]
-[[open-telemetry-otlp-limitations]]
-==== OpenTelemetry Protocol (OTLP)
-
-APM Server supports both the {ot-grpc}[(OTLP/gRPC)] and {ot-http}[(OTLP/HTTP)] protocols with a ProtoBuf payload.
-APM Server does not yet support JSON encoding for OTLP/HTTP.
-
-[float]
-[[open-telemetry-collector-exporter]]
-==== OpenTelemetry Collector exporter for Elastic
-
-The https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.57.2/exporter/elasticexporter[OpenTelemetry Collector exporter for Elastic]
-has been deprecated and replaced by the native support of the OpenTelemetry protocol (OTLP) in Elastic Observability.
-// To learn more, see https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.57.2/exporter/elasticsearchexporter#migration[migration].
-
-The https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/elasticsearchexporter#elasticsearch-exporter[Elasticsearch exporter for the OpenTelemetry Collector]
-(which is different from the legacy exporter mentioned above) is not intended to be used with Elastic APM and Elastic Observability. Use <> instead.
-
-[float]
-[[open-telemetry-tbs]]
-==== OpenTelemetry's tail-based sampling
-
-Tail-based sampling allows sampling decisions to be made after all spans of a trace have been completed.
-This allows for more powerful and informed sampling rules.
-
-When using OpenTelemetry with Elastic APM, there are two different implementations available for tail-based sampling:
-
-* Tail-based sampling using the https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/tailsamplingprocessor[tailsamplingprocessor] in the OpenTelemetry Collector
-* Native <>
-
-Using the https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/tailsamplingprocessor[tailsamplingprocessor] in the OpenTelemetry Collector comes with an important limitation. Elastic's APM backend calculates span and transaction metrics based on the incoming span events.
-These metrics are accurate for 100% sampling scenarios. In scenarios with probabilistic sampling, Elastic's APM backend is informed about the sampling rate of spans and can extrapolate throughput metrics based on the incoming, partial data.
-However, with tail-based sampling there's no clear probability for sampling decisions, as the rules can be more complex, and the OpenTelemetry Collector does not provide sampling probability information to the Elastic backend that could be used to extrapolate the data.
-Therefore, there's no way for Elastic APM to properly extrapolate throughput and count metrics that are derived from span events that have been tail-based sampled in the OpenTelemetry Collector. In these scenarios, derived throughput and count metrics are likely to be inaccurate.
-
-Therefore, we recommend using Elastic's native tail-based sampling when integrating with OpenTelemetry.
diff --git a/docs/otel-metrics.asciidoc b/docs/otel-metrics.asciidoc
deleted file mode 100644
index 27fad8d0703..00000000000
--- a/docs/otel-metrics.asciidoc
+++ /dev/null
@@ -1,51 +0,0 @@
-[[open-telemetry-collect-metrics]]
-=== Collect metrics
-
-IMPORTANT: When collecting metrics, please note that the https://www.javadoc.io/doc/io.opentelemetry/opentelemetry-api/latest/io/opentelemetry/api/metrics/DoubleValueRecorder.html[`DoubleValueRecorder`]
-and https://www.javadoc.io/doc/io.opentelemetry/opentelemetry-api/latest/io/opentelemetry/api/metrics/LongValueRecorder.html[`LongValueRecorder`] metrics are not yet supported.
-
-Here's an example of how to capture business metrics from a Java application.
-
-[source,java]
-----
-// initialize metric
-Meter meter = GlobalMetricsProvider.getMeter("my-frontend");
-DoubleCounter orderValueCounter = meter.doubleCounterBuilder("order_value").build();
-
-public void createOrder(HttpServletRequest request) {
-
-   // create order in the database
-   ...
-   // increment business metrics for monitoring
-   orderValueCounter.add(orderPrice);
-}
-----
-
-See the https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md[OpenTelemetry Metrics API]
-for more information.
-
-[float]
-[[open-telemetry-verify-metrics]]
-==== Verify OpenTelemetry metrics data
-
-Use *Discover* to validate that metrics are successfully reported to {kib}.
-
-. Launch {kib}:
-+
---
-include::{tab-widget-dir}/open-kibana-widget.asciidoc[]
---
-
-. Open the main menu, then click *Discover*.
-. Select `apm-*` as your index pattern.
-. Filter the data to only show documents with metrics: `[data_stream][type]: "metrics"`
-. Narrow your search with a known OpenTelemetry field. For example, if you have an `order_value` field, add `order_value: *` to your search to return
-only OpenTelemetry metrics documents.
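-
-For example, assuming the `order_value` field from the Java example above, both
-filters can be combined into a single query in the search bar (the KQL below is
-an illustrative sketch, not the only valid syntax):
-
-[source,txt]
-----
-data_stream.type : "metrics" and order_value : *
----- 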
-
-[float]
-[[open-telemetry-visualize]]
-==== Visualize in {kib}
-
-Use *Lens* to create visualizations for OpenTelemetry metrics. Lens enables you to build visualizations by dragging and dropping data fields. It makes smart visualization suggestions for your data, allowing you to switch between visualization types.
-
-For more information on using Lens, refer to the {kibana-ref}/lens.html[Lens documentation].
diff --git a/docs/otel-other.asciidoc b/docs/otel-other.asciidoc
deleted file mode 100644
index f896b832154..00000000000
--- a/docs/otel-other.asciidoc
+++ /dev/null
@@ -1,15 +0,0 @@
-[[open-telemetry-other-env]]
-=== AWS Lambda Support
-
-[[open-telemetry-aws-lambda]]
-AWS Lambda functions can be instrumented with OpenTelemetry and monitored with Elastic {observability}.
-
-To get started, follow the official AWS Distro for OpenTelemetry Lambda https://aws-otel.github.io/docs/getting-started/lambda[getting started documentation] and configure the OpenTelemetry Collector to output traces and metrics to your Elastic cluster.
-
-[float]
-[[open-telemetry-lambda-next]]
-==== Next steps
-
-* <>
-* Add <>
-* Learn about the <>
diff --git a/docs/otel-with-elastic.asciidoc b/docs/otel-with-elastic.asciidoc
deleted file mode 100644
index 81c599bd4e4..00000000000
--- a/docs/otel-with-elastic.asciidoc
+++ /dev/null
@@ -1,25 +0,0 @@
-[[open-telemetry-with-elastic]]
-=== OpenTelemetry API/SDK with Elastic APM agents
-
-Use the OpenTelemetry API/SDKs with Elastic APM agents.
-Supported Elastic APM agents translate OpenTelemetry API calls to Elastic APM API calls.
-This allows you to reuse your existing instrumentation to create Elastic APM transactions and spans.
-
-TIP: If you'd like to use OpenTelemetry to send data directly to the APM server instead,
-see <>.
-
-See the relevant Elastic APM agent documentation to get started:
-
-* {apm-java-ref}/opentelemetry-bridge.html[Java]
-* {apm-dotnet-ref}/opentelemetry-bridge.html[.NET]
-* {apm-node-ref}/opentelemetry-bridge.html[Node.js]
-* {apm-py-ref}/opentelemetry-bridge.html[Python]
-
-
-[float]
-[[open-telemetry-elastic-next]]
-==== Next steps
-
-* <>
-* Add <>
-* Learn about the <>
\ No newline at end of file
diff --git a/docs/processing-performance.asciidoc b/docs/processing-performance.asciidoc
deleted file mode 100644
index 27e66e3afc6..00000000000
--- a/docs/processing-performance.asciidoc
+++ /dev/null
@@ -1,86 +0,0 @@
-[[processing-and-performance]]
-=== Processing and performance
-
-APM Server performance depends on a number of factors: memory and CPU available,
-network latency, transaction sizes, workload patterns,
-agent and server settings, versions, and protocol.
-
-We tested several scenarios to help you understand how to size the APM Server so that it can keep up with the load that your Elastic APM agents are sending:
-
-* Using the default hardware template on AWS, GCP, and Azure on {ecloud}.
-* For each hardware template, testing with several sizes: 1 GB, 4 GB, 8 GB, 16 GB, and 32 GB.
-* For each size, using a fixed number of APM agents: 10 agents for 1 GB, 30 agents for 4 GB, 60 agents for 8 GB, 120 agents for 16 GB, and 240 agents for 32 GB.
-* In all scenarios, using medium-sized events. Events include
-<> and
-<>.
-
-NOTE: You will also need to scale up {es} accordingly, potentially with an increased number of shards configured.
-For more details on scaling {es}, refer to the {ref}/scalability.html[{es} documentation].
-
-The results below include numbers for a synthetic workload.
You can use the results of our tests to guide -your sizing decisions, however, *performance will vary based on factors unique to your use case* like your -specific setup, the size of APM event data, and the exact number of agents. - -:hardbreaks-option: - -[options="header"] -|==== -| Profile / Cloud | AWS | Azure | GCP - -| *1 GB* -(10 agents) -| 9,000 -events/second -| 6,000 -events/second -| 9,000 -events/second - -| *4 GB* -(30 agents) -| 25,000 -events/second -| 18,000 -events/second -| 17,000 -events/second - -| *8 GB* -(60 agents) -| 40,000 -events/second -| 26,000 -events/second -| 25,000 -events/second - -| *16 GB* -(120 agents) -| 72,000 -events/second -| 51,000 -events/second -| 45,000 -events/second - -| *32 GB* -(240 agents) -| 135,000 -events/second -| 95,000 -events/second -| 95,000 -events/second - -|==== - -:!hardbreaks-option: - -Don't forget that the APM Server is stateless. -Several instances running do not need to know about each other. -This means that with a properly sized {es} instance, APM Server scales out linearly. - -NOTE: RUM deserves special consideration. The RUM agent runs in browsers, and there can be many thousands reporting to an APM Server with very variable network latency. - -Alternatively or in addition to scaling the APM Server, consider -decreasing the ingestion volume. Read more in <>. diff --git a/docs/redirects.asciidoc b/docs/redirects.asciidoc deleted file mode 100644 index 6a3e847716e..00000000000 --- a/docs/redirects.asciidoc +++ /dev/null @@ -1,422 +0,0 @@ -["appendix",role="exclude",id="redirects"] -= Deleted pages - -The following pages have moved or been deleted. - -// Event Types - -[role="exclude",id="event-types"] -=== Event types - -This page has moved. Please see {apm-guide-ref}/data-model.html[APM data model]. - -// [role="exclude",id="errors"] -// === Errors - -// This page has moved. Please see {apm-overview-ref-v}/errors.html[Errors]. - -// [role="exclude",id="transactions"] -// === Transactions - -// This page has moved. Please see {apm-overview-ref-v}/transactions.html[Transactions]. - -// [role="exclude",id="transactions-spans"] -// === Spans - -// This page has moved. Please see {apm-overview-ref-v}/transaction-spans.html[Spans]. - -// Error API - -[role="exclude",id="error-endpoint"] -=== Error endpoint - -The error endpoint has been deprecated. Instead, see <>. - -[role="exclude",id="error-schema-definition"] -=== Error schema definition - -The error schema has moved. Please see <>. - -[role="exclude",id="error-api-examples"] -=== Error API examples - -The error API examples have moved. Please see <>. - -[role="exclude",id="error-payload-schema"] -=== Error payload schema - -This schema has changed. Please see <>. - -[role="exclude",id="error-service-schema"] -=== Error service schema - -This schema has changed. Please see <>. - -[role="exclude",id="error-system-schema"] -=== Error system schema - -This schema has changed. Please see <>. - -[role="exclude",id="error-context-schema"] -=== Error context schema - -This schema has changed. Please see <>. - -[role="exclude",id="error-stacktraceframe-schema"] -=== Error stack trace frame schema - -This schema has changed. Please see <>. - -[role="exclude",id="payload-with-error"] -=== Payload with error - -This is no longer helpful. Please see <>. - -[role="exclude",id="payload-with-minimal-exception"] -=== Payload with minimal exception - -This is no longer helpful. Please see <>. 
- -[role="exclude",id="payload-with-minimal-log"] -=== Payload with minimal log - -This is no longer helpful. Please see <>. - -// Transaction API - -[role="exclude",id="transaction-endpoint"] -=== Transaction endpoint - -The transaction endpoint has been deprecated. Instead, see <>. - -[role="exclude",id="transaction-schema-definition"] -=== Transaction schema definition - -The transaction schema has moved. Please see <>. - -[role="exclude",id="transaction-api-examples"] -=== Transaction API examples - -The transaction API examples have moved. Please see <>. - -[role="exclude",id="transaction-span-schema"] -=== Transaction span schema - -This schema has changed. Please see <>. - -[role="exclude",id="transaction-payload-schema"] -=== Transaction payload schema - -This schema has changed. Please see <>. - -[role="exclude",id="transaction-service-schema"] -=== Transaction service schema - -This schema has changed. Please see <>. - -[role="exclude",id="transaction-system-schema"] -=== Transaction system schema - -This schema has changed. Please see <>. - -[role="exclude",id="transaction-context-schema"] -=== Transaction context schema - -This schema has changed. Please see <>. - -[role="exclude",id="transaction-stacktraceframe-schema"] -=== Transaction stack trace frame schema - -This schema has changed. Please see <>. - -[role="exclude",id="transaction-request-schema"] -=== Transaction request schema - -This schema has changed. Please see <>. - -[role="exclude",id="transaction-user-schema"] -=== Transaction user schema - -This schema has changed. Please see <>. - -[role="exclude",id="payload-with-transactions"] -=== Payload with transactions - -This is no longer helpful. Please see <>. - -[role="exclude",id="payload-with-minimal-transaction"] -=== Payload with minimal transaction - -This is no longer helpful. Please see <>. - -[role="exclude",id="payload-with-minimal-span"] -=== Payload with minimal span - -This is no longer helpful. Please see <>. - -[role="exclude",id="example-intakev2-events"] -=== Example Request Body - -This page has moved. Please see <>. - -// V1 intake API - -[role="exclude",id="request-too-large"] -=== HTTP 413: Request body too large - -This error can no longer occur. Please see <> for an updated overview of potential issues. - -[role="exclude",id="configuration-v1-api"] -=== Configuration options: v1 Intake API - -Intake API v1 is no longer supported. Please see <> for current configuration options. - -[role="exclude",id="max_unzipped_size"] -=== `max_unzipped_size` - -This configuration option is no longer supported. Please see <> for current configuration options. - -[role="exclude",id="concurrent_requests"] -=== `concurrent_requests` - -This configuration option is no longer supported. Please see <> for current configuration options. - -[role="exclude",id="metrics.enabled"] -=== `metrics.enabled` - -This configuration option is no longer supported. Please see <> for current configuration options. - -[role="exclude",id="max_request_queue_time"] -=== `max_request_queue_time` - -This configuration option is no longer supported. Please see <> for current configuration options. - -[role="exclude",id="configuration-v2-api"] -=== Configuration options: v2 Intake API - -This section has moved. Please see <> for current configuration options. - -[role="exclude",id="configuration-rum-v1"] -=== `configuration-rum-v1` - -This configuration option is no longer supported. Please see <> for current configuration options. 
-
-[role="exclude",id="rate_limit_v1"]
-=== `rate_limit_v1`
-
-This configuration option is no longer supported. Please see <> for current configuration options.
-
-[role="exclude",id="configuration-rum-v2"]
-=== `configuration-rum-v2`
-
-This section has moved. Please see <> for current configuration options.
-
-[role="exclude",id="configuration-rum-general"]
-=== Configuration options: general
-
-This section has moved. Please see <> for current configuration options.
-
-[role="exclude",id="use-v1-and-v2"]
-=== Tuning APM Server using both v1 and v2 intake API
-
-This section has moved. Please see <> for how to tune APM Server.
-
-// Dashboards
-
-[role="exclude",id="load-dashboards-logstash"]
-=== Load dashboards via Logstash
-
-Loading dashboards from APM Server is no longer supported. Please see the {kibana-ref}/xpack-apm.html[{kib} APM UI] documentation.
-
-[role="exclude",id="url-option"]
-=== setup.dashboards.url
-
-Loading dashboards from APM Server is no longer supported. Please see the {kibana-ref}/xpack-apm.html[{kib} APM UI] documentation.
-
-[role="exclude",id="file-option"]
-=== setup.dashboards.file
-
-Loading dashboards from APM Server is no longer supported. Please see the {kibana-ref}/xpack-apm.html[{kib} APM UI] documentation.
-
-[role="exclude",id="load-kibana-dashboards"]
-=== Dashboards
-
-Loading {kib} dashboards from APM Server is no longer supported.
-Please use the {kibana-ref}/xpack-apm.html[{kib} APM UI] instead.
-As an alternative, a small number of dashboards and visualizations are available in the
-https://github.com/elastic/apm-contrib/tree/main/kibana[apm-contrib] repository.
-
-// [role="exclude",id="rum"]
-// === Rum
-
-// This section has moved. Please see <>.
-
-[role="exclude",id="aws-lambda-arch"]
-=== APM Architecture for AWS Lambda
-
-This section has moved. See {apm-lambda-ref}/aws-lambda-arch.html[APM Architecture for AWS Lambda].
-
-[role="exclude",id="aws-lambda-config-options"]
-=== Configuration options
-
-This section has moved. See {apm-lambda-ref}/aws-lambda-config-options.html[Configuration options].
-
-[role="exclude",id="aws-lambda-secrets-manager"]
-=== Using AWS Secrets Manager to manage APM authentication keys
-
-This section has moved. See {apm-lambda-ref}/aws-lambda-secrets-manager.html[Using AWS Secrets Manager to manage APM authentication keys].
-
-[role="exclude",id="go-compatibility"]
-=== Go Agent Compatibility
-
-This page has moved. Please see <>.
-
-[role="exclude",id="java-compatibility"]
-=== Java Agent Compatibility
-
-This page has moved. Please see <>.
-
-[role="exclude",id="dotnet-compatibility"]
-=== .NET Agent Compatibility
-
-This page has moved. Please see <>.
-
-[role="exclude",id="nodejs-compatibility"]
-=== Node.js Agent Compatibility
-
-This page has moved. Please see <>.
-
-[role="exclude",id="python-compatibility"]
-=== Python Agent Compatibility
-
-This page has moved. Please see <>.
-
-[role="exclude",id="ruby-compatibility"]
-=== Ruby Agent Compatibility
-
-This page has moved. Please see <>.
-
-[role="exclude",id="rum-compatibility"]
-=== RUM Agent Compatibility
-
-This page has moved. Please see <>.
-
-[role="exclude",id="apm-release-notes"]
-=== APM release highlights
-
-This page has moved.
-Please see {observability-guide}/whats-new.html[What's new in {observability} {minor-version}].
-
-Also see <>.
-
-[role="exclude",id="whats-new"]
-=== What's new in APM {minor-version}
-
-This page has moved.
-Please see {observability-guide}/whats-new.html[What's new in {observability} {minor-version}]. - -[role="exclude",id="troubleshooting"] -=== Troubleshooting - -This page has moved. -Please see <>. - -[role="exclude",id="input-apm"] -=== Configuring - -This page has moved. -Please see <>. - -[role="exclude",id="events-api"] -=== Events Intake API - -[discrete] -[[events-api-errors]] -==== Errors - -This page has been deleted. -Please see <>. - -[role="exclude",id="intake-api"] -=== API - -This page has been deleted. -Please see <>. - -[role="exclude",id="metadata-api"] -=== Metadata - -[discrete] -[[metadata-schema]] -==== Errors - -This page has been deleted. -Please see <>. - -[role="exclude",id="errors"] -=== Errors - -This page has been deleted. -Please see <>. - -[role="exclude",id="transaction-spans"] -=== Spans - -This page has been deleted. -Please see <>. - -[role="exclude",id="transactions"] -=== Transactions - -This page has been deleted. -Please see <>. - -[role="exclude",id="legacy-apm-overview"] -=== Legacy APM Overview - -This page has been deleted. -Please see <>. - -[role="exclude",id="apm-components"] -=== Components and documentation - -This page has been deleted. -Please see <>. - -[role="exclude",id="configuring-ingest-node"] -=== Parse data using ingest node pipelines - -This page has been deleted. -Please see <>. - -[role="exclude",id="overview"] -=== Legacy APM Server Reference - -This page has been deleted. -Please see <>. - -[role="exclude",id="metadata"] -=== Metadata - -This page has been deleted. -Please see <>. - -[role="exclude",id="distributed-tracing"] -=== Distributed tracing - -This page has been deleted. -Please see <>. - -[role="exclude",id="sourcemaps"] -=== How to apply source maps to error stack traces when using minified bundles - -[discrete] -[[sourcemap-rum-generate]] -==== Sourcemap RUM Generate - -[discrete] -[[sourcemap-rum-upload]] -==== Sourcemap RUM upload - -This page has been deleted. -Please see <>. diff --git a/docs/release-notes.asciidoc b/docs/release-notes.asciidoc deleted file mode 100644 index 95a488319fe..00000000000 --- a/docs/release-notes.asciidoc +++ /dev/null @@ -1,47 +0,0 @@ -:root-dir: ../ - -[[release-notes]] -= Release notes -:issue: https://github.com/elastic/apm-server/issues/ -:pull: https://github.com/elastic/apm-server/pull/ - -This section summarizes the changes in each release. - -**APM integration and APM Server** - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -Looking for a previous version? See the {apm-guide-7x}/release-notes.html[7.x release notes]. - -**APM app** - -See the {kibana-ref}/release-notes.html[Kibana release notes]. 
- -**APM agents** - -* {apm-go-ref-v}/release-notes.html[Go agent] -* {apm-ios-ref-v}/release-notes.html[iOS agent] -* {apm-java-ref-v}/release-notes.html[Java agent] -* {apm-dotnet-ref-v}/release-notes.html[.NET agent] -* {apm-node-ref-v}/release-notes.html[Node.js agent] -* {apm-php-ref-v}/release-notes.html[PHP agent] -* {apm-py-ref-v}/release-notes.html[Python agent] -* {apm-rum-ref-v}/release-notes.html[Real User Monitoring (RUM) agent] -* {apm-ruby-ref-v}/release-notes.html[Ruby agent] - -**APM extensions** - -* https://github.com/elastic/apm-aws-lambda/blob/main/CHANGELOG.asciidoc[Elastic APM AWS Lambda extension] - -include::{root-dir}/CHANGELOG.asciidoc[] diff --git a/docs/repositories.asciidoc b/docs/repositories.asciidoc deleted file mode 100644 index 5bf4676f9ab..00000000000 --- a/docs/repositories.asciidoc +++ /dev/null @@ -1,170 +0,0 @@ -////////////////////////////////////////////////////////////////////////// -//// This content is shared by all Elastic Beats. Make sure you keep the -//// descriptions here generic enough to work for all Beats that include -//// this file. When using cross references, make sure that the cross -//// references resolve correctly for any files that include this one. -//// Use the appropriate variables defined in the index.asciidoc file to -//// resolve Beat names: beatname_uc and beatname_lc. -//// Use the following include to pull this content into a doc file: -//// include::../../libbeat/docs/setup-repositories.asciidoc[] -////////////////////////////////////////////////////////////////////////// - -[[setup-repositories]] -==== Repositories for APT and YUM - -We have repositories available for APT and YUM-based distributions. Note that we -provide binary packages, but no source packages. - -We use the PGP key https://pgp.mit.edu/pks/lookup?op=vindex&search=0xD27D666CD88E42B4[D88E42B4], -{es} Signing Key, with fingerprint - - 4609 5ACC 8548 582C 1A26 99A9 D27D 666C D88E 42B4 - -to sign all our packages. It is available from https://pgp.mit.edu. - -[float] -===== APT - -ifeval::["{release-state}"=="unreleased"] - -Version {version} of {repo} has not yet been released. - -endif::[] - -ifeval::["{release-state}"!="unreleased"] - -To add the {repo} repository for APT: - -. Download and install the Public Signing Key: -+ -[source,sh] --------------------------------------------------- -wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add - --------------------------------------------------- - -. You may need to install the `apt-transport-https` package on Debian before proceeding: -+ -[source,sh] --------------------------------------------------- -sudo apt-get install apt-transport-https --------------------------------------------------- - -ifeval::["{release-state}"=="prerelease"] -. Save the repository definition to +/etc/apt/sources.list.d/elastic-{major-version}-prerelease.list+: -+ -["source","sh",subs="attributes"] --------------------------------------------------- -echo "deb https://artifacts.elastic.co/packages/{major-version}-prerelease/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-{major-version}-prerelease.list --------------------------------------------------- -+ -endif::[] - -ifeval::["{release-state}"=="released"] -. 
Save the repository definition to +/etc/apt/sources.list.d/elastic-{major-version}.list+: -+ -["source","sh",subs="attributes"] --------------------------------------------------- -echo "deb https://artifacts.elastic.co/packages/{major-version}/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-{major-version}.list --------------------------------------------------- -+ -endif::[] -[WARNING] -================================================== -To add the Elastic repository, make sure that you use the `echo` method shown -in the example. Do not use `add-apt-repository` because it will add a `deb-src` -entry, but we do not provide a source package. - -If you have added the `deb-src` entry by mistake, you will see an error like -the following: - -["source","txt",subs="attributes"] ----- -Unable to find expected entry 'main/source/Sources' in Release file (Wrong sources.list entry or malformed file) ----- - -Simply delete the `deb-src` entry from the `/etc/apt/sources.list` file, and the installation should work as expected. -================================================== - -. Run `apt-get update`, and the repository is ready for use. For example, you can -install {beatname_uc} by running: -+ -["source","sh",subs="attributes"] --------------------------------------------------- -sudo apt-get update && sudo apt-get install {beatname_pkg} --------------------------------------------------- - -. To configure {beatname_uc} to start automatically during boot, run: -+ -["source","sh",subs="attributes"] --------------------------------------------------- -sudo systemctl enable {beatname_pkg} --------------------------------------------------- - -endif::[] - -[float] -===== YUM - -ifeval::["{release-state}"=="unreleased"] - -Version {version} of {repo} has not yet been released. - -endif::[] - -ifeval::["{release-state}"!="unreleased"] - -To add the {repo} repository for YUM: - -. Download and install the public signing key: -+ -[source,sh] --------------------------------------------------- -sudo rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch --------------------------------------------------- - -. Create a file with a `.repo` extension (for example, `elastic.repo`) in -your `/etc/yum.repos.d/` directory and add the following lines: -+ -ifeval::["{release-state}"=="prerelease"] -["source","sh",subs="attributes"] --------------------------------------------------- -[elastic-{major-version}-prerelease] -name=Elastic repository for {major-version} prerelease packages -baseurl=https://artifacts.elastic.co/packages/{major-version}-prerelease/yum -gpgcheck=1 -gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch -enabled=1 -autorefresh=1 -type=rpm-md --------------------------------------------------- -endif::[] -ifeval::["{release-state}"=="released"] -["source","sh",subs="attributes"] --------------------------------------------------- -[elastic-{major-version}] -name=Elastic repository for {major-version} packages -baseurl=https://artifacts.elastic.co/packages/{major-version}/yum -gpgcheck=1 -gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch -enabled=1 -autorefresh=1 -type=rpm-md --------------------------------------------------- -endif::[] -+ -Your repository is ready to use. For example, you can install {beatname_uc} by -running: -+ -["source","sh",subs="attributes"] --------------------------------------------------- -sudo yum install {beatname_pkg} --------------------------------------------------- - -. 
To configure {beatname_uc} to start automatically during boot, run:
-+
-["source","sh",subs="attributes"]
---------------------------------------------------
-sudo systemctl enable {beatname_pkg}
---------------------------------------------------
-
-endif::[]
diff --git a/docs/sampling.asciidoc b/docs/sampling.asciidoc
deleted file mode 100644
index 6b9ae285a1e..00000000000
--- a/docs/sampling.asciidoc
+++ /dev/null
@@ -1,208 +0,0 @@
-[[sampling]]
-=== Transaction sampling
-
-Distributed tracing can generate a substantial amount of data.
-More data can mean higher costs and more noise.
-Sampling aims to lower the amount of data ingested and the effort required to analyze that data --
-all while still making it easy to find anomalous patterns in your applications, detect outages, track errors,
-and lower mean time to recovery (MTTR).
-
-Elastic APM supports two types of sampling:
-
-* <>
-* <>
-
-[float]
-[[head-based-sampling]]
-==== Head-based sampling
-
-In head-based sampling, the sampling decision for each trace is made when the trace is initiated.
-Each trace has a defined and equal probability of being sampled.
-
-For example, a sampling value of `.2` indicates a transaction sample rate of `20%`.
-This means that only `20%` of traces will send and retain all of their associated information.
-The remaining traces will drop contextual information to reduce the transfer and storage size of the trace.
-
-Head-based sampling is quick and easy to set up.
-Its downside is that it's entirely random -- interesting
-data might be discarded purely due to chance.
-
-See <> to get started.
-
-**Distributed tracing with head-based sampling**
-
-In a distributed trace, the sampling decision is still made when the trace is initiated.
-Each subsequent service respects the initial service's sampling decision, regardless of its configured sample rate;
-the result is a sampling percentage that matches the initiating service.
-
-In this example, `Service A` initiates four transactions and has a sample rate of `.5` (`50%`).
-The sample rates of `Service B` and `Service C` are ignored.
-
-image::./images/dt-sampling-example-1.png[Distributed tracing and head based sampling example one]
-
-In this example, `Service A` initiates four transactions and has a sample rate of `1` (`100%`).
-Again, the sample rates of `Service B` and `Service C` are ignored.
-
-image::./images/dt-sampling-example-2.png[Distributed tracing and head based sampling example two]
-
-**OpenTelemetry with head-based sampling**
-
-Head-based sampling is implemented directly in the APM agents and SDKs.
-The sample rate must be propagated between services and the managed intake service in order to produce accurate metrics.
-
-OpenTelemetry offers multiple samplers. However, most samplers do not propagate the sample rate.
-This results in inaccurate span-based metrics, like APM throughput, latency, and error metrics.
-
-For accurate span-based metrics when using head-based sampling with OpenTelemetry, you must use
-a https://opentelemetry.io/docs/specs/otel/trace/tracestate-probability-sampling/[consistent probability sampler].
-These samplers propagate the sample rate between services and the managed intake service, resulting in accurate metrics.
-
-NOTE: OpenTelemetry does not offer consistent probability samplers in all languages.
-OpenTelemetry users should consider using tail-based sampling instead.
-Refer to the documentation of your favorite OpenTelemetry agent or SDK for more information on the availability of consistent probability samplers.
-
-[float]
-[[tail-based-sampling]]
-==== Tail-based sampling
-
-In tail-based sampling, the sampling decision for each trace is made after the trace has completed.
-This means all traces will be analyzed against a set of rules, or policies, which will determine the rate at which they are sampled.
-
-Unlike head-based sampling, each trace does not have an equal probability of being sampled.
-Because slower traces are more interesting than faster ones, tail-based sampling uses weighted random sampling -- so
-traces with a longer root transaction duration are more likely to be sampled than traces with a shorter one.
-
-A downside of tail-based sampling is that it results in more data being sent from APM agents to the APM Server.
-The APM Server will therefore use more CPU, memory, and disk than with head-based sampling.
-However, because the tail-based sampling decision happens in APM Server, there is less data to transfer from APM Server to {es}.
-So running APM Server close to your instrumented services can reduce any increase in transfer costs that tail-based sampling brings.
-
-See <> to get started.
-
-**Distributed tracing with tail-based sampling**
-
-With tail-based sampling, all traces are observed and a sampling decision is only made once a trace completes.
-
-In this example, `Service A` initiates four transactions.
-If our sample rate is `.5` (`50%`) for traces with a `success` outcome,
-and `1` (`100%`) for traces with a `failure` outcome,
-the sampled traces would look something like this:
-
-image::./images/dt-sampling-example-3.png[Distributed tracing and tail based sampling example one]
-
-**OpenTelemetry with tail-based sampling**
-
-Tail-based sampling is implemented entirely in APM Server,
-and will work with traces sent by either Elastic APM agents or OpenTelemetry SDKs.
-
-[float]
-=== Sampled data and visualizations
-
-A sampled trace retains all data associated with it.
-A non-sampled trace drops all <> and <> data^1^.
-Regardless of the sampling decision, all traces retain <> data.
-
-Some visualizations in the {apm-app}, like latency, are powered by aggregated transaction and span <>.
-Metrics are based on sampled traces and weighted by the inverse sampling rate.
-For example, if you sample at 5%, each trace is counted as 20.
-As a result, as the variance of latency increases, or the sampling rate decreases, your level of error will increase.
-
-^1^ Real User Monitoring (RUM) traces are an exception to this rule.
-The {kib} apps that utilize RUM data depend on transaction events,
-so non-sampled RUM traces retain transaction data -- only span data is dropped.
-
-[float]
-=== Sample rates
-
-What's the best sampling rate? Unfortunately, there isn't one.
-Sampling is dependent on your data, the throughput of your application, data retention policies, and other factors.
-Any sampling rate from `.1%` to `100%` could be considered normal.
-You'll likely decide on a unique sample rate for different scenarios.
-Here are some examples:
-
-* Services with considerably more traffic than others might be safe to sample at lower rates
-* Routes that are more important than others might be sampled at higher rates
-* A production service environment might warrant a higher sampling rate than a development environment
-* Failed trace outcomes might be more interesting than successful traces -- thus requiring a higher sample rate
-
-Regardless of the above, cost-conscious customers are likely to be fine with a lower sample rate.
-
-[[configure-head-based-sampling]]
-==== Configure head-based sampling
-
-There are three ways to adjust the head-based sampling rate of your APM agents:
-
-===== Dynamic configuration
-
-The transaction sample rate can be changed dynamically (no redeployment necessary) on a per-service and per-environment
-basis with {kibana-ref}/agent-configuration.html[{apm-agent} Configuration] in {kib}.
-
-===== {kib} API configuration
-
-{apm-agent} configuration exposes an API that can be used to programmatically change
-your agents' sampling rate.
-An example is provided in the {kibana-ref}/agent-config-api.html[Agent configuration API reference].
-
-===== {apm-agent} configuration
-
-Each agent provides a configuration value used to set the transaction sample rate.
-See the relevant agent's documentation for more details:
-
-* Go: {apm-go-ref-v}/configuration.html#config-transaction-sample-rate[`ELASTIC_APM_TRANSACTION_SAMPLE_RATE`]
-* Java: {apm-java-ref-v}/config-core.html#config-transaction-sample-rate[`transaction_sample_rate`]
-* .NET: {apm-dotnet-ref-v}/config-core.html#config-transaction-sample-rate[`TransactionSampleRate`]
-* Node.js: {apm-node-ref-v}/configuration.html#transaction-sample-rate[`transactionSampleRate`]
-* PHP: {apm-php-ref-v}/configuration-reference.html#config-transaction-sample-rate[`transaction_sample_rate`]
-* Python: {apm-py-ref-v}/configuration.html#config-transaction-sample-rate[`transaction_sample_rate`]
-* Ruby: {apm-ruby-ref-v}/configuration.html#config-transaction-sample-rate[`transaction_sample_rate`]
-
-[[configure-tail-based-sampling]]
-==== Configure tail-based sampling
-
-Enable tail-based sampling with <>.
-When enabled, trace events are mapped to sampling policies.
-Each sampling policy must specify a sample rate, and can optionally specify other conditions.
-All of the policy conditions must be true for a trace event to match it.
-
-Trace events are matched to policies in the order specified.
-Each policy list must conclude with a default policy -- one that only specifies a sample rate.
-This default policy is used to catch remaining trace events that don't match a stricter policy.
-Requiring this default policy ensures that traces are only dropped intentionally.
-If you enable tail-based sampling and send a transaction that does not match any of the policies,
-APM Server will reject the transaction with the error `no matching policy`.
-
-IMPORTANT: From version `8.3.1`, APM Server implements a default storage limit of 3 GB.
-However, due to how the limit is calculated and enforced, the actual disk usage may still grow slightly
-over the limit.
-
-===== Example configuration
-
-This example defines three tail-based sampling policies:
-
-[source,yml]
-----
-- sample_rate: 1 <1>
-  service.environment: production
-  trace.name: "GET /very_important_route"
-- sample_rate: .01 <2>
-  service.environment: production
-  trace.name: "GET /not_important_route"
-- sample_rate: .1 <3>
-----
-<1> Samples 100% of traces in `production` with the trace name `"GET /very_important_route"`
-<2> Samples 1% of traces in `production` with the trace name `"GET /not_important_route"`
-<3> Default policy to sample all remaining traces at 10%, e.g. traces in a different environment, like `dev`,
-or traces with any other name
-
-===== Configuration reference
-
-**Top-level tail-based sampling settings:**
-
-:leveloffset: +3
-include::./configure/sampling.asciidoc[tag=tbs-top]
-
-**Policy settings:**
-
-include::./configure/sampling.asciidoc[tag=tbs-policy]
-:leveloffset: -3
diff --git a/docs/secret-token.asciidoc b/docs/secret-token.asciidoc
deleted file mode 100644
index 0e67cded06b..00000000000
--- a/docs/secret-token.asciidoc
+++ /dev/null
@@ -1,52 +0,0 @@
-[[secret-token]]
-=== Secret token
-
-IMPORTANT: Secret tokens are sent as plain text,
-so they only provide security when used in combination with <>.
-
-When defined, secret tokens are used to authorize requests to the APM Server.
-Both the {apm-agent} and APM Server must be configured with the same secret token for the request to be accepted.
-
-To secure the communication between APM agents and the APM Server with a secret token:
-
-. Make sure <> is enabled
-. <>
-. <>
-
-NOTE: Secret tokens are not applicable for the RUM Agent,
-as there is no way to prevent them from being publicly exposed.
-
-[float]
-[[create-secret-token]]
-=== Create a secret token
-
-// lint ignore fleet
-NOTE: {ess} and {ece} deployments provision a secret token when the deployment is created.
-The secret token can be found and reset in the {ecloud} console under **Deployments** -- **APM & Fleet**.
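-
-For the standalone APM Server binary, the token can also be set directly in
-`apm-server.yml`. This is a minimal sketch; the key path assumes the 8.x `auth`
-namespace, and the token value is a placeholder you must replace:
-
-[source,yaml]
-----
-apm-server:
-  auth:
-    secret_token: "a_secret_token"  # placeholder -- generate your own
----- 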
-
-include::{tab-widget-dir}/secret-token-widget.asciidoc[]
-
-[[configure-secret-token]]
-[float]
-=== Configure the secret token in your APM agents
-
-Each Elastic {apm-agent} has a configuration option to set the value of the secret token:
-
-* *Go agent*: {apm-go-ref}/configuration.html#config-secret-token[`ELASTIC_APM_SECRET_TOKEN`]
-* *iOS agent*: {apm-ios-ref-v}/configuration.html#secretToken[`secretToken`]
-* *Java agent*: {apm-java-ref}/config-reporter.html#config-secret-token[`secret_token`]
-* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-secret-token[`ELASTIC_APM_SECRET_TOKEN`]
-* *Node.js agent*: {apm-node-ref}/configuration.html#secret-token[`Secret Token`]
-* *PHP agent*: {apm-php-ref-v}/configuration-reference.html#config-secret-token[`secret_token`]
-* *Python agent*: {apm-py-ref}/configuration.html#config-secret-token[`secret_token`]
-* *Ruby agent*: {apm-ruby-ref}/configuration.html#config-secret-token[`secret_token`]
-
-In addition to setting the secret token, ensure the configured server URL uses `HTTPS` instead of `HTTP`:
-
-* *Go agent*: {apm-go-ref}/configuration.html#config-server-url[`ELASTIC_APM_SERVER_URL`]
-* *Java agent*: {apm-java-ref}/config-reporter.html#config-server-urls[`server_urls`]
-* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-server-url[`ServerUrl`]
-* *Node.js agent*: {apm-node-ref}/configuration.html#server-url[`serverUrl`]
-* *PHP agent*: {apm-php-ref-v}/configuration-reference.html#config-server-url[`server_url`]
-* *Python agent*: {apm-py-ref}/[`server_url`]
-* *Ruby agent*: {apm-ruby-ref}/configuration.html#config-server-url[`server_url`]
\ No newline at end of file
diff --git a/docs/secure-agent-communication.asciidoc b/docs/secure-agent-communication.asciidoc
deleted file mode 100644
index e2fca3bee18..00000000000
--- a/docs/secure-agent-communication.asciidoc
+++ /dev/null
@@ -1,28 +0,0 @@
-[[secure-agent-communication]]
-== Secure communication with APM agents
-
-++++
-With APM agents
-++++
-
-Communication between APM agents and {agent} can be both encrypted and authenticated.
-It is strongly recommended to use both TLS encryption and authentication, as secrets are otherwise sent as plain text.
-
-* <>
-* <>
-* <>
-
-As soon as authenticated communication is enabled,
-requests without a valid token or API key will be denied.
-If both API keys and a secret token are enabled, APM agents can choose whichever mechanism they support.
-
-In some use cases, like when an {apm-agent} is running on the client side,
-authentication is not possible. See <> for more information.
-
-include::./tls-comms.asciidoc[]
-
-include::./api-keys.asciidoc[]
-
-include::./secret-token.asciidoc[]
-
-include::./anonymous-auth.asciidoc[]
diff --git a/docs/secure-comms.asciidoc b/docs/secure-comms.asciidoc
deleted file mode 100644
index 968d25835f3..00000000000
--- a/docs/secure-comms.asciidoc
+++ /dev/null
@@ -1,22 +0,0 @@
-[[securing-apm-server]]
-== Secure communication with the {stack}
-
-++++
-Secure communication
-++++
-
-The following topics provide information about securing the APM Server
-process and connecting securely to APM agents and the {stack}.
- -* <> -* <> - -:leveloffset: +1 -include::secure-agent-communication.asciidoc[] - -// APM privileges -include::{docdir}/feature-roles.asciidoc[] - -// APM API keys -include::{docdir}/access-api-keys.asciidoc[] -:leveloffset: -1 \ No newline at end of file diff --git a/docs/setting-up-and-running.asciidoc b/docs/setting-up-and-running.asciidoc deleted file mode 100644 index 18c44af495f..00000000000 --- a/docs/setting-up-and-running.asciidoc +++ /dev/null @@ -1,31 +0,0 @@ - -[[setting-up-and-running]] -== APM Server advanced setup - -++++ -Advanced setup -++++ - -Before reading this section, see the <> -for basic installation and running instructions. - -This section includes additional information on how to set up and run APM Server, including: - -* <> -* <> -* <> -* <> -* <> -* <> - -include::{docdir}/shared-directory-layout.asciidoc[] - -include::{docdir}/keystore.asciidoc[] - -include::{docdir}/command-reference.asciidoc[] - -include::{docdir}/data-ingestion.asciidoc[] - -include::{docdir}/high-availability.asciidoc[] - -include::{docdir}/shared-systemd.asciidoc[] diff --git a/docs/shared-directory-layout.asciidoc b/docs/shared-directory-layout.asciidoc deleted file mode 100644 index 36159068571..00000000000 --- a/docs/shared-directory-layout.asciidoc +++ /dev/null @@ -1,34 +0,0 @@ -////////////////////////////////////////////////////////////////////////// -//// This content is shared by all Elastic Beats. Make sure you keep the -//// descriptions here generic enough to work for all Beats that include -//// this file. When using cross references, make sure that the cross -//// references resolve correctly for any files that include this one. -//// Use the appropriate variables defined in the index.asciidoc file to -//// resolve Beat names: beatname_uc and beatname_lc. -//// Use the following include to pull this content into a doc file: -//// include::../../libbeat/docs/shared-directory-layout.asciidoc[] -////////////////////////////////////////////////////////////////////////// - -[[directory-layout]] -=== Installation layout - -View the installation layout and default paths for both Fleet-managed APM Server and the APM Server binary. - -[float] -=== Fleet-managed - -{agent} files are installed in the following locations. You cannot override -these installation paths because they are required for upgrades. - --- -include::{ingest-docs-root}/docs/en/ingest-management/tab-widgets/install-layout-widget.asciidoc[] --- - -[float] -=== APM Server binary - -APM Server uses the following default paths unless you explicitly change them. - --- -include::{tab-widget-dir}/directory-layout-widget.asciidoc[] --- \ No newline at end of file diff --git a/docs/shared-docker.asciidoc b/docs/shared-docker.asciidoc deleted file mode 100644 index 1d34e5dbd70..00000000000 --- a/docs/shared-docker.asciidoc +++ /dev/null @@ -1,334 +0,0 @@ -[[running-on-docker]] -==== Run {beatname_uc} on Docker - -Docker images for {beatname_uc} are available from the Elastic Docker -registry. The base image is https://hub.docker.com/_/ubuntu[ubuntu:20.04]. - -A list of all published Docker images and tags is available at -https://www.docker.elastic.co[www.docker.elastic.co]. - -These images are free to use under the Elastic license. They contain open source -and free commercial features and access to paid commercial features. -{kibana-ref}/managing-licenses.html[Start a 30-day trial] to try out all of the -paid commercial features. 
See the -https://www.elastic.co/subscriptions[Subscriptions] page for information about -Elastic license levels. - -[float] -===== Pull the image - -Obtaining {beatname_uc} for Docker is as simple as issuing a +docker pull+ command -against the Elastic Docker registry and then, optionally, verifying the image. - -ifeval::["{release-state}"=="unreleased"] - -However, version {version} of {beatname_uc} has not yet been -released, so no Docker image is currently available for this version. - -endif::[] - -ifeval::["{release-state}"!="unreleased"] - -. Pull the Docker image: -+ -["source", "sh", subs="attributes"] ------------------------------------------------- -docker pull {dockerimage} ------------------------------------------------- - -. Verify the Docker image: -+ -["source", "sh", subs="attributes"] ----- -wget https://artifacts.elastic.co/cosign.pub -cosign verify --key cosign.pub {dockerimage} ----- -+ -The `cosign` command prints the check results and the signature payload in JSON format: -+ -[source,sh,subs="attributes"] ----- -Verification for {dockerimage} -- -The following checks were performed on each of these signatures: - - The cosign claims were validated - - Existence of the claims in the transparency log was verified offline - - The signatures were verified against the specified public key ----- - -endif::[] - -ifndef::apm-server[] - -[float] -===== Run the {beatname_uc} setup - -Running {beatname_uc} with the setup command will create the index pattern and -load visualizations -ifndef::no_dashboards[] -, dashboards, -endif::no_dashboards[] -and {ml} jobs. Run this command: - -ifeval::["{beatname_lc}"=="filebeat"] -["source", "sh", subs="attributes"] --------------------------------------------- -docker run \ -{dockerimage} \ -setup -E setup.kibana.host=kibana:5601 \ --E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2> --------------------------------------------- -endif::[] - -ifeval::["{beatname_lc}"=="metricbeat"] -["source", "sh", subs="attributes"] --------------------------------------------- -docker run \ -{dockerimage} \ -setup -E setup.kibana.host=kibana:5601 \ --E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2> --------------------------------------------- -endif::[] - -ifeval::["{beatname_lc}"=="heartbeat"] -["source", "sh", subs="attributes"] --------------------------------------------- -docker run \ -{dockerimage} \ -setup -E setup.kibana.host=kibana:5601 \ --E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2> --------------------------------------------- -endif::[] - -ifeval::["{beatname_lc}"=="journalbeat"] -["source", "sh", subs="attributes"] --------------------------------------------- -docker run \ -{dockerimage} \ -setup -E setup.kibana.host=kibana:5601 \ --E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2> --------------------------------------------- -endif::[] - -ifeval::["{beatname_lc}"=="packetbeat"] -["source", "sh", subs="attributes"] --------------------------------------------- -docker run \ ---cap-add=NET_ADMIN \ -{dockerimage} \ -setup -E setup.kibana.host=kibana:5601 \ --E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2> --------------------------------------------- -endif::[] - -ifeval::["{beatname_lc}"=="auditbeat"] -["source", "sh", subs="attributes"] --------------------------------------------- -docker run \ - --cap-add="AUDIT_CONTROL" \ - --cap-add="AUDIT_READ" \ - {dockerimage} \ - setup -E setup.kibana.host=kibana:5601 \ - -E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2> 
--------------------------------------------- -endif::[] - -<1> Substitute your {kib} and {es} hosts and ports. -<2> If you are using the hosted {ess} in {ecloud}, replace -the `-E output.elasticsearch.hosts` line with the Cloud ID and elastic password -using this syntax: - -[source,shell] --------------------------------------------- --E cloud.id= \ --E cloud.auth=elastic: --------------------------------------------- - -endif::apm-server[] - -[float] -===== Configure {beatname_uc} on Docker - -The Docker image provides several methods for configuring {beatname_uc}. The -conventional approach is to provide a configuration file via a volume mount, but -it's also possible to create a custom image with your -configuration included. - -[float] -====== Example configuration file - -Download this example configuration file as a starting point: - -["source","sh",subs="attributes,callouts"] ------------------------------------------------- -curl -L -O {dockerconfig} ------------------------------------------------- - -[float] -====== Volume-mounted configuration - -One way to configure {beatname_uc} on Docker is to provide +{beatname_lc}.docker.yml+ via a volume mount. -With +docker run+, the volume mount can be specified like this. - -ifeval::["{beatname_lc}"=="filebeat"] -["source", "sh", subs="attributes"] --------------------------------------------- -docker run -d \ - --name={beatname_lc} \ - --user=root \ - --volume="$(pwd)/{beatname_lc}.docker.yml:/usr/share/{beatname_lc}/{beatname_lc}.yml:ro" \ - --volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \ - --volume="/var/run/docker.sock:/var/run/docker.sock:ro" \ - {dockerimage} {beatname_lc} -e -strict.perms=false \ - -E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2> --------------------------------------------- -endif::[] - -ifeval::["{beatname_lc}"=="journalbeat"] -Make sure you include the path to the host's journal. The path might be -`/var/log/journal` or `/run/log/journal`. 
- -["source", "sh", subs="attributes"] --------------------------------------------- -sudo docker run -d \ - --name={beatname_lc} \ - --user=root \ - --volume="/var/log/journal:/var/log/journal" \ - --volume="/etc/machine-id:/etc/machine-id" \ - --volume="/run/systemd:/run/systemd" \ - --volume="/etc/hostname:/etc/hostname:ro" \ - {dockerimage} {beatname_lc} -e -strict.perms=false \ - -E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2> --------------------------------------------- -endif::[] - -ifeval::["{beatname_lc}"=="metricbeat"] -["source", "sh", subs="attributes"] --------------------------------------------- -docker run -d \ - --name={beatname_lc} \ - --user=root \ - --volume="$(pwd)/{beatname_lc}.docker.yml:/usr/share/{beatname_lc}/{beatname_lc}.yml:ro" \ - --volume="/var/run/docker.sock:/var/run/docker.sock:ro" \ - --volume="/sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro" \ - --volume="/proc:/hostfs/proc:ro" \ - --volume="/:/hostfs:ro" \ - {dockerimage} {beatname_lc} -e \ - -E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2> --------------------------------------------- -endif::[] - -ifeval::["{beatname_lc}"=="packetbeat"] -["source", "sh", subs="attributes"] --------------------------------------------- -docker run -d \ - --name={beatname_lc} \ - --user={beatname_lc} \ - --volume="$(pwd)/{beatname_lc}.docker.yml:/usr/share/{beatname_lc}/{beatname_lc}.yml:ro" \ - --cap-add="NET_RAW" \ - --cap-add="NET_ADMIN" \ - --network=host \ - {dockerimage} \ - --strict.perms=false -e \ - -E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2> --------------------------------------------- -endif::[] - -ifeval::["{beatname_lc}"=="auditbeat"] -["source", "sh", subs="attributes"] --------------------------------------------- -docker run -d \ - --name={beatname_lc} \ - --user=root \ - --volume="$(pwd)/{beatname_lc}.docker.yml:/usr/share/{beatname_lc}/{beatname_lc}.yml:ro" \ - --cap-add="AUDIT_CONTROL" \ - --cap-add="AUDIT_READ" \ - --pid=host \ - {dockerimage} -e \ - --strict.perms=false \ - -E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2> --------------------------------------------- -endif::[] - -ifeval::["{beatname_lc}"=="heartbeat"] -["source", "sh", subs="attributes"] --------------------------------------------- -docker run -d \ - --name={beatname_lc} \ - --user={beatname_lc} \ - --volume="$(pwd)/{beatname_lc}.docker.yml:/usr/share/{beatname_lc}/{beatname_lc}.yml:ro" \ - {dockerimage} \ - --strict.perms=false -e \ - -E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2> --------------------------------------------- -endif::[] - -ifeval::["{beatname_lc}"=="apm-server"] -["source", "sh", subs="attributes"] --------------------------------------------- -docker run -d \ - -p 8200:8200 \ - --name={beatname_lc} \ - --user={beatname_lc} \ - --volume="$(pwd)/{beatname_lc}.docker.yml:/usr/share/{beatname_lc}/{beatname_lc}.yml:ro" \ - {dockerimage} \ - --strict.perms=false -e \ - -E output.elasticsearch.hosts=["elasticsearch:9200"] <1> <2> --------------------------------------------- -endif::[] - -<1> Substitute your {es} hosts and ports. -<2> If you are using the hosted {ess} in {ecloud}, replace -the `-E output.elasticsearch.hosts` line with the Cloud ID and elastic password -using the syntax shown earlier. - -[float] -====== Customize your configuration - -ifdef::has_docker_label_ex[] -The +{beatname_lc}.docker.yml+ file you downloaded earlier is configured to deploy {beats} modules based on the Docker labels applied to your containers. See <> for more details. 
Add labels to your application Docker containers, and they will be picked up by the {beats} autodiscover feature when they are deployed. Here is an example command for an Apache HTTP Server container with labels to configure the {filebeat} and {metricbeat} modules for the Apache HTTP Server: - -["source", "sh", subs="attributes"] --------------------------------------------- -docker run \ - --label co.elastic.logs/module=apache2 \ - --label co.elastic.logs/fileset.stdout=access \ - --label co.elastic.logs/fileset.stderr=error \ - --label co.elastic.metrics/module=apache \ - --label co.elastic.metrics/metricsets=status \ - --label co.elastic.metrics/hosts='${data.host}:${data.port}' \ - --detach=true \ - --name my-apache-app \ - -p 8080:80 \ - httpd:2.4 --------------------------------------------- -endif::[] - -ifndef::has_docker_label_ex[] -The +{beatname_lc}.docker.yml+ downloaded earlier should be customized for your environment. See <> for more details. Edit the configuration file and customize it to match your environment then re-deploy your {beatname_uc} container. -endif::[] - -[float] -====== Custom image configuration - -It's possible to embed your {beatname_uc} configuration in a custom image. -Here is an example Dockerfile to achieve this: - -ifeval::["{beatname_lc}"!="auditbeat"] - -["source", "dockerfile", subs="attributes"] --------------------------------------------- -FROM {dockerimage} -COPY --chmod=0644 --chown=1000:1000 {beatname_lc}.yml /usr/share/{beatname_lc}/{beatname_lc}.yml --------------------------------------------- - -endif::[] - -ifeval::["{beatname_lc}"=="auditbeat"] - -["source", "dockerfile", subs="attributes"] --------------------------------------------- -FROM {dockerimage} -COPY {beatname_lc}.yml /usr/share/{beatname_lc}/{beatname_lc}.yml --------------------------------------------- - -endif::[] diff --git a/docs/shared-kibana-endpoint.asciidoc b/docs/shared-kibana-endpoint.asciidoc deleted file mode 100644 index e72315901fa..00000000000 --- a/docs/shared-kibana-endpoint.asciidoc +++ /dev/null @@ -1,18 +0,0 @@ -// tag::shared-kibana-config[] -APM Server uses the APM integration to set up and manage APM templates, policies, and pipelines. -To confirm the integration is installed, APM Server polls either {es} or {kib} on startup. -When using a non-{es} output, APM Server requires access to {kib} via the -<>. - -Example configuration: - -[source,yaml] ----- -apm-server: - kibana: - enabled: true - host: "https://..." - username: "elastic" - password: "xxx" ----- -// end::shared-kibana-config[] \ No newline at end of file diff --git a/docs/shared-ssl-config.asciidoc b/docs/shared-ssl-config.asciidoc deleted file mode 100644 index de79a1103da..00000000000 --- a/docs/shared-ssl-config.asciidoc +++ /dev/null @@ -1,483 +0,0 @@ -[[configuration-ssl]] -== SSL/TLS output settings - -**** -image:./binary-yes-fm-no.svg[supported deployment methods] - -These configuration options are only relevant to APM Server binary users. Fleet-managed users should see the {fleet-guide}/fleet-settings.html[Fleet output settings]. -**** - -You can specify SSL/TLS options with any output that supports SSL, like {es}, {ls}, or Kafka. 
-
-Example output config with SSL/TLS enabled:
-
-[source,yaml]
-----
-output.elasticsearch.hosts: ["https://192.168.1.42:9200"]
-output.elasticsearch.ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
-output.elasticsearch.ssl.certificate: "/etc/pki/client/cert.pem"
-output.elasticsearch.ssl.key: "/etc/pki/client/cert.key"
-----
-
-There are a number of SSL/TLS configuration options available to you:
-
-* <>
-* <>
-* <>
-
-[discrete]
-[[ssl-common-config]]
-=== Common configuration options
-
-Common SSL configuration options can be used in both client and server configurations.
-You can specify the following options in the `ssl` section of each subsystem that
-supports SSL.
-
-[float]
-[[enabled]]
-==== `enabled`
-
-To disable SSL configuration, set the value to `false`. The default value is `true`.
-
-[NOTE]
-=====
-SSL settings are disabled if either `enabled` is set to `false` or the
-`ssl` section is missing.
-=====
-
-[float]
-[[supported-protocols]]
-==== `supported_protocols`
-
-The list of allowed SSL/TLS versions. If the server selects a protocol version that is
-not in this list, the connection is dropped during or after the handshake. The
-setting accepts the following protocol versions:
-`SSLv3`, `TLSv1` (an alias for TLS version 1.0), `TLSv1.0`, `TLSv1.1`, `TLSv1.2`, and
-`TLSv1.3`.
-
-The default value is `[TLSv1.1, TLSv1.2, TLSv1.3]`.
-
-[float]
-[[cipher-suites]]
-==== `cipher_suites`
-
-The list of cipher suites to use. The first entry has the highest priority.
-If this option is omitted, the Go crypto library's https://golang.org/pkg/crypto/tls/[default suites]
-are used (recommended). Note that TLS 1.3 cipher suites are not
-individually configurable in Go, so they are not included in this list.
-
-// tag::cipher_suites[]
-The following cipher suites are available:
-
-// lint disable
-[options="header"]
-|===
-| Cipher | Notes
-| ECDHE-ECDSA-AES-128-CBC-SHA |
-| ECDHE-ECDSA-AES-128-CBC-SHA256 | TLS 1.2 only. Disabled by default.
-| ECDHE-ECDSA-AES-128-GCM-SHA256 | TLS 1.2 only.
-| ECDHE-ECDSA-AES-256-CBC-SHA |
-| ECDHE-ECDSA-AES-256-GCM-SHA384 | TLS 1.2 only.
-| ECDHE-ECDSA-CHACHA20-POLY1305 | TLS 1.2 only.
-| ECDHE-ECDSA-RC4-128-SHA | Disabled by default. RC4 not recommended.
-| ECDHE-RSA-3DES-CBC3-SHA |
-| ECDHE-RSA-AES-128-CBC-SHA |
-| ECDHE-RSA-AES-128-CBC-SHA256 | TLS 1.2 only. Disabled by default.
-| ECDHE-RSA-AES-128-GCM-SHA256 | TLS 1.2 only.
-| ECDHE-RSA-AES-256-CBC-SHA |
-| ECDHE-RSA-AES-256-GCM-SHA384 | TLS 1.2 only.
-| ECDHE-RSA-CHACHA20-POLY1305 | TLS 1.2 only.
-| ECDHE-RSA-RC4-128-SHA | Disabled by default. RC4 not recommended.
-| RSA-3DES-CBC3-SHA |
-| RSA-AES-128-CBC-SHA |
-| RSA-AES-128-CBC-SHA256 | TLS 1.2 only. Disabled by default.
-| RSA-AES-128-GCM-SHA256 | TLS 1.2 only.
-| RSA-AES-256-CBC-SHA |
-| RSA-AES-256-GCM-SHA384 | TLS 1.2 only.
-| RSA-RC4-128-SHA | Disabled by default. RC4 not recommended.
-|===
-// lint enable
-
-Here is a list of acronyms used in defining the cipher suites:
-
-* 3DES:
-  Cipher suites using triple DES.
-
-* AES-128/256:
-  Cipher suites using AES with 128/256-bit keys.
-
-* CBC:
-  Cipher suites using Cipher Block Chaining as the block cipher mode.
-
-* ECDHE:
-  Cipher suites using Elliptic Curve Diffie-Hellman (DH) ephemeral key exchange.
-
-* ECDSA:
-  Cipher suites using Elliptic Curve Digital Signature Algorithm for authentication.
-
-* GCM:
-  Cipher suites using Galois/Counter Mode for symmetric key cryptography.
-
-* RC4:
-  Cipher suites using RC4.
-
-* RSA:
-  Cipher suites using RSA.
- -* SHA, SHA256, SHA384: - Cipher suites using SHA-1, SHA-256 or SHA-384. -// end::cipher_suites[] - -[float] -[[curve-types]] -==== `curve_types` - -The list of curve types for ECDHE (Elliptic Curve Diffie-Hellman ephemeral key exchange). - -The following elliptic curve types are available: - -* P-256 -* P-384 -* P-521 -* X25519 - -[float] -[[ca-sha256]] -==== `ca_sha256` - -This configures a certificate pin that you can use to ensure that a specific certificate is part of the verified chain. - -The pin is a base64 encoded string of the SHA-256 of the certificate. - -NOTE: This check is not a replacement for the normal SSL validation, but it adds additional validation. -If this option is used with `verification_mode` set to `none`, the check will always fail because -it will not receive any verified chains. - -[discrete] -[[ssl-client-config]] -=== Client configuration options - -You can specify the following options in the `ssl` section of each subsystem that -supports SSL. - -[float] -[[client-certificate-authorities]] -==== `certificate_authorities` - -The list of root certificates for verifications is required. -If `certificate_authorities` is self-signed, the host system -needs to trust that CA cert as well. - -By default you can specify a list of files that +{beatname_lc}+ will read, but you -can also embed a certificate directly in the `YAML` configuration: - -[source,yaml] ----- -certificate_authorities: - - | - -----BEGIN CERTIFICATE----- - MIIDCjCCAfKgAwIBAgITJ706Mu2wJlKckpIvkWxEHvEyijANBgkqhkiG9w0BAQsF - ADAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwIBcNMTkwNzIyMTkyOTA0WhgPMjExOTA2 - MjgxOTI5MDRaMBQxEjAQBgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJKoZIhvcNAQEB - BQADggEPADCCAQoCggEBANce58Y/JykI58iyOXpxGfw0/gMvF0hUQAcUrSMxEO6n - fZRA49b4OV4SwWmA3395uL2eB2NB8y8qdQ9muXUdPBWE4l9rMZ6gmfu90N5B5uEl - 94NcfBfYOKi1fJQ9i7WKhTjlRkMCgBkWPkUokvBZFRt8RtF7zI77BSEorHGQCk9t - /D7BS0GJyfVEhftbWcFEAG3VRcoMhF7kUzYwp+qESoriFRYLeDWv68ZOvG7eoWnP - PsvZStEVEimjvK5NSESEQa9xWyJOmlOKXhkdymtcUd/nXnx6UTCFgnkgzSdTWV41 - CI6B6aJ9svCTI2QuoIq2HxX/ix7OvW1huVmcyHVxyUECAwEAAaNTMFEwHQYDVR0O - BBYEFPwN1OceFGm9v6ux8G+DZ3TUDYxqMB8GA1UdIwQYMBaAFPwN1OceFGm9v6ux - 8G+DZ3TUDYxqMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAG5D - 874A4YI7YUwOVsVAdbWtgp1d0zKcPRR+r2OdSbTAV5/gcS3jgBJ3i1BN34JuDVFw - 3DeJSYT3nxy2Y56lLnxDeF8CUTUtVQx3CuGkRg1ouGAHpO/6OqOhwLLorEmxi7tA - H2O8mtT0poX5AnOAhzVy7QW0D/k4WaoLyckM5hUa6RtvgvLxOwA0U+VGurCDoctu - 8F4QOgTAWyh8EZIwaKCliFRSynDpv3JTUwtfZkxo6K6nce1RhCWFAsMvDZL8Dgc0 - yvgJ38BRsFOtkRuAGSf6ZUwTO8JJRRIFnpUzXflAnGivK9M13D5GEQMmIl6U9Pvk - sxSmbIUfc2SGJGCJD4I= - -----END CERTIFICATE----- ----- - -[float] -[[client-certificate]] -==== `certificate: "/etc/pki/client/cert.pem"` - -The path to the certificate for SSL client authentication is only required if -`client_authentication` is specified. If the certificate -is not specified, client authentication is not available. The connection -might fail if the server requests client authentication. If the SSL server does not -require client authentication, the certificate will be loaded, but not requested or used -by the server. - -When this option is configured, the <> option is also required. 
-The certificate option support embedding of the certificate: - -[source,yaml] ----- -certificate: | - -----BEGIN CERTIFICATE----- - MIIDCjCCAfKgAwIBAgITJ706Mu2wJlKckpIvkWxEHvEyijANBgkqhkiG9w0BAQsF - ADAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwIBcNMTkwNzIyMTkyOTA0WhgPMjExOTA2 - MjgxOTI5MDRaMBQxEjAQBgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJKoZIhvcNAQEB - BQADggEPADCCAQoCggEBANce58Y/JykI58iyOXpxGfw0/gMvF0hUQAcUrSMxEO6n - fZRA49b4OV4SwWmA3395uL2eB2NB8y8qdQ9muXUdPBWE4l9rMZ6gmfu90N5B5uEl - 94NcfBfYOKi1fJQ9i7WKhTjlRkMCgBkWPkUokvBZFRt8RtF7zI77BSEorHGQCk9t - /D7BS0GJyfVEhftbWcFEAG3VRcoMhF7kUzYwp+qESoriFRYLeDWv68ZOvG7eoWnP - PsvZStEVEimjvK5NSESEQa9xWyJOmlOKXhkdymtcUd/nXnx6UTCFgnkgzSdTWV41 - CI6B6aJ9svCTI2QuoIq2HxX/ix7OvW1huVmcyHVxyUECAwEAAaNTMFEwHQYDVR0O - BBYEFPwN1OceFGm9v6ux8G+DZ3TUDYxqMB8GA1UdIwQYMBaAFPwN1OceFGm9v6ux - 8G+DZ3TUDYxqMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAG5D - 874A4YI7YUwOVsVAdbWtgp1d0zKcPRR+r2OdSbTAV5/gcS3jgBJ3i1BN34JuDVFw - 3DeJSYT3nxy2Y56lLnxDeF8CUTUtVQx3CuGkRg1ouGAHpO/6OqOhwLLorEmxi7tA - H2O8mtT0poX5AnOAhzVy7QW0D/k4WaoLyckM5hUa6RtvgvLxOwA0U+VGurCDoctu - 8F4QOgTAWyh8EZIwaKCliFRSynDpv3JTUwtfZkxo6K6nce1RhCWFAsMvDZL8Dgc0 - yvgJ38BRsFOtkRuAGSf6ZUwTO8JJRRIFnpUzXflAnGivK9M13D5GEQMmIl6U9Pvk - sxSmbIUfc2SGJGCJD4I= - -----END CERTIFICATE----- ----- - -[float] -[[client-key]] -==== `key: "/etc/pki/client/cert.key"` - -The client certificate key used for client authentication and is only required -if `client_authentication` is configured. The key option support embedding of the private key: - -[source,yaml] ----- -key: | - -----BEGIN PRIVATE KEY----- - MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDXHufGPycpCOfI - sjl6cRn8NP4DLxdIVEAHFK0jMRDup32UQOPW+DleEsFpgN9/ebi9ngdjQfMvKnUP - Zrl1HTwVhOJfazGeoJn7vdDeQebhJfeDXHwX2DiotXyUPYu1ioU45UZDAoAZFj5F - KJLwWRUbfEbRe8yO+wUhKKxxkApPbfw+wUtBicn1RIX7W1nBRABt1UXKDIRe5FM2 - MKfqhEqK4hUWC3g1r+vGTrxu3qFpzz7L2UrRFRIpo7yuTUhEhEGvcVsiTppTil4Z - HcprXFHf5158elEwhYJ5IM0nU1leNQiOgemifbLwkyNkLqCKth8V/4sezr1tYblZ - nMh1cclBAgMBAAECggEBAKdP5jyOicqknoG9/G564RcDsDyRt64NuO7I6hBg7SZx - Jn7UKWDdFuFP/RYtoabn6QOxkVVlydp5Typ3Xu7zmfOyss479Q/HIXxmmbkD0Kp0 - eRm2KN3y0b6FySsS40KDRjKGQCuGGlNotW3crMw6vOvvsLTlcKgUHF054UVCHoK/ - Piz7igkDU7NjvJeha53vXL4hIjb10UtJNaGPxIyFLYRZdRPyyBJX7Yt3w8dgz8WM - epOPu0dq3bUrY3WQXcxKZo6sQjE1h7kdl4TNji5jaFlvD01Y8LnyG0oThOzf0tve - Gaw+kuy17gTGZGMIfGVcdeb+SlioXMAAfOps+mNIwTECgYEA/gTO8W0hgYpOQJzn - BpWkic3LAoBXWNpvsQkkC3uba8Fcps7iiEzotXGfwYcb5Ewf5O3Lrz1EwLj7GTW8 - VNhB3gb7bGOvuwI/6vYk2/dwo84bwW9qRWP5hqPhNZ2AWl8kxmZgHns6WTTxpkRU - zrfZ5eUrBDWjRU2R8uppgRImsxMCgYEA2MxuL/C/Ko0d7XsSX1kM4JHJiGpQDvb5 - GUrlKjP/qVyUysNF92B9xAZZHxxfPWpdfGGBynhw7X6s+YeIoxTzFPZVV9hlkpAA - 5igma0n8ZpZEqzttjVdpOQZK8o/Oni/Q2S10WGftQOOGw5Is8+LY30XnLvHBJhO7 - TKMurJ4KCNsCgYAe5TDSVmaj3dGEtFC5EUxQ4nHVnQyCpxa8npL+vor5wSvmsfUF - hO0s3GQE4sz2qHecnXuPldEd66HGwC1m2GKygYDk/v7prO1fQ47aHi9aDQB9N3Li - e7Vmtdn3bm+lDjtn0h3Qt0YygWj+wwLZnazn9EaWHXv9OuEMfYxVgYKpdwKBgEze - Zy8+WDm5IWRjn8cI5wT1DBT/RPWZYgcyxABrwXmGZwdhp3wnzU/kxFLAl5BKF22T - kRZ+D+RVZvVutebE9c937BiilJkb0AXLNJwT9pdVLnHcN2LHHHronUhV7vetkop+ - kGMMLlY0lkLfoGq1AxpfSbIea9KZam6o6VKxEnPDAoGAFDCJm+ZtsJK9nE5GEMav - NHy+PwkYsHhbrPl4dgStTNXLenJLIJ+Ke0Pcld4ZPfYdSyu/Tv4rNswZBNpNsW9K - 0NwJlyMBfayoPNcJKXrH/csJY7hbKviAHr1eYy9/8OL0dHf85FV+9uY5YndLcsDc - nygO9KTJuUiBrLr0AHEnqko= - -----END PRIVATE KEY----- ----- - -[float] -[[client-key-passphrase]] -==== `key_passphrase` - -The passphrase used to decrypt an encrypted key stored in the configured `key` file. 
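-Putting the client options above together, a minimal `ssl` section for the
-{es} output might look like the following sketch (the paths are placeholders,
-and the passphrase reference assumes the value is stored in the {beatname_uc}
-keystore or an environment variable rather than written in plain text):
-
-[source,yaml]
-----
-output.elasticsearch.hosts: ["https://192.168.1.42:9200"]
-output.elasticsearch.ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
-output.elasticsearch.ssl.certificate: "/etc/pki/client/cert.pem"
-output.elasticsearch.ssl.key: "/etc/pki/client/cert.key"
-output.elasticsearch.ssl.key_passphrase: "${SSL_KEY_PASSPHRASE}" <1>
-----
-<1> Assumes `SSL_KEY_PASSPHRASE` is defined in the keystore or environment.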
- - -[float] -[[client-verification-mode]] -==== `verification_mode` - -Controls the verification of server certificates. Valid values are: - -`full`:: -Verifies that the provided certificate is signed by a trusted -authority (CA) and also verifies that the server's hostname (or IP address) -matches the names identified within the certificate. - -`strict`:: -Verifies that the provided certificate is signed by a trusted -authority (CA) and also verifies that the server's hostname (or IP address) -matches the names identified within the certificate. If the Subject Alternative -Name is empty, it returns an error. - -`certificate`:: -Verifies that the provided certificate is signed by a -trusted authority (CA), but does not perform any hostname verification. - -`none`:: -Performs _no verification_ of the server's certificate. This -mode disables many of the security benefits of SSL/TLS and should only be used -after cautious consideration. It is primarily intended as a temporary -diagnostic mechanism when attempting to resolve TLS errors; its use in -production environments is strongly discouraged. -+ -The default value is `full`. - -[discrete] -[[ssl-server-config]] -=== Server configuration options - -You can specify the following options in the `ssl` section of each subsystem that -supports SSL. - -[float] -[[server-certificate-authorities]] -==== `certificate_authorities` - -The list of root certificates for client verifications is only required if -`client_authentication` is configured. If `certificate_authorities` is empty or not set, and -`client_authentication` is configured, the system keystore is used. - -If `certificate_authorities` is self-signed, the host system needs to trust that CA cert as well. -By default you can specify a list of files that +{beatname_lc}+ will read, but you can also embed a certificate -directly in the `YAML` configuration: - -[source,yaml] ----- -certificate_authorities: - - | - -----BEGIN CERTIFICATE----- - MIIDCjCCAfKgAwIBAgITJ706Mu2wJlKckpIvkWxEHvEyijANBgkqhkiG9w0BAQsF - ADAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwIBcNMTkwNzIyMTkyOTA0WhgPMjExOTA2 - MjgxOTI5MDRaMBQxEjAQBgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJKoZIhvcNAQEB - BQADggEPADCCAQoCggEBANce58Y/JykI58iyOXpxGfw0/gMvF0hUQAcUrSMxEO6n - fZRA49b4OV4SwWmA3395uL2eB2NB8y8qdQ9muXUdPBWE4l9rMZ6gmfu90N5B5uEl - 94NcfBfYOKi1fJQ9i7WKhTjlRkMCgBkWPkUokvBZFRt8RtF7zI77BSEorHGQCk9t - /D7BS0GJyfVEhftbWcFEAG3VRcoMhF7kUzYwp+qESoriFRYLeDWv68ZOvG7eoWnP - PsvZStEVEimjvK5NSESEQa9xWyJOmlOKXhkdymtcUd/nXnx6UTCFgnkgzSdTWV41 - CI6B6aJ9svCTI2QuoIq2HxX/ix7OvW1huVmcyHVxyUECAwEAAaNTMFEwHQYDVR0O - BBYEFPwN1OceFGm9v6ux8G+DZ3TUDYxqMB8GA1UdIwQYMBaAFPwN1OceFGm9v6ux - 8G+DZ3TUDYxqMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAG5D - 874A4YI7YUwOVsVAdbWtgp1d0zKcPRR+r2OdSbTAV5/gcS3jgBJ3i1BN34JuDVFw - 3DeJSYT3nxy2Y56lLnxDeF8CUTUtVQx3CuGkRg1ouGAHpO/6OqOhwLLorEmxi7tA - H2O8mtT0poX5AnOAhzVy7QW0D/k4WaoLyckM5hUa6RtvgvLxOwA0U+VGurCDoctu - 8F4QOgTAWyh8EZIwaKCliFRSynDpv3JTUwtfZkxo6K6nce1RhCWFAsMvDZL8Dgc0 - yvgJ38BRsFOtkRuAGSf6ZUwTO8JJRRIFnpUzXflAnGivK9M13D5GEQMmIl6U9Pvk - sxSmbIUfc2SGJGCJD4I= - -----END CERTIFICATE----- ----- - -[float] -[[server-certificate]] -==== `certificate: "/etc/pki/server/cert.pem"` - -For server authentication, the path to the SSL authentication certificate must -be specified for TLS. If the certificate is not specified, startup will fail. - -When this option is configured, the <> option is also required. 
-The certificate option support embedding of the certificate: - -[source,yaml] ----- -certificate: | - -----BEGIN CERTIFICATE----- - MIIDCjCCAfKgAwIBAgITJ706Mu2wJlKckpIvkWxEHvEyijANBgkqhkiG9w0BAQsF - ADAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwIBcNMTkwNzIyMTkyOTA0WhgPMjExOTA2 - MjgxOTI5MDRaMBQxEjAQBgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJKoZIhvcNAQEB - BQADggEPADCCAQoCggEBANce58Y/JykI58iyOXpxGfw0/gMvF0hUQAcUrSMxEO6n - fZRA49b4OV4SwWmA3395uL2eB2NB8y8qdQ9muXUdPBWE4l9rMZ6gmfu90N5B5uEl - 94NcfBfYOKi1fJQ9i7WKhTjlRkMCgBkWPkUokvBZFRt8RtF7zI77BSEorHGQCk9t - /D7BS0GJyfVEhftbWcFEAG3VRcoMhF7kUzYwp+qESoriFRYLeDWv68ZOvG7eoWnP - PsvZStEVEimjvK5NSESEQa9xWyJOmlOKXhkdymtcUd/nXnx6UTCFgnkgzSdTWV41 - CI6B6aJ9svCTI2QuoIq2HxX/ix7OvW1huVmcyHVxyUECAwEAAaNTMFEwHQYDVR0O - BBYEFPwN1OceFGm9v6ux8G+DZ3TUDYxqMB8GA1UdIwQYMBaAFPwN1OceFGm9v6ux - 8G+DZ3TUDYxqMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAG5D - 874A4YI7YUwOVsVAdbWtgp1d0zKcPRR+r2OdSbTAV5/gcS3jgBJ3i1BN34JuDVFw - 3DeJSYT3nxy2Y56lLnxDeF8CUTUtVQx3CuGkRg1ouGAHpO/6OqOhwLLorEmxi7tA - H2O8mtT0poX5AnOAhzVy7QW0D/k4WaoLyckM5hUa6RtvgvLxOwA0U+VGurCDoctu - 8F4QOgTAWyh8EZIwaKCliFRSynDpv3JTUwtfZkxo6K6nce1RhCWFAsMvDZL8Dgc0 - yvgJ38BRsFOtkRuAGSf6ZUwTO8JJRRIFnpUzXflAnGivK9M13D5GEQMmIl6U9Pvk - sxSmbIUfc2SGJGCJD4I= - -----END CERTIFICATE----- ----- - -[float] -[[server-key]] -==== `key: "/etc/pki/server/cert.key"` - -The server certificate key used for authentication is required. -The key option support embedding of the private key: - -[source,yaml] ----- -key: | - -----BEGIN PRIVATE KEY----- - MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDXHufGPycpCOfI - sjl6cRn8NP4DLxdIVEAHFK0jMRDup32UQOPW+DleEsFpgN9/ebi9ngdjQfMvKnUP - Zrl1HTwVhOJfazGeoJn7vdDeQebhJfeDXHwX2DiotXyUPYu1ioU45UZDAoAZFj5F - KJLwWRUbfEbRe8yO+wUhKKxxkApPbfw+wUtBicn1RIX7W1nBRABt1UXKDIRe5FM2 - MKfqhEqK4hUWC3g1r+vGTrxu3qFpzz7L2UrRFRIpo7yuTUhEhEGvcVsiTppTil4Z - HcprXFHf5158elEwhYJ5IM0nU1leNQiOgemifbLwkyNkLqCKth8V/4sezr1tYblZ - nMh1cclBAgMBAAECggEBAKdP5jyOicqknoG9/G564RcDsDyRt64NuO7I6hBg7SZx - Jn7UKWDdFuFP/RYtoabn6QOxkVVlydp5Typ3Xu7zmfOyss479Q/HIXxmmbkD0Kp0 - eRm2KN3y0b6FySsS40KDRjKGQCuGGlNotW3crMw6vOvvsLTlcKgUHF054UVCHoK/ - Piz7igkDU7NjvJeha53vXL4hIjb10UtJNaGPxIyFLYRZdRPyyBJX7Yt3w8dgz8WM - epOPu0dq3bUrY3WQXcxKZo6sQjE1h7kdl4TNji5jaFlvD01Y8LnyG0oThOzf0tve - Gaw+kuy17gTGZGMIfGVcdeb+SlioXMAAfOps+mNIwTECgYEA/gTO8W0hgYpOQJzn - BpWkic3LAoBXWNpvsQkkC3uba8Fcps7iiEzotXGfwYcb5Ewf5O3Lrz1EwLj7GTW8 - VNhB3gb7bGOvuwI/6vYk2/dwo84bwW9qRWP5hqPhNZ2AWl8kxmZgHns6WTTxpkRU - zrfZ5eUrBDWjRU2R8uppgRImsxMCgYEA2MxuL/C/Ko0d7XsSX1kM4JHJiGpQDvb5 - GUrlKjP/qVyUysNF92B9xAZZHxxfPWpdfGGBynhw7X6s+YeIoxTzFPZVV9hlkpAA - 5igma0n8ZpZEqzttjVdpOQZK8o/Oni/Q2S10WGftQOOGw5Is8+LY30XnLvHBJhO7 - TKMurJ4KCNsCgYAe5TDSVmaj3dGEtFC5EUxQ4nHVnQyCpxa8npL+vor5wSvmsfUF - hO0s3GQE4sz2qHecnXuPldEd66HGwC1m2GKygYDk/v7prO1fQ47aHi9aDQB9N3Li - e7Vmtdn3bm+lDjtn0h3Qt0YygWj+wwLZnazn9EaWHXv9OuEMfYxVgYKpdwKBgEze - Zy8+WDm5IWRjn8cI5wT1DBT/RPWZYgcyxABrwXmGZwdhp3wnzU/kxFLAl5BKF22T - kRZ+D+RVZvVutebE9c937BiilJkb0AXLNJwT9pdVLnHcN2LHHHronUhV7vetkop+ - kGMMLlY0lkLfoGq1AxpfSbIea9KZam6o6VKxEnPDAoGAFDCJm+ZtsJK9nE5GEMav - NHy+PwkYsHhbrPl4dgStTNXLenJLIJ+Ke0Pcld4ZPfYdSyu/Tv4rNswZBNpNsW9K - 0NwJlyMBfayoPNcJKXrH/csJY7hbKviAHr1eYy9/8OL0dHf85FV+9uY5YndLcsDc - nygO9KTJuUiBrLr0AHEnqko= - -----END PRIVATE KEY----- ----- - -[float] -[[server-key-passphrase]] -==== `key_passphrase` - -The passphrase is used to decrypt an encrypted key stored in the configured `key` file. - -[float] -[[server-verification-mode]] -==== `verification_mode` - -Controls the verification of client certificates. 
Valid values are: - -`full`:: -Verifies that the provided certificate is signed by a trusted -authority (CA) and also verifies that the server's hostname (or IP address) -matches the names identified within the certificate. - -`strict`:: -Verifies that the provided certificate is signed by a trusted -authority (CA) and also verifies that the server's hostname (or IP address) -matches the names identified within the certificate. If the Subject Alternative -Name is empty, it returns an error. - -`certificate`:: -Verifies that the provided certificate is signed by a -trusted authority (CA), but does not perform any hostname verification. - -`none`:: -Performs _no verification_ of the server's certificate. This -mode disables many of the security benefits of SSL/TLS and should only be used -after cautious consideration. It is primarily intended as a temporary -diagnostic mechanism when attempting to resolve TLS errors; its use in -production environments is strongly discouraged. -+ -The default value is `full`. - -[float] -[[server-renegotiation]] -==== `renegotiation` - -This configures what types of TLS renegotiation are supported. The valid options -are: - -`never`:: -Disables renegotiation. - -`once`:: -Allows a remote server to request renegotiation once per connection. - -`freely`:: -Allows a remote server to request renegotiation repeatedly. -+ -The default value is `never`. diff --git a/docs/shared-ssl-logstash-config.asciidoc b/docs/shared-ssl-logstash-config.asciidoc deleted file mode 100644 index 056d04a421b..00000000000 --- a/docs/shared-ssl-logstash-config.asciidoc +++ /dev/null @@ -1,144 +0,0 @@ -////////////////////////////////////////////////////////////////////////// -//// This content is shared by all Elastic Beats. Make sure you keep the -//// descriptions here generic enough to work for all Beats that include -//// this file. When using cross references, make sure that the cross -//// references resolve correctly for any files that include this one. -//// Use the appropriate variables defined in the index.asciidoc file to -//// resolve Beat names: beatname_uc and beatname_lc. -//// Use the following include to pull this content into a doc file: -//// include::../../libbeat/docs/shared-ssl-logstash-config.asciidoc[] -////////////////////////////////////////////////////////////////////////// - -[float] -[[configuring-ssl-logstash]] -== Secure communication with {ls} - -You can use SSL mutual authentication to secure connections between {beatname_uc} and {ls}. This ensures that -{beatname_uc} sends encrypted data to trusted {ls} servers only, and that the {ls} server receives data from -trusted {beatname_uc} clients only. - -To use SSL mutual authentication: - -. Create a certificate authority (CA) and use it to sign the certificates that you plan to use for -{beatname_uc} and {ls}. Creating a correct SSL/TLS infrastructure is outside the scope of this -document. There are many online resources available that describe how to create certificates. -+ -TIP: If you are using {security-features}, you can use the -{ref}/certutil.html[`elasticsearch-certutil` tool] to generate certificates. - -. Configure {beatname_uc} to use SSL. In the +{beatname_lc}.yml+ config file, specify the following settings under -`ssl`: -+ -* `certificate_authorities`: Configures {beatname_uc} to trust any certificates signed by the specified CA. If -`certificate_authorities` is empty or not set, the trusted certificate authorities of the host system are used. 
- -* `certificate` and `key`: Specifies the certificate and key that {beatname_uc} uses to authenticate with -{ls}. -+ -For example: -+ -[source,yaml] ------------------------------------------------------------------------------- -output.logstash: - hosts: ["logs.mycompany.com:5044"] - ssl.certificate_authorities: ["/etc/ca.crt"] - ssl.certificate: "/etc/client.crt" - ssl.key: "/etc/client.key" ------------------------------------------------------------------------------- -+ -For more information about these configuration options, see <>. - -. Configure {ls} to use SSL. In the {ls} config file, specify the following settings for the https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html[{beats} input plugin for {ls}]: -+ -* `ssl`: When set to true, enables {ls} to use SSL/TLS. -* `ssl_certificate_authorities`: Configures {ls} to trust any certificates signed by the specified CA. -* `ssl_certificate` and `ssl_key`: Specify the certificate and key that {ls} uses to authenticate with the client. -* `ssl_verify_mode`: Specifies whether the {ls} server verifies the client certificate against the CA. You -need to specify either `peer` or `force_peer` to make the server ask for the certificate and validate it. If you -specify `force_peer`, and {beatname_uc} doesn't provide a certificate, the {ls} connection will be closed. If you choose not to use {ref}/certutil.html[`certutil`], the certificates that you obtain must allow for both `clientAuth` and `serverAuth` if the extended key usage extension is present. -+ -For example: -+ -[source,json] ------------------------------------------------------------------------------- -input { - beats { - port => 5044 - ssl => true - ssl_certificate_authorities => ["/etc/ca.crt"] - ssl_certificate => "/etc/server.crt" - ssl_key => "/etc/server.key" - ssl_verify_mode => "force_peer" - } -} ------------------------------------------------------------------------------- -+ -For more information about these options, see the -https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html[documentation for the {beats} input plugin]. - -[float] -[[testing-ssl-logstash]] -=== Validate the {ls} server's certificate - -Before running {beatname_uc}, you should validate the {ls} server's certificate. You can use `curl` to validate the certificate even though the protocol used to communicate with {ls} is not based on HTTP. For example: - -[source,shell] ------------------------------------------------------------------------------- -curl -v --cacert ca.crt https://logs.mycompany.com:5044 ------------------------------------------------------------------------------- - -If the test is successful, you'll receive an empty response error: - -[source,shell] ------------------------------------------------------------------------------- -* Rebuilt URL to: https://logs.mycompany.com:5044/ -* Trying 192.168.99.100... 
-* Connected to logs.mycompany.com (192.168.99.100) port 5044 (#0) -* TLS 1.2 connection using TLS_DHE_RSA_WITH_AES_256_CBC_SHA -* Server certificate: logs.mycompany.com -* Server certificate: mycompany.com -> GET / HTTP/1.1 -> Host: logs.mycompany.com:5044 -> User-Agent: curl/7.43.0 -> Accept: */* -> -* Empty reply from server -* Connection #0 to host logs.mycompany.com left intact -curl: (52) Empty reply from server ------------------------------------------------------------------------------- - -The following example uses the IP address rather than the hostname to validate the certificate: - -[source,shell] ------------------------------------------------------------------------------- -curl -v --cacert ca.crt https://192.168.99.100:5044 ------------------------------------------------------------------------------- - -Validation for this test fails because the certificate is not valid for the specified IP address. It's only valid for the `logs.mycompany.com`, the hostname that appears in the Subject field of the certificate. - -[source,shell] ------------------------------------------------------------------------------- -* Rebuilt URL to: https://192.168.99.100:5044/ -* Trying 192.168.99.100... -* Connected to 192.168.99.100 (192.168.99.100) port 5044 (#0) -* WARNING: using IP address, SNI is being disabled by the OS. -* SSL: certificate verification failed (result: 5) -* Closing connection 0 -curl: (51) SSL: certificate verification failed (result: 5) ------------------------------------------------------------------------------- - -See the <> for info about resolving this issue. - -[float] -=== Test the {beatname_uc} to {ls} connection - -If you have {beatname_uc} running as a service, first stop the service. Then test your setup by running {beatname_uc} in -the foreground so you can quickly see any errors that occur: - -["source","sh",subs="attributes,callouts"] ------------------------------------------------------------------------------- -{beatname_lc} -c {beatname_lc}.yml -e -v ------------------------------------------------------------------------------- - -Any errors will be printed to the console. See the <> for info about -resolving common errors. diff --git a/docs/shared-systemd.asciidoc b/docs/shared-systemd.asciidoc deleted file mode 100644 index 6dea935123c..00000000000 --- a/docs/shared-systemd.asciidoc +++ /dev/null @@ -1,108 +0,0 @@ -[[running-with-systemd]] -=== {beatname_uc} and systemd - -IMPORTANT: These commands only apply to the APM Server binary installation method. -Fleet-managed users should see {fleet-guide}/start-stop-elastic-agent.html[Start and stop {agent}s on edge hosts]. - -The DEB and RPM packages include a service unit for Linux systems with -systemd. On these systems, you can manage {beatname_uc} by using the usual -systemd commands. - -ifdef::apm-server[] -We recommend that the {beatname_pkg} process is run as a non-root user. -Therefore, that is the default setup for {beatname_uc}'s DEB package and RPM installation. 
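-To confirm which user the service runs as, you can query the unit
-(a quick check; assumes the default packaged unit name):
-
-["source", "sh", subs="attributes"]
------------------------------------------------
-systemctl show {beatname_pkg} --property=User
------------------------------------------------ 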
-endif::apm-server[]
-
-[float]
-==== Start and stop {beatname_uc}
-
-Use `systemctl` to start or stop {beatname_uc}:
-
-["source", "sh", subs="attributes"]
------------------------------------------------
-sudo systemctl start {beatname_pkg}
------------------------------------------------
-
-["source", "sh", subs="attributes"]
------------------------------------------------
-sudo systemctl stop {beatname_pkg}
------------------------------------------------
-
-By default, the {beatname_uc} service starts automatically when the system
-boots. To enable or disable auto start, use:
-
-["source", "sh", subs="attributes"]
------------------------------------------------
-sudo systemctl enable {beatname_pkg}
------------------------------------------------
-
-["source", "sh", subs="attributes"]
------------------------------------------------
-sudo systemctl disable {beatname_pkg}
------------------------------------------------
-
-[float]
-==== {beatname_uc} status and logs
-
-To get the service status, use `systemctl`:
-
-["source", "sh", subs="attributes"]
------------------------------------------------
-systemctl status {beatname_pkg}
------------------------------------------------
-
-Logs are stored by default in journald. To view the logs, use `journalctl`:
-
-["source", "sh", subs="attributes"]
------------------------------------------------
-journalctl -u {beatname_pkg}.service
------------------------------------------------
-
-[float]
-=== Customize systemd unit for {beatname_uc}
-
-The systemd service unit file includes environment variables that you can
-override to change the default options.
-
-// lint ignore usr
-[cols=">h,m,m",options="header"]
-|===
-| Variable | Default | Description
-| BEAT_LOG_OPTS | | Log options
-| BEAT_CONFIG_OPTS | -c /etc/{beatname_lc}/{beatname_lc}.yml | Flags for configuration file path
-| BEAT_PATH_OPTS | --path.home /usr/share/{beatname_lc} --path.config /etc/{beatname_lc} --path.data /var/lib/{beatname_lc} --path.logs /var/log/{beatname_lc} | Other paths
-|===
-
-To override these variables, create a drop-in unit file in the
-+/etc/systemd/system/{beatname_pkg}.service.d+ directory.
-
-For example, a file with the following content placed in
-+/etc/systemd/system/{beatname_pkg}.service.d/debug.conf+
-would override `BEAT_LOG_OPTS` to enable debug logging for the {es} output:
-
-["source", "systemd", subs="attributes"]
------------------------------------------------
-[Service]
-Environment="BEAT_LOG_OPTS=-d elasticsearch"
------------------------------------------------
-
-To apply your changes, reload the systemd configuration and restart
-the service:
-
-["source", "sh", subs="attributes"]
------------------------------------------------
-systemctl daemon-reload
-systemctl restart {beatname_pkg}
------------------------------------------------
-
-NOTE: It is recommended that you use a configuration management tool to
-include drop-in unit files. If you need to add a drop-in manually, use
-+systemctl edit {beatname_pkg}.service+.
-
-ifdef::apm-server[]
-include::{docdir}/config-ownership.asciidoc[]
-endif::apm-server[]
diff --git a/docs/source-map-how-to.asciidoc b/docs/source-map-how-to.asciidoc
deleted file mode 100644
index c8b03a785d2..00000000000
--- a/docs/source-map-how-to.asciidoc
+++ /dev/null
@@ -1,177 +0,0 @@
-[[source-map-how-to]]
-=== Create and upload source maps (RUM)
-
-Minifying JavaScript bundles in production is a common practice;
-it can greatly improve the load time and network latency of your applications.
-The problem with minified code is that it can be hard to debug.
-
-For best results, uploading source maps should become a part of your deployment procedure,
-and not something you only do when you see unhelpful errors.
-That's because uploading source maps after errors happen won't make old errors magically readable;
-errors must occur again for source mapping to occur.
- -Here's an example of an exception stack trace in the {apm-app} when using minified code. -As you can see, it's not very helpful. - -[role="screenshot"] -image::images/source-map-before.png[{apm-app} without source mapping] - -With a source map, minified files are mapped back to the original source code, -allowing you to maintain the speed advantage of minified code, -without losing the ability to quickly and easily debug your application. -Here's the same example as before, but with a source map uploaded and applied: - -[role="screenshot"] -image::images/source-map-after.png[{apm-app} with source mapping] - -Follow the steps below to enable source mapping your error stack traces in the {apm-app}: - -* <> -* <> -* <> - -[float] -[[source-map-rum-initialize]] -=== Initialize the RUM Agent - -Set the service name and version of your application when initializing the RUM Agent. -To make uploading subsequent source maps easier, the `serviceVersion` you choose might be the -`version` from your `package.json`. For example: - -[source,js] ----- -import { init as initApm } from '@elastic/apm-rum' -const serviceVersion = require("./package.json").version - -const apm = initApm({ - serviceName: 'myService', - serviceVersion: serviceVersion -}) ----- - -Or, `serviceVersion` could be a git commit reference. For example: - -[source,js] ----- -const git = require('git-rev-sync') -const serviceVersion = git.short() ----- - -It can also be any other unique string that indicates a specific version of your application. -The APM integration uses the service name and version to match the correct source map file to each stack trace. - -[float] -[[source-map-rum-generate]] -=== Generate a source map - -To be compatible with Elastic APM, source maps must follow the -https://sourcemaps.info/spec.html[source map revision 3 proposal spec]. - -Source maps can be generated and configured in many different ways. -For example, parcel automatically generates source maps by default. -If you're using webpack, some configuration may be needed to generate a source map: - -[source,js] ----- -const webpack = require('webpack') -const serviceVersion = require("./package.json").version <1> -const TerserPlugin = require('terser-webpack-plugin'); -module.exports = { - entry: 'app.js', - output: { - filename: 'app.min.js', - path: './dist' - }, - devtool: 'source-map', - plugins: [ - new webpack.DefinePlugin({'serviceVersion': JSON.stringify(serviceVersion)}), - new TerserPlugin({ - sourceMap: true - }) - ] -} ----- -<1> If you're using a different method of defining `serviceVersion`, you can set it here. - -[float] -[[source-map-rum-upload]] -=== Upload the source map - -TIP: When uploading a source map, ensure that RUM support is enabled in the APM integration. - -{kib} exposes a {kibana-ref}/rum-sourcemap-api.html[source map endpoint] for uploading source maps. -Source maps can be uploaded as a string, or as a file upload. - -Let's look at two different ways to upload a source map: curl and a custom application. -Each example includes the four fields necessary for APM Server to later map minified code to its source: - -* `service_name`: Should match the `serviceName` from step one. -* `service_version`: Should match the `serviceVersion` from step one. -* `bundle_filepath`: The absolute path of the final bundle as used in the web application. -* `sourcemap`: The location of the source map. - -If you have multiple source maps, you'll need to upload each individually. 
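-The next two sections show a single upload via curl and via a custom app.
-If your build emits several bundles, the same request can also be scripted in
-a loop. A sketch, assuming the default {kib} endpoint, an `API_KEY` environment
-variable, and source maps in `dist/` (adjust the public path to wherever your
-bundles are served from):
-
-[source,sh]
-----
-SERVICE_VERSION=$(node -e "console.log(require('./package.json').version);")
-for map in dist/*.js.map; do
-  bundle="${map%.map}" # e.g. dist/app.min.js.map -> dist/app.min.js
-  curl -X POST "http://localhost:5601/api/apm/sourcemaps" \
-    -H 'kbn-xsrf: true' \
-    -H "Authorization: ApiKey ${API_KEY}" \
-    -F 'service_name=foo' \
-    -F "service_version=${SERVICE_VERSION}" \
-    -F "bundle_filepath=http://localhost/${bundle#dist/}" \
-    -F "sourcemap=@${map}"
-done
-----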
-
-[float]
-[[source-map-curl]]
-==== Upload via curl
-
-Here's an example curl request that uploads the source map file created in the previous step.
-This request uses an API key for authentication.
-
-[source,console]
-----
-SERVICEVERSION=`node -e "console.log(require('./package.json').version);"` && \ <1>
-curl -X POST "http://localhost:5601/api/apm/sourcemaps" \
--H 'Content-Type: multipart/form-data' \
--H 'kbn-xsrf: true' \
--H "Authorization: ApiKey ${YOUR_API_KEY}" \ <2>
--F 'service_name=foo' \
--F "service_version=$SERVICEVERSION" \
--F 'bundle_filepath=/test/e2e/general-usecase/app.min.js' \
--F 'sourcemap=@./dist/app.min.js.map'
-----
-<1> This example uses the version from `package.json`
-<2> Double quotes are required so the shell expands `$SERVICEVERSION` and `${YOUR_API_KEY}`; the API key used here needs to have appropriate {kibana-ref}/rum-sourcemap-api.html[privileges]
-
-[float]
-[[source-map-custom-app]]
-==== Upload via a custom app
-
-To ensure uploading source maps becomes a part of your deployment process,
-consider automating the process with a custom application.
-Here's an example Node.js application that uploads the source map file created in the previous step:
-
-[source,js]
-----
-console.log('Uploading sourcemaps!')
-var fs = require('fs')
-var request = require('request')
-var filepath = './dist/app.min.js.map'
-var headers = {
-  // 'request' sets the multipart Content-Type (with boundary) automatically
-  'kbn-xsrf': 'true',
-  'Authorization': 'ApiKey ' + process.env.YOUR_API_KEY
-}
-var formData = {
-  service_name: 'service-name',
-  service_version: require('./package.json').version, // Or use 'git-rev-sync' for a git commit hash
-  bundle_filepath: 'http://localhost/app.min.js',
-  sourcemap: fs.createReadStream(filepath)
-}
-request.post({url: 'http://localhost:5601/api/apm/sourcemaps', headers: headers, formData: formData}, function (err, resp, body) {
-  if (err) {
-    console.log('Error while uploading sourcemaps!', err)
-  } else {
-    console.log('Sourcemaps uploaded!')
-  }
-})
-----
-
-[float]
-[[source-map-next]]
-=== What happens next
-
-Source maps are stored in {es}. When you upload a source map, a new {es} document is created
-containing the contents of the source map.
-When a RUM request comes in, APM Server will make use of these source map documents to apply the
-source map logic to the event's stack traces.
diff --git a/docs/span-compression.asciidoc b/docs/span-compression.asciidoc
deleted file mode 100644
index 74ffa75d0e3..00000000000
--- a/docs/span-compression.asciidoc
+++ /dev/null
@@ -1,88 +0,0 @@
-[[span-compression]]
-=== Span compression
-
-In some cases, APM agents may collect large numbers of very similar or identical spans in a transaction.
-For example, this can happen if spans are captured inside a loop, or in unoptimized SQL queries that use multiple queries instead of joins to fetch related data.
-In such cases, the upper limit of spans per transaction (by default, 500 spans) can be reached quickly, causing the agent to stop capturing potentially more relevant spans for a given transaction.
-
-Such repeated similar spans often aren't very relevant in themselves, especially if they are of very short duration.
-They can also clutter the UI, and cause processing and storage overhead.
-
-To address this problem, the APM agents can compress such spans into a single span.
-The compressed span retains most of the original span information, such as overall duration and the number of spans it represents.
-
-Regardless of the compression strategy, a span is eligible for compression if:
-
-- It has not propagated its trace context.
-- It is an _exit_ span (such as database query spans).
-- Its outcome is not `"failure"`.
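-In pseudocode, the eligibility check an agent applies before attempting
-compression looks roughly like this (an illustrative sketch of the rules
-above, not any agent's actual implementation; the field names are invented):
-
-[source,js]
-----
-// Illustrative only: mirrors the three eligibility rules listed above.
-function isCompressionEligible (span) {
-  return !span.hasPropagatedTraceContext && // has not propagated its trace context
-    span.isExit &&                          // exit spans only, e.g. database query spans
-    span.outcome !== 'failure'              // failed spans are kept as-is
-}
-----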
- - -[float] -[[span-compression-strategy]] -=== Compression strategies - -The {apm-agent} can select between two strategies to decide if two adjacent spans can be compressed. -Both strategies have the benefit that only one previous span needs to be kept in memory. -This is important to ensure that the agent doesn't require large amounts of memory to enable span compression. - -[float] -[[span-compression-same]] -==== Same-Kind strategy - -The agent selects this strategy if two adjacent spans have the same: - - * span type - * span subtype - * `destination.service.resource` (e.g. database name) - -[float] -[[span-compression-exact]] -==== Exact-Match strategy - -The agent selects this strategy if two adjacent spans have the same: - - * span name - * span type - * span subtype - * `destination.service.resource` (e.g. database name) - -[float] -[[span-compression-settings]] -=== Settings - -The agent has configuration settings to define upper thresholds in terms of span duration for both strategies. -For the "Same-Kind" strategy, the default limit is 0 milliseconds, which means that the "Same-Kind" strategy is disabled by default. For the "Exact-Match" strategy, the default limit is 50 milliseconds. -Spans with longer duration are not compressed. Please refer to the agent documentation for specifics. - -[float] -[[span-compression-support]] -=== Agent support - -Support for span compression is available in these agents: - -[options="header"] -|==== -| Agent | Same-kind config | Exact-match config -| **Go agent** -| {apm-go-ref-v}/configuration.html#config-span-compression-same-kind-duration[`ELASTIC_APM_SPAN_COMPRESSION_SAME_KIND_MAX_DURATION`] -| {apm-go-ref-v}/configuration.html#config-span-compression-exact-match-duration[`ELASTIC_APM_SPAN_COMPRESSION_EXACT_MATCH_MAX_DURATION`] -| **Java agent** -| {apm-java-ref-v}/config-huge-traces.html#config-span-compression-same-kind-max-duration[`span_compression_same_kind_max_duration`] -| {apm-java-ref-v}/config-huge-traces.html#config-span-compression-exact-match-max-duration[`span_compression_exact_match_max_duration`] -| **.NET agent** -| {apm-dotnet-ref-v}/config-core.html#config-span-compression-same-kind-max-duration[`SpanCompressionSameKindMaxDuration`] -| {apm-dotnet-ref-v}/config-core.html#config-span-compression-exact-match-max-duration[`SpanCompressionExactMatchMaxDuration`] -| **Node.js agent** -| {apm-node-ref-v}/configuration.html#span-compression-same-kind-max-duration[`spanCompressionSameKindMaxDuration`] -| {apm-node-ref-v}/configuration.html#span-compression-exact-match-max-duration[`spanCompressionExactMatchMaxDuration`] -// | **PHP agent** -// | {apm-php-ref-v}[``] -// | {apm-php-ref-v}[``] -| **Python agent** -| {apm-py-ref-v}/configuration.html#config-span-compression-same-kind-max-duration[`span_compression_same_kind_max_duration`] -| {apm-py-ref-v}/configuration.html#config-span-compression-exact-match-max_duration[`span_compression_exact_match_max_duration`] -// | **Ruby agent** -// | {apm-ruby-ref-v}[``] -// | {apm-ruby-ref-v}[``] -|==== diff --git a/docs/ssl-input-settings.asciidoc b/docs/ssl-input-settings.asciidoc deleted file mode 100644 index 987de726a9e..00000000000 --- a/docs/ssl-input-settings.asciidoc +++ /dev/null @@ -1,127 +0,0 @@ -[[agent-server-ssl]] -=== SSL/TLS input settings - -**** -image:./binary-yes-fm-yes.svg[supported deployment methods] - -Most options on this page are supported by all APM Server deployment methods. 
-**** - -These settings apply to SSL/TLS communication between the APM Server and APM Agents. -See <> to learn more. - -include::{tab-widget-dir}/tls-widget.asciidoc[] - -[float] -==== Enable TLS - -Enable or disable TLS. Disabled by default. - -|==== -| APM Server binary | `apm-server.ssl.enabled` -| Fleet-managed | `Enable TLS` -|==== - -[float] -==== File path to server certificate - -The path to the file containing the certificate for Server authentication. -Required if TLS is enabled. - -|==== -| APM Server binary | `apm-server.ssl.certificate` -| Fleet-managed | `File path to server certificate` -|==== - -[float] -==== File path to server certificate key - -The path to the file containing the Server certificate key. -Required if TLS is enabled. - -|==== -| APM Server binary | `apm-server.ssl.key` -| Fleet-managed | `File path to server certificate key` -|==== - -[float] -==== Key passphrase - -The passphrase used to decrypt an encrypted key stored in the configured `apm-server.ssl.key` file. - -|==== -| APM Server binary | `apm-server.ssl.key_passphrase` -| Fleet-managed | N/A -|==== - -[float] -==== Supported protocol versions - -This setting is a list of allowed protocol versions: -`SSLv3`, `TLSv1.0`, `TLSv1.1`, `TLSv1.2` and `TLSv1.3`. We do not recommend using `SSLv3` or `TLSv1.0`. -The default value is `[TLSv1.1, TLSv1.2, TLSv1.3]`. - -|==== -| APM Server binary | `apm-server.ssl.supported_protocols` -| Fleet-managed | `Supported protocol versions` -|==== - -[float] -==== Cipher suites for TLS connections - -The list of cipher suites to use. The first entry has the highest priority. -If this option is omitted, the Go crypto library's https://golang.org/pkg/crypto/tls/[default suites] -are used (recommended). Note that TLS 1.3 cipher suites are not -individually configurable in Go, so they are not included in this list. - -|==== -| APM Server binary | `apm-server.ssl.cipher_suites` -| Fleet-managed | `Cipher suites for TLS connections` -|==== - -include::{docdir}/shared-ssl-config.asciidoc[tag=cipher_suites] - -[float] -==== Curve types for ECDHE based cipher suites - -The list of curve types for ECDHE (Elliptic Curve Diffie-Hellman ephemeral key exchange). - -|==== -| APM Server binary | `apm-server.ssl.curve_types` -| Fleet-managed | `Curve types for ECDHE based cipher suites` -|==== - -[float] -==== List of root certificates for verifying client certificates - -The list of root certificates for verifying client certificates. -If `certificate_authorities` is empty or not set, the trusted certificate authorities of the host system are used. -If `certificate_authorities` is set, `client_authentication` will be automatically set to `required`. -Sending client certificates is currently only supported by the RUM agent through the browser, -the Java agent (see {apm-java-ref-v}/ssl-configuration.html[Agent certificate authentication]), -and the Jaeger agent. - -|==== -| APM Server binary | `apm-server.ssl.certificate_authorities` -| Fleet-managed | N/A -|==== - -[float] -==== Client authentication - -This configures what types of client authentication are supported. The valid options -are `none`, `optional`, and `required`. The default is `none`. -If `certificate_authorities` has been specified, this setting will automatically change to `required`. -This option only needs to be configured when the agent is expected to provide a client certificate. 
-Sending client certificates is currently only supported by the RUM agent through the browser, -the Java agent (see {apm-java-ref-v}/ssl-configuration.html[Agent certificate authentication]), -and the Jaeger agent. - -* `none` - Disables client authentication. -* `optional` - When a client certificate is given, the server will verify it. -* `required` - Requires clients to provide a valid certificate. - -|==== -| APM Server binary | `apm-server.ssl.client_authentication` -| Fleet-managed | N/A -|==== diff --git a/docs/tab-widgets/anonymous-auth-widget.asciidoc b/docs/tab-widgets/anonymous-auth-widget.asciidoc deleted file mode 100644 index 9c4b0c06d90..00000000000 --- a/docs/tab-widgets/anonymous-auth-widget.asciidoc +++ /dev/null @@ -1,40 +0,0 @@ -++++ -
-++++ - -include::anonymous-auth.asciidoc[tag=fleet-managed] - -++++ -
-include::anonymous-auth.asciidoc[tag=binary]
-
-++++
\ No newline at end of file
diff --git a/docs/tab-widgets/anonymous-auth.asciidoc b/docs/tab-widgets/anonymous-auth.asciidoc
deleted file mode 100644
index 73f6db156bf..00000000000
--- a/docs/tab-widgets/anonymous-auth.asciidoc
+++ /dev/null
@@ -1,29 +0,0 @@
-// tag::fleet-managed[]
-When an <> or <> is configured,
-anonymous authentication must be enabled to collect RUM data.
-Set **Anonymous Agent access** to true to enable anonymous authentication.
-
-When configuring anonymous authentication for client-side services,
-there are a few configuration variables that can mitigate the impact of malicious requests to an
-unauthenticated APM Server endpoint.
-
-Use the **Allowed anonymous agents** and **Allowed anonymous services** configs to ensure that the
-`agent.name` and `service.name` of each incoming request match a specified list.
-
-Additionally, the APM Server can rate-limit unauthenticated requests based on the client IP address
-(`client.ip`) of the request.
-This allows you to specify the maximum number of requests allowed per unique IP address, per second.
-// end::fleet-managed[]
-
-// tag::binary[]
-When an <> or <> is configured,
-anonymous authentication must be enabled to collect RUM data.
-To enable anonymous access, set either <> or
-<> to `true`.
-
-Because anyone can send anonymous events to the APM Server,
-additional configuration variables are available to rate-limit the number of anonymous events the APM Server processes;
-throughput is equal to the `rate_limit.ip_limit` times the `rate_limit.event_limit`.
-
-See <> for a complete list of options and a sample configuration file.
-// end::binary[]
\ No newline at end of file
diff --git a/docs/tab-widgets/api-key-widget.asciidoc b/docs/tab-widgets/api-key-widget.asciidoc
deleted file mode 100644
index ab74e730025..00000000000
--- a/docs/tab-widgets/api-key-widget.asciidoc
+++ /dev/null
@@ -1,40 +0,0 @@
-++++
-
-++++ - -include::api-key.asciidoc[tag=fleet-managed] - -++++ -
-include::api-key.asciidoc[tag=binary]
-
-++++ \ No newline at end of file diff --git a/docs/tab-widgets/api-key.asciidoc b/docs/tab-widgets/api-key.asciidoc deleted file mode 100644 index a83ed72c201..00000000000 --- a/docs/tab-widgets/api-key.asciidoc +++ /dev/null @@ -1,25 +0,0 @@ -// tag::fleet-managed[] -Enable API key authorization in the <>. -You should also set a limit on the number of unique API keys that APM Server allows per minute; -this value should be the number of unique API keys configured in your monitored services. -// end::fleet-managed[] - -// tag::binary[] -API keys are disabled by default. Enable and configure this feature in the `apm-server.auth.api_key` -section of the +{beatname_lc}.yml+ configuration file. - -At a minimum, you must enable API keys, -and should set a limit on the number of unique API keys that APM Server allows per minute. -Here's an example `apm-server.auth.api_key` config using 50 unique API keys: - -[source,yaml] ----- -apm-server.auth.api_key.enabled: true <1> -apm-server.auth.api_key.limit: 50 <2> ----- -<1> Enables API keys -<2> Restricts the number of unique API keys that {es} allows each minute. -This value should be the number of unique API keys configured in your monitored services. - -All other configuration options are described in <>. -// end::binary[] \ No newline at end of file diff --git a/docs/tab-widgets/directory-layout-widget.asciidoc b/docs/tab-widgets/directory-layout-widget.asciidoc deleted file mode 100644 index e92c7169342..00000000000 --- a/docs/tab-widgets/directory-layout-widget.asciidoc +++ /dev/null @@ -1,59 +0,0 @@ -++++ -
-++++ - -include::directory-layout.asciidoc[tag=docker] - -++++ -
-++++ \ No newline at end of file diff --git a/docs/tab-widgets/directory-layout.asciidoc b/docs/tab-widgets/directory-layout.asciidoc deleted file mode 100644 index 9586af2f626..00000000000 --- a/docs/tab-widgets/directory-layout.asciidoc +++ /dev/null @@ -1,58 +0,0 @@ -// tag::zip[] - -[cols=" -
-++++ - -include::distributed-trace-receive.asciidoc[tag=java] - -++++ -
- - - - - - -++++ \ No newline at end of file diff --git a/docs/tab-widgets/distributed-trace-receive.asciidoc b/docs/tab-widgets/distributed-trace-receive.asciidoc deleted file mode 100644 index ecbeebfaa1f..00000000000 --- a/docs/tab-widgets/distributed-trace-receive.asciidoc +++ /dev/null @@ -1,208 +0,0 @@ -// tag::go[] - -// Need help with this example - -1. Parse the incoming `TraceContext` with -https://pkg.go.dev/go.elastic.co/apm/module/apmhttp/v2#ParseTraceparentHeader[`ParseTraceparentHeader`] or -https://pkg.go.dev/go.elastic.co/apm/module/apmhttp/v2#ParseTracestateHeader[`ParseTracestateHeader`]. - -2. Start a new transaction or span as a child of the incoming transaction with -{apm-go-ref}/api.html#tracer-api-start-transaction-options[`StartTransactionOptions`] or -{apm-go-ref}/api.html#transaction-start-span-options[`StartSpanOptions`]. - -Example: - -[source,go] ----- -// Receive incoming TraceContext -traceContext, _ := apmhttp.ParseTraceparentHeader(r.Header.Get("Traceparent")) <1> -traceContext.State, _ = apmhttp.ParseTracestateHeader(r.Header["Tracestate"]...) <2> - -opts := apm.TransactionOptions{ - TraceContext: traceContext, <3> -} -transaction := apm.DefaultTracer().StartTransactionOptions("GET /", "request", opts) <4> ----- -<1> Parse the `TraceParent` header -<2> Parse the `Tracestate` header -<3> Set the parent trace context -<4> Start a new transaction as a child of the received `TraceContext` - -// end::go[] - -// *************************************************** -// *************************************************** - -// tag::ios[] - -experimental::[] - -_Not applicable._ - -// end::ios[] - -// *************************************************** -// *************************************************** - -// tag::java[] - -1. Create a transaction as a child of the incoming transaction with -{apm-java-ref}/public-api.html#api-transaction-inject-trace-headers[`startTransactionWithRemoteParent()`]. - -2. Start and name the transaction with {apm-java-ref}/public-api.html#api-transaction-activate[`activate()`] -and {apm-java-ref}/public-api.html#api-set-name[`setName()`]. 
- -Example: - -[source,java] ----- -// Hook into a callback provided by the framework that is called on incoming requests -public Response onIncomingRequest(Request request) throws Exception { - // creates a transaction representing the server-side handling of the request - Transaction transaction = ElasticApm.startTransactionWithRemoteParent(request::getHeader, request::getHeaders); <1> - try (final Scope scope = transaction.activate()) { <2> - String name = "a useful name like ClassName#methodName where the request is handled"; - transaction.setName(name); <3> - transaction.setType(Transaction.TYPE_REQUEST); <4> - return request.handle(); - } catch (Exception e) { - transaction.captureException(e); - throw e; - } finally { - transaction.end(); <5> - } -} ----- -<1> Create a transaction as the child of a remote parent -<2> Activate the transaction -<3> Name the transaction -<4> Add a transaction type -<5> Eventually, end the transaction - -// end::java[] - -// *************************************************** -// *************************************************** - -// tag::net[] - -Deserialize the incoming distributed tracing context, and pass it to any of the -{apm-dotnet-ref}/public-api.html#api-start-transaction[`StartTransaction`] or -{apm-dotnet-ref}/public-api.html#convenient-capture-transaction[`CaptureTransaction`] APIs -- -all of which have an optional `DistributedTracingData` parameter. -This will create a new transaction or span as a child of the incoming trace context. - -Example starting a new transaction: - -[source,csharp] ----- -var transaction2 = Agent.Tracer.StartTransaction("Transaction2", "TestTransaction", - DistributedTracingData.TryDeserializeFromString(serializedDistributedTracingData)); ----- - -// end::net[] - -// *************************************************** -// *************************************************** - -// tag::node[] - -1. Decode and store the `traceparent` in the receiving service. - -2. Pass in the `traceparent` as the `childOf` option to manually start a new transaction -as a child of the received `traceparent` with -{apm-node-ref}/agent-api.html#apm-start-transaction[`apm.startTransaction()`]. - -Example receiving a `traceparent` over raw UDP: - -[source,js] ----- -const traceparent = readTraceparentFromUDPPacket() <1> -agent.startTransaction('my-service-b-transaction', { childOf: traceparent }) <2> ----- -<1> Read the `traceparent` from the incoming request. -<2> Use the `traceparent` to initialize a new transaction that is a child of the original `traceparent`. - -// end::node[] - -// *************************************************** -// *************************************************** - -// tag::php[] - -1. Receive the distributed tracing data on the server side. - -2. Begin a new transaction using the agent's public API. For example, use {apm-php-ref-v}/public-api.html#api-elasticapm-class-begin-current-transaction[`ElasticApm::beginCurrentTransaction`] -and pass the received distributed tracing data (serialized as string) as a parameter. -This will create a new transaction as a child of the incoming trace context. - -3. Don't forget to eventually end the transaction on the server side. 
- -Example: - -[source,php] ----- -$receiverTransaction = ElasticApm::beginCurrentTransaction( <1> - 'GET /data-api', - 'data-layer', - /* timestamp */ null, - $distDataAsString <2> -); ----- -<1> Start a new transaction -<2> Pass in the received distributed tracing data (serialized as string) - -Once this new transaction has been created in the receiving service, -you can create child spans, or use any other agent API methods as you typically would. - -// end::php[] - -// *************************************************** -// *************************************************** - -// tag::python[] - -1. Create a `TraceParent` object from a string or HTTP header. - -2. Start a new transaction as a child of the `TraceParent` by passing in a `TraceParent` object. - -Example using HTTP headers: - -[source,python] ----- -parent = elasticapm.trace_parent_from_headers(headers_dict) <1> -client.begin_transaction('processors', trace_parent=parent) <2> ----- -<1> Create a `TraceParent` object from HTTP headers formed as a dictionary -<2> Begin a new transaction as a child of the received `TraceParent` - -TIP: See the {apm-py-ref}/api.html#traceparent-api[`TraceParent` API] for additional examples. -// end::python[] - -// *************************************************** -// *************************************************** - -// tag::ruby[] - -Start a new transaction or span as a child of the incoming transaction or span with -{apm-ruby-ref}/api.html#api-agent-with_transaction[`with_transaction`] or -{apm-ruby-ref}/api.html#api-agent-with_span[`with_span`]. - -Example: - -[source,ruby] ----- -# env being a Rack env -context = ElasticAPM::TraceContext.parse(env: env) <1> - -ElasticAPM.with_transaction("Do things", trace_context: context) do <2> - ElasticAPM.with_span("Do nested thing", trace_context: context) do <3> - end -end ----- -<1> Parse the incoming `TraceContext` -<2> Create a transaction as a child of the incoming `TraceContext` -<3> Create a span as a child of the newly created transaction. `trace_context` is optional here, -as spans are automatically created as a child of their parent's transaction's `TraceContext` when none is passed. - -// end::ruby[] diff --git a/docs/tab-widgets/distributed-trace-send-widget.asciidoc b/docs/tab-widgets/distributed-trace-send-widget.asciidoc deleted file mode 100644 index 115cf6556ca..00000000000 --- a/docs/tab-widgets/distributed-trace-send-widget.asciidoc +++ /dev/null @@ -1,150 +0,0 @@ -// The Java agent defaults to visible. -// Change with `aria-selected="false"` and `hidden=""` -++++ -
-include::distributed-trace-send.asciidoc[tag=go]
-
-include::distributed-trace-send.asciidoc[tag=ios]
-
-++++ - -include::distributed-trace-send.asciidoc[tag=java] - -++++ -
-include::distributed-trace-send.asciidoc[tag=net]
-
-include::distributed-trace-send.asciidoc[tag=node]
-
-include::distributed-trace-send.asciidoc[tag=php]
-
-include::distributed-trace-send.asciidoc[tag=python]
-
-include::distributed-trace-send.asciidoc[tag=ruby]
-
-++++ \ No newline at end of file diff --git a/docs/tab-widgets/distributed-trace-send.asciidoc b/docs/tab-widgets/distributed-trace-send.asciidoc deleted file mode 100644 index b38b179f4f0..00000000000 --- a/docs/tab-widgets/distributed-trace-send.asciidoc +++ /dev/null @@ -1,221 +0,0 @@ -// tag::go[] - -1. Start a transaction with -{apm-go-ref}/api.html#tracer-api-start-transaction[`StartTransaction`] or a span with -{apm-go-ref}/api.html#transaction-start-span[`StartSpan`]. - -2. Get the active `TraceContext`. - -3. Send the `TraceContext` to the receiving service. - -Example: - -[source,go] ----- -transaction := apm.DefaultTracer().StartTransaction("GET /", "request") <1> -traceContext := transaction.TraceContext() <2> - -// Send TraceContext to receiving service -traceparent := apmhttp.FormatTraceparentHeader(traceContext) <3> -tracestate := traceContext.State.String() ----- -<1> Start a transaction -<2> Get `TraceContext` from current Transaction -<3> Format the `TraceContext` or `tracestate` as a `traceparent` header. -// end::go[] - -// *************************************************** -// *************************************************** - -// tag::ios[] - -experimental::[] - -The agent will automatically inject trace headers into network requests using `URLSessions`, but if you're using a non-standard network library you may need to manually inject them. It will be done using the OpenTelemetry APIs: - -1. Create a `Setter` - -2. Create a `Span` per https://github.com/open-telemetry/opentelemetry-swift/blob/main/Examples/Simple%20Exporter/main.swift#L35[Open Telemetry standards] - -3. Inject trace context to header dictionary - -4. Follow the procedure of your network library to complete the network request. Make sure to call `span.end()` when the request succeeds or fails. - -[source,swift] ----- -import OpenTelemetryApi -import OpenTelemetrySdk - -struct BasicSetter: Setter { <1> - func set(carrier: inout [String: String], key: String, value: String) { - carrier[key] = value - } -} - -let span : Span = ... <2> -let setter = BasicSetter() -let propagator = W3CTraceContextPropagator() -var headers = [String:String]() - -propagator.inject(spanContext: span.context, carrier: &headers, setter:setter) <3> - -let request = URLRequest(...) -request.allHTTPHeaderFields = headers -... // make network request -span.end() ----- -// end::ios[] - -// *************************************************** -// *************************************************** - -// tag::java[] - -1. Start a transaction with {apm-java-ref}/public-api.html#api-start-transaction[`startTransaction`], -or a span with {apm-java-ref}/public-api.html#api-span-start-span[`startSpan`]. - -2. 
Inject the `traceparent` header into the request object with
-{apm-java-ref}/public-api.html#api-transaction-inject-trace-headers[`injectTraceHeaders`].
-
-Example of manually instrumenting an RPC framework:
-
-[source,java]
-----
-// Hook into a callback provided by the RPC framework that is called on outgoing requests
-public Response onOutgoingRequest(Request request) throws Exception {
-    Span span = ElasticApm.currentSpan() <1>
-            .startSpan("external", "http", null)
-            .setName(request.getMethod() + " " + request.getHost());
-    try (final Scope scope = span.activate()) {
-        span.injectTraceHeaders((name, value) -> request.addHeader(name, value)); <2>
-        return request.execute();
-    } catch (Exception e) {
-        span.captureException(e);
-        throw e;
-    } finally {
-        span.end(); <3>
-    }
-}
-----
-<1> Create a span representing an external call
-<2> Inject the `traceparent` header into the request object
-<3> End the span
-
-// end::java[]
-
-// ***************************************************
-// ***************************************************
-
-// tag::net[]
-
-1. Serialize the distributed tracing context of the active transaction or span with
-{apm-dotnet-ref}/public-api.html#api-current-transaction[`CurrentTransaction`] or
-{apm-dotnet-ref}/public-api.html#api-current-span[`CurrentSpan`].
-
-2. Send the serialized context to the receiving service.
-
-Example:
-
-[source,csharp]
-----
-string outgoingDistributedTracingData =
-    (Agent.Tracer.CurrentSpan?.OutgoingDistributedTracingData
-        ?? Agent.Tracer.CurrentTransaction?.OutgoingDistributedTracingData)?.SerializeToString();
-// Now send `outgoingDistributedTracingData` to the receiving service
-----
-
-// end::net[]
-
-// ***************************************************
-// ***************************************************
-
-// tag::node[]
-
-1. Start a transaction with {apm-node-ref}/agent-api.html#apm-start-transaction[`apm.startTransaction()`],
-or a span with {apm-node-ref}/agent-api.html#apm-start-span[`apm.startSpan()`].
-
-2. Get the serialized `traceparent` string of the started transaction/span with
-{apm-node-ref}/agent-api.html#apm-current-traceparent[`currentTraceparent`].
-
-3. Encode the `traceparent` and send it to the receiving service inside your regular request.
-
-Example using raw UDP to communicate between two services, A and B:
-
-[source,js]
-----
-agent.startTransaction('my-service-a-transaction'); <1>
-const traceparent = agent.currentTraceparent; <2>
-sendMetadata(`traceparent: ${traceparent}\n`); <3>
-----
-<1> Start a transaction
-<2> Get the current `traceparent`
-<3> Send the `traceparent` as a header to service B.
-
-// end::node[]
-
-// ***************************************************
-// ***************************************************
-
-// tag::php[]
-
-1. On the client side (i.e., the side sending the request), get the current distributed tracing context.
-
-2. Serialize the current distributed tracing context to a format supported by the request's transport and send it to the server side (i.e., the side receiving the request).
-
-Example:
-
-[source,php]
-----
-$distDataAsString = ElasticApm::getSerializedCurrentDistributedTracingData(); <1>
-----
-<1> Get the current distributed tracing data serialized as string
-
-// end::php[]
-
-// ***************************************************
-// ***************************************************
-
-// tag::python[]
-
-1. Start a transaction with {apm-py-ref}/api.html#client-api-begin-transaction[`begin_transaction()`].
-
-2. Get the `trace_parent` of the active transaction.
-
-3. Send the `trace_parent` to the receiving service.
-
-Example:
-
-[source,python]
-----
-client.begin_transaction('new-transaction') <1>
-
-trace_parent_str = elasticapm.get_trace_parent_header('new-transaction') <2>
-
-# Send `trace_parent_str` to another service
-----
-<1> Start a new transaction
-<2> Get the string representation of the current transaction's `TraceParent` object
-// end::python[]
-
-// ***************************************************
-// ***************************************************
-
-// tag::ruby[]
-
-1. Start a span with {apm-ruby-ref}/api.html#api-agent-with_span[`with_span`].
-
-2. Get the active `TraceContext`.
-
-3. Send the `TraceContext` to the receiving service.
-
-[source,ruby]
-----
-ElasticAPM.with_span "Name" do |span| <1>
-  header = span.trace_context.traceparent.to_header <2>
-  # send the TraceContext Header to a receiving service...
-end
-----
-<1> Start a span
-<2> Get the `TraceContext`
-
-// end::ruby[]
diff --git a/docs/tab-widgets/install-agents-widget.asciidoc b/docs/tab-widgets/install-agents-widget.asciidoc
deleted file mode 100644
index 51936e799e0..00000000000
--- a/docs/tab-widgets/install-agents-widget.asciidoc
+++ /dev/null
@@ -1,186 +0,0 @@
-// The Java agent defaults to visible.
-// Change with `aria-selected="false"` and `hidden=""`
-++++
-
-include::install-agents.asciidoc[tag=go]
-
-include::install-agents.asciidoc[tag=ios]
-
-++++ - -include::install-agents.asciidoc[tag=java] - -++++ -
-include::install-agents.asciidoc[tag=net]
-
-include::install-agents.asciidoc[tag=node]
-
-include::install-agents.asciidoc[tag=php]
-
-include::install-agents.asciidoc[tag=python]
-
-include::install-agents.asciidoc[tag=ruby]
-
-include::install-agents.asciidoc[tag=rum]
-
-++++ \ No newline at end of file diff --git a/docs/tab-widgets/install-agents.asciidoc b/docs/tab-widgets/install-agents.asciidoc deleted file mode 100644 index 5c74b7124b2..00000000000 --- a/docs/tab-widgets/install-agents.asciidoc +++ /dev/null @@ -1,578 +0,0 @@ -// tag::go[] -*Install the agent* - -Install the {apm-agent} packages for Go. - -[source,go] ----- -go get go.elastic.co/apm ----- - -*Configure the agent* - -Agents are libraries that run inside of your application process. -APM services are created programmatically based on the executable file name, or the `ELASTIC_APM_SERVICE_NAME` environment variable. - -[source,go] ----- -# Initialize using environment variables: - -# Set the service name. Allowed characters: a-z, A-Z, 0-9, -, _, and space. -# If ELASTIC_APM_SERVICE_NAME is not specified, the executable name will be used. -export ELASTIC_APM_SERVICE_NAME= - -# Set custom APM Server URL. Default: http://localhost:8200. -export ELASTIC_APM_SERVER_URL= - -# Use if APM Server requires a token -export ELASTIC_APM_SECRET_TOKEN= ----- - -*Instrument your application* - -Instrument your Go application by using one of the provided instrumentation modules or by using the tracer API directly. - -[source,go] ----- -import ( - "net/http" - - "go.elastic.co/apm/module/apmhttp" -) - -func main() { - mux := http.NewServeMux() - ... - http.ListenAndServe(":8080", apmhttp.Wrap(mux)) -} ----- - -*Learn more in the agent reference* - -* {apm-go-ref-v}/supported-tech.html[Supported technologies] -* {apm-go-ref-v}/configuration.html[Advanced configuration] -* {apm-go-ref-v}/getting-started.html[Detailed guide to instrumenting Go source code] -// end::go[] - -// *************************************************** -// *************************************************** - -// tag::ios[] - -experimental::[] - -*Add the agent dependency to your project* - -Add the Elastic APM iOS Agent as a -https://developer.apple.com/documentation/swift_packages/adding_package_dependencies_to_your_app[package dependency] -to your Xcode project or your `Package.swift`: - -[source,swift,linenums,highlight=2;10] ----- -Package( - dependencies:[ - .package(name: "iOSAgent", url: "git@github.com:elastic/apm-agent-ios.git", .branch("main")), - ], - targets:[ - .target( - name: "MyApp", - dependencies: [ - .product(name: "iOSAgent", package: "iOSAgent") - ] - ), -]) ----- - -*Initialize the agent* - -If you're using `SwiftUI` to build your app, add the following to `App.swift`: - -[source,swift,linenums,swift,highlight=2;7..12] ----- -import SwiftUI -import iOSAgent - -@main -struct MyApp: App { - init() { - var config = AgentConfiguration() - config.collectorAddress = "127.0.0.1" <1> - config.collectorPort = 8200 <2> - config.collectorTLS = false <3> - config.secretToken = "" <4> - Agent.start(with: config) - } - var body: some Scene { - WindowGroup { - ContentView() - } - } -} ----- -<1> APM Server URL or IP address -<2> APM Server port number -<3> Enable TLS for Open telemetry exporters -<4> Set secret token for APM server connection - -If you're not using `SwiftUI`, you can add the same thing to your `AppDelegate` file: - -`AppDelegate.swift` -[source,swift,linenums,highlight=2;9..14] ----- -import UIKit -import iOSAgent -@main -class AppDelegate: UIResponder, UIApplicationDelegate { - func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) 
-> Bool {
-        var config = AgentConfiguration()
-        config.collectorAddress = "127.0.0.1" <1>
-        config.collectorPort = 8200 <2>
-        config.collectorTLS = false <3>
-        config.secretToken = "" <4>
-        Agent.start(with: config)
-        return true
-    }
-}
-----
-<1> APM Server URL or IP address
-<2> APM Server port number
-<3> Enable TLS for OpenTelemetry exporters
-<4> Set secret token for APM Server connection
-
-// end::ios[]
-
-// ***************************************************
-// ***************************************************
-
-// tag::java[]
-
-*Download the {apm-agent}*
-
-Download the agent jar from http://search.maven.org/#search%7Cga%7C1%7Ca%3Aelastic-apm-agent[Maven Central].
-Do not add the agent as a dependency to your application.
-
-*Start your application with the `javaagent` flag*
-
-Add the `-javaagent` flag and configure the agent with system properties.
-
-* Set required service name
-* Set custom APM Server URL (default: http://localhost:8200)
-* Set the base package of your application
-
-[source,bash]
-----
-java -javaagent:/path/to/elastic-apm-agent-<version>.jar \
-     -Delastic.apm.service_name=my-application \
-     -Delastic.apm.server_urls=http://localhost:8200 \
-     -Delastic.apm.secret_token= \
-     -Delastic.apm.application_packages=org.example \
-     -jar my-application.jar
-----
-
-*Learn more in the agent reference*
-
-* {apm-java-ref-v}/supported-technologies-details.html[Supported technologies]
-* {apm-java-ref-v}/configuration.html[Advanced configuration]
-// end::java[]
-
-// ***************************************************
-// ***************************************************
-
-// tag::net[]
-*Download the {apm-agent}*
-
-Add the agent packages from https://www.nuget.org/packages?q=Elastic.apm[NuGet] to your .NET application.
-There are multiple NuGet packages available for different use cases.
-
-For an ASP.NET Core application with Entity Framework Core, download the
-https://www.nuget.org/packages/Elastic.Apm.NetCoreAll[Elastic.Apm.NetCoreAll] package.
-This package will automatically add every agent component to your application.
-
-To minimize the number of dependencies, you can use the
-https://www.nuget.org/packages/Elastic.Apm.AspNetCore[Elastic.Apm.AspNetCore] package for just ASP.NET Core monitoring, or the
-https://www.nuget.org/packages/Elastic.Apm.EntityFrameworkCore[Elastic.Apm.EntityFrameworkCore] package for just Entity Framework Core monitoring.
-
-If you only want to use the public agent API for manual instrumentation, use the
-https://www.nuget.org/packages/Elastic.Apm[Elastic.Apm] package.
-
-*Add the agent to the application*
-
-For an ASP.NET Core application with the `Elastic.Apm.NetCoreAll` package,
-call the `UseAllElasticApm` method in the `Configure` method within the `Startup.cs` file:
-
-[source,csharp]
-----
-public class Startup
-{
-    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
-    {
-        app.UseAllElasticApm(Configuration);
-        //…rest of the method
-    }
-    //…rest of the class
-}
-----
-
-Passing an `IConfiguration` instance is optional and by doing so,
-the agent will read config settings through this `IConfiguration` instance, for example,
-from the `appsettings.json` file:
-
-[source,json]
-----
-{
-    "ElasticApm": {
-        "SecretToken": "",
-        "ServerUrls": "http://localhost:8200", //Set custom APM Server URL (default: http://localhost:8200)
-        "ServiceName" : "MyApp", //allowed characters: a-z, A-Z, 0-9, -, _, and space. Default is the entry assembly of the application
-    }
-}
-----
-
-If you don’t pass an `IConfiguration` instance to the agent, for example, in a non-ASP.NET Core application,
-you can configure the agent with environment variables.
-See the agent reference for more information.
-
-*Learn more in the agent reference*
-
-* {apm-dotnet-ref-v}/supported-technologies.html[Supported technologies]
-* {apm-dotnet-ref-v}/configuration.html[Advanced configuration]
-// end::net[]
-
-// ***************************************************
-// ***************************************************
-
-// tag::node[]
-*Install the {apm-agent}*
-
-Install the {apm-agent} for Node.js as a dependency to your application.
-
-[source,bash]
-----
-npm install elastic-apm-node --save
-----
-
-*Configure the agent*
-
-Agents are libraries that run inside of your application process. APM services are created programmatically based on the `serviceName`.
-This agent supports a variety of frameworks but can also be used with your custom stack.
-
-[source,js]
-----
-// Add this to the VERY top of the first file loaded in your app
-var apm = require('elastic-apm-node').start({
-  // Override service name from package.json
-  // Allowed characters: a-z, A-Z, 0-9, -, _, and space
-  serviceName: '',
-
-  // Use if APM Server requires a token
-  secretToken: '',
-
-  // Set custom APM Server URL (default: http://localhost:8200)
-  serverUrl: ''
-})
-----
-
-*Learn more in the agent reference*
-
-* {apm-node-ref-v}/supported-technologies.html[Supported technologies]
-* {apm-node-ref-v}/advanced-setup.html[Babel/ES Modules]
-* {apm-node-ref-v}/configuring-the-agent.html[Advanced configuration]
-
-// end::node[]
-
-// ***************************************************
-// ***************************************************
-
-// tag::php[]
-
-*Install the agent*
-
-Install the PHP agent using one of the https://github.com/elastic/apm-agent-php/releases[published packages].
-
-To use the RPM Package (RHEL/CentOS and Fedora):
-
-[source,bash]
-----
-rpm -ivh <package-file>.rpm
-----
-
-To use the DEB package (Debian and Ubuntu):
-
-[source,bash]
-----
-dpkg -i <package-file>.deb
-----
-
-To use the APK package (Alpine):
-
-[source,bash]
-----
-apk add --allow-untrusted <package-file>.apk
-----
-
-If you can’t find your distribution,
-you can install the agent by {apm-php-ref-v}/setup.html[building it from source].
-
-*Configure the agent*
-
-Configure your agent inside of the `php.ini` file:
-
-[source,ini]
-----
-elastic_apm.server_url=http://localhost:8200
-elastic_apm.secret_token=SECRET_TOKEN
-elastic_apm.service_name="My-service"
-----
-
-*Learn more in the agent reference*
-
-* {apm-php-ref-v}/supported-technologies.html[Supported technologies]
-* {apm-php-ref-v}/configuration.html[Configuration]
-
-// end::php[]
-
-// ***************************************************
-// ***************************************************
-
-// tag::python[]
-Django::
-+
-*Install the {apm-agent}*
-+
-Install the {apm-agent} for Python as a dependency.
-+
-[source,bash]
-----
-$ pip install elastic-apm
-----
-+
-*Configure the agent*
-+
-Agents are libraries that run inside of your application process.
-APM services are created programmatically based on the `SERVICE_NAME`.
-+
-[source,python]
-----
-# Add the agent to the installed apps
-INSTALLED_APPS = (
-    'elasticapm.contrib.django',
-    # ...
-)
-
-ELASTIC_APM = {
-    # Set required service name. 
Allowed characters: - # a-z, A-Z, 0-9, -, _, and space - 'SERVICE_NAME': '', - - # Use if APM Server requires a token - 'SECRET_TOKEN': '', - - # Set custom APM Server URL (default: http://localhost:8200) - 'SERVER_URL': '', -} - -# To send performance metrics, add our tracing middleware: -MIDDLEWARE = ( - 'elasticapm.contrib.django.middleware.TracingMiddleware', - #... -) ----- - -Flask:: -+ -*Install the {apm-agent}* -+ -Install the {apm-agent} for Python as a dependency. -+ -[source,python] ----- -$ pip install elastic-apm[flask] ----- -+ -*Configure the agent* -+ -Agents are libraries that run inside of your application process. -APM services are created programmatically based on the `SERVICE_NAME`. -+ -[source,python] ----- -# initialize using environment variables -from elasticapm.contrib.flask import ElasticAPM -app = Flask(__name__) -apm = ElasticAPM(app) - -# or configure to use ELASTIC_APM in your application settings -from elasticapm.contrib.flask import ElasticAPM -app.config['ELASTIC_APM'] = { - # Set required service name. Allowed characters: - # a-z, A-Z, 0-9, -, _, and space - 'SERVICE_NAME': '', - - # Use if APM Server requires a token - 'SECRET_TOKEN': '', - - # Set custom APM Server URL (default: http://localhost:8200) - 'SERVER_URL': '', -} - -apm = ElasticAPM(app) ----- - -*Learn more in the agent reference* - -* {apm-py-ref-v}/supported-technologies.html[Supported technologies] -* {apm-py-ref-v}/configuration.html[Advanced configuration] - -// end::python[] - -// *************************************************** -// *************************************************** - -// tag::ruby[] -*Install the {apm-agent}* - -Add the agent to your Gemfile. - -[source,ruby] ----- -gem 'elastic-apm' ----- -*Configure the agent* - -Ruby on Rails:: -+ -APM is automatically started when your app boots. -Configure the agent by creating the config file `config/elastic_apm.yml`: -+ -[source,ruby] ----- -# config/elastic_apm.yml: - -# Set service name - allowed characters: a-z, A-Z, 0-9, -, _ and space -# Defaults to the name of your Rails app -service_name: 'my-service' - -# Use if APM Server requires a token -secret_token: '' - -# Set custom APM Server URL (default: http://localhost:8200) -server_url: 'http://localhost:8200' ----- - -Rack:: -+ -For Rack or a compatible framework, like Sinatra, include the middleware in your app and start the agent. -+ -[source,ruby] ----- -# config.ru - require 'sinatra/base' - - class MySinatraApp < Sinatra::Base - use ElasticAPM::Middleware - - # ... - end - - ElasticAPM.start( - app: MySinatraApp, # required - config_file: '' # optional, defaults to config/elastic_apm.yml - ) - - run MySinatraApp - - at_exit { ElasticAPM.stop } ----- -+ -*Create a config file* -+ -Create a config file config/elastic_apm.yml: -+ -[source,ruby] ----- -# config/elastic_apm.yml: - -# Set service name - allowed characters: a-z, A-Z, 0-9, -, _ and space -# Defaults to the name of your Rack app's class. 
-service_name: 'my-service'
-
-# Use if APM Server requires a token
-secret_token: ''
-
-# Set custom APM Server URL (default: http://localhost:8200)
-server_url: 'http://localhost:8200'
-----
-
-*Learn more in the agent reference*
-
-* {apm-ruby-ref-v}/supported-technologies.html[Supported technologies]
-* {apm-ruby-ref-v}/configuration.html[Advanced configuration]
-
-// end::ruby[]
-
-// ***************************************************
-// ***************************************************
-
-// tag::rum[]
-*Enable Real User Monitoring support in APM Server*
-
-APM Server disables RUM support by default.
-To enable it, set `apm-server.rum.enabled: true` in your APM Server configuration file.
-
-*Set up the agent*
-
-Once RUM support is enabled, you can set up the RUM agent.
-There are two ways to do this: add the agent as a dependency,
-or set it up with `<script>` tags and initialize the agent:
-
-[source,html]
-----
-<script src="https://your-cdn-host.com/path/to/elastic-apm-rum.umd.min.js" crossorigin></script>
-<script>
-  elasticApm.init({
-    serviceName: 'your-app-name',
-    serverUrl: 'http://localhost:8200',
-  })
-</script>
-----
-
-*Learn more in the agent reference*
-
-* {apm-rum-ref-v}/supported-technologies.html[Supported technologies]
-* {apm-rum-ref-v}/configuration.html[Advanced configuration]
-
-// end::rum[]
diff --git a/docs/tab-widgets/jaeger-widget.asciidoc b/docs/tab-widgets/jaeger-widget.asciidoc
deleted file mode 100644
index 5902738ca38..00000000000
--- a/docs/tab-widgets/jaeger-widget.asciidoc
+++ /dev/null
@@ -1,40 +0,0 @@
-++++
-
-++++ - -include::jaeger.asciidoc[tag=ess] - -++++ -
-include::jaeger.asciidoc[tag=self-managed]
-
-++++
\ No newline at end of file
diff --git a/docs/tab-widgets/jaeger.asciidoc b/docs/tab-widgets/jaeger.asciidoc
deleted file mode 100644
index 2e1982b6e2c..00000000000
--- a/docs/tab-widgets/jaeger.asciidoc
+++ /dev/null
@@ -1,64 +0,0 @@
-// tag::ess[]
-. Log into {ess-console}[{ecloud}] and select your deployment.
-In {kib}, select **Add data**, then search for and select "Elastic APM".
-If the integration is already installed, under the policies tab, select **Actions** > **Edit integration**.
-If the integration has not been installed, select **Add Elastic APM**.
-Copy the URL. If you're using Agent authorization, copy the Secret token as well.
-
-. Configure the APM Integration as a collector for your Jaeger agents.
-+
-As of this writing, the Jaeger agent binary offers the following CLI flags,
-which can be used to enable TLS, output to {ecloud}, and set the APM Integration secret token:
-+
-[source,terminal]
-----
---reporter.grpc.tls.enabled=true
---reporter.grpc.host-port=<apm-url:443>
---agent.tags="elastic-apm-auth=Bearer <secret-token>"
-----
-
-TIP: For the equivalent environment variables,
-change all letters to upper-case and replace punctuation with underscores (`_`).
-See the https://www.jaegertracing.io/docs/1.27/cli/[Jaeger CLI flags documentation] for more information.
-
-// end::ess[]
-
-// tag::self-managed[]
-. Configure the APM Integration as a collector for your Jaeger agents.
-In {kib}, select **Add data**, then search for and select "Elastic APM".
-If the integration is already installed, under the policies tab, select **Actions** > **Edit integration**.
-If the integration has not been installed, select **Add Elastic APM**.
-Copy the Host. If you're using Agent authorization, copy the Secret token as well.
-+
-As of this writing, the Jaeger agent binary offers the `--reporter.grpc.host-port` CLI flag.
-Use this to define the host and port that the APM Integration is listening on:
-+
-[source,terminal]
-----
---reporter.grpc.host-port=<apm-host:port>
-----
-
-. (Optional) Enable encryption
-+
-When TLS is enabled for the APM Integration, Jaeger agents must also enable TLS communication:
-+
-[source,terminal]
-----
---reporter.grpc.tls.enabled=true
-----
-
-. (Optional) Enable token-based authorization
-+
-A secret token or API key can be used to ensure only authorized Jaeger agents can send data to the APM Integration.
-When enabled, use an agent-level tag to authorize Jaeger agent communication with the APM Server:
-+
-[source,terminal]
-----
---agent.tags="elastic-apm-auth=Bearer <secret-token>"
-----
-
-TIP: For the equivalent environment variables,
-change all letters to upper-case and replace punctuation with underscores (`_`).
-See the https://www.jaegertracing.io/docs/1.27/cli/[Jaeger CLI flags documentation] for more information.
-
-// end::self-managed[]
diff --git a/docs/tab-widgets/no-data-indexed-widget.asciidoc b/docs/tab-widgets/no-data-indexed-widget.asciidoc
deleted file mode 100644
index 19b6b228dac..00000000000
--- a/docs/tab-widgets/no-data-indexed-widget.asciidoc
+++ /dev/null
@@ -1,40 +0,0 @@
-++++
-
-++++ - -include::no-data-indexed.asciidoc[tag=fleet-managed] - -++++ -
-include::no-data-indexed.asciidoc[tag=binary]
-
-++++
\ No newline at end of file
diff --git a/docs/tab-widgets/no-data-indexed.asciidoc b/docs/tab-widgets/no-data-indexed.asciidoc
deleted file mode 100644
index 57a841a6483..00000000000
--- a/docs/tab-widgets/no-data-indexed.asciidoc
+++ /dev/null
@@ -1,63 +0,0 @@
-// tag::fleet-managed[]
-**Is {agent} healthy?**
-
-In {kib}, open **{fleet}** and find the host that is running the APM integration;
-confirm that its status is **Healthy**.
-If it isn't, check the {agent} logs to diagnose potential causes.
-See {fleet-guide}/monitor-elastic-agent.html[Monitor {agent}s] to learn more.
-
-**Is APM Server happy?**
-
-In {kib}, open **{fleet}** and select the host that is running the APM integration.
-Open the **Logs** tab and select the `elastic_agent.apm_server` dataset.
-Look for any APM Server errors that could help diagnose the problem.
-
-**Can the {apm-agent} connect to APM Server?**
-
-To determine if the {apm-agent} can connect to the APM Server, send requests to the instrumented service and look for lines
-containing `[request]` in the APM Server logs.
-
-If no requests are logged, confirm that:
-
-. SSL isn't <>.
-. The host is correct. For example, if you're using Docker, ensure you bind to the right interface (for example, set
-`apm-server.host = 0.0.0.0:8200` to match any IP) and set the `SERVER_URL` setting in the {apm-agent} accordingly.
-
-If you see requests coming through the APM Server but they are not accepted (a response code other than `202`),
-see <> to narrow down the possible causes.
-
-**Instrumentation gaps**
-
-APM agents provide auto-instrumentation for many popular frameworks and libraries.
-If the {apm-agent} is not auto-instrumenting something that you were expecting, data won't be sent to the {stack}.
-Reference the relevant {apm-agents-ref}/index.html[{apm-agent} documentation] for details on what is automatically instrumented.
-// end::fleet-managed[]
-
-// tag::binary[]
-If no data shows up in {es}, first check that the APM components are properly connected.
-
-To ensure that the APM Server configuration is valid and that it can connect to the configured output ({es} by default),
-run the following commands:
-
-["source","sh"]
-------------------------------------------------------------
-apm-server test config
-apm-server test output
-------------------------------------------------------------
-
-To see if the agent can connect to the APM Server, send requests to the instrumented service and look for lines
-containing `[request]` in the APM Server logs.
-
-If no requests are logged, it might be that SSL is <> or that the host is wrong.
-In particular, if you are using Docker, ensure you bind to the right interface (for example, set
-`apm-server.host = 0.0.0.0:8200` to match any IP) and set the `SERVER_URL` setting in the agent accordingly.
-
-If you see requests coming through the APM Server but they are not accepted (response code other than `202`), consider
-the response code to narrow down the possible causes (see sections below).
-
-Another reason for data not showing up is that the agent is not auto-instrumenting something you were expecting;
-check the {apm-agents-ref}/index.html[agent documentation] for details on what is automatically instrumented.
-
-APM Server currently relies on {es} to create indices that do not exist.
-As a result, {es} must be configured to allow {ref}/docs-index_.html#index-creation[automatic index creation] for APM indices.
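-
-For example, if the cluster uses an explicit `action.auto_create_index` allow list, it
-must still permit the APM indices. A minimal sketch (the `apm-*` pattern is illustrative;
-match it to your configured index names):
-
-[source,yaml]
-----
-# elasticsearch.yml
-action.auto_create_index: "+apm-*,-*"
-----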
-// end::binary[] diff --git a/docs/tab-widgets/open-kibana-widget.asciidoc b/docs/tab-widgets/open-kibana-widget.asciidoc deleted file mode 100644 index 1947f97b537..00000000000 --- a/docs/tab-widgets/open-kibana-widget.asciidoc +++ /dev/null @@ -1,40 +0,0 @@ -++++ -
-++++ - -include::open-kibana.asciidoc[tag=cloud] - -++++ -
-include::open-kibana.asciidoc[tag=self-managed]
-
-++++ \ No newline at end of file diff --git a/docs/tab-widgets/open-kibana.asciidoc b/docs/tab-widgets/open-kibana.asciidoc deleted file mode 100644 index b1665ea5e9e..00000000000 --- a/docs/tab-widgets/open-kibana.asciidoc +++ /dev/null @@ -1,10 +0,0 @@ -// tag::cloud[] -. https://cloud.elastic.co/[Log in] to your {ecloud} account. - -. Navigate to the {kib} endpoint in your deployment. -// end::cloud[] - -// tag::self-managed[] -Point your browser to http://localhost:5601[http://localhost:5601], replacing -`localhost` with the name of the {kib} host. -// end::self-managed[] \ No newline at end of file diff --git a/docs/tab-widgets/secret-token-widget.asciidoc b/docs/tab-widgets/secret-token-widget.asciidoc deleted file mode 100644 index aea6373e194..00000000000 --- a/docs/tab-widgets/secret-token-widget.asciidoc +++ /dev/null @@ -1,40 +0,0 @@ -++++ -
-++++ - -include::secret-token.asciidoc[tag=fleet-managed] - -++++ -
-include::secret-token.asciidoc[tag=binary]
-
-++++ \ No newline at end of file diff --git a/docs/tab-widgets/secret-token.asciidoc b/docs/tab-widgets/secret-token.asciidoc deleted file mode 100644 index a986a73e16f..00000000000 --- a/docs/tab-widgets/secret-token.asciidoc +++ /dev/null @@ -1,17 +0,0 @@ -// tag::fleet-managed[] -Create or update a secret token in {fleet}. - -include::../configure/shared/input-apm.asciidoc[tag=fleet-managed-settings] -+ -. Navigate to **Agent authorization** > **Secret token** and set the value of your token. -. Click **Save integration**. The APM Server will restart before the change takes effect. -// end::fleet-managed[] - -// tag::binary[] -Set the secret token in `apm-server.yaml`: - -[source,yaml] ----- -apm-server.auth.secret_token: ----- -// end::binary[] \ No newline at end of file diff --git a/docs/tab-widgets/tls-widget.asciidoc b/docs/tab-widgets/tls-widget.asciidoc deleted file mode 100644 index b20b9b81fa0..00000000000 --- a/docs/tab-widgets/tls-widget.asciidoc +++ /dev/null @@ -1,40 +0,0 @@ -++++ -
-++++ - -include::tls.asciidoc[tag=fleet-managed] - -++++ -
-include::tls.asciidoc[tag=binary]
-
-++++
\ No newline at end of file
diff --git a/docs/tab-widgets/tls.asciidoc b/docs/tab-widgets/tls.asciidoc
deleted file mode 100644
index 11ce2247bfa..00000000000
--- a/docs/tab-widgets/tls.asciidoc
+++ /dev/null
@@ -1,21 +0,0 @@
-// tag::fleet-managed[]
-Enable TLS in the APM integration settings and use the <> to set the path to the server certificate and key.
-// end::fleet-managed[]
-
-// tag::binary[]
-The following is a basic APM Server SSL config with secure communication enabled.
-This will make APM Server serve HTTPS requests instead of HTTP.
-
-[source,yaml]
-----
-apm-server.ssl.enabled: true
-apm-server.ssl.certificate: "/path/to/apm-server.crt"
-apm-server.ssl.key: "/path/to/apm-server.key"
-----
-
-A full list of configuration options is available in <>.
-
-TIP: If APM agents are authenticating themselves using a certificate that cannot be authenticated through known CAs (e.g. self-signed certificates), use `ssl.certificate_authorities` to set a custom CA.
-This will automatically modify the `ssl.client_authentication` configuration to require authentication.
-
-// end::binary[]
\ No newline at end of file
diff --git a/docs/tls-comms.asciidoc b/docs/tls-comms.asciidoc
deleted file mode 100644
index bb33104d0a3..00000000000
--- a/docs/tls-comms.asciidoc
+++ /dev/null
@@ -1,67 +0,0 @@
-[[agent-tls]]
-=== {apm-agent} TLS communication
-
-TLS is disabled by default.
-When TLS is enabled for APM Server inbound communication, agents will verify the identity
-of the APM Server by authenticating its certificate.
-
-When TLS is enabled, a certificate and corresponding private key are required.
-The certificate and private key can either be issued by a trusted certificate authority (CA)
-or be <>.
-
-[float]
-[[agent-self-sign]]
-=== Use a self-signed certificate
-
-[float]
-[[agent-self-sign-1]]
-==== Step 1: Create a self-signed certificate
-
-The {es} distribution offers the `certutil` tool for the creation of self-signed certificates:
-
-1. Create a CA: `./bin/elasticsearch-certutil ca --pem`. You'll be prompted to enter the desired
-location of the output zip archive containing the certificate and the private key.
-2. Extract the contents of the CA archive.
-3. Create the self-signed certificate: `./bin/elasticsearch-certutil cert --ca-cert <ca-dir>/ca.crt --ca-key <ca-dir>/ca.key --pem --name localhost`
-4. Extract the certificate and key from the resulting zip archive.
-
-[float]
-[[agent-self-sign-2]]
-==== Step 2: Configure the APM Server
-
-Enable TLS and configure the APM Server to point to the extracted certificate and key:
-
-include::{tab-widget-dir}/tls-widget.asciidoc[]
-
-[float]
-[[agent-self-sign-3]]
-==== Step 3: Configure APM agents
-
-When the APM Server uses a certificate that is not chained to a publicly-trusted certificate
-(e.g. self-signed), additional configuration is required in the {apm-agent}:
-
-* *Go agent*: certificate pinning through {apm-go-ref}/configuration.html#config-server-cert[`ELASTIC_APM_SERVER_CERT`]
-* *Python agent*: certificate pinning through {apm-py-ref}/configuration.html#config-server-cert[`server_cert`]
-* *Ruby agent*: certificate pinning through {apm-ruby-ref}/configuration.html#config-ssl-ca-cert[`server_ca_cert`]
-* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-server-cert[`ServerCert`]
-* *Node.js agent*: custom CA setting through {apm-node-ref}/configuration.html#server-ca-cert-file[`serverCaCertFile`]
-* *Java agent*: adding the certificate to the JVM `trustStore`.
-See {apm-java-ref}/ssl-configuration.html#ssl-server-authentication[APM Server authentication] for more details. - -We do not recommend disabling {apm-agent} verification of the server's certificate, but it is possible: - -* *Go agent*: {apm-go-ref}/configuration.html#config-verify-server-cert[`ELASTIC_APM_VERIFY_SERVER_CERT`] -* *.NET agent*: {apm-dotnet-ref}/config-reporter.html#config-verify-server-cert[`VerifyServerCert`] -* *Java agent*: {apm-java-ref}/config-reporter.html#config-verify-server-cert[`verify_server_cert`] -* *PHP agent*: {apm-php-ref-v}/configuration-reference.html#config-verify-server-cert[`verify_server_cert`] -* *Python agent*: {apm-py-ref}/configuration.html#config-verify-server-cert[`verify_server_cert`] -* *Ruby agent*: {apm-ruby-ref}/configuration.html#config-verify-server-cert[`verify_server_cert`] -* *Node.js agent*: {apm-node-ref}/configuration.html#validate-server-cert[`verifyServerCert`] - -[float] -[[agent-client-cert]] -=== Client certificate authentication - -APM Server does not require agents to provide a certificate for authentication, -and there is no dedicated support for SSL/TLS client certificate authentication in Elastic’s backend agents. \ No newline at end of file diff --git a/docs/troubleshoot-apm.asciidoc b/docs/troubleshoot-apm.asciidoc deleted file mode 100644 index 0d895ceb3cb..00000000000 --- a/docs/troubleshoot-apm.asciidoc +++ /dev/null @@ -1,56 +0,0 @@ -[[troubleshoot-apm]] -== Troubleshoot - -This section provides solutions to common questions and problems, -and processing and performance guidance. - -* <> -* <> -* <> -* <> -* <> - -For additional help with other APM components, see the links below. - -[float] -[[troubleshooting-docs]] -=== Troubleshooting documentation - -{agent}, the {apm-app}, and each {apm-agent} has its own troubleshooting guide: - -* {fleet-guide}/troubleshooting-intro.html[*{fleet} and {agent}* troubleshooting] -* {kibana-ref}/troubleshooting.html[*{apm-app}* troubleshooting] -* {apm-dotnet-ref-v}/troubleshooting.html[*.NET agent* troubleshooting] -* {apm-go-ref-v}/troubleshooting.html[*Go agent* troubleshooting] -* {apm-ios-ref-v}/troubleshooting.html[*iOS agent* troubleshooting] -* {apm-java-ref-v}/trouble-shooting.html[*Java agent* troubleshooting] -* {apm-node-ref-v}/troubleshooting.html[*Node.js agent* troubleshooting] -* {apm-php-ref-v}/troubleshooting.html[*PHP agent* troubleshooting] -* {apm-py-ref-v}/troubleshooting.html[*Python agent* troubleshooting] -* {apm-ruby-ref-v}/debugging.html[*Ruby agent* troubleshooting] -* {apm-rum-ref-v}/troubleshooting.html[*RUM agent* troubleshooting] - -[float] -[[elastic-support]] -=== Elastic Support - -We offer a support experience unlike any other. -Our team of professionals 'speak human and code' and love making your day. -https://www.elastic.co/subscriptions[Learn more about subscriptions]. - -[float] -[[discussion-forum]] -=== Discussion forum - -For additional questions and feature requests, -visit our https://discuss.elastic.co/c/apm[discussion forum]. 
-
-include::common-problems.asciidoc[]
-
-include::apm-server-down.asciidoc[]
-
-include::apm-response-codes.asciidoc[]
-
-include::processing-performance.asciidoc[]
-
-include::{docdir}/debugging.asciidoc[]
\ No newline at end of file
diff --git a/docs/upgrading-to-8.x.asciidoc b/docs/upgrading-to-8.x.asciidoc
deleted file mode 100644
index bde78f8d963..00000000000
--- a/docs/upgrading-to-8.x.asciidoc
+++ /dev/null
@@ -1,232 +0,0 @@
-[[upgrading-to-8.x]]
-=== Upgrade to version {version}
-
-This guide explains the upgrade process for version {version}.
-For a detailed look at what's new, see:
-
-* {observability-guide}/whats-new.html[What's new in {observability}]
-* {kibana-ref}/whats-new.html[What's new in {kib}]
-* {ref}/release-highlights.html[{es} release highlights]
-
-[float]
-=== Notable APM changes
-
-* All index management has been removed from APM Server;
-{fleet} is now entirely responsible for setting up index templates, index lifecycle policies,
-and ingest pipelines.
-* APM Server now only writes to well-defined data streams;
-writing to classic indices is no longer supported.
-* APM Server has a new {es} output implementation with defaults that should be sufficient for
-most use cases.
-
-As a result of the above changes,
-a number of index management and index tuning configuration variables have been removed.
-See the APM <>, <> for full details.
-
-[float]
-=== Find your upgrade guide
-
-Starting in version 7.14, there are two ways to run Elastic APM.
-Determine which method you're using, then use the links below to find the correct upgrade guide.
-
-* **Standalone**: Users in this mode run and configure the APM Server binary.
-This mode has been deprecated and will be removed in a future release.
-* **{fleet} and the APM integration**: Users in this mode run and configure {fleet} and the Elastic APM integration.
-
-**Self-installation (non-{ecloud} users) upgrade guides**
-
-* <>
-* <>
-
-**{ecloud} upgrade guides**
-
-* <>
-* <>
-
-// ********************************************************
-
-[[upgrade-8.0-self-standalone]]
-==== Upgrade a self-installation of APM Server standalone to {version}
-
-++++
-Self-installation standalone
-++++
-
-This upgrade guide is for the standalone method of running APM Server.
-Only use this guide if both of the following are true:
-
-* You have a self-installation of the {stack}, i.e. you're not using {ecloud}.
-* You're running the APM Server binary, i.e. you haven't switched to the Elastic APM integration.
-
-[float]
-==== Prerequisites
-
-. Prior to upgrading to version {version}, {es}, {kib},
-and APM Server must be upgraded to version 7.17.
-** To upgrade {es} and {kib},
-see the https://www.elastic.co/guide/en/elastic-stack/7.17/upgrading-elastic-stack.html[{stack} Installation and Upgrade Guide].
-** To upgrade APM Server to version 7.17, see
-{apm-guide-7x}/upgrading-to-717.html[upgrade to version 7.17].
-
-. Review the APM <>, <>,
-and {observability} {observability-guide}/whats-new.html[What's new] content.
-
-[float]
-==== Upgrade steps
-
-. **Upgrade the {stack} to version {version}**
-+
-The {stack} ({es} and {kib}) must be upgraded before APM Server.
-See the {stack-ref}/upgrading-elastic-stack.html[{stack} Installation and Upgrade Guide] for guidance.
-
-. **Install the APM integration via the {fleet} UI**
-+
-include::{docdir}/getting-started-apm-server.asciidoc[tag=why-apm-integration]
-+
---
-include::{docdir}/getting-started-apm-server.asciidoc[tag=install-apm-integration]
---
-
-. 
**Install the {version} APM Server release** -+ -See <> to find the command that works with your system. -+ -[WARNING] -==== -If you install version {version} of APM Server before installing the APM integration, you will see error logs similar to the following. You must go back and install the APM integration before data can be ingested into {es}. - -[source,json] ----- -... -{"log.level":"error","@timestamp":"2022-01-19T10:45:34.923+0800","log.logger":"beater","log.origin":{"file.name":"beater/waitready.go","file.line":62},"message":"precondition 'apm integration installed' failed: error querying Elasticsearch for integration index templates: unexpected HTTP status: 404 Not Found ({\"error\":{\"root_cause\":[{\"type\":\"resource_not_found_exception\",\"reason\":\"index template matching [traces-apm.sampled] not found\"}],\"type\":\"resource_not_found_exception\",\"reason\":\"index template matching [traces-apm.sampled] not found\"},\"status\":404}): to remediate, please install the apm integration: https://ela.st/apm-integration-quickstart","service.name":"apm-server","ecs.version":"1.6.0"} -{"log.level":"error","@timestamp":"2022-01-19T10:45:37.461+0800","log.logger":"beater","log.origin":{"file.name":"beater/waitready.go","file.line":62},"message":"precondition 'apm integration installed' failed: error querying Elasticsearch for integration index templates: unexpected HTTP status: 404 Not Found ({\"error\":{\"root_cause\":[{\"type\":\"resource_not_found_exception\",\"reason\":\"index template matching [logs-apm.error] not found\"}],\"type\":\"resource_not_found_exception\",\"reason\":\"index template matching [logs-apm.error] not found\"},\"status\":404}): to remediate, please install the apm integration: https://ela.st/apm-integration-quickstart","service.name":"apm-server","ecs.version":"1.6.0"} -... ----- -==== - -. **Review your configuration file** -+ -Some settings have been removed or changed. You may need to update your `apm-server.yml` configuration -file prior to starting the APM Server. -See <> for help in locating this file, -and <> for a list of all available configuration options. - -. **Start the APM Server** -+ -To start the APM Server, run: -+ -[source,bash] ----- -./apm-server -e ----- -+ -Additional details are available in <>. - -. **(Optional) Upgrade to the APM integration** -+ -Got time for one more upgrade? -See <>. - -// ******************************************************** - -[[upgrade-8.0-self-integration]] -==== Upgrade a self-installation of the APM integration to {version} - -++++ -Self-installation APM integration -++++ - -This upgrade guide is for the Elastic APM integration. -Only use this guide if both of the following are true: - -* You have a self-installation of the {stack}, i.e. you're not using {ecloud}. -* You have already switched to and are running {fleet} and the Elastic APM integration. - -[float] -==== Prerequisites - -. Prior to upgrading to version {version}, {es}, and {kib} -must be upgraded to version 7.17. To upgrade {es} and {kib}, -see the https://www.elastic.co/guide/en/elastic-stack/7.17/upgrading-elastic-stack.html[{stack} Installation and Upgrade Guide] - -. Review the APM <>, <>, -and {observability} {observability-guide}/whats-new.html[What's new] content. - -[float] -==== Upgrade steps - -. Upgrade the {stack} to version {version}. -+ -The {stack} ({es} and {kib}) must be upgraded before {agent}. -See the {stack-ref}/upgrading-elastic-stack.html[{stack} Installation and Upgrade Guide] for guidance. - -. 
-As part of this process, the APM integration will automatically upgrade to version {version}.
-+
---
-. In {fleet}, select **Agents**.
-
-. Under **Agents**, click **Upgrade available** to see a list of agents that you can upgrade.
-
-. Choose **Upgrade agent** from the **Actions** menu next to the agent you want to upgrade.
-The **Upgrade agent** option is grayed out when an upgrade is unavailable, or
-the {kib} version is lower than the agent version.
---
-+
-For more details, or for bulk upgrade instructions, see
-{fleet-guide}/upgrade-elastic-agent.html[Upgrade {agent}].
-
-// ********************************************************
-
-[[upgrade-8.0-cloud-standalone]]
-==== Upgrade {ecloud} APM Server standalone to {version}
-
-++++
-{ecloud} standalone
-++++
-
-This upgrade guide is for the standalone method of running APM Server.
-Only use this guide if both of the following are true:
-
-* You're using {ecloud}.
-* You're using the APM Server binary, i.e. you haven't switched to the Elastic APM integration.
-
-Follow these steps to upgrade:
-
-. Review the APM <>, <>,
-and {observability} {observability-guide}/whats-new.html[What's new] content.
-
-. Upgrade {ecloud} to {version}.
-See https://www.elastic.co/guide/en/cloud/current/ec-upgrade-deployment.html[Upgrade versions] for instructions.
-
-. (Optional) Upgrade to the APM integration.
-Got time for one more upgrade?
-See <>.
-
-// ********************************************************
-
-[[upgrade-8.0-cloud-integration]]
-==== Upgrade {ecloud} with the APM integration to {version}
-
-++++
-{ecloud} APM integration
-++++
-
-This upgrade guide is for the Elastic APM integration.
-Only use this guide if both of the following are true:
-
-* You're using {ecloud}.
-* You have already switched to and are running {fleet} and the Elastic APM integration.
-
-Follow these steps to upgrade:
-
-. Review the APM <>, <>,
-and {observability} {observability-guide}/whats-new.html[What's new] content.
-
-. Upgrade your {ecloud} instance to {version}.
-See https://www.elastic.co/guide/en/cloud/current/ec-upgrade-deployment.html[Upgrade versions] for details.
-The APM integration will automatically be upgraded to version {version} as part of this process.
-
-NOTE: {ece} users require additional TLS setup.
-See {ece-ref}/ece-manage-apm-settings.html[Add APM user settings] for more information.
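Whichever upgrade path you followed, it can help to verify the result once APM Server is back up. The sketch below is an editorial illustration, not part of the deleted docs; it assumes a standalone server listening on the default `localhost:8200` (on {ecloud}, substitute your deployment's APM endpoint and credentials):

[source,bash]
----
# Query the APM Server root endpoint. The JSON response reports the
# version of the running binary, which should match the release you
# just upgraded to.
curl -s http://localhost:8200/
# Example response shape (values vary):
# {"build_date":"...","build_sha":"...","publish_ready":true,"version":"8.0.0"}
----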
diff --git a/docs/upgrading-to-integration.asciidoc b/docs/upgrading-to-integration.asciidoc
deleted file mode 100644
index 4dd771c3efa..00000000000
--- a/docs/upgrading-to-integration.asciidoc
+++ /dev/null
@@ -1,216 +0,0 @@
-[[upgrade-to-apm-integration]]
-=== Switch to the Elastic APM integration
-
-The APM integration offers a number of benefits over the standalone method of running APM Server:
-
-**{fleet}**:
-
-* A single, unified way to add monitoring for logs, metrics, traces, and other types of data to each host -- install one thing instead of multiple
-* Central, unified configuration management -- no need to edit multiple configuration files
-
-**Data streams**:
-
-// lint ignore apm-
-* Reduced number of fields per index, better space efficiency, and faster queries
-* More granular data control
-* Errors and metrics data streams are shared with other data sources -- which means better long-term integration with the logs and metrics apps
-* Removes template inheritance for {ilm-init} policies and makes use of new {es} index and component templates
-* Fixes +resource \'apm-{version}-$type' exists, but it is not an alias+ error
-
-**APM integration**:
-
-* Easier to install APM on edge machines
-* Improved source map handling and {apm-agent} configuration management
-* Less configuration
-* Easier and less error-prone upgrade path
-* Zero-downtime configuration changes
-
-[discrete]
-[[apm-arch-upgrade]]
-=== APM integration architecture
-
-Elastic APM consists of four components: *APM agents*, the *Elastic APM integration*, *{es}*, and *{kib}*.
-Generally, there are two ways that these four components can work together:
-
-APM agents on edge machines send data to a centrally hosted APM integration:
-
-[subs=attributes+]
-include::./diagrams/apm-architecture-central.asciidoc[Elastic APM architecture with central APM integration]
-
-Or, APM agents and the APM integration live on edge machines and enroll via a centrally hosted {agent}:
-
-[subs=attributes+]
-include::./diagrams/apm-architecture-edge.asciidoc[Elastic APM architecture with edge APM integrations]
-
-NOTE: In order to collect data from RUM and mobile agents, which run in browser and mobile applications,
-you must run {agent} centrally. For other applications, such as backend services,
-{agent} may be co-located on the edge machine.
-
-[discrete]
-[[apm-integration-upgrade-limitations]]
-=== Limitations
-
-There are some limitations to be aware of:
-
-* This change cannot be reverted
-* Currently, only the {es} output is supported
-* APM runs under {agent} which, depending on the installation method, might require root privileges
-* An {agent} with the APM integration enabled must be managed by {fleet}
-
-[discrete]
-=== Make the switch
-
-Select a guide below to get started.
-
-* <>
-* <>
-
-// ********************************************************
-
-[[apm-integration-upgrade-steps]]
-==== Switch a self-installation to the APM integration
-
-++++
-Switch a self-installation
-++++
-
-. <>
-. <>
-. <>
-. <>
-. <>
-
-[discrete]
-[[apm-integration-upgrade-1]]
-==== Upgrade the {stack}
-
-The {stack} ({es} and {kib}) must be upgraded to version 7.14 or higher.
-See the {stack-ref}/upgrading-elastic-stack.html[{stack} Installation and Upgrade Guide] for guidance.
-
-Review the APM <>, <>,
-and {observability} {observability-guide}/whats-new.html[What's new] content for important changes between
-your current APM version and this one.
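Before moving to the next step, it may be worth confirming the version requirement from the command line. A minimal sketch, assuming an unsecured cluster on the default `localhost:9200` (add `-u username:password` for a secured cluster):

[source,bash]
----
# Print the cluster's version metadata; version.number must report
# 7.14 or higher before you switch to the APM integration.
curl -s http://localhost:9200/ | grep '"number"'
----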
-
-[discrete]
-[[apm-integration-upgrade-2]]
-==== Add a {fleet} Server
-
-{fleet} Server is a component of the {stack} used to centrally manage {agent}s.
-The APM integration requires a {fleet} Server to be running and accessible to your hosts.
-Add a {fleet} Server by following {fleet-guide}/add-a-fleet-server.html[this guide].
-
-TIP: If you're upgrading a self-managed deployment of the {stack}, you'll need to enable
-{ref}/configuring-stack-security.html[{es} security] and the
-{ref}/security-settings.html[API key service].
-
-After adding your {fleet} Server host and generating a service token, the in-product help in {kib}
-provides a command that starts an {agent} as a {fleet} Server.
-Commands may require administrator privileges.
-
-Verify {fleet} Server is running by navigating to **{fleet}** > **Agents** in {kib}.
-
-[discrete]
-[[apm-integration-upgrade-3]]
-==== Install a {fleet}-managed {agent}
-
-NOTE: It's possible to install the Elastic APM integration on the same {agent} that is running the {fleet} Server integration. For this use case, skip this step.
-
-The {fleet}-managed {agent} will run the Elastic APM integration on your edge nodes, next to your applications.
-To install a {fleet}-managed {agent}, follow {fleet-guide}/install-fleet-managed-elastic-agent.html[this guide].
-
-[discrete]
-[[apm-integration-upgrade-4]]
-==== Add the APM integration
-
-The APM integration receives performance data from your APM agents,
-validates and processes it, and then transforms the data into {es} documents.
-
-To add the APM integration, see <>.
-Only complete the linked step (not the entire quick start guide).
-If you're adding the APM integration to a {fleet}-managed {agent}, you can use the default policy.
-If you're adding the APM integration to the {fleet-server}, use the policy that the {fleet-server} is running on.
-
-TIP: You'll configure the APM integration in this step.
-See <> for a reference of all available settings.
-As long as the APM integration is configured with the same secret token, or you have API keys enabled on the same host,
-no reconfiguration is required in your APM agents.
-
-[discrete]
-[[apm-integration-upgrade-5]]
-==== Stop the APM Server
-
-Once data from upgraded APM agents is visible in the {apm-app},
-it's safe to stop the APM Server process.
-
-Congratulations -- you now have the latest and greatest in Elastic APM!
-
-// ********************************************************
-
-[[apm-integration-upgrade-steps-ess]]
-==== Switch an {ecloud} cluster to the APM integration
-
-++++
-Switch an {ecloud} cluster
-++++
-
-. <>
-. <>
-. <>
-. <>
-
-[discrete]
-[[apm-integration-upgrade-ess-1]]
-==== Upgrade the {stack}
-
-Use the {ecloud} console to upgrade the {stack} to version {version}.
-See the {cloud}/ec-upgrade-deployment.html[{ess} upgrade guide] for details.
-
-[discrete]
-[[apm-integration-upgrade-ess-2]]
-==== Switch to {agent}
-
-APM data collection will be interrupted while the migration is in progress.
-The migration should only take a few minutes.
-
-With a superuser account, complete the following steps:
-
-. In {kib}, navigate to **{observability}** > **APM** > **Settings** > **Schema**.
-+
-image::./images/schema-agent.png[switch to {agent}]
-
-. Click **Switch to {agent}**.
-Make a note of the `apm-server.yml` user settings that are incompatible with {agent}.
-Check the confirmation box and click **Switch to {agent}**.
-+
-image::./images/agent-settings-migration.png[{agent} settings migration]
-
-{ecloud} will now create a {fleet} Server instance to contain the new APM integration,
-and then will shut down the old APM Server instance.
-Within minutes, your data should begin appearing in the {apm-app} again.
-
-[discrete]
-[[apm-integration-upgrade-ess-3]]
-==== Configure the APM integration
-
-You can now update settings that were removed during the upgrade.
-See <> for a reference of all available settings.
-
-// lint ignore fleet elastic-cloud
-In {kib}, navigate to **Management** > **Fleet**.
-Select the **Elastic Cloud Agent Policy**.
-Next to the **Elastic APM** integration, select **Actions** > **Edit integration**.
-
-[discrete]
-[[apm-integration-upgrade-ess-4]]
-==== Scale APM and {fleet}
-
-Certain {es} output configuration options are not available with the APM integration.
-To ensure data is not lost, you can scale APM and {fleet} up and out.
-APM's capacity to process events increases with the instance memory size.
-
-Go to the {ess-console}[{ecloud} console], select your deployment, and click **Edit**.
-Here you can edit the number and size of each availability zone.
-
-image::./images/scale-apm.png[scale APM]
-
-Congratulations -- you now have the latest and greatest in Elastic APM!
diff --git a/docs/upgrading.asciidoc b/docs/upgrading.asciidoc
deleted file mode 100644
index 613be6c1ae2..00000000000
--- a/docs/upgrading.asciidoc
+++ /dev/null
@@ -1,17 +0,0 @@
-[[upgrade]]
-== Upgrade
-
-This guide gives general recommendations for upgrading Elastic APM.
-
-* <>
-* <>
-* <>
-* <>
-
-include::./agent-server-compatibility.asciidoc[]
-
-include::./apm-breaking.asciidoc[]
-
-include::./upgrading-to-8.x.asciidoc[]
-
-include::./upgrading-to-integration.asciidoc[]
diff --git a/docs/version.asciidoc b/docs/version.asciidoc
deleted file mode 100644
index ee00b85c6b3..00000000000
--- a/docs/version.asciidoc
+++ /dev/null
@@ -1,20 +0,0 @@
-// doc-branch can be: master, 8.0, 8.1, etc.
-:doc-branch: master
-:go-version: 1.21.3
-:python: 3.7
-:docker: 1.12
-:docker-compose: 1.11
-
-include::{asciidoc-dir}/../../shared/versions/stack/{source_branch}.asciidoc[]
-
-// Agent link attributes
-// Used in conjunction with the stack attributes found here: https://github.com/elastic/docs/tree/7d62a6b66d6e9c96e4dd9a96c3dc7c75ceba0288/shared/versions/stack
-:apm-dotnet-ref-v: https://www.elastic.co/guide/en/apm/agent/dotnet/{apm-dotnet-branch}
-:apm-go-ref-v: https://www.elastic.co/guide/en/apm/agent/go/{apm-go-branch}
-:apm-ios-ref-v: https://www.elastic.co/guide/en/apm/agent/swift/{apm-ios-branch}
-:apm-java-ref-v: https://www.elastic.co/guide/en/apm/agent/java/{apm-java-branch}
-:apm-node-ref-v: https://www.elastic.co/guide/en/apm/agent/nodejs/{apm-node-branch}
-:apm-php-ref-v: https://www.elastic.co/guide/en/apm/agent/php/{apm-php-branch}
-:apm-py-ref-v: https://www.elastic.co/guide/en/apm/agent/python/{apm-py-branch}
-:apm-ruby-ref-v: https://www.elastic.co/guide/en/apm/agent/ruby/{apm-ruby-branch}
-:apm-rum-ref-v: https://www.elastic.co/guide/en/apm/agent/rum-js/{apm-rum-branch}
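For context, the `*-ref-v` attributes defined in `version.asciidoc` above are interpolated into links elsewhere in the docs. A hypothetical usage sketch (the sentence and target page are illustrative, not taken from this diff):

[source,asciidoc]
----
// {apm-java-ref-v} expands to the versioned Java agent guide URL
See the {apm-java-ref-v}/index.html[APM Java agent reference] for details.
----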