From 794d8356f120bf4bcd7fcded48bd97fd040f188d Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Thu, 20 Feb 2025 14:38:07 -0600 Subject: [PATCH 01/23] add service name to logs --- .../observability-add-logs-service-name.md | 55 ------------------ .../observability/add-logs-service-name.md | 56 ------------------ raw-migrated-files/toc.yml | 2 - .../logs/add-service-name-to-logs.md | 58 ++++++++++++++++++- 4 files changed, 57 insertions(+), 114 deletions(-) delete mode 100644 raw-migrated-files/docs-content/serverless/observability-add-logs-service-name.md delete mode 100644 raw-migrated-files/observability-docs/observability/add-logs-service-name.md diff --git a/raw-migrated-files/docs-content/serverless/observability-add-logs-service-name.md b/raw-migrated-files/docs-content/serverless/observability-add-logs-service-name.md deleted file mode 100644 index e9021d9251..0000000000 --- a/raw-migrated-files/docs-content/serverless/observability-add-logs-service-name.md +++ /dev/null @@ -1,55 +0,0 @@ -# Add a service name to logs [observability-add-logs-service-name] - -Adding the `service.name` field to your logs associates them with the services that generate them. You can use this field to view and manage logs for distributed services located on multiple hosts. - -To add a service name to your logs, either: - -* Use the `add_fields` processor through an integration, {{agent}} configuration, or {{filebeat}} configuration. -* Map an existing field from your data stream to the `service.name` field. - - -## Use the add fields processor to add a service name [observability-add-logs-service-name-use-the-add-fields-processor-to-add-a-service-name] - -For log data without a service name, use the [`add_fields` processor](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/add_fields-processor.md) to add the `service.name` field. You can add the processor in an integration’s settings or in the {{agent}} or {{filebeat}} configuration. - -For example, adding the `add_fields` processor to the inputs section of a standalone {{agent}} or {{filebeat}} configuration would add `your_service_name` as the `service.name` field: - -```console -processors: - - add_fields: - target: service - fields: - name: your_service_name -``` - -Adding the `add_fields` processor to an integration’s settings would add `your_service_name` as the `service.name` field: - -:::{image} ../../../images/serverless-add-field-processor.png -:alt: Add the add_fields processor to an integration -:class: screenshot -::: - -For more on defining processors, refer to [define processors](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/agent-processors.md). - - -## Map an existing field to the service name field [observability-add-logs-service-name-map-an-existing-field-to-the-service-name-field] - -For logs that with an existing field being used to represent the service name, map that field to the `service.name` field using the [alias field type](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/mapping-reference/field-alias.md). Follow these steps to update your mapping: - -1. Go to **Management** → **Index Management** → **Index Templates**. -2. Search for the index template you want to update. -3. From the **Actions** menu for that template, select **edit**. -4. Go to **Mappings**, and select **Add field**. -5. Under **Field type**, select **Alias** and add `service.name` to the **Field name**. -6. Under **Field path**, select the existing field you want to map to the service name. 
-7. Select **Add field**. - -For more ways to add a field to your mapping, refer to [add a field to an existing mapping](../../../manage-data/data-store/mapping/explicit-mapping.md#add-field-mapping). - - -## Additional ways to process data [observability-add-logs-service-name-additional-ways-to-process-data] - -The {{stack}} provides additional ways to process your data: - -* **https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest.html[Ingest pipelines]:** convert data to ECS, normalize field data, or enrich incoming data. -* **https://www.elastic.co/guide/en/logstash/current/introduction.html[Logstash]:** enrich your data using input, output, and filter plugins. diff --git a/raw-migrated-files/observability-docs/observability/add-logs-service-name.md b/raw-migrated-files/observability-docs/observability/add-logs-service-name.md deleted file mode 100644 index 7f33a37022..0000000000 --- a/raw-migrated-files/observability-docs/observability/add-logs-service-name.md +++ /dev/null @@ -1,56 +0,0 @@ -# Add a service name to logs [add-logs-service-name] - -Adding the `service.name` field to your logs associates them with the services that generate them. You can use this field to view and manage logs for distributed services located on multiple hosts. - -To add a service name to your logs, either: - -* Use the `add_fields` processor through an integration, {{agent}} configuration, or {{filebeat}} configuration. -* Map an existing field from your data stream to the `service.name` field. - - -## Use the add fields processor to add a service name [use-the-add-fields-processor-to-add-a-service-name] - -For log data without a service name, use the [add_fields processor](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/add_fields-processor.md) to add the `service.name` field. You can add the processor in an integration’s settings or in the {{agent}} or {{filebeat}} configuration. - -For example, adding the `add_fields` processor to the inputs section of a standalone {{agent}} or {{filebeat}} configuration would add `your_service_name` as the `service.name` field: - -```console -processors: - - add_fields: - target: service - fields: - name: your_service_name -``` - -Adding the `add_fields` processor to an integration’s settings would add `your_service_name` as the `service.name` field: - -:::{image} ../../../images/observability-add-field-processor.png -:alt: Add the add_fields processor to an integration -:class: screenshot -::: - -For more on defining processors, refer to [define processors](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/agent-processors.md). - - -## Map an existing field to the service name field [map-an-existing-field-to-the-service-name-field] - -For logs that with an existing field being used to represent the service name, map that field to the `service.name` field using the [alias field type](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/mapping-reference/field-alias.md). Follow these steps to update your mapping: - -1. To open **Index Management**, find **Stack Management** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). -2. Select **Index Templates**. -3. Search for the index template you want to update. -4. From the **Actions** menu for that template, select **Edit**. -5. Go to **Mappings**, and select **Add field**. -6. Under **Field type**, select **Alias** and add `service.name` to the **Field name**. -7. 
Under **Field path**, select the existing field you want to map to the service name. -8. Select **Add field**. - -For more ways to add a field to your mapping, refer to [add a field to an existing mapping](../../../manage-data/data-store/mapping/explicit-mapping.md#add-field-mapping). - - -## Additional ways to process data [additional-ways-to-process-data] - -The {{stack}} provides additional ways to process your data: - -* **https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest.html[Ingest pipelines]:** convert data to ECS, normalize field data, or enrich incoming data. -* **https://www.elastic.co/guide/en/logstash/current/introduction.html[Logstash]:** enrich your data using input, output, and filter plugins. diff --git a/raw-migrated-files/toc.yml b/raw-migrated-files/toc.yml index 4eb07d4270..1ab78935ca 100644 --- a/raw-migrated-files/toc.yml +++ b/raw-migrated-files/toc.yml @@ -218,7 +218,6 @@ toc: - file: docs-content/serverless/ingest-third-party-cloud-security-data.md - file: docs-content/serverless/ingest-wiz-data.md - file: docs-content/serverless/intro.md - - file: docs-content/serverless/observability-add-logs-service-name.md - file: docs-content/serverless/observability-ai-assistant.md - file: docs-content/serverless/observability-apm-act-on-data.md - file: docs-content/serverless/observability-apm-agents-elastic-apm-agents.md @@ -458,7 +457,6 @@ toc: - file: logstash/logstash/ts-logstash.md - file: observability-docs/observability/index.md children: - - file: observability-docs/observability/add-logs-service-name.md - file: observability-docs/observability/apm-act-on-data.md - file: observability-docs/observability/apm-agents.md - file: observability-docs/observability/apm-getting-started-apm-server.md diff --git a/solutions/observability/logs/add-service-name-to-logs.md b/solutions/observability/logs/add-service-name-to-logs.md index 29e7cf8cab..3e78fdf6b3 100644 --- a/solutions/observability/logs/add-service-name-to-logs.md +++ b/solutions/observability/logs/add-service-name-to-logs.md @@ -4,7 +4,63 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/observability-add-logs-service-name.html --- -# Add a service name to logs +# Add a service name to logs [observability-add-logs-service-name] + +Adding the `service.name` field to your logs associates them with the services that generate them. You can use this field to view and manage logs for distributed services located on multiple hosts. + +To add a service name to your logs, either: + +* Use the `add_fields` processor through an integration, {{agent}} configuration, or {{filebeat}} configuration. +* Map an existing field from your data stream to the `service.name` field. + + +## Use the add fields processor to add a service name [observability-add-logs-service-name-use-the-add-fields-processor-to-add-a-service-name] + +For log data without a service name, use the [`add_fields` processor](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/add_fields-processor.md) to add the `service.name` field. You can add the processor in an integration’s settings or in the {{agent}} or {{filebeat}} configuration. 

For example, adding the `add_fields` processor to the inputs section of a standalone {{agent}} or {{filebeat}} configuration would add `your_service_name` as the `service.name` field:

```yaml
processors:
  - add_fields:
      target: service
      fields:
        name: your_service_name
```

Adding the `add_fields` processor to an integration’s settings would add `your_service_name` as the `service.name` field:

:::{image} ../../../images/serverless-add-field-processor.png
:alt: Add the add_fields processor to an integration
:class: screenshot
:::

For more on defining processors, refer to [define processors](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/agent-processors.md).


## Map an existing field to the service name field [observability-add-logs-service-name-map-an-existing-field-to-the-service-name-field]

For logs that already have a field representing the service name, map that field to the `service.name` field using the [alias field type](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/mapping-reference/field-alias.md). Follow these steps to update your mapping:

1. To open **Index Management**, find **Stack Management** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
2. Select **Index Templates**.
3. Search for the index template you want to update.
4. From the **Actions** menu for that template, select **Edit**.
5. Go to **Mappings**, and select **Add field**.
6. Under **Field type**, select **Alias** and add `service.name` to the **Field name**.
7. Under **Field path**, select the existing field you want to map to the service name.
8. Select **Add field**.

For more ways to add a field to your mapping, refer to [add a field to an existing mapping](../../../manage-data/data-store/mapping/explicit-mapping.md#add-field-mapping).


## Additional ways to process data [observability-add-logs-service-name-additional-ways-to-process-data]

The {{stack}} provides additional ways to process your data:

* **[Ingest pipelines](https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest.html):** convert data to ECS, normalize field data, or enrich incoming data (see the example pipeline below).
* **[Logstash](https://www.elastic.co/guide/en/logstash/current/introduction.html):** enrich your data using input, output, and filter plugins.
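
For example, a minimal ingest pipeline that adds a static `service.name` to incoming log documents can use the `set` processor. This is only a sketch: the pipeline name and the service name value are placeholders, and `override: false` leaves any existing `service.name` value untouched.

```console
PUT _ingest/pipeline/add-service-name
{
  "description": "Add a static service.name to log documents that do not already have one",
  "processors": [
    {
      "set": {
        "field": "service.name",
        "value": "your_service_name",
        "override": false
      }
    }
  ]
}
```

You can then apply the pipeline to a data stream, for example by setting `index.default_pipeline` in its index template.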
+ % What needs to be done: Align serverless/stateful From e15c6199947141462fd756ca5fa36b5ac15a8f23 Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Thu, 20 Feb 2025 14:42:21 -0600 Subject: [PATCH 02/23] add apm log sending --- .../observability-send-application-logs.md | 29 ----------------- .../observability/logs-send-application.md | 25 --------------- raw-migrated-files/toc.yml | 2 -- .../logs/apm-agent-log-sending.md | 31 ++++++++++++++++--- 4 files changed, 26 insertions(+), 61 deletions(-) delete mode 100644 raw-migrated-files/docs-content/serverless/observability-send-application-logs.md delete mode 100644 raw-migrated-files/observability-docs/observability/logs-send-application.md diff --git a/raw-migrated-files/docs-content/serverless/observability-send-application-logs.md b/raw-migrated-files/docs-content/serverless/observability-send-application-logs.md deleted file mode 100644 index 325501bab3..0000000000 --- a/raw-migrated-files/docs-content/serverless/observability-send-application-logs.md +++ /dev/null @@ -1,29 +0,0 @@ -# {{apm-agent}} log sending [observability-send-application-logs] - -Elastic APM agents can automatically capture and send logs directly to the managed intake service — enabling you to easily ingest log events without needing a separate log shipper like {{filebeat}} or {{agent}}. - -**Supported APM agents/languages** - -* Java - -**Requirements** - -The Elastic APM agent for Java. - -**Pros** - -* Simple to set up as it only relies on the APM agent. -* No modification of the application required. -* No need to deploy {{filebeat}}. -* No need to store log files in the file system. - -**Cons** - -* Experimental feature. -* Limited APM agent support. -* Not resilient to outages. Log messages can be dropped when buffered in the agent or in the managed intake service. - - -## Get started [observability-send-application-logs-get-started] - -See the [Java agent](asciidocalypse://docs/apm-agent-java/docs/reference/ingestion-tools/apm-agent-java/logs.md#log-sending) documentation to get started. diff --git a/raw-migrated-files/observability-docs/observability/logs-send-application.md b/raw-migrated-files/observability-docs/observability/logs-send-application.md deleted file mode 100644 index 0e28c67a56..0000000000 --- a/raw-migrated-files/observability-docs/observability/logs-send-application.md +++ /dev/null @@ -1,25 +0,0 @@ -# {{apm-agent}} log sending [logs-send-application] - -The Java APM agent can automatically capture and send logs directly to the managed intake service — enabling you to easily ingest log events without needing a separate log shipper like {{filebeat}}. - -**Requirements** - -The Elastic APM agent for Java. - -**Pros** - -* Simple to set up as it only relies on the APM agent. -* No modification of the application required. -* No need to deploy {{filebeat}}. -* No need to store log files in the file system. - -**Cons** - -* Experimental feature. -* Limited APM agent support. -* Not resilient to outages. Log messages can be dropped when buffered in the agent or in the managed intake service. - - -## Get started [get-started] - -See the [Java agent](asciidocalypse://docs/apm-agent-java/docs/reference/ingestion-tools/apm-agent-java/logs.md#log-sending) documentation to get started. 
diff --git a/raw-migrated-files/toc.yml b/raw-migrated-files/toc.yml index 1ab78935ca..6c59044bb8 100644 --- a/raw-migrated-files/toc.yml +++ b/raw-migrated-files/toc.yml @@ -232,7 +232,6 @@ toc: - file: docs-content/serverless/observability-monitor-datasets.md - file: docs-content/serverless/observability-parse-log-data.md - file: docs-content/serverless/observability-plaintext-application-logs.md - - file: docs-content/serverless/observability-send-application-logs.md - file: docs-content/serverless/observability-stream-log-files.md - file: docs-content/serverless/project-and-management-settings.md - file: docs-content/serverless/project-setting-data.md @@ -472,7 +471,6 @@ toc: - file: observability-docs/observability/logs-filter-and-aggregate.md - file: observability-docs/observability/logs-parse.md - file: observability-docs/observability/logs-plaintext.md - - file: observability-docs/observability/logs-send-application.md - file: observability-docs/observability/logs-stream.md - file: observability-docs/observability/monitor-datasets.md - file: observability-docs/observability/obs-ai-assistant.md diff --git a/solutions/observability/logs/apm-agent-log-sending.md b/solutions/observability/logs/apm-agent-log-sending.md index 393dbeb789..62adefbfbb 100644 --- a/solutions/observability/logs/apm-agent-log-sending.md +++ b/solutions/observability/logs/apm-agent-log-sending.md @@ -4,11 +4,32 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/observability-send-application-logs.html --- -# APM agent log sending +# APM agent log sending [observability-send-application-logs] -% What needs to be done: Align serverless/stateful +Elastic APM agents can automatically capture and send logs directly to the managed intake service — enabling you to easily ingest log events without needing a separate log shipper like {{filebeat}} or {{agent}}. -% Use migrated content from existing pages that map to this page: +**Supported APM agents/languages** -% - [ ] ./raw-migrated-files/observability-docs/observability/logs-send-application.md -% - [ ] ./raw-migrated-files/docs-content/serverless/observability-send-application-logs.md \ No newline at end of file +* Java + +**Requirements** + +The Elastic APM agent for Java. + +**Pros** + +* Simple to set up as it only relies on the APM agent. +* No modification of the application required. +* No need to deploy {{filebeat}}. +* No need to store log files in the file system. + +**Cons** + +* Experimental feature. +* Limited APM agent support. +* Not resilient to outages. Log messages can be dropped when buffered in the agent or in the managed intake service. + + +## Get started [observability-send-application-logs-get-started] + +See the [Java agent](asciidocalypse://docs/apm-agent-java/docs/reference/ingestion-tools/apm-agent-java/logs.md#log-sending) documentation to get started. 
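
As a rough sketch of what enabling this looks like for the Java agent, assuming the experimental `log_sending` option described in the Java agent documentation linked above (verify the option name and supported values there), you would add it to `elasticapm.properties` next to your existing agent settings:

```properties
# Illustrative elasticapm.properties snippet; option names assume the Java agent docs linked above
service_name=your_service_name
# Experimental: capture application logs in the agent and send them directly
log_sending=true
```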
\ No newline at end of file From 9336939437a14fff5aac1e7a79b8e77cfae39550 Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Thu, 20 Feb 2025 14:47:45 -0600 Subject: [PATCH 03/23] add ecs app logs --- .../observability/logs-ecs-application.md | 308 ----------------- raw-migrated-files/toc.yml | 1 - .../logs/ecs-formatted-application-logs.md | 311 +++++++++++++++++- 3 files changed, 309 insertions(+), 311 deletions(-) delete mode 100644 raw-migrated-files/observability-docs/observability/logs-ecs-application.md diff --git a/raw-migrated-files/observability-docs/observability/logs-ecs-application.md b/raw-migrated-files/observability-docs/observability/logs-ecs-application.md deleted file mode 100644 index 4089d74e29..0000000000 --- a/raw-migrated-files/observability-docs/observability/logs-ecs-application.md +++ /dev/null @@ -1,308 +0,0 @@ -# ECS formatted application logs [logs-ecs-application] - -Logs formatted in Elastic Common Schema (ECS) don’t require manual parsing, and the configuration can be reused across applications. ECS-formatted logs, when paired with an {{apm-agent}}, allow you to correlate logs to easily view logs that belong to a particular trace. - -You can format your logs in ECS format the following ways: - -* [ECS loggers](../../../solutions/observability/logs/ecs-formatted-application-logs.md#ecs-loggers): plugins for your logging libraries that reformat your logs into ECS format. -* [APM agent ECS reformatting](../../../solutions/observability/logs/ecs-formatted-application-logs.md#apm-agent-ecs-reformatting): Java, Ruby, and Python {{apm-agent}}s automatically reformat application logs to ECS format without a logger. - - -## ECS loggers [ecs-loggers] - -ECS loggers reformat your application logs into ECS-compatible JSON, removing the need for manual parsing. ECS loggers require {{filebeat}} or {{agent}} configured to monitor and capture application logs. In addition, pairing ECS loggers with your framework’s {{apm-agent}} allows you to correlate logs to easily view logs that belong to a particular trace. - - -### Get started with ECS loggers [get-started-ecs-logging] - -For more information on adding an ECS logger to your application, refer to the guide for your framework: - -* [.NET](asciidocalypse://docs/ecs-dotnet/docs/reference/ecs/ecs-logging-dotnet/setup.md) -* Go: [zap](asciidocalypse://docs/ecs-logging-go-zap/docs/reference/ecs/ecs-logging-go-zap/setup.md) -* [Java](asciidocalypse://docs/ecs-logging-java/docs/reference/ecs/ecs-logging-java/setup.md) -* Node.js: [morgan](asciidocalypse://docs/ecs-logging-nodejs/docs/reference/ecs/ecs-logging-nodejs/winston.md) -* [PHP](asciidocalypse://docs/ecs-logging-php/docs/reference/ecs/ecs-logging-php/setup.md) -* [Python](asciidocalypse://docs/ecs-logging-python/docs/reference/ecs/ecs-logging-python/installation.md) -* [Ruby](asciidocalypse://docs/ecs-logging-ruby/docs/reference/ecs/ecs-logging-ruby/setup.md) - - -## APM agent ECS reformatting [apm-agent-ecs-reformatting] - -Java, Ruby, and Python {{apm-agent}}s can automatically reformat application logs to ECS format without an ECS logger or the need to modify your application. The {{apm-agent}} also allows for log correlation so you can easily view logs that belong to a particular trace. - -To set up log ECS reformatting: - -1. [Enable {{apm-agent}} reformatting](../../../solutions/observability/logs/ecs-formatted-application-logs.md#enable-log-ecs-reformatting) -2. 
[Ingest logs with {{filebeat}} or {{agent}}](../../../solutions/observability/logs/ecs-formatted-application-logs.md#ingest-ecs-logs) -3. [View logs in Logs Explorer](../../../solutions/observability/logs/ecs-formatted-application-logs.md#view-ecs-logs) - - -### Enable log ECS reformatting [enable-log-ecs-reformatting] - -Log ECS reformatting is controlled by the `log_ecs_reformatting` configuration option, and is disabled by default. Refer to the guide for your framework for information on enabling: - -* [Java](asciidocalypse://docs/apm-agent-java/docs/reference/ingestion-tools/apm-agent-java/config-logging.md#config-log-ecs-reformatting) -* [Ruby](asciidocalypse://docs/apm-agent-ruby/docs/reference/ingestion-tools/apm-agent-ruby/configuration.md#config-log-ecs-formatting) -* [Python](asciidocalypse://docs/apm-agent-python/docs/reference/ingestion-tools/apm-agent-python/configuration.md#config-log_ecs_reformatting) - - -### Ingest logs [ingest-ecs-logs] - -After enabling log ECS reformatting, send your application logs to your project using one of the following shipping tools: - -* [{{filebeat}}](../../../solutions/observability/logs/ecs-formatted-application-logs.md#ingest-ecs-logs-with-filebeat): A lightweight data shipper that sends log data to your project. -* [{{agent}}](../../../solutions/observability/logs/ecs-formatted-application-logs.md#ingest-ecs-logs-with-agent): A single agent for logs, metrics, security data, and threat prevention. With Fleet, you can centrally manage {{agent}} policies and lifecycles directly from your project. - - -#### Ingest logs with {{filebeat}} [ingest-ecs-logs-with-filebeat] - -Follow these steps to ingest application logs with {{filebeat}}. - - -#### Step 1: Install {{filebeat}} [step-1-ecs-install-filebeat] - -Install {{filebeat}} on the server you want to monitor by running the commands that align with your system: - -:::::::{tab-set} - -::::::{tab-item} DEB -```sh -curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-darwin-x86_64.tar.gz -tar xzvf filebeat-9.0.0-beta1-darwin-x86_64.tar.gz -``` -:::::: - -::::::{tab-item} RPM -```sh -curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-linux-x86_64.tar.gz -tar xzvf filebeat-9.0.0-beta1-linux-x86_64.tar.gz -``` -:::::: - -::::::{tab-item} macOS -1. Download the {{filebeat}} Windows zip file: https\://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-windows-x86_64.zip[https\://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-windows-x86_64.zip] -2. Extract the contents of the zip file into `C:\Program Files`. -3. Rename the `filebeat-{{version}}-windows-x86_64` directory to `{{filebeat}}`. -4. Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select **Run As Administrator**). -5. From the PowerShell prompt, run the following commands to install {{filebeat}} as a Windows service: - - ```powershell - PS > cd 'C:\Program Files\{filebeat}' - PS C:\Program Files\{filebeat}> .\install-service-filebeat.ps1 - ``` - - -If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example: `PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-filebeat.ps1`. 
-:::::: - -::::::{tab-item} Linux -```sh -curl -L -O https\://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-amd64.deb -sudo dpkg -i filebeat-9.0.0-beta1-amd64.deb -``` -:::::: - -::::::{tab-item} Windows -```sh -curl -L -O https\://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-x86_64.rpm -sudo rpm -vi filebeat-9.0.0-beta1-x86_64.rpm -``` -:::::: - -::::::: - -#### Step 2: Connect to your project [step-2-ecs-connect-to-your-project] - -Connect to your project using an API key to set up {{filebeat}}. Set the following information in the `filebeat.yml` file: - -```yaml -output.elasticsearch: - hosts: ["your-projects-elasticsearch-endpoint"] - api_key: "id:api_key" -``` - -1. Set the `hosts` to your deployment’s {{es}} endpoint. Copy the {{es}} endpoint from **Help menu (![help icon](../../../images/observability-help-icon.png "")) → Connection details**. For example, `https://my-deployment.es.us-central1.gcp.cloud.es.io:443`. -2. From **Developer tools**, run the following command to create an API key that grants `manage` permissions for the `cluster` and the `filebeat-*` indices using: - - ```console - POST /_security/api_key - { - "name": "filebeat_host001", - "role_descriptors": { - "filebeat_writer": { - "cluster": ["manage"], - "index": [ - { - "names": ["filebeat-*"], - "privileges": ["manage"] - } - ] - } - } - } - ``` - - Refer to [Grant access using API keys](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/beats-api-keys.md) for more information. - - - -#### Step 3: Configure {{filebeat}} [step-3-ecs-configure-filebeat] - -Add the following configuration to your `filebeat.yaml` file to start collecting log data. - -```yaml -filebeat.inputs: -- type: filestream <1> - enabled: true - paths: /path/to/logs.log <2> -``` - -1. Reads lines from an active log file. -2. Paths that you want {{filebeat}} to crawl and fetch logs from. - - - -#### Step 4: Set up and start {{filebeat}} [step-4-ecs-set-up-and-start-filebeat] - -From the {{filebeat}} installation directory, set the [index template](../../../manage-data/data-store/templates.md) by running the command that aligns with your system: - -:::::::{tab-set} - -::::::{tab-item} DEB -```sh -./filebeat setup -e -``` -:::::: - -::::::{tab-item} RPM -```sh -./filebeat setup -e -``` -:::::: - -::::::{tab-item} MacOS -```sh -PS > .\filebeat.exe setup -e -``` -:::::: - -::::::{tab-item} Linux -```sh -filebeat setup -e -``` -:::::: - -::::::{tab-item} Windows -```sh -filebeat setup -e -``` -:::::: - -::::::: -From the {{filebeat}} installation directory, start filebeat by running the command that aligns with your system: - -:::::::{tab-set} - -::::::{tab-item} DEB -```sh -sudo service filebeat start -``` - -::::{note} -If you use an `init.d` script to start Filebeat, you can’t specify command line flags (see [Command reference](https://www.elastic.co/guide/en/beats/filebeat/master/command-line-options.html)). To specify flags, start Filebeat in the foreground. -:::: - - -Also see [Filebeat and systemd](https://www.elastic.co/guide/en/beats/filebeat/master/running-with-systemd.html). -:::::: - -::::::{tab-item} RPM -```sh -sudo service filebeat start -``` - -::::{note} -If you use an `init.d` script to start Filebeat, you can’t specify command line flags (see [Command reference](https://www.elastic.co/guide/en/beats/filebeat/master/command-line-options.html)). To specify flags, start Filebeat in the foreground. 
-:::: - - -Also see [Filebeat and systemd](https://www.elastic.co/guide/en/beats/filebeat/master/running-with-systemd.html). -:::::: - -::::::{tab-item} MacOS -```sh -./filebeat -e -``` -:::::: - -::::::{tab-item} Linux -```sh -./filebeat -e -``` -:::::: - -::::::{tab-item} Windows -```sh -PS C:\Program Files\filebeat> Start-Service filebeat -``` - -By default, Windows log files are stored in `C:\ProgramData\filebeat\Logs`. -:::::: - -::::::: - -#### Ingest logs with {{agent}} [ingest-ecs-logs-with-agent] - -Add the custom logs integration to ingest and centrally manage your logs using {{agent}} and {{fleet}}: - - -#### Add the custom logs integration to your project [step-1-add-the-custom-logs-integration-to-your-project-ecs] - -To add the custom logs integration to your project: - -1. Find **Integrations** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). -2. Type `custom` in the search bar and select **Custom Logs**. -3. Click **Install {{agent}}** at the bottom of the page, and follow the instructions for your system to install the {{agent}}. -4. After installing the {{agent}}, click **Save and continue** to configure the integration from the **Add Custom Logs integration** page. -5. Give your integration a meaningful name and description. -6. Add the **Log file path**. For example, `/var/log/your-logs.log`. -7. Click **Advanced options**. -8. In the **Processors** text box, add the following YAML configuration to add processors that enhance your data. Refer to [processors](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filtering-enhancing-data.md) to learn more. - - ```yaml - processors: - - add_host_metadata: \~ - - add_cloud_metadata: \~ - - add_docker_metadata: \~ - - add_kubernetes_metadata: \~ - ``` - -9. Under **Custom configurations**, add the following YAML configuration to collect data. - - ```yaml - json: - overwrite_keys: true <1> - add_error_key: true <2> - expand_keys: true <3> - keys_under_root: true <4> - fields_under_root: true <5> - fields: - service.name: your_service_name <6> - service.version: your_service_version <6> - service.environment: your_service_environment <6> - ``` - - 1. Values from the decoded JSON object overwrite the fields that {{agent}} normally adds (type, source, offset, etc.) in case of conflicts. - 2. {{agent}} adds an "error.message" and "error.type: json" key in case of JSON unmarshalling errors. - 3. {{agent}} will recursively de-dot keys in the decoded JSON, and expand them into a hierarchical object structure. - 4. By default, the decoded JSON is placed under a "json" key in the output document. When set to `true`, the keys are copied top level in the output document. - 5. When set to `true`, custom fields are stored as top-level fields in the output document instead of being grouped under a fields sub-dictionary. - 6. The `service.name` (required), `service.version` (optional), and `service.environment` (optional) of the service you’re collecting logs from, used for log correlation. - -10. Give your agent policy a name. The agent policy defines the data your {{agent}} collects. -11. Save your integration to add it to your deployment. - - -## View logs [view-ecs-logs] - -Refer to the [Filter and aggregate logs](../../../solutions/observability/logs/filter-aggregate-logs.md) documentation for more information on viewing and filtering your logs in {{kib}}. 
diff --git a/raw-migrated-files/toc.yml b/raw-migrated-files/toc.yml index 6c59044bb8..c40e207ce5 100644 --- a/raw-migrated-files/toc.yml +++ b/raw-migrated-files/toc.yml @@ -467,7 +467,6 @@ toc: - file: observability-docs/observability/index.md - file: observability-docs/observability/infrastructure-and-host-monitoring-intro.md - file: observability-docs/observability/logs-checklist.md - - file: observability-docs/observability/logs-ecs-application.md - file: observability-docs/observability/logs-filter-and-aggregate.md - file: observability-docs/observability/logs-parse.md - file: observability-docs/observability/logs-plaintext.md diff --git a/solutions/observability/logs/ecs-formatted-application-logs.md b/solutions/observability/logs/ecs-formatted-application-logs.md index d8bd2a694f..04be608599 100644 --- a/solutions/observability/logs/ecs-formatted-application-logs.md +++ b/solutions/observability/logs/ecs-formatted-application-logs.md @@ -4,13 +4,320 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/observability-ecs-application-logs.html --- -# ECS formatted application logs +# ECS formatted application logs [logs-ecs-application] + +Logs formatted in Elastic Common Schema (ECS) don’t require manual parsing, and the configuration can be reused across applications. ECS-formatted logs, when paired with an {{apm-agent}}, allow you to correlate logs to easily view logs that belong to a particular trace. + +You can format your logs in ECS format the following ways: + +* [ECS loggers](../../../solutions/observability/logs/ecs-formatted-application-logs.md#ecs-loggers): plugins for your logging libraries that reformat your logs into ECS format. +* [APM agent ECS reformatting](../../../solutions/observability/logs/ecs-formatted-application-logs.md#apm-agent-ecs-reformatting): Java, Ruby, and Python {{apm-agent}}s automatically reformat application logs to ECS format without a logger. + + +## ECS loggers [ecs-loggers] + +ECS loggers reformat your application logs into ECS-compatible JSON, removing the need for manual parsing. ECS loggers require {{filebeat}} or {{agent}} configured to monitor and capture application logs. In addition, pairing ECS loggers with your framework’s {{apm-agent}} allows you to correlate logs to easily view logs that belong to a particular trace. + + +### Get started with ECS loggers [get-started-ecs-logging] + +For more information on adding an ECS logger to your application, refer to the guide for your framework: + +* [.NET](asciidocalypse://docs/ecs-dotnet/docs/reference/ecs/ecs-logging-dotnet/setup.md) +* Go: [zap](asciidocalypse://docs/ecs-logging-go-zap/docs/reference/ecs/ecs-logging-go-zap/setup.md) +* [Java](asciidocalypse://docs/ecs-logging-java/docs/reference/ecs/ecs-logging-java/setup.md) +* Node.js: [morgan](asciidocalypse://docs/ecs-logging-nodejs/docs/reference/ecs/ecs-logging-nodejs/winston.md) +* [PHP](asciidocalypse://docs/ecs-logging-php/docs/reference/ecs/ecs-logging-php/setup.md) +* [Python](asciidocalypse://docs/ecs-logging-python/docs/reference/ecs/ecs-logging-python/installation.md) +* [Ruby](asciidocalypse://docs/ecs-logging-ruby/docs/reference/ecs/ecs-logging-ruby/setup.md) + + +## APM agent ECS reformatting [apm-agent-ecs-reformatting] + +Java, Ruby, and Python {{apm-agent}}s can automatically reformat application logs to ECS format without an ECS logger or the need to modify your application. The {{apm-agent}} also allows for log correlation so you can easily view logs that belong to a particular trace. 
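
When reformatting is enabled, each plain-text log line is rewritten as an ECS JSON document. The exact fields depend on the agent and its configuration; the following is only an illustration of the general shape, with placeholder values:

```json
{
  "@timestamp": "2023-09-15T08:15:20.234Z",
  "log.level": "INFO",
  "message": "Application successfully started.",
  "ecs.version": "1.2.0",
  "service.name": "your_service_name",
  "trace.id": "abc123def456789"
}
```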
+ +To set up log ECS reformatting: + +1. [Enable {{apm-agent}} reformatting](../../../solutions/observability/logs/ecs-formatted-application-logs.md#enable-log-ecs-reformatting) +2. [Ingest logs with {{filebeat}} or {{agent}}](../../../solutions/observability/logs/ecs-formatted-application-logs.md#ingest-ecs-logs) +3. [View logs in Logs Explorer](../../../solutions/observability/logs/ecs-formatted-application-logs.md#view-ecs-logs) + + +### Enable log ECS reformatting [enable-log-ecs-reformatting] + +Log ECS reformatting is controlled by the `log_ecs_reformatting` configuration option, and is disabled by default. Refer to the guide for your framework for information on enabling: + +* [Java](asciidocalypse://docs/apm-agent-java/docs/reference/ingestion-tools/apm-agent-java/config-logging.md#config-log-ecs-reformatting) +* [Ruby](asciidocalypse://docs/apm-agent-ruby/docs/reference/ingestion-tools/apm-agent-ruby/configuration.md#config-log-ecs-formatting) +* [Python](asciidocalypse://docs/apm-agent-python/docs/reference/ingestion-tools/apm-agent-python/configuration.md#config-log_ecs_reformatting) + + +### Ingest logs [ingest-ecs-logs] + +After enabling log ECS reformatting, send your application logs to your project using one of the following shipping tools: + +* [{{filebeat}}](../../../solutions/observability/logs/ecs-formatted-application-logs.md#ingest-ecs-logs-with-filebeat): A lightweight data shipper that sends log data to your project. +* [{{agent}}](../../../solutions/observability/logs/ecs-formatted-application-logs.md#ingest-ecs-logs-with-agent): A single agent for logs, metrics, security data, and threat prevention. With Fleet, you can centrally manage {{agent}} policies and lifecycles directly from your project. + + +#### Ingest logs with {{filebeat}} [ingest-ecs-logs-with-filebeat] + +Follow these steps to ingest application logs with {{filebeat}}. + + +#### Step 1: Install {{filebeat}} [step-1-ecs-install-filebeat] + +Install {{filebeat}} on the server you want to monitor by running the commands that align with your system: + +:::::::{tab-set} + +::::::{tab-item} DEB +```sh +curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-darwin-x86_64.tar.gz +tar xzvf filebeat-9.0.0-beta1-darwin-x86_64.tar.gz +``` +:::::: + +::::::{tab-item} RPM +```sh +curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-linux-x86_64.tar.gz +tar xzvf filebeat-9.0.0-beta1-linux-x86_64.tar.gz +``` +:::::: + +::::::{tab-item} macOS +1. Download the {{filebeat}} Windows zip file: https\://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-windows-x86_64.zip[https\://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-windows-x86_64.zip] +2. Extract the contents of the zip file into `C:\Program Files`. +3. Rename the `filebeat-{{version}}-windows-x86_64` directory to `{{filebeat}}`. +4. Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select **Run As Administrator**). +5. From the PowerShell prompt, run the following commands to install {{filebeat}} as a Windows service: + + ```powershell + PS > cd 'C:\Program Files\{filebeat}' + PS C:\Program Files\{filebeat}> .\install-service-filebeat.ps1 + ``` + + +If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example: `PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-filebeat.ps1`. 
+:::::: + +::::::{tab-item} Linux +```sh +curl -L -O https\://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-amd64.deb +sudo dpkg -i filebeat-9.0.0-beta1-amd64.deb +``` +:::::: + +::::::{tab-item} Windows +```sh +curl -L -O https\://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-x86_64.rpm +sudo rpm -vi filebeat-9.0.0-beta1-x86_64.rpm +``` +:::::: + +::::::: + +#### Step 2: Connect to your project [step-2-ecs-connect-to-your-project] + +Connect to your project using an API key to set up {{filebeat}}. Set the following information in the `filebeat.yml` file: + +```yaml +output.elasticsearch: + hosts: ["your-projects-elasticsearch-endpoint"] + api_key: "id:api_key" +``` + +1. Set the `hosts` to your deployment’s {{es}} endpoint. Copy the {{es}} endpoint from **Help menu (![help icon](../../../images/observability-help-icon.png "")) → Connection details**. For example, `https://my-deployment.es.us-central1.gcp.cloud.es.io:443`. +2. From **Developer tools**, run the following command to create an API key that grants `manage` permissions for the `cluster` and the `filebeat-*` indices using: + + ```console + POST /_security/api_key + { + "name": "filebeat_host001", + "role_descriptors": { + "filebeat_writer": { + "cluster": ["manage"], + "index": [ + { + "names": ["filebeat-*"], + "privileges": ["manage"] + } + ] + } + } + } + ``` + + Refer to [Grant access using API keys](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/beats-api-keys.md) for more information. + + + +#### Step 3: Configure {{filebeat}} [step-3-ecs-configure-filebeat] + +Add the following configuration to your `filebeat.yaml` file to start collecting log data. + +```yaml +filebeat.inputs: +- type: filestream <1> + enabled: true + paths: /path/to/logs.log <2> +``` + +1. Reads lines from an active log file. +2. Paths that you want {{filebeat}} to crawl and fetch logs from. + + + +#### Step 4: Set up and start {{filebeat}} [step-4-ecs-set-up-and-start-filebeat] + +From the {{filebeat}} installation directory, set the [index template](../../../manage-data/data-store/templates.md) by running the command that aligns with your system: + +:::::::{tab-set} + +::::::{tab-item} DEB +```sh +./filebeat setup -e +``` +:::::: + +::::::{tab-item} RPM +```sh +./filebeat setup -e +``` +:::::: + +::::::{tab-item} MacOS +```sh +PS > .\filebeat.exe setup -e +``` +:::::: + +::::::{tab-item} Linux +```sh +filebeat setup -e +``` +:::::: + +::::::{tab-item} Windows +```sh +filebeat setup -e +``` +:::::: + +::::::: +From the {{filebeat}} installation directory, start filebeat by running the command that aligns with your system: + +:::::::{tab-set} + +::::::{tab-item} DEB +```sh +sudo service filebeat start +``` + +::::{note} +If you use an `init.d` script to start Filebeat, you can’t specify command line flags (see [Command reference](https://www.elastic.co/guide/en/beats/filebeat/master/command-line-options.html)). To specify flags, start Filebeat in the foreground. +:::: + + +Also see [Filebeat and systemd](https://www.elastic.co/guide/en/beats/filebeat/master/running-with-systemd.html). +:::::: + +::::::{tab-item} RPM +```sh +sudo service filebeat start +``` + +::::{note} +If you use an `init.d` script to start Filebeat, you can’t specify command line flags (see [Command reference](https://www.elastic.co/guide/en/beats/filebeat/master/command-line-options.html)). To specify flags, start Filebeat in the foreground. 
+:::: + + +Also see [Filebeat and systemd](https://www.elastic.co/guide/en/beats/filebeat/master/running-with-systemd.html). +:::::: + +::::::{tab-item} MacOS +```sh +./filebeat -e +``` +:::::: + +::::::{tab-item} Linux +```sh +./filebeat -e +``` +:::::: + +::::::{tab-item} Windows +```sh +PS C:\Program Files\filebeat> Start-Service filebeat +``` + +By default, Windows log files are stored in `C:\ProgramData\filebeat\Logs`. +:::::: + +::::::: + +#### Ingest logs with {{agent}} [ingest-ecs-logs-with-agent] + +Add the custom logs integration to ingest and centrally manage your logs using {{agent}} and {{fleet}}: + + +#### Add the custom logs integration to your project [step-1-add-the-custom-logs-integration-to-your-project-ecs] + +To add the custom logs integration to your project: + +1. Find **Integrations** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). +2. Type `custom` in the search bar and select **Custom Logs**. +3. Click **Install {{agent}}** at the bottom of the page, and follow the instructions for your system to install the {{agent}}. +4. After installing the {{agent}}, click **Save and continue** to configure the integration from the **Add Custom Logs integration** page. +5. Give your integration a meaningful name and description. +6. Add the **Log file path**. For example, `/var/log/your-logs.log`. +7. Click **Advanced options**. +8. In the **Processors** text box, add the following YAML configuration to add processors that enhance your data. Refer to [processors](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filtering-enhancing-data.md) to learn more. + + ```yaml + processors: + - add_host_metadata: \~ + - add_cloud_metadata: \~ + - add_docker_metadata: \~ + - add_kubernetes_metadata: \~ + ``` + +9. Under **Custom configurations**, add the following YAML configuration to collect data. + + ```yaml + json: + overwrite_keys: true <1> + add_error_key: true <2> + expand_keys: true <3> + keys_under_root: true <4> + fields_under_root: true <5> + fields: + service.name: your_service_name <6> + service.version: your_service_version <6> + service.environment: your_service_environment <6> + ``` + + 1. Values from the decoded JSON object overwrite the fields that {{agent}} normally adds (type, source, offset, etc.) in case of conflicts. + 2. {{agent}} adds an "error.message" and "error.type: json" key in case of JSON unmarshalling errors. + 3. {{agent}} will recursively de-dot keys in the decoded JSON, and expand them into a hierarchical object structure. + 4. By default, the decoded JSON is placed under a "json" key in the output document. When set to `true`, the keys are copied top level in the output document. + 5. When set to `true`, custom fields are stored as top-level fields in the output document instead of being grouped under a fields sub-dictionary. + 6. The `service.name` (required), `service.version` (optional), and `service.environment` (optional) of the service you’re collecting logs from, used for log correlation. + +10. Give your agent policy a name. The agent policy defines the data your {{agent}} collects. +11. Save your integration to add it to your deployment. + + +## View logs [view-ecs-logs] + +Refer to the [Filter and aggregate logs](../../../solutions/observability/logs/filter-aggregate-logs.md) documentation for more information on viewing and filtering your logs in {{kib}}. 
+ % What needs to be done: Align serverless/stateful % Use migrated content from existing pages that map to this page: -% - [ ] ./raw-migrated-files/observability-docs/observability/logs-ecs-application.md % - [ ] ./raw-migrated-files/docs-content/serverless/observability-ecs-application-logs.md % Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): From bfee032d80c1ab0925e23ea85f89889ed0e03cd8 Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Thu, 20 Feb 2025 14:53:44 -0600 Subject: [PATCH 04/23] add filter and aggregate logs --- ...observability-filter-and-aggregate-logs.md | 339 ----------------- .../logs-filter-and-aggregate.md | 337 ----------------- raw-migrated-files/toc.yml | 2 - .../logs/filter-aggregate-logs.md | 341 +++++++++++++++++- 4 files changed, 331 insertions(+), 688 deletions(-) delete mode 100644 raw-migrated-files/docs-content/serverless/observability-filter-and-aggregate-logs.md delete mode 100644 raw-migrated-files/observability-docs/observability/logs-filter-and-aggregate.md diff --git a/raw-migrated-files/docs-content/serverless/observability-filter-and-aggregate-logs.md b/raw-migrated-files/docs-content/serverless/observability-filter-and-aggregate-logs.md deleted file mode 100644 index fe2ad8de6b..0000000000 --- a/raw-migrated-files/docs-content/serverless/observability-filter-and-aggregate-logs.md +++ /dev/null @@ -1,339 +0,0 @@ -# Filter and aggregate logs [observability-filter-and-aggregate-logs] - -Filter and aggregate your log data to find specific information, gain insight, and monitor your systems more efficiently. You can filter and aggregate based on structured fields like timestamps, log levels, and IP addresses that you’ve extracted from your log data. - -This guide shows you how to: - -* [Filter logs](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter): Narrow down your log data by applying specific criteria. -* [Aggregate logs](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-aggregate): Analyze and summarize data to find patterns and gain insight. - - -## Before you get started [logs-filter-and-aggregate-prereq] - -::::{admonition} Required role -:class: note - -The **Admin** role or higher is required to create ingest pipelines and set the index template. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles). - -:::: - - -The examples on this page use the following ingest pipeline and index template, which you can set in **Developer Tools**. If you haven’t used ingest pipelines and index templates to parse your log data and extract structured fields yet, start with the [Parse and organize logs](../../../solutions/observability/logs/parse-route-logs.md) documentation. 
- -Set the ingest pipeline with the following command: - -```console -PUT _ingest/pipeline/logs-example-default -{ - "description": "Extracts the timestamp log level and host ip", - "processors": [ - { - "dissect": { - "field": "message", - "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}" - } - } - ] -} -``` - -Set the index template with the following command: - -```console -PUT _index_template/logs-example-default-template -{ - "index_patterns": [ "logs-example-*" ], - "data_stream": { }, - "priority": 500, - "template": { - "settings": { - "index.default_pipeline":"logs-example-default" - } - }, - "composed_of": [ - "logs-mappings", - "logs-settings", - "logs@custom", - "ecs@dynamic_templates" - ], - "ignore_missing_component_templates": ["logs@custom"] -} -``` - - -## Filter logs [logs-filter] - -Filter your data using the fields you’ve extracted so you can focus on log data with specific log levels, timestamp ranges, or host IPs. You can filter your log data in different ways: - -* [Filter logs in Logs Explorer](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter-logs-explorer): Filter and visualize log data in Logs Explorer. -* [Filter logs with Query DSL](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter-qdsl): Filter log data from Developer Tools using Query DSL. - - -### Filter logs in Logs Explorer [logs-filter-logs-explorer] - -Logs Explorer is a tool that automatically provides views of your log data based on integrations and data streams. To open Logs Explorer, go to **Discover** and select the **Logs Explorer** tab. - -From Logs Explorer, you can use the [{{kib}} Query Language (KQL)](../../../explore-analyze/query-filter/languages/kql.md) in the search bar to narrow down the log data that’s displayed. For example, you might want to look into an event that occurred within a specific time range. - -Add some logs with varying timestamps and log levels to your data stream: - -1. In your Observability project, go to **Developer Tools**. -2. In the **Console** tab, run the following command: - -```console -POST logs-example-default/_bulk -{ "create": {} } -{ "message": "2023-09-15T08:15:20.234Z WARN 192.168.1.101 Disk usage exceeds 90%." } -{ "create": {} } -{ "message": "2023-09-14T10:30:45.789Z ERROR 192.168.1.102 Critical system failure detected." } -{ "create": {} } -{ "message": "2023-09-10T14:20:45.789Z ERROR 192.168.1.105 Database connection lost." } -{ "create": {} } -{ "message": "2023-09-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." } -``` - -For this example, let’s look for logs with a `WARN` or `ERROR` log level that occurred on September 14th or 15th. From Logs Explorer: - -1. Add the following KQL query in the search bar to filter for logs with log levels of `WARN` or `ERROR`: - - ```text - log.level: ("ERROR" or "WARN") - ``` - -2. Click the current time range, select **Absolute**, and set the **Start date** to `Sep 14, 2023 @ 00:00:00.000`. - - ![Set the time range start date](../../../images/serverless-logs-start-date.png "") - -3. Click the end of the current time range, select **Absolute**, and set the **End date** to `Sep 15, 2023 @ 23:59:59.999`. - - ![Set the time range end date](../../../images/serverless-logs-end-date.png "") - - -Under the **Documents** tab, you’ll see the filtered log data matching your query. 
- -:::{image} ../../../images/serverless-logs-kql-filter.png -:alt: logs kql filter -:class: screenshot -::: - -For more on using Logs Explorer, refer to the [Discover](../../../explore-analyze/discover.md) documentation. - - -### Filter logs with Query DSL [logs-filter-qdsl] - -[Query DSL](../../../explore-analyze/query-filter/languages/querydsl.md) is a JSON-based language that sends requests and retrieves data from indices and data streams. You can filter your log data using Query DSL from **Developer Tools**. - -For example, you might want to troubleshoot an issue that happened on a specific date or at a specific time. To do this, use a boolean query with a [range query](asciidocalypse://docs/elasticsearch/docs/reference/query-languages/query-dsl-range-query.md) to filter for the specific timestamp range and a [term query](asciidocalypse://docs/elasticsearch/docs/reference/query-languages/query-dsl-term-query.md) to filter for `WARN` and `ERROR` log levels. - -First, from **Developer Tools**, add some logs with varying timestamps and log levels to your data stream with the following command: - -```console -POST logs-example-default/_bulk -{ "create": {} } -{ "message": "2023-09-15T08:15:20.234Z WARN 192.168.1.101 Disk usage exceeds 90%." } -{ "create": {} } -{ "message": "2023-09-14T10:30:45.789Z ERROR 192.168.1.102 Critical system failure detected." } -{ "create": {} } -{ "message": "2023-09-10T14:20:45.789Z ERROR 192.168.1.105 Database connection lost." } -{ "create": {} } -{ "message": "2023-09-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." } -``` - -Let’s say you want to look into an event that occurred between September 14th and 15th. The following boolean query filters for logs with timestamps during those days that also have a log level of `ERROR` or `WARN`. - -```console -POST /logs-example-default/_search -{ - "query": { - "bool": { - "filter": [ - { - "range": { - "@timestamp": { - "gte": "2023-09-14T00:00:00", - "lte": "2023-09-15T23:59:59" - } - } - }, - { - "terms": { - "log.level": ["WARN", "ERROR"] - } - } - ] - } - } -} -``` - -The filtered results should show `WARN` and `ERROR` logs that occurred within the timestamp range: - -```JSON -{ - ... - "hits": { - ... - "hits": [ - { - "_index": ".ds-logs-example-default-2023.09.25-000001", - "_id": "JkwPzooBTddK4OtTQToP", - "_score": 0, - "_source": { - "message": "192.168.1.101 Disk usage exceeds 90%.", - "log": { - "level": "WARN" - }, - "@timestamp": "2023-09-15T08:15:20.234Z" - } - }, - { - "_index": ".ds-logs-example-default-2023.09.25-000001", - "_id": "A5YSzooBMYFrNGNwH75O", - "_score": 0, - "_source": { - "message": "192.168.1.102 Critical system failure detected.", - "log": { - "level": "ERROR" - }, - "@timestamp": "2023-09-14T10:30:45.789Z" - } - } - ] - } -} -``` - - -## Aggregate logs [logs-aggregate] - -Use aggregation to analyze and summarize your log data to find patterns and gain insight. [Bucket aggregations](asciidocalypse://docs/elasticsearch/docs/reference/data-analysis/aggregations/bucket.md) organize log data into meaningful groups making it easier to identify patterns, trends, and anomalies within your logs. - -For example, you might want to understand error distribution by analyzing the count of logs per log level. - -First, from **Developer Tools**, add some logs with varying log levels to your data stream using the following command: - -```console -POST logs-example-default/_bulk -{ "create": {} } -{ "message": "2023-09-15T08:15:20.234Z WARN 192.168.1.101 Disk usage exceeds 90%." 
} -{ "create": {} } -{ "message": "2023-09-14T10:30:45.789Z ERROR 192.168.1.102 Critical system failure detected." } -{ "create": {} } -{ "message": "2023-09-15T12:45:55.123Z INFO 192.168.1.103 Application successfully started." } -{ "create": {} } -{ "message": "2023-09-14T15:20:10.789Z WARN 192.168.1.104 Network latency exceeding threshold." } -{ "create": {} } -{ "message": "2023-09-10T14:20:45.789Z ERROR 192.168.1.105 Database connection lost." } -{ "create": {} } -{ "message": "2023-09-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." } -{ "create": {} } -{ "message": "2023-09-21T15:20:55.678Z DEBUG 192.168.1.102 Database connection established." } -``` - -Next, run this command to aggregate your log data using the `log.level` field: - -```console -POST logs-example-default/_search?size=0&filter_path=aggregations -{ -"size": 0, <1> -"aggs": { - "log_level_distribution": { - "terms": { - "field": "log.level" - } - } - } -} -``` - -1. Searches with an aggregation return both the query results and the aggregation, so you would see the logs matching the data and the aggregation. Setting `size` to `0` limits the results to aggregations. - - -The results should show the number of logs in each log level: - -```JSON -{ - "aggregations": { - "error_distribution": { - "doc_count_error_upper_bound": 0, - "sum_other_doc_count": 0, - "buckets": [ - { - "key": "ERROR", - "doc_count": 2 - }, - { - "key": "INFO", - "doc_count": 2 - }, - { - "key": "WARN", - "doc_count": 2 - }, - { - "key": "DEBUG", - "doc_count": 1 - } - ] - } - } -} -``` - -You can also combine aggregations and queries. For example, you might want to limit the scope of the previous aggregation by adding a range query: - -```console -GET /logs-example-default/_search -{ - "size": 0, - "query": { - "range": { - "@timestamp": { - "gte": "2023-09-14T00:00:00", - "lte": "2023-09-15T23:59:59" - } - } - }, - "aggs": { - "my-agg-name": { - "terms": { - "field": "log.level" - } - } - } -} -``` - -The results should show an aggregate of logs that occurred within your timestamp range: - -```JSON -{ - ... - "hits": { - ... - "hits": [] - }, - "aggregations": { - "my-agg-name": { - "doc_count_error_upper_bound": 0, - "sum_other_doc_count": 0, - "buckets": [ - { - "key": "WARN", - "doc_count": 2 - }, - { - "key": "ERROR", - "doc_count": 1 - }, - { - "key": "INFO", - "doc_count": 1 - } - ] - } - } -} -``` - -For more on aggregation types and available aggregations, refer to the [Aggregations](../../../explore-analyze/query-filter/aggregations.md) documentation. diff --git a/raw-migrated-files/observability-docs/observability/logs-filter-and-aggregate.md b/raw-migrated-files/observability-docs/observability/logs-filter-and-aggregate.md deleted file mode 100644 index 4062608917..0000000000 --- a/raw-migrated-files/observability-docs/observability/logs-filter-and-aggregate.md +++ /dev/null @@ -1,337 +0,0 @@ -# Filter and aggregate logs [logs-filter-and-aggregate] - -Filter and aggregate your log data to find specific information, gain insight, and monitor your systems more efficiently. You can filter and aggregate based on structured fields like timestamps, log levels, and IP addresses that you’ve extracted from your log data. - -This guide shows you how to: - -* [Filter logs](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter) — Narrow down your log data by applying specific criteria. 
-* [Aggregate logs](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-aggregate) — Analyze and summarize data to find patterns and gain insight. - - -## Before you get started [logs-filter-and-aggregate-prereq] - -The examples on this page use the following ingest pipeline and index template, which you can set in **Developer tools**. If you haven’t used ingest pipelines and index templates to parse your log data and extract structured fields yet, start with the [Parse and organize logs](../../../solutions/observability/logs/parse-route-logs.md) documentation. - -Set the ingest pipeline with the following command: - -```console -PUT _ingest/pipeline/logs-example-default -{ - "description": "Extracts the timestamp log level and host ip", - "processors": [ - { - "dissect": { - "field": "message", - "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}" - } - } - ] -} -``` - -Set the index template with the following command: - -```console -PUT _index_template/logs-example-default-template -{ - "index_patterns": [ "logs-example-*" ], - "data_stream": { }, - "priority": 500, - "template": { - "settings": { - "index.default_pipeline":"logs-example-default" - } - }, - "composed_of": [ - "logs-mappings", - "logs-settings", - "logs@custom", - "ecs@dynamic_templates" - ], - "ignore_missing_component_templates": ["logs@custom"] -} -``` - - -## Filter logs [logs-filter] - -Filter your data using the fields you’ve extracted so you can focus on log data with specific log levels, timestamp ranges, or host IPs. You can filter your log data in different ways: - -* [Filter logs in Logs Explorer](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter-logs-explorer) – Filter and visualize log data in {{kib}} using Logs Explorer. -* [Filter logs with Query DSL](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter-qdsl) – Filter log data from Developer tools using Query DSL. - - -### Filter logs in Logs Explorer [logs-filter-logs-explorer] - -Logs Explorer is a {{kib}} tool that automatically provides views of your log data based on integrations and data streams. To open **Logs Explorer**, find `Logs Explorer` in the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). - -From Logs Explorer, you can use the [{{kib}} Query Language (KQL)](../../../explore-analyze/query-filter/languages/kql.md) in the search bar to narrow down the log data displayed in Logs Explorer. For example, you might want to look into an event that occurred within a specific time range. - -Add some logs with varying timestamps and log levels to your data stream: - -1. To open **Console**, find `Dev Tools` in the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). -2. In the **Console** tab, run the following command: - -```console -POST logs-example-default/_bulk -{ "create": {} } -{ "message": "2023-09-15T08:15:20.234Z WARN 192.168.1.101 Disk usage exceeds 90%." } -{ "create": {} } -{ "message": "2023-09-14T10:30:45.789Z ERROR 192.168.1.102 Critical system failure detected." } -{ "create": {} } -{ "message": "2023-09-10T14:20:45.789Z ERROR 192.168.1.105 Database connection lost." } -{ "create": {} } -{ "message": "2023-09-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." } -``` - -For this example, let’s look for logs with a `WARN` or `ERROR` log level that occurred on September 14th or 15th. From Logs Explorer: - -1. 
Add the following KQL query in the search bar to filter for logs with log levels of `WARN` or `ERROR`: - - ```text - log.level: ("ERROR" or "WARN") - ``` - -2. Click the current time range, select **Absolute**, and set the **Start date** to `Sep 14, 2023 @ 00:00:00.000`. - - :::{image} ../../../images/observability-logs-start-date.png - :alt: Set the start date for your time range - :class: screenshot - ::: - -3. Click the end of the current time range, select **Absolute**, and set the **End date** to `Sep 15, 2023 @ 23:59:59.999`. - - :::{image} ../../../images/observability-logs-end-date.png - :alt: Set the end date for your time range - :class: screenshot - ::: - - -Under the **Documents** tab, you’ll see the filtered log data matching your query. - -:::{image} ../../../images/observability-logs-kql-filter.png -:alt: Filter data by log level using KQL -:class: screenshot -::: - -For more on using Logs Explorer, refer to the [Discover](../../../explore-analyze/discover.md) documentation. - - -### Filter logs with Query DSL [logs-filter-qdsl] - -[Query DSL](../../../explore-analyze/query-filter/languages/querydsl.md) is a JSON-based language that sends requests and retrieves data from indices and data streams. You can filter your log data using Query DSL from **Developer tools**. - -For example, you might want to troubleshoot an issue that happened on a specific date or at a specific time. To do this, use a boolean query with a [range query](asciidocalypse://docs/elasticsearch/docs/reference/query-languages/query-dsl-range-query.md) to filter for the specific timestamp range and a [term query](asciidocalypse://docs/elasticsearch/docs/reference/query-languages/query-dsl-term-query.md) to filter for `WARN` and `ERROR` log levels. - -First, from **Developer tools**, add some logs with varying timestamps and log levels to your data stream with the following command: - -```console -POST logs-example-default/_bulk -{ "create": {} } -{ "message": "2023-09-15T08:15:20.234Z WARN 192.168.1.101 Disk usage exceeds 90%." } -{ "create": {} } -{ "message": "2023-09-14T10:30:45.789Z ERROR 192.168.1.102 Critical system failure detected." } -{ "create": {} } -{ "message": "2023-09-10T14:20:45.789Z ERROR 192.168.1.105 Database connection lost." } -{ "create": {} } -{ "message": "2023-09-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." } -``` - -Let’s say you want to look into an event that occurred between September 14th and 15th. The following boolean query filters for logs with timestamps during those days that also have a log level of `ERROR` or `WARN`. - -```console -POST /logs-example-default/_search -{ - "query": { - "bool": { - "filter": [ - { - "range": { - "@timestamp": { - "gte": "2023-09-14T00:00:00", - "lte": "2023-09-15T23:59:59" - } - } - }, - { - "terms": { - "log.level": ["WARN", "ERROR"] - } - } - ] - } - } -} -``` - -The filtered results should show `WARN` and `ERROR` logs that occurred within the timestamp range: - -```JSON -{ - ... - "hits": { - ... 
- "hits": [ - { - "_index": ".ds-logs-example-default-2023.09.25-000001", - "_id": "JkwPzooBTddK4OtTQToP", - "_score": 0, - "_source": { - "message": "192.168.1.101 Disk usage exceeds 90%.", - "log": { - "level": "WARN" - }, - "@timestamp": "2023-09-15T08:15:20.234Z" - } - }, - { - "_index": ".ds-logs-example-default-2023.09.25-000001", - "_id": "A5YSzooBMYFrNGNwH75O", - "_score": 0, - "_source": { - "message": "192.168.1.102 Critical system failure detected.", - "log": { - "level": "ERROR" - }, - "@timestamp": "2023-09-14T10:30:45.789Z" - } - } - ] - } -} -``` - - -## Aggregate logs [logs-aggregate] - -Use aggregation to analyze and summarize your log data to find patterns and gain insight. [Bucket aggregations](asciidocalypse://docs/elasticsearch/docs/reference/data-analysis/aggregations/bucket.md) organize log data into meaningful groups making it easier to identify patterns, trends, and anomalies within your logs. - -For example, you might want to understand error distribution by analyzing the count of logs per log level. - -First, from **Developer tools**, add some logs with varying log levels to your data stream using the following command: - -```console -POST logs-example-default/_bulk -{ "create": {} } -{ "message": "2023-09-15T08:15:20.234Z WARN 192.168.1.101 Disk usage exceeds 90%." } -{ "create": {} } -{ "message": "2023-09-14T10:30:45.789Z ERROR 192.168.1.102 Critical system failure detected." } -{ "create": {} } -{ "message": "2023-09-15T12:45:55.123Z INFO 192.168.1.103 Application successfully started." } -{ "create": {} } -{ "message": "2023-09-14T15:20:10.789Z WARN 192.168.1.104 Network latency exceeding threshold." } -{ "create": {} } -{ "message": "2023-09-10T14:20:45.789Z ERROR 192.168.1.105 Database connection lost." } -{ "create": {} } -{ "message": "2023-09-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." } -{ "create": {} } -{ "message": "2023-09-21T15:20:55.678Z DEBUG 192.168.1.102 Database connection established." } -``` - -Next, run this command to aggregate your log data using the `log.level` field: - -```console -POST logs-example-default/_search?size=0&filter_path=aggregations -{ -"size": 0,<1> -"aggs": { - "log_level_distribution": { - "terms": { - "field": "log.level" - } - } - } -} -``` - -1. Searches with an aggregation return both the query results and the aggregation, so you would see the logs matching the data and the aggregation. Setting `size` to `0` limits the results to aggregations. - - -The results should show the number of logs in each log level: - -```JSON -{ - "aggregations": { - "error_distribution": { - "doc_count_error_upper_bound": 0, - "sum_other_doc_count": 0, - "buckets": [ - { - "key": "ERROR", - "doc_count": 2 - }, - { - "key": "INFO", - "doc_count": 2 - }, - { - "key": "WARN", - "doc_count": 2 - }, - { - "key": "DEBUG", - "doc_count": 1 - } - ] - } - } -} -``` - -You can also combine aggregations and queries. For example, you might want to limit the scope of the previous aggregation by adding a range query: - -```console -GET /logs-example-default/_search -{ - "size": 0, - "query": { - "range": { - "@timestamp": { - "gte": "2023-09-14T00:00:00", - "lte": "2023-09-15T23:59:59" - } - } - }, - "aggs": { - "my-agg-name": { - "terms": { - "field": "log.level" - } - } - } -} -``` - -The results should show an aggregate of logs that occurred within your timestamp range: - -```JSON -{ - ... - "hits": { - ... 
- "hits": [] - }, - "aggregations": { - "my-agg-name": { - "doc_count_error_upper_bound": 0, - "sum_other_doc_count": 0, - "buckets": [ - { - "key": "WARN", - "doc_count": 2 - }, - { - "key": "ERROR", - "doc_count": 1 - }, - { - "key": "INFO", - "doc_count": 1 - } - ] - } - } -} -``` - -For more on aggregation types and available aggregations, refer to the [Aggregations](../../../explore-analyze/query-filter/aggregations.md) documentation. diff --git a/raw-migrated-files/toc.yml b/raw-migrated-files/toc.yml index c40e207ce5..db47dd95a5 100644 --- a/raw-migrated-files/toc.yml +++ b/raw-migrated-files/toc.yml @@ -226,7 +226,6 @@ toc: - file: docs-content/serverless/observability-correlate-application-logs.md - file: docs-content/serverless/observability-discover-and-explore-logs.md - file: docs-content/serverless/observability-ecs-application-logs.md - - file: docs-content/serverless/observability-filter-and-aggregate-logs.md - file: docs-content/serverless/observability-get-started.md - file: docs-content/serverless/observability-log-monitoring.md - file: docs-content/serverless/observability-monitor-datasets.md @@ -467,7 +466,6 @@ toc: - file: observability-docs/observability/index.md - file: observability-docs/observability/infrastructure-and-host-monitoring-intro.md - file: observability-docs/observability/logs-checklist.md - - file: observability-docs/observability/logs-filter-and-aggregate.md - file: observability-docs/observability/logs-parse.md - file: observability-docs/observability/logs-plaintext.md - file: observability-docs/observability/logs-stream.md diff --git a/solutions/observability/logs/filter-aggregate-logs.md b/solutions/observability/logs/filter-aggregate-logs.md index f4357c3d71..86b3eb4fd4 100644 --- a/solutions/observability/logs/filter-aggregate-logs.md +++ b/solutions/observability/logs/filter-aggregate-logs.md @@ -4,21 +4,342 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/observability-filter-and-aggregate-logs.html --- -# Filter and aggregate logs +# Filter and aggregate logs [observability-filter-and-aggregate-logs] -% What needs to be done: Align serverless/stateful +Filter and aggregate your log data to find specific information, gain insight, and monitor your systems more efficiently. You can filter and aggregate based on structured fields like timestamps, log levels, and IP addresses that you’ve extracted from your log data. -% Use migrated content from existing pages that map to this page: +This guide shows you how to: -% - [ ] ./raw-migrated-files/observability-docs/observability/logs-filter-and-aggregate.md -% - [ ] ./raw-migrated-files/docs-content/serverless/observability-filter-and-aggregate-logs.md +* [Filter logs](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter): Narrow down your log data by applying specific criteria. +* [Aggregate logs](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-aggregate): Analyze and summarize data to find patterns and gain insight. -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): -$$$logs-filter-logs-explorer$$$ +## Before you get started [logs-filter-and-aggregate-prereq] -$$$logs-aggregate$$$ +::::{admonition} Required role +:class: note -$$$logs-filter-qdsl$$$ +**For Observability serverless projects**, the **Admin** role or higher is required to create ingest pipelines and set the index template. 
To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles). -$$$logs-filter$$$ \ No newline at end of file +:::: + + +The examples on this page use the following ingest pipeline and index template, which you can set in **Developer Tools**. If you haven’t used ingest pipelines and index templates to parse your log data and extract structured fields yet, start with the [Parse and organize logs](../../../solutions/observability/logs/parse-route-logs.md) documentation. + +Set the ingest pipeline with the following command: + +```console +PUT _ingest/pipeline/logs-example-default +{ + "description": "Extracts the timestamp log level and host ip", + "processors": [ + { + "dissect": { + "field": "message", + "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}" + } + } + ] +} +``` + +Set the index template with the following command: + +```console +PUT _index_template/logs-example-default-template +{ + "index_patterns": [ "logs-example-*" ], + "data_stream": { }, + "priority": 500, + "template": { + "settings": { + "index.default_pipeline":"logs-example-default" + } + }, + "composed_of": [ + "logs-mappings", + "logs-settings", + "logs@custom", + "ecs@dynamic_templates" + ], + "ignore_missing_component_templates": ["logs@custom"] +} +``` + + +## Filter logs [logs-filter] + +Filter your data using the fields you’ve extracted so you can focus on log data with specific log levels, timestamp ranges, or host IPs. You can filter your log data in different ways: + +* [Filter logs in Logs Explorer](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter-logs-explorer): Filter and visualize log data in Logs Explorer. +* [Filter logs with Query DSL](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter-qdsl): Filter log data from Developer Tools using Query DSL. + + +### Filter logs in Logs Explorer [logs-filter-logs-explorer] + +Logs Explorer is a tool that automatically provides views of your log data based on integrations and data streams. To open **Logs Explorer**, find `Logs Explorer` in the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). + +From Logs Explorer, you can use the [{{kib}} Query Language (KQL)](../../../explore-analyze/query-filter/languages/kql.md) in the search bar to narrow down the log data that’s displayed. For example, you might want to look into an event that occurred within a specific time range. + +Add some logs with varying timestamps and log levels to your data stream: + +1. In your Observability project, go to **Developer Tools**. +2. In the **Console** tab, run the following command: + +```console +POST logs-example-default/_bulk +{ "create": {} } +{ "message": "2023-09-15T08:15:20.234Z WARN 192.168.1.101 Disk usage exceeds 90%." } +{ "create": {} } +{ "message": "2023-09-14T10:30:45.789Z ERROR 192.168.1.102 Critical system failure detected." } +{ "create": {} } +{ "message": "2023-09-10T14:20:45.789Z ERROR 192.168.1.105 Database connection lost." } +{ "create": {} } +{ "message": "2023-09-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." } +``` + +For this example, let’s look for logs with a `WARN` or `ERROR` log level that occurred on September 14th or 15th. From Logs Explorer: + +1. Add the following KQL query in the search bar to filter for logs with log levels of `WARN` or `ERROR`: + + ```text + log.level: ("ERROR" or "WARN") + ``` + +2. 
Click the current time range, select **Absolute**, and set the **Start date** to `Sep 14, 2023 @ 00:00:00.000`. + + ![Set the time range start date](../../../images/serverless-logs-start-date.png "") + +3. Click the end of the current time range, select **Absolute**, and set the **End date** to `Sep 15, 2023 @ 23:59:59.999`. + + ![Set the time range end date](../../../images/serverless-logs-end-date.png "") + + +Under the **Documents** tab, you’ll see the filtered log data matching your query. + +:::{image} ../../../images/serverless-logs-kql-filter.png +:alt: logs kql filter +:class: screenshot +::: + +For more on using Logs Explorer, refer to the [Discover](../../../explore-analyze/discover.md) documentation. + + +### Filter logs with Query DSL [logs-filter-qdsl] + +[Query DSL](../../../explore-analyze/query-filter/languages/querydsl.md) is a JSON-based language that sends requests and retrieves data from indices and data streams. You can filter your log data using Query DSL from **Developer Tools**. + +For example, you might want to troubleshoot an issue that happened on a specific date or at a specific time. To do this, use a boolean query with a [range query](asciidocalypse://docs/elasticsearch/docs/reference/query-languages/query-dsl-range-query.md) to filter for the specific timestamp range and a [term query](asciidocalypse://docs/elasticsearch/docs/reference/query-languages/query-dsl-term-query.md) to filter for `WARN` and `ERROR` log levels. + +First, from **Developer Tools**, add some logs with varying timestamps and log levels to your data stream with the following command: + +```console +POST logs-example-default/_bulk +{ "create": {} } +{ "message": "2023-09-15T08:15:20.234Z WARN 192.168.1.101 Disk usage exceeds 90%." } +{ "create": {} } +{ "message": "2023-09-14T10:30:45.789Z ERROR 192.168.1.102 Critical system failure detected." } +{ "create": {} } +{ "message": "2023-09-10T14:20:45.789Z ERROR 192.168.1.105 Database connection lost." } +{ "create": {} } +{ "message": "2023-09-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." } +``` + +Let’s say you want to look into an event that occurred between September 14th and 15th. The following boolean query filters for logs with timestamps during those days that also have a log level of `ERROR` or `WARN`. + +```console +POST /logs-example-default/_search +{ + "query": { + "bool": { + "filter": [ + { + "range": { + "@timestamp": { + "gte": "2023-09-14T00:00:00", + "lte": "2023-09-15T23:59:59" + } + } + }, + { + "terms": { + "log.level": ["WARN", "ERROR"] + } + } + ] + } + } +} +``` + +The filtered results should show `WARN` and `ERROR` logs that occurred within the timestamp range: + +```JSON +{ + ... + "hits": { + ... + "hits": [ + { + "_index": ".ds-logs-example-default-2023.09.25-000001", + "_id": "JkwPzooBTddK4OtTQToP", + "_score": 0, + "_source": { + "message": "192.168.1.101 Disk usage exceeds 90%.", + "log": { + "level": "WARN" + }, + "@timestamp": "2023-09-15T08:15:20.234Z" + } + }, + { + "_index": ".ds-logs-example-default-2023.09.25-000001", + "_id": "A5YSzooBMYFrNGNwH75O", + "_score": 0, + "_source": { + "message": "192.168.1.102 Critical system failure detected.", + "log": { + "level": "ERROR" + }, + "@timestamp": "2023-09-14T10:30:45.789Z" + } + } + ] + } +} +``` + + +## Aggregate logs [logs-aggregate] + +Use aggregation to analyze and summarize your log data to find patterns and gain insight. 
[Bucket aggregations](asciidocalypse://docs/elasticsearch/docs/reference/data-analysis/aggregations/bucket.md) organize log data into meaningful groups making it easier to identify patterns, trends, and anomalies within your logs. + +For example, you might want to understand error distribution by analyzing the count of logs per log level. + +First, from **Developer Tools**, add some logs with varying log levels to your data stream using the following command: + +```console +POST logs-example-default/_bulk +{ "create": {} } +{ "message": "2023-09-15T08:15:20.234Z WARN 192.168.1.101 Disk usage exceeds 90%." } +{ "create": {} } +{ "message": "2023-09-14T10:30:45.789Z ERROR 192.168.1.102 Critical system failure detected." } +{ "create": {} } +{ "message": "2023-09-15T12:45:55.123Z INFO 192.168.1.103 Application successfully started." } +{ "create": {} } +{ "message": "2023-09-14T15:20:10.789Z WARN 192.168.1.104 Network latency exceeding threshold." } +{ "create": {} } +{ "message": "2023-09-10T14:20:45.789Z ERROR 192.168.1.105 Database connection lost." } +{ "create": {} } +{ "message": "2023-09-20T09:40:32.345Z INFO 192.168.1.106 User logout initiated." } +{ "create": {} } +{ "message": "2023-09-21T15:20:55.678Z DEBUG 192.168.1.102 Database connection established." } +``` + +Next, run this command to aggregate your log data using the `log.level` field: + +```console +POST logs-example-default/_search?size=0&filter_path=aggregations +{ +"size": 0, <1> +"aggs": { + "log_level_distribution": { + "terms": { + "field": "log.level" + } + } + } +} +``` + +1. Searches with an aggregation return both the query results and the aggregation, so you would see the logs matching the data and the aggregation. Setting `size` to `0` limits the results to aggregations. + + +The results should show the number of logs in each log level: + +```JSON +{ + "aggregations": { + "error_distribution": { + "doc_count_error_upper_bound": 0, + "sum_other_doc_count": 0, + "buckets": [ + { + "key": "ERROR", + "doc_count": 2 + }, + { + "key": "INFO", + "doc_count": 2 + }, + { + "key": "WARN", + "doc_count": 2 + }, + { + "key": "DEBUG", + "doc_count": 1 + } + ] + } + } +} +``` + +You can also combine aggregations and queries. For example, you might want to limit the scope of the previous aggregation by adding a range query: + +```console +GET /logs-example-default/_search +{ + "size": 0, + "query": { + "range": { + "@timestamp": { + "gte": "2023-09-14T00:00:00", + "lte": "2023-09-15T23:59:59" + } + } + }, + "aggs": { + "my-agg-name": { + "terms": { + "field": "log.level" + } + } + } +} +``` + +The results should show an aggregate of logs that occurred within your timestamp range: + +```JSON +{ + ... + "hits": { + ... + "hits": [] + }, + "aggregations": { + "my-agg-name": { + "doc_count_error_upper_bound": 0, + "sum_other_doc_count": 0, + "buckets": [ + { + "key": "WARN", + "doc_count": 2 + }, + { + "key": "ERROR", + "doc_count": 1 + }, + { + "key": "INFO", + "doc_count": 1 + } + ] + } + } +} +``` + +For more on aggregation types and available aggregations, refer to the [Aggregations](../../../explore-analyze/query-filter/aggregations.md) documentation. 
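
As one more example of what those aggregations make possible, bucket aggregations can also be nested. The following request is only a sketch: the `daily` and `levels` aggregation names and the one-day interval are arbitrary choices for illustration, not values used elsewhere in this guide. It counts log levels per day by nesting the `terms` aggregation inside a `date_histogram` aggregation:

```console
GET /logs-example-default/_search?size=0&filter_path=aggregations
{
  "aggs": {
    "daily": {
      "date_histogram": {
        "field": "@timestamp",
        "calendar_interval": "1d"
      },
      "aggs": {
        "levels": {
          "terms": {
            "field": "log.level"
          }
        }
      }
    }
  }
}
```

Each daily bucket then contains its own per-level counts, which can make it easier to spot when a spike in `WARN` or `ERROR` logs began.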
\ No newline at end of file From 1ae36e2d2d88496db913cfb29254703499f3420b Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Thu, 20 Feb 2025 14:58:27 -0600 Subject: [PATCH 05/23] add explore logs --- ...observability-discover-and-explore-logs.md | 68 ----------------- .../observability/explore-logs.md | 73 ------------------ raw-migrated-files/toc.yml | 2 - solutions/observability/logs/logs-explorer.md | 75 +++++++++++++++++-- 4 files changed, 69 insertions(+), 149 deletions(-) delete mode 100644 raw-migrated-files/docs-content/serverless/observability-discover-and-explore-logs.md delete mode 100644 raw-migrated-files/observability-docs/observability/explore-logs.md diff --git a/raw-migrated-files/docs-content/serverless/observability-discover-and-explore-logs.md b/raw-migrated-files/docs-content/serverless/observability-discover-and-explore-logs.md deleted file mode 100644 index 0dda75be2c..0000000000 --- a/raw-migrated-files/docs-content/serverless/observability-discover-and-explore-logs.md +++ /dev/null @@ -1,68 +0,0 @@ -# Explore logs [observability-discover-and-explore-logs] - -With **Logs Explorer**, based on Discover, you can quickly search and filter your log data, get information about the structure of log fields, and display your findings in a visualization. You can also customize and save your searches and place them on a dashboard. Instead of having to log into different servers, change directories, and view individual files, all your logs are available in a single view. - -Go to Logs Explorer by opening **Discover** from the navigation menu, and selecting the **Logs Explorer** tab. - -:::{image} ../../../images/serverless-log-explorer.png -:alt: Screen capture of the Logs Explorer -:class: screenshot -::: - - -## Required {{kib}} privileges [observability-discover-and-explore-logs-required-kib-privileges] - -Viewing data in Logs Explorer requires `read` privileges for **Discover** and **Integrations**. For more on assigning Kibana privileges, refer to the [{{kib}} privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md) docs. - - -## Find your logs [observability-discover-and-explore-logs-find-your-logs] - -By default, Logs Explorer shows all of your logs according to the index patterns set in the **logs source** advanced setting. Update this setting by going to *Management* → *Advanced Settings* and searching for *logs source*. - -If you need to focus on logs from a specific integrations, select the integration from the logs menu: - -:::{image} ../../../images/serverless-log-menu.png -:alt: Screen capture of log menu -:class: screenshot -::: - -Once you have the logs you want to focus on displayed, you can drill down further to find the information you need. For more on filtering your data in Logs Explorer, refer to [Filter logs in Logs Explorer](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter-logs-explorer). - - -## Review log data in the documents table [observability-discover-and-explore-logs-review-log-data-in-the-documents-table] - -The documents table in Logs Explorer functions similarly to the table in Discover. You can add fields, order table columns, sort fields, and update the row height in the same way you would in Discover. - -Refer to the [Discover](../../../explore-analyze/discover.md) documentation for more information on updating the table. 
- - -#### Actions column [observability-discover-and-explore-logs-actions-column] - -The actions column provides access to additional information about your logs. - -**Expand:** (![expand icon](../../../images/serverless-expand.svg "")) Open the log details to get an in-depth look at an individual log file. - -**Degraded document indicator:** (![degraded document indicator icon](../../../images/serverless-pagesSelect.svg "")) Shows if any of the document’s fields were ignored when it was indexed. Ignored fields could indicate malformed fields or other issues with your document. Use this information to investigate and determine why fields are being ignored. - -**Stacktrace indicator:** (![stacktrace indicator icon](../../../images/serverless-apmTrace.svg "")) Shows if the document contains stack traces. This indicator makes it easier to navigate through your documents and know if they contain additional information in the form of stack traces. - - -## View log details [observability-discover-and-explore-logs-view-log-details] - -Click the expand icon (![expand icon](../../../images/serverless-expand.svg "")) in the **Actions** column to get an in-depth look at an individual log file. - -These details provide immediate feedback and context for what’s happening and where it’s happening for each log. From here, you can quickly debug errors and investigate the services where errors have occurred. - -The following actions help you filter and focus on specific fields in the log details: - -* **Filter for value (![filter for value icon](../../../images/serverless-plusInCircle.svg "")):** Show logs that contain the specific field value. -* **Filter out value (![filter out value icon](../../../images/serverless-minusInCircle.svg "")):** Show logs that do *not* contain the specific field value. -* **Filter for field present (![filter for present icon](../../../images/serverless-filter.svg "")):** Show logs that contain the specific field. -* **Toggle column in table (![toggle column in table icon](../../../images/serverless-listAdd.svg "")):** Add or remove a column for the field to the main Logs Explorer table. - - -## View log quality issues [observability-discover-and-explore-logs-view-log-quality-issues] - -From the log details of a document with ignored fields, as shown by the degraded document indicator ![degraded document indicator icon](../../../images/serverless-pagesSelect.svg ""), expand the **Quality issues** section to see the name and value of the fields that were ignored. Select **Data set details** to open the **Data Set Quality** page. Here you can monitor your data sets and investigate any issues. - -The **Data Set Details** page is also accessible from **Project settings*** → ***Management** → **Data Set Quality**. Refer to [Monitor data sets](../../../solutions/observability/data-set-quality-monitoring.md) for more information. diff --git a/raw-migrated-files/observability-docs/observability/explore-logs.md b/raw-migrated-files/observability-docs/observability/explore-logs.md deleted file mode 100644 index e280bd35a8..0000000000 --- a/raw-migrated-files/observability-docs/observability/explore-logs.md +++ /dev/null @@ -1,73 +0,0 @@ -# Logs Explorer [explore-logs] - -::::{warning} -This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features. 
-:::: - - -With **Logs Explorer**, you can quickly search and filter your log data, get information about the structure of log fields, and display your findings in a visualization. You can also customize and save your searches and place them on a dashboard. Instead of having to log into different servers, change directories, and view individual files, all your logs are available in a single view. - -To open **Logs Explorer**, find `Logs Explorer` in the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). - -:::{image} ../../../images/observability-log-explorer.png -:alt: Screen capture of the Logs Explorer -:class: screenshot -::: - - -## Required {{kib}} privileges [logs-explorer-privileges] - -Viewing data in Logs Explorer requires `read` privileges for **Discover**, **Index**, **Logs**, and **Integrations**. For more on assigning {{kib}} privileges, refer to the [{{kib}} privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md) docs. - - -## Find your logs [find-your-logs] - -By default, Logs Explorer shows all of your logs, according to the index patterns set in the **logs sources** advanced setting. To open **Advanced settings**, find **Stack Management** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). - -If you need to focus on logs from a specific integration, select the integration from the logs menu: - -:::{image} ../../../images/observability-log-menu.png -:alt: Screen capture of log menu -:class: screenshot -::: - -Once you have the logs you want to focus on displayed, you can drill down further to find the information you need. For more on filtering your data in Logs Explorer, refer to [Filter logs in Logs Explorer](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter-logs-explorer). - - -## Review log data in the documents table [review-log-data-in-the-documents-table] - -The documents table in Logs Explorer functions similarly to the table in Discover. You can add fields, order table columns, sort fields, and update the row height in the same way you would in Discover. - -Refer to the [Discover](../../../explore-analyze/discover.md) documentation for more information on updating the table. - - -### Actions column [actions-column] - -The actions column provides access to additional information about your logs. - -**Expand:** ![The icon to expand log details](../../../images/observability-expand-icon.png "") Open the log details to get an in-depth look at an individual log file. - -**Degraded document indicator:** ![The icon that shows ignored fields](../../../images/observability-pagesSelect-icon.png "") Shows if any of the document’s fields were ignored when it was indexed. Ignored fields could indicate malformed fields or other issues with your document. Use this information to investigate and determine why fields are being ignored. - -**Stacktrace indicator:** ![The icon that shows if a document contains stack traces](../../../images/observability-apmTrace-icon.png "") Shows if the document contains stack traces. This indicator makes it easier to navigate through your documents and know if they contain additional information in the form of stack traces. - - -## View log details [view-log-details] - -Click the expand icon ![icon to open log details](../../../images/observability-expand-icon.png "") to get an in-depth look at an individual log file. 
- -These details provide immediate feedback and context for what’s happening and where it’s happening for each log. From here, you can quickly debug errors and investigate the services where errors have occurred. - -The following actions help you filter and focus on specific fields in the log details: - -* **Filter for value (![filter for value icon](../../../images/observability-plusInCircle.png "")):** Show logs that contain the specific field value. -* **Filter out value (![filter out value icon](../../../images/observability-minusInCircle.png "")):** Show logs that do **not** contain the specific field value. -* **Filter for field present (![filter for present icon](../../../images/observability-filter.png "")):** Show logs that contain the specific field. -* **Toggle column in table (![toggle column in table icon](../../../images/observability-listAdd.png "")):** Add or remove a column for the field to the main Logs Explorer table. - - -## View log data set details [view-log-data-set-details] - -Go to **Data Set Quality** to view more details about your data sets and monitor their overall quality. To open **Data Set Quality**, find **Stack Management** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). - -Refer to [*Data set quality*](../../../solutions/observability/data-set-quality-monitoring.md) for more information. diff --git a/raw-migrated-files/toc.yml b/raw-migrated-files/toc.yml index db47dd95a5..f9b6b88496 100644 --- a/raw-migrated-files/toc.yml +++ b/raw-migrated-files/toc.yml @@ -224,7 +224,6 @@ toc: - file: docs-content/serverless/observability-apm-get-started.md - file: docs-content/serverless/observability-apm-traces.md - file: docs-content/serverless/observability-correlate-application-logs.md - - file: docs-content/serverless/observability-discover-and-explore-logs.md - file: docs-content/serverless/observability-ecs-application-logs.md - file: docs-content/serverless/observability-get-started.md - file: docs-content/serverless/observability-log-monitoring.md @@ -461,7 +460,6 @@ toc: - file: observability-docs/observability/apm-traces.md - file: observability-docs/observability/application-and-service-monitoring.md - file: observability-docs/observability/application-logs.md - - file: observability-docs/observability/explore-logs.md - file: observability-docs/observability/incident-management.md - file: observability-docs/observability/index.md - file: observability-docs/observability/infrastructure-and-host-monitoring-intro.md diff --git a/solutions/observability/logs/logs-explorer.md b/solutions/observability/logs/logs-explorer.md index 7703a53bc4..c0b4a74068 100644 --- a/solutions/observability/logs/logs-explorer.md +++ b/solutions/observability/logs/logs-explorer.md @@ -4,13 +4,76 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/observability-discover-and-explore-logs.html --- -# Logs Explorer +# Logs Explorer [explore-logs] -% What needs to be done: Align serverless/stateful +::::{warning} +This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features. +:::: -% Scope notes: Needs to be updated to use Discover instead of logs explorer. The Logs app will not be available by 9.0. 
-% Use migrated content from existing pages that map to this page: +With **Logs Explorer**, you can quickly search and filter your log data, get information about the structure of log fields, and display your findings in a visualization. You can also customize and save your searches and place them on a dashboard. Instead of having to log into different servers, change directories, and view individual files, all your logs are available in a single view. -% - [ ] ./raw-migrated-files/observability-docs/observability/explore-logs.md -% - [ ] ./raw-migrated-files/docs-content/serverless/observability-discover-and-explore-logs.md \ No newline at end of file +To open **Logs Explorer**, find `Logs Explorer` in the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). + +:::{image} ../../../images/observability-log-explorer.png +:alt: Screen capture of the Logs Explorer +:class: screenshot +::: + + +## Required {{kib}} privileges [logs-explorer-privileges] + +Viewing data in Logs Explorer requires `read` privileges for **Discover**, **Index**, **Logs**, and **Integrations**. For more on assigning {{kib}} privileges, refer to the [{{kib}} privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md) docs. + + +## Find your logs [find-your-logs] + +By default, Logs Explorer shows all of your logs, according to the index patterns set in the **logs sources** advanced setting. To open **Advanced settings**, find **Stack Management** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). + +If you need to focus on logs from a specific integration, select the integration from the logs menu: + +:::{image} ../../../images/observability-log-menu.png +:alt: Screen capture of log menu +:class: screenshot +::: + +Once you have the logs you want to focus on displayed, you can drill down further to find the information you need. For more on filtering your data in Logs Explorer, refer to [Filter logs in Logs Explorer](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter-logs-explorer). + + +## Review log data in the documents table [review-log-data-in-the-documents-table] + +The documents table in Logs Explorer functions similarly to the table in Discover. You can add fields, order table columns, sort fields, and update the row height in the same way you would in Discover. + +Refer to the [Discover](../../../explore-analyze/discover.md) documentation for more information on updating the table. + + +### Actions column [actions-column] + +The actions column provides access to additional information about your logs. + +**Expand:** ![The icon to expand log details](../../../images/observability-expand-icon.png "") Open the log details to get an in-depth look at an individual log file. + +**Degraded document indicator:** ![The icon that shows ignored fields](../../../images/observability-pagesSelect-icon.png "") Shows if any of the document’s fields were ignored when it was indexed. Ignored fields could indicate malformed fields or other issues with your document. Use this information to investigate and determine why fields are being ignored. + +**Stacktrace indicator:** ![The icon that shows if a document contains stack traces](../../../images/observability-apmTrace-icon.png "") Shows if the document contains stack traces. This indicator makes it easier to navigate through your documents and know if they contain additional information in the form of stack traces. 
+ + +## View log details [view-log-details] + +Click the expand icon ![icon to open log details](../../../images/observability-expand-icon.png "") to get an in-depth look at an individual log file. + +These details provide immediate feedback and context for what’s happening and where it’s happening for each log. From here, you can quickly debug errors and investigate the services where errors have occurred. + +The following actions help you filter and focus on specific fields in the log details: + +* **Filter for value (![filter for value icon](../../../images/observability-plusInCircle.png "")):** Show logs that contain the specific field value. +* **Filter out value (![filter out value icon](../../../images/observability-minusInCircle.png "")):** Show logs that do **not** contain the specific field value. +* **Filter for field present (![filter for present icon](../../../images/observability-filter.png "")):** Show logs that contain the specific field. +* **Toggle column in table (![toggle column in table icon](../../../images/observability-listAdd.png "")):** Add or remove a column for the field to the main Logs Explorer table. + + +## View log data set details [view-log-data-set-details] + +Go to **Data Set Quality** to view more details about your data sets and monitor their overall quality. To open **Data Set Quality**, find **Stack Management** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). + +Refer to [*Data set quality*](../../../solutions/observability/data-set-quality-monitoring.md) for more information. \ No newline at end of file From 195095fc08f88b24e48c6178561b58e3b290376e Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Thu, 20 Feb 2025 15:06:56 -0600 Subject: [PATCH 06/23] add parse logs --- .../observability-parse-log-data.md | 869 ----------------- .../observability/logs-parse.md | 850 ----------------- raw-migrated-files/toc.yml | 2 - .../observability/logs/parse-route-logs.md | 871 +++++++++++++++++- 4 files changed, 836 insertions(+), 1756 deletions(-) delete mode 100644 raw-migrated-files/docs-content/serverless/observability-parse-log-data.md delete mode 100644 raw-migrated-files/observability-docs/observability/logs-parse.md diff --git a/raw-migrated-files/docs-content/serverless/observability-parse-log-data.md b/raw-migrated-files/docs-content/serverless/observability-parse-log-data.md deleted file mode 100644 index 216e5334ad..0000000000 --- a/raw-migrated-files/docs-content/serverless/observability-parse-log-data.md +++ /dev/null @@ -1,869 +0,0 @@ -# Parse and route logs [observability-parse-log-data] - -::::{admonition} Required role -:class: note - -The **Admin** role or higher is required to create ingest pipelines that parse and route logs. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles). - -:::: - - -If your log data is unstructured or semi-structured, you can parse it and break it into meaningful fields. You can use those fields to explore and analyze your data. For example, you can find logs within a specific timestamp range or filter logs by log level to focus on potential issues. - -After parsing, you can use the structured fields to further organize your logs by configuring a reroute processor to send specific logs to different target data streams. 
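
For example, a minimal reroute processor could look like the following sketch. The pipeline name `logs-example-reroute`, the `ERROR`-level condition, and the `example.errors` dataset are placeholder choices for illustration only, not values used elsewhere in this guide:

```console
PUT _ingest/pipeline/logs-example-reroute
{
  "description": "Sends ERROR-level logs to a dedicated data stream",
  "processors": [
    {
      "reroute": {
        "if": "ctx.log?.level == 'ERROR'",
        "dataset": "example.errors",
        "namespace": "default"
      }
    }
  ]
}
```

With a pipeline like this attached to the source data stream, matching documents would be rerouted to a data stream named `logs-example.errors-default`, while everything else continues to its original destination.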
- -Refer to the following sections for more on parsing and organizing your log data: - -* [Extract structured fields](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-extract-structured-fields): Extract structured fields like timestamps, log levels, or IP addresses to make querying and filtering your data easier. -* [Reroute log data to specific data streams](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-reroute-log-data-to-specific-data-streams): Route data from the generic data stream to a target data stream for more granular control over data retention, permissions, and processing. - - -## Extract structured fields [observability-parse-log-data-extract-structured-fields] - -Make your logs more useful by extracting structured fields from your unstructured log data. Extracting structured fields makes it easier to search, analyze, and filter your log data. - -Follow the steps below to see how the following unstructured log data is indexed by default: - -```txt -2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%. -``` - -Start by storing the document in the `logs-example-default` data stream: - -1. In your Observability project, go to **Developer Tools**. -2. In the **Console** tab, add the example log to your project using the following command: - - ```console - POST logs-example-default/_doc - { - "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." - } - ``` - -3. Then, you can retrieve the document with the following search: - - ```console - GET /logs-example-default/_search - ``` - - -The results should look like this: - -```json -{ - ... - "hits": { - ... - "hits": [ - { - "_index": ".ds-logs-example-default-2023.08.09-000001", - ... - "_source": { - "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.", - "@timestamp": "2023-08-09T17:19:27.73312243Z" - } - } - ] - } -} -``` - -Your project indexes the `message` field by default and adds a `@timestamp` field. Since there was no timestamp set, it’s set to `now`. At this point, you can search for phrases in the `message` field like `WARN` or `Disk usage exceeds`. For example, run the following command to search for the phrase `WARN` in the log’s `message` field: - -```console -GET logs-example-default/_search -{ - "query": { - "match": { - "message": { - "query": "WARN" - } - } - } -} -``` - -While you can search for phrases in the `message` field, you can’t use this field to filter log data. Your message, however, contains all of the following potential fields you can extract and use to filter and aggregate your log data: - -* **@timestamp** (`2023-08-08T13:45:12.123Z`): Extracting this field lets you sort logs by date and time. This is helpful when you want to view your logs in the order that they occurred or identify when issues happened. -* **log.level** (`WARN`): Extracting this field lets you filter logs by severity. This is helpful if you want to focus on high-severity WARN or ERROR-level logs, and reduce noise by filtering out low-severity INFO-level logs. -* **host.ip** (`192.168.1.101`): Extracting this field lets you filter logs by the host IP addresses. This is helpful if you want to focus on specific hosts that you’re having issues with or if you want to find disparities between hosts. -* **message** (`Disk usage exceeds 90%.`): You can search for phrases or words in the message field. 
- -::::{note} -These fields are part of the [Elastic Common Schema (ECS)](asciidocalypse://docs/ecs/docs/reference/ecs/index.md). The ECS defines a common set of fields that you can use across your project when storing data, including log and metric data. - -:::: - - - -### Extract the `@timestamp` field [observability-parse-log-data-extract-the-timestamp-field] - -When you added the log to your project in the previous section, the `@timestamp` field showed when the log was added. The timestamp showing when the log actually occurred was in the unstructured `message` field: - -```json - ... - "_source": { - "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.", <1> - "@timestamp": "2023-08-09T17:19:27.73312243Z" <2> - } - ... -``` - -1. The timestamp in the `message` field shows when the log occurred. -2. The timestamp in the `@timestamp` field shows when the log was added to your project. - - -When looking into issues, you want to filter for logs by when the issue occurred not when the log was added to your project. To do this, extract the timestamp from the unstructured `message` field to the structured `@timestamp` field by completing the following: - -1. [Use an ingest pipeline to extract the `@timestamp` field](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-use-an-ingest-pipeline-to-extract-the-timestamp-field) -2. [Test the pipeline with the simulate pipeline API](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-test-the-pipeline-with-the-simulate-pipeline-api) -3. [Configure a data stream with an index template](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-configure-a-data-stream-with-an-index-template) -4. [Create a data stream](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-create-a-data-stream) - - -#### Use an ingest pipeline to extract the `@timestamp` field [observability-parse-log-data-use-an-ingest-pipeline-to-extract-the-timestamp-field] - -Ingest pipelines consist of a series of processors that perform common transformations on incoming documents before they are indexed. To extract the `@timestamp` field from the example log, use an ingest pipeline with a [dissect processor](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/enrich-processor/dissect-processor.md). The dissect processor extracts structured fields from unstructured log messages based on a pattern you set. - -Your project can parse string timestamps that are in `yyyy-MM-dd'T'HH:mm:ss.SSSZ` and `yyyy-MM-dd` formats into date fields. Since the log example’s timestamp is in one of these formats, you don’t need additional processors. More complex or nonstandard timestamps require a [date processor](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/enrich-processor/date-processor.md) to parse the timestamp into a date field. - -Use the following command to extract the timestamp from the `message` field into the `@timestamp` field: - -```console -PUT _ingest/pipeline/logs-example-default -{ - "description": "Extracts the timestamp", - "processors": [ - { - "dissect": { - "field": "message", - "pattern": "%{@timestamp} %{message}" - } - } - ] -} -``` - -The previous command sets the following values for your ingest pipeline: - -* `_ingest/pipeline/logs-example-default`: The name of the pipeline,`logs-example-default`, needs to match the name of your data stream. 
You’ll set up your data stream in the next section. For more information, refer to the [data stream naming scheme](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/data-streams.md#data-streams-naming-scheme). -* `field`: The field you’re extracting data from, `message` in this case. -* `pattern`: The pattern of the elements in your log data. The `%{@timestamp} %{{message}}` pattern extracts the timestamp, `2023-08-08T13:45:12.123Z`, to the `@timestamp` field, while the rest of the message, `WARN 192.168.1.101 Disk usage exceeds 90%.`, stays in the `message` field. The dissect processor looks for the space as a separator defined by the pattern. - - -#### Test the pipeline with the simulate pipeline API [observability-parse-log-data-test-the-pipeline-with-the-simulate-pipeline-api] - -The [simulate pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-simulate) runs the ingest pipeline without storing any documents. This lets you verify your pipeline works using multiple documents. - -Run the following command to test your ingest pipeline with the simulate pipeline API. - -```console -POST _ingest/pipeline/logs-example-default/_simulate -{ - "docs": [ - { - "_source": { - "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." - } - } - ] -} -``` - -The results should show the `@timestamp` field extracted from the `message` field: - -```console -{ - "docs": [ - { - "doc": { - "_index": "_index", - "_id": "_id", - "_version": "-3", - "_source": { - "message": "WARN 192.168.1.101 Disk usage exceeds 90%.", - "@timestamp": "2023-08-08T13:45:12.123Z" - }, - ... - } - } - ] -} -``` - -::::{note} -Make sure you’ve created the ingest pipeline using the `PUT` command in the previous section before using the simulate pipeline API. - -:::: - - - -#### Configure a data stream with an index template [observability-parse-log-data-configure-a-data-stream-with-an-index-template] - -After creating your ingest pipeline, run the following command to create an index template to configure your data stream’s backing indices: - -```console -PUT _index_template/logs-example-default-template -{ - "index_patterns": [ "logs-example-*" ], - "data_stream": { }, - "priority": 500, - "template": { - "settings": { - "index.default_pipeline":"logs-example-default" - } - }, - "composed_of": [ - "logs@mappings", - "logs@settings", - "logs@custom", - "ecs@mappings" - ], - "ignore_missing_component_templates": ["logs@custom"] -} -``` - -The previous command sets the following values for your index template: - -* `index_pattern`: Needs to match your log data stream. Naming conventions for data streams are `--`. In this example, your logs data stream is named `logs-example-*`. Data that matches this pattern will go through your pipeline. -* `data_stream`: Enables data streams. -* `priority`: Sets the priority of your index templates. Index templates with a higher priority take precedence. If a data stream matches multiple index templates, your project uses the template with the higher priority. Built-in templates have a priority of `200`, so use a priority higher than `200` for custom templates. -* `index.default_pipeline`: The name of your ingest pipeline. `logs-example-default` in this case. -* `composed_of`: Here you can set component templates. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases. Elastic has several built-in templates to help when ingesting your log data. 
- -The example index template above sets the following component templates: - -* `logs@mappings`: general mappings for log data streams that include disabling automatic date detection from `string` fields and specifying mappings for [`data_stream` ECS fields](asciidocalypse://docs/ecs/docs/reference/ecs/ecs-data_stream.md). -* `logs@settings`: general settings for log data streams including the following: - - * The default lifecycle policy that rolls over when the primary shard reaches 50 GB or after 30 days. - * The default pipeline uses the ingest timestamp if there is no specified `@timestamp` and places a hook for the `logs@custom` pipeline. If a `logs@custom` pipeline is installed, it’s applied to logs ingested into this data stream. - * Sets the [`ignore_malformed`](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/mapping-reference/ignore-malformed.md) flag to `true`. When ingesting a large batch of log data, a single malformed field like an IP address can cause the entire batch to fail. When set to true, malformed fields with a mapping type that supports this flag are still processed. - * `logs@custom`: a predefined component template that is not installed by default. Use this name to install a custom component template to override or extend any of the default mappings or settings. - * `ecs@mappings`: dynamic templates that automatically ensure your data stream mappings comply with the [Elastic Common Schema (ECS)](asciidocalypse://docs/ecs/docs/reference/ecs/index.md). - - - -#### Create a data stream [observability-parse-log-data-create-a-data-stream] - -Create your data stream using the [data stream naming scheme](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/data-streams.md#data-streams-naming-scheme). Name your data stream to match the name of your ingest pipeline, `logs-example-default` in this case. Post the example log to your data stream with this command: - -```console -POST logs-example-default/_doc -{ - "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." -} -``` - -View your documents using this command: - -```console -GET /logs-example-default/_search -``` - -You should see the pipeline has extracted the `@timestamp` field: - -```json -{ - ... - { - ... - "hits": { - ... - "hits": [ - { - "_index": ".ds-logs-example-default-2023.08.09-000001", - "_id": "RsWy3IkB8yCtA5VGOKLf", - "_score": 1, - "_source": { - "message": "WARN 192.168.1.101 Disk usage exceeds 90%.", - "@timestamp": "2023-08-08T13:45:12.123Z" <1> - } - } - ] - } - } -} -``` - -1. The extracted `@timestamp` field. - - -You can now use the `@timestamp` field to sort your logs by the date and time they happened. - - -#### Troubleshoot the `@timestamp` field [observability-parse-log-data-troubleshoot-the-timestamp-field] - -Check the following common issues and solutions with timestamps: - -* **Timestamp failure:** If your data has inconsistent date formats, set `ignore_failure` to `true` for your date processor. This processes logs with correctly formatted dates and ignores those with issues. -* **Incorrect timezone:** Set your timezone using the `timezone` option on the [date processor](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/enrich-processor/date-processor.md). -* **Incorrect timestamp format:** Your timestamp can be a Java time pattern or one of the following formats: ISO8601, UNIX, UNIX_MS, or TAI64N. 
For more information on timestamp formats, refer to the [mapping date format](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/mapping-reference/mapping-date-format.md). - - -### Extract the `log.level` field [observability-parse-log-data-extract-the-loglevel-field] - -Extracting the `log.level` field lets you filter by severity and focus on critical issues. This section shows you how to extract the `log.level` field from this example log: - -```txt -2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%. -``` - -To extract and use the `log.level` field: - -1. [Add the `log.level` field to the dissect processor pattern in your ingest pipeline.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-add-loglevel-to-your-ingest-pipeline) -2. [Test the pipeline with the simulate API.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-test-the-pipeline-with-the-simulate-api) -3. [Query your logs based on the `log.level` field.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-query-logs-based-on-loglevel) - - -#### Add `log.level` to your ingest pipeline [observability-parse-log-data-add-loglevel-to-your-ingest-pipeline] - -Add the `%{log.level}` option to the dissect processor pattern in the ingest pipeline you created in the [Extract the `@timestamp` field](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-use-an-ingest-pipeline-to-extract-the-timestamp-field) section with this command: - -```console -PUT _ingest/pipeline/logs-example-default -{ - "description": "Extracts the timestamp and log level", - "processors": [ - { - "dissect": { - "field": "message", - "pattern": "%{@timestamp} %{log.level} %{message}" - } - } - ] -} -``` - -Now your pipeline will extract these fields: - -* The `@timestamp` field: `2023-08-08T13:45:12.123Z` -* The `log.level` field: `WARN` -* The `message` field: `192.168.1.101 Disk usage exceeds 90%.` - -In addition to setting an ingest pipeline, you need to set an index template. Use the index template created in the [Extract the `@timestamp` field](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-configure-a-data-stream-with-an-index-template) section. - - -#### Test the pipeline with the simulate API [observability-parse-log-data-test-the-pipeline-with-the-simulate-api] - -Test that your ingest pipeline works as expected with the [simulate pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-simulate): - -```console -POST _ingest/pipeline/logs-example-default/_simulate -{ - "docs": [ - { - "_source": { - "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." - } - } - ] -} -``` - -The results should show the `@timestamp` and the `log.level` fields extracted from the `message` field: - -```json -{ - "docs": [ - { - "doc": { - "_index": "_index", - "_id": "_id", - "_version": "-3", - "_source": { - "message": "192.168.1.101 Disk usage exceeds 90%.", - "log": { - "level": "WARN" - }, - "@timestamp": "2023-8-08T13:45:12.123Z", - }, - ... - } - } - ] -} -``` - - -#### Query logs based on `log.level` [observability-parse-log-data-query-logs-based-on-loglevel] - -Once you’ve extracted the `log.level` field, you can query for high-severity logs like `WARN` and `ERROR`, which may need immediate attention, and filter out less critical `INFO` and `DEBUG` logs. 
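Because the `ecs@mappings` component template maps `log.level` as a `keyword` field, you can also aggregate on it, for example to count how many logs of each severity a data stream has received. The following is a minimal sketch that assumes the same `logs-example-default` data stream used throughout this guide; the `levels` aggregation name is arbitrary:

```console
GET logs-example-default/_search
{
  "size": 0,
  "aggs": {
    "levels": {
      "terms": { "field": "log.level" }
    }
  }
}
```

The worked example below sticks to a plain `terms` query for filtering.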
- -Let’s say you have the following logs with varying severities: - -```txt -2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%. -2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed. -2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue. -2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture. -``` - -Add them to your data stream using this command: - -```console -POST logs-example-default/_bulk -{ "create": {} } -{ "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." } -{ "create": {} } -{ "message": "2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed." } -{ "create": {} } -{ "message": "2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue." } -{ "create": {} } -{ "message": "2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture." } -``` - -Then, query for documents with a log level of `WARN` or `ERROR` with this command: - -```console -GET logs-example-default/_search -{ - "query": { - "terms": { - "log.level": ["WARN", "ERROR"] - } - } -} -``` - -The results should show only the high-severity logs: - -```json -{ -... - }, - "hits": { - ... - "hits": [ - { - "_index": ".ds-logs-example-default-2023.08.14-000001", - "_id": "3TcZ-4kB3FafvEVY4yKx", - "_score": 1, - "_source": { - "message": "192.168.1.101 Disk usage exceeds 90%.", - "log": { - "level": "WARN" - }, - "@timestamp": "2023-08-08T13:45:12.123Z" - } - }, - { - "_index": ".ds-logs-example-default-2023.08.14-000001", - "_id": "3jcZ-4kB3FafvEVY4yKx", - "_score": 1, - "_source": { - "message": "192.168.1.103 Database connection failed.", - "log": { - "level": "ERROR" - }, - "@timestamp": "2023-08-08T13:45:14.003Z" - } - } - ] - } -} -``` - - -### Extract the `host.ip` field [observability-parse-log-data-extract-the-hostip-field] - -Extracting the `host.ip` field lets you filter logs by host IP addresses allowing you to focus on specific hosts that you’re having issues with or find disparities between hosts. - -The `host.ip` field is part of the [Elastic Common Schema (ECS)](asciidocalypse://docs/ecs/docs/reference/ecs/index.md). Through the ECS, the `host.ip` field is mapped as an [`ip` field type](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/mapping-reference/ip.md). `ip` field types allow range queries so you can find logs with IP addresses in a specific range. You can also query `ip` field types using Classless Inter-Domain Routing (CIDR) notation to find logs from a particular network or subnet. - -This section shows you how to extract the `host.ip` field from the following example logs and query based on the extracted fields: - -```txt -2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%. -2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed. -2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue. -2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture. -``` - -To extract and use the `host.ip` field: - -1. [Add the `host.ip` field to your dissect processor in your ingest pipeline.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-add-hostip-to-your-ingest-pipeline) -2. [Test the pipeline with the simulate API.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-test-the-pipeline-with-the-simulate-api) -3. 
[Query your logs based on the `host.ip` field.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-query-logs-based-on-hostip) - - -#### Add `host.ip` to your ingest pipeline [observability-parse-log-data-add-hostip-to-your-ingest-pipeline] - -Add the `%{host.ip}` option to the dissect processor pattern in the ingest pipeline you created in the [Extract the `@timestamp` field](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-use-an-ingest-pipeline-to-extract-the-timestamp-field) section: - -```console -PUT _ingest/pipeline/logs-example-default -{ - "description": "Extracts the timestamp log level and host ip", - "processors": [ - { - "dissect": { - "field": "message", - "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}" - } - } - ] -} -``` - -Your pipeline will extract these fields: - -* The `@timestamp` field: `2023-08-08T13:45:12.123Z` -* The `log.level` field: `WARN` -* The `host.ip` field: `192.168.1.101` -* The `message` field: `Disk usage exceeds 90%.` - -In addition to setting an ingest pipeline, you need to set an index template. Use the index template created in the [Extract the `@timestamp` field](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-configure-a-data-stream-with-an-index-template) section. - - -#### Test the pipeline with the simulate API [observability-parse-log-data-test-the-pipeline-with-the-simulate-api-1] - -Test that your ingest pipeline works as expected with the [simulate pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-simulate): - -```console -POST _ingest/pipeline/logs-example-default/_simulate -{ - "docs": [ - { - "_source": { - "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." - } - } - ] -} -``` - -The results should show the `host.ip`, `@timestamp`, and `log.level` fields extracted from the `message` field: - -```json -{ - "docs": [ - { - "doc": { - ... - "_source": { - "host": { - "ip": "192.168.1.101" - }, - "@timestamp": "2023-08-08T13:45:12.123Z", - "message": "Disk usage exceeds 90%.", - "log": { - "level": "WARN" - } - }, - ... - } - } - ] -} -``` - - -#### Query logs based on `host.ip` [observability-parse-log-data-query-logs-based-on-hostip] - -You can query your logs based on the `host.ip` field in different ways, including using CIDR notation and range queries. - -Before querying your logs, add them to your data stream using this command: - -```console -POST logs-example-default/_bulk -{ "create": {} } -{ "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." } -{ "create": {} } -{ "message": "2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed." } -{ "create": {} } -{ "message": "2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue." } -{ "create": {} } -{ "message": "2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture." } -``` - - -##### CIDR notation [observability-parse-log-data-cidr-notation] - -You can use [CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation) to query your log data using a block of IP addresses that fall within a certain network segment. CIDR notations uses the format of `[IP address]/[prefix length]`. The following command queries IP addresses in the `192.168.1.0/24` subnet meaning IP addresses from `192.168.1.0` to `192.168.1.255`. 
- -```console -GET logs-example-default/_search -{ - "query": { - "term": { - "host.ip": "192.168.1.0/24" - } - } -} -``` - -Because all of the example logs are in this range, you’ll get the following results: - -```json -{ - ... - }, - "hits": { - ... - { - "_index": ".ds-logs-example-default-2023.08.16-000001", - "_id": "ak4oAIoBl7fe5ItIixuB", - "_score": 1, - "_source": { - "host": { - "ip": "192.168.1.101" - }, - "@timestamp": "2023-08-08T13:45:12.123Z", - "message": "Disk usage exceeds 90%.", - "log": { - "level": "WARN" - } - } - }, - { - "_index": ".ds-logs-example-default-2023.08.16-000001", - "_id": "a04oAIoBl7fe5ItIixuC", - "_score": 1, - "_source": { - "host": { - "ip": "192.168.1.103" - }, - "@timestamp": "2023-08-08T13:45:14.003Z", - "message": "Database connection failed.", - "log": { - "level": "ERROR" - } - } - }, - { - "_index": ".ds-logs-example-default-2023.08.16-000001", - "_id": "bE4oAIoBl7fe5ItIixuC", - "_score": 1, - "_source": { - "host": { - "ip": "192.168.1.104" - }, - "@timestamp": "2023-08-08T13:45:15.004Z", - "message": "Debugging connection issue.", - "log": { - "level": "DEBUG" - } - } - }, - { - "_index": ".ds-logs-example-default-2023.08.16-000001", - "_id": "bU4oAIoBl7fe5ItIixuC", - "_score": 1, - "_source": { - "host": { - "ip": "192.168.1.102" - }, - "@timestamp": "2023-08-08T13:45:16.005Z", - "message": "User changed profile picture.", - "log": { - "level": "INFO" - } - } - } - ] - } -} -``` - - -##### Range queries [observability-parse-log-data-range-queries] - -Use [range queries](asciidocalypse://docs/elasticsearch/docs/reference/query-languages/query-dsl-range-query.md) to query logs in a specific range. - -The following command searches for IP addresses greater than or equal to `192.168.1.100` and less than or equal to `192.168.1.102`. - -```console -GET logs-example-default/_search -{ - "query": { - "range": { - "host.ip": { - "gte": "192.168.1.100", <1> - "lte": "192.168.1.102" <2> - } - } - } -} -``` - -1. Greater than or equal to `192.168.1.100`. -2. Less than or equal to `192.168.1.102`. - - -You’ll get the following results only showing logs in the range you’ve set: - -```json -{ - ... - }, - "hits": { - ... - { - "_index": ".ds-logs-example-default-2023.08.16-000001", - "_id": "ak4oAIoBl7fe5ItIixuB", - "_score": 1, - "_source": { - "host": { - "ip": "192.168.1.101" - }, - "@timestamp": "2023-08-08T13:45:12.123Z", - "message": "Disk usage exceeds 90%.", - "log": { - "level": "WARN" - } - } - }, - { - "_index": ".ds-logs-example-default-2023.08.16-000001", - "_id": "bU4oAIoBl7fe5ItIixuC", - "_score": 1, - "_source": { - "host": { - "ip": "192.168.1.102" - }, - "@timestamp": "2023-08-08T13:45:16.005Z", - "message": "User changed profile picture.", - "log": { - "level": "INFO" - } - } - } - ] - } -} -``` - - -## Reroute log data to specific data streams [observability-parse-log-data-reroute-log-data-to-specific-data-streams] - -By default, an ingest pipeline sends your log data to a single data stream. To simplify log data management, use a [reroute processor](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/enrich-processor/reroute-processor.md) to route data from the generic data stream to a target data stream. For example, you might want to send high-severity logs to a specific data stream to help with categorization. 
- -This section shows you how to use a reroute processor to send the high-severity logs (`WARN` or `ERROR`) from the following example logs to a specific data stream and keep the regular logs (`DEBUG` and `INFO`) in the default data stream: - -```txt -2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%. -2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed. -2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue. -2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture. -``` - -::::{note} -When routing data to different data streams, we recommend picking a field with a limited number of distinct values to prevent an excessive increase in the number of data streams. For more details, refer to the [Size your shards](../../../deploy-manage/production-guidance/optimize-performance/size-shards.md) documentation. - -:::: - - -To use a reroute processor: - -1. [Add a reroute processor to your ingest pipeline.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-add-a-reroute-processor-to-the-ingest-pipeline) -2. [Add the example logs to your data stream.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-add-logs-to-a-data-stream) -3. [Query your logs and verify the high-severity logs were routed to the new data stream.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-verify-the-reroute-processor-worked) - - -### Add a reroute processor to the ingest pipeline [observability-parse-log-data-add-a-reroute-processor-to-the-ingest-pipeline] - -Add a reroute processor to your ingest pipeline with the following command: - -```console -PUT _ingest/pipeline/logs-example-default -{ - "description": "Extracts fields and reroutes WARN", - "processors": [ - { - "dissect": { - "field": "message", - "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}" - } - }, - { - "reroute": { - "tag": "high_severity_logs", - "if" : "ctx.log?.level == 'WARN' || ctx.log?.level == 'ERROR'", - "dataset": "critical" - } - } - ] -} -``` - -The previous command sets the following values for your reroute processor: - -* `tag`: Identifier for the processor that you can use for debugging and metrics. In the example, the tag is set to `high_severity_logs`. -* `if`: Conditionally runs the processor. In the example, `"ctx.log?.level == 'WARN' || ctx.log?.level == 'ERROR'",` means the processor runs when the `log.level` field is `WARN` or `ERROR`. -* `dataset`: the data stream dataset to route your document to if the previous condition is `true`. In the example, logs with a `log.level` of `WARN` or `ERROR` are routed to the `logs-critical-default` data stream. - -In addition to setting an ingest pipeline, you need to set an index template. Use the index template created in the [Extract the `@timestamp` field](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-configure-a-data-stream-with-an-index-template) section. - - -### Add logs to a data stream [observability-parse-log-data-add-logs-to-a-data-stream] - -Add the example logs to your data stream with this command: - -```console -POST logs-example-default/_bulk -{ "create": {} } -{ "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." } -{ "create": {} } -{ "message": "2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed." 
} -{ "create": {} } -{ "message": "2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue." } -{ "create": {} } -{ "message": "2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture." } -``` - - -### Verify the reroute processor worked [observability-parse-log-data-verify-the-reroute-processor-worked] - -The reroute processor should route any logs with a `log.level` of `WARN` or `ERROR` to the `logs-critical-default` data stream. Query the data stream using the following command to verify the log data was routed as intended: - -```console -GET logs-critical-default/_search -``` - -Your should see similar results to the following showing that the high-severity logs are now in the `critical` dataset: - -```json -{ - ... - "hits": { - ... - "hits": [ - ... - "_source": { - "host": { - "ip": "192.168.1.101" - }, - "@timestamp": "2023-08-08T13:45:12.123Z", - "message": "Disk usage exceeds 90%.", - "log": { - "level": "WARN" - }, - "data_stream": { - "namespace": "default", - "type": "logs", - "dataset": "critical" - }, - { - ... - "_source": { - "host": { - "ip": "192.168.1.103" - }, - "@timestamp": "2023-08-08T13:45:14.003Z", - "message": "Database connection failed.", - "log": { - "level": "ERROR" - }, - "data_stream": { - "namespace": "default", - "type": "logs", - "dataset": "critical" - } - } - } - ] - } -} -``` diff --git a/raw-migrated-files/observability-docs/observability/logs-parse.md b/raw-migrated-files/observability-docs/observability/logs-parse.md deleted file mode 100644 index e5fa976db7..0000000000 --- a/raw-migrated-files/observability-docs/observability/logs-parse.md +++ /dev/null @@ -1,850 +0,0 @@ -# Parse and organize logs [logs-parse] - -If your log data is unstructured or semi-structured, you can parse it and break it into meaningful fields. You can use those fields to explore and analyze your data. For example, you can find logs within a specific timestamp range or filter logs by log level to focus on potential issues. - -After parsing, you can use the structured fields to further organize your logs by configuring a reroute processor to send specific logs to different target data streams. - -Refer to the following sections for more on parsing and organizing your log data: - -* [Extract structured fields](../../../solutions/observability/logs/parse-route-logs.md#logs-stream-parse): Extract structured fields like timestamps, log levels, or IP addresses to make querying and filtering your data easier. -* [Reroute log data to specific data streams](../../../solutions/observability/logs/parse-route-logs.md#logs-stream-reroute): Route data from the generic data stream to a target data stream for more granular control over data retention, permissions, and processing. - - -## Extract structured fields [logs-stream-parse] - -Make your logs more useful by extracting structured fields from your unstructured log data. Extracting structured fields makes it easier to search, analyze, and filter your log data. - -Follow the steps below to see how the following unstructured log data is indexed by default: - -```txt -2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%. -``` - -Start by storing the document in the `logs-example-default` data stream: - -1. To open **Console**, find `Dev Tools` in the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). -2. 
In the **Console** tab, add the example log to {{es}} using the following command: - - ```console - POST logs-example-default/_doc - { - "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." - } - ``` - -3. Then, you can retrieve the document with the following search: - - ```console - GET /logs-example-default/_search - ``` - - -The results should look like this: - -```JSON -{ - ... - "hits": { - ... - "hits": [ - { - "_index": ".ds-logs-example-default-2023.08.09-000001", - ... - "_source": { - "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.", - "@timestamp": "2023-08-09T17:19:27.73312243Z" - } - } - ] - } -} -``` - -{{es}} indexes the `message` field by default and adds a `@timestamp` field. Since there was no timestamp set, it’s set to `now`. At this point, you can search for phrases in the `message` field like `WARN` or `Disk usage exceeds`. For example, use the following command to search for the phrase `WARN` in the log’s `message` field: - -```console -GET logs-example-default/_search -{ - "query": { - "match": { - "message": { - "query": "WARN" - } - } - } -} -``` - -While you can search for phrases in the `message` field, you can’t use this field to filter log data. Your message, however, contains all of the following potential fields you can extract and use to filter and aggregate your log data: - -* **@timestamp** (`2023-08-08T13:45:12.123Z`): Extracting this field lets you sort logs by date and time. This is helpful when you want to view your logs in the order that they occurred or identify when issues happened. -* **log.level** (`WARN`): Extracting this field lets you filter logs by severity. This is helpful if you want to focus on high-severity WARN or ERROR-level logs, and reduce noise by filtering out low-severity INFO-level logs. -* **host.ip** (`192.168.1.101`): Extracting this field lets you filter logs by the host IP addresses. This is helpful if you want to focus on specific hosts that you’re having issues with or if you want to find disparities between hosts. -* **message** (`Disk usage exceeds 90%.`): You can search for phrases or words in the message field. - -::::{note} -These fields are part of the [Elastic Common Schema (ECS)](asciidocalypse://docs/ecs/docs/reference/ecs/index.md). The ECS defines a common set of fields that you can use across Elasticsearch when storing data, including log and metric data. -:::: - - - -### Extract the `@timestamp` field [logs-stream-extract-timestamp] - -When you added the log to {{es}} in the previous section, the `@timestamp` field showed when the log was added. The timestamp showing when the log actually occurred was in the unstructured `message` field: - -```JSON - ... - "_source": { - "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.",<1> - "@timestamp": "2023-08-09T17:19:27.73312243Z"<2> - } - ... -``` - -1. The timestamp in the `message` field shows when the log occurred. -2. The timestamp in the `@timestamp` field shows when the log was added to {{es}}. - - -When looking into issues, you want to filter for logs by when the issue occurred not when the log was added to your project. To do this, extract the timestamp from the unstructured `message` field to the structured `@timestamp` field by completing the following: - -1. [Use an ingest pipeline to extract the `@timestamp` field](../../../solutions/observability/logs/parse-route-logs.md#logs-stream-ingest-pipeline) -2. 
[Test the pipeline with the simulate pipeline API](../../../solutions/observability/logs/parse-route-logs.md#logs-stream-simulate-api) -3. [Configure a data stream with an index template](../../../solutions/observability/logs/parse-route-logs.md#logs-stream-index-template) -4. [Create a data stream](../../../solutions/observability/logs/parse-route-logs.md#logs-stream-create-data-stream) - - -#### Use an ingest pipeline to extract the `@timestamp` field [logs-stream-ingest-pipeline] - -Ingest pipelines consist of a series of processors that perform common transformations on incoming documents before they are indexed. To extract the `@timestamp` field from the example log, use an ingest pipeline with a dissect processor. The [dissect processor](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/enrich-processor/dissect-processor.md) extracts structured fields from unstructured log messages based on a pattern you set. - -{{es}} can parse string timestamps that are in `yyyy-MM-dd'T'HH:mm:ss.SSSZ` and `yyyy-MM-dd` formats into date fields. Since the log example’s timestamp is in one of these formats, you don’t need additional processors. More complex or nonstandard timestamps require a [date processor](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/enrich-processor/date-processor.md) to parse the timestamp into a date field. - -Use the following command to extract the timestamp from the `message` field into the `@timestamp` field: - -```console -PUT _ingest/pipeline/logs-example-default<1> -{ - "description": "Extracts the timestamp", - "processors": [ - { - "dissect": { - "field": "message",<2> - "pattern": "%{@timestamp} %{message}"<3> - } - } - ] -} -``` - -1. The name of the pipeline,`logs-example-default`, needs to match the name of your data stream. You’ll set up your data stream in the next section. For more information, refer to the [data stream naming scheme](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/data-streams.md#data-streams-naming-scheme). -2. The field you’re extracting data from, `message` in this case. -3. The pattern of the elements in your log data. The `%{@timestamp} %{{message}}` pattern extracts the timestamp, `2023-08-08T13:45:12.123Z`, to the `@timestamp` field, while the rest of the message, `WARN 192.168.1.101 Disk usage exceeds 90%.`, stays in the `message` field. The dissect processor looks for the space as a separator defined by the pattern. - - - -#### Test the pipeline with the simulate pipeline API [logs-stream-simulate-api] - -The [simulate pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-simulate) runs the ingest pipeline without storing any documents. This lets you verify your pipeline works using multiple documents. Run the following command to test your ingest pipeline with the simulate pipeline API. - -```console -POST _ingest/pipeline/logs-example-default/_simulate -{ - "docs": [ - { - "_source": { - "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." - } - } - ] -} -``` - -The results should show the `@timestamp` field extracted from the `message` field: - -```console -{ - "docs": [ - { - "doc": { - "_index": "_index", - "_id": "_id", - "_version": "-3", - "_source": { - "message": "WARN 192.168.1.101 Disk usage exceeds 90%.", - "@timestamp": "2023-08-08T13:45:12.123Z" - }, - ... 
- } - } - ] -} -``` - -::::{note} -Make sure you’ve created the ingest pipeline using the `PUT` command in the previous section before using the simulate pipeline API. -:::: - - - -#### Configure a data stream with an index template [logs-stream-index-template] - -After creating your ingest pipeline, run the following command to create an index template to configure your data stream’s backing indices: - -```console -PUT _index_template/logs-example-default-template -{ - "index_patterns": [ "logs-example-*" ],<1> - "data_stream": { },<2> - "priority": 500,<3> - "template": { - "settings": { - "index.default_pipeline":"logs-example-default"<4> - } - }, - "composed_of": [<5> - "logs@mappings", - "logs@settings", - "logs@custom", - "ecs@mappings" - ], - "ignore_missing_component_templates": ["logs@custom"] -} -``` - -1. `index_pattern`: Needs to match your log data stream. Naming conventions for data streams are `--`. In this example, your logs data stream is named `logs-example-*`. Data that matches this pattern will go through your pipeline. -2. `data_stream`: Enables data streams. -3. `priority`: Sets the priority of you Index Template. Index templates with higher priority take precedence over lower priority. If a data stream matches multiple index templates, {{es}} uses the template with the higher priority. Built-in templates have a priority of `200`, so use a priority higher than `200` for custom templates. -4. `index.default_pipeline`: The name of your ingest pipeline. `logs-example-default` in this case. -5. `composed_of`: Here you can set component templates. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases. Elastic has several built-in templates to help when ingesting your log data. - - -The example index template above sets the following component templates: - -* `logs@mappings`: general mappings for log data streams that include disabling automatic date detection from `string` fields and specifying mappings for [`data_stream` ECS fields](asciidocalypse://docs/ecs/docs/reference/ecs/ecs-data_stream.md). -* `logs@settings`: general settings for log data streams including the following: - - * The default lifecycle policy that rolls over when the primary shard reaches 50 GB or after 30 days. - * The default pipeline uses the ingest timestamp if there is no specified `@timestamp` and places a hook for the `logs@custom` pipeline. If a `logs@custom` pipeline is installed, it’s applied to logs ingested into this data stream. - * Sets the [`ignore_malformed`](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/mapping-reference/ignore-malformed.md) flag to `true`. When ingesting a large batch of log data, a single malformed field like an IP address can cause the entire batch to fail. When set to true, malformed fields with a mapping type that supports this flag are still processed. - -* `logs@custom`: a predefined component template that is not installed by default. Use this name to install a custom component template to override or extend any of the default mappings or settings. -* `ecs@mappings`: dynamic templates that automatically ensure your data stream mappings comply with the [Elastic Common Schema (ECS)](asciidocalypse://docs/ecs/docs/reference/ecs/index.md). 
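If you’re curious what these built-in building blocks contain on your cluster, you can retrieve a component template by name. This optional check is a sketch and assumes the built-in `logs@settings` component template is present, which depends on your {{es}} version:

```console
GET _component_template/logs@settings
```

The response shows the settings the index template inherits, such as the default pipeline hook and the `ignore_malformed` flag described above.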
- - -#### Create a data stream [logs-stream-create-data-stream] - -Create your data stream using the [data stream naming scheme](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/data-streams.md#data-streams-naming-scheme). Name your data stream to match the name of your ingest pipeline, `logs-example-default` in this case. Post the example log to your data stream with this command: - -```console -POST logs-example-default/_doc -{ - "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." -} -``` - -View your documents using this command: - -```console -GET /logs-example-default/_search -``` - -You should see the pipeline has extracted the `@timestamp` field: - -```JSON -{ -... -{ - ... - "hits": { - ... - "hits": [ - { - "_index": ".ds-logs-example-default-2023.08.09-000001", - "_id": "RsWy3IkB8yCtA5VGOKLf", - "_score": 1, - "_source": { - "message": "WARN 192.168.1.101 Disk usage exceeds 90%.", - "@timestamp": "2023-08-08T13:45:12.123Z"<1> - } - } - ] - } -} -``` - -1. The extracted `@timestamp` field. - - -You can now use the `@timestamp` field to sort your logs by the date and time they happened. - - -#### Troubleshoot the `@timestamp` field [logs-stream-timestamp-troubleshooting] - -Check the following common issues and solutions with timestamps: - -* **Timestamp failure**: If your data has inconsistent date formats, set `ignore_failure` to `true` for your date processor. This processes logs with correctly formatted dates and ignores those with issues. -* **Incorrect timezone**: Set your timezone using the `timezone` option on the [date processor](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/enrich-processor/date-processor.md). -* **Incorrect timestamp format**: Your timestamp can be a Java time pattern or one of the following formats: ISO8601, UNIX, UNIX_MS, or TAI64N. For more information on timestamp formats, refer to the [mapping date format](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/mapping-reference/mapping-date-format.md). - - -### Extract the `log.level` field [logs-stream-extract-log-level] - -Extracting the `log.level` field lets you filter by severity and focus on critical issues. This section shows you how to extract the `log.level` field from this example log: - -```txt -2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%. -``` - -To extract and use the `log.level` field: - -1. [Add the `log.level` field to the dissect processor pattern in your ingest pipeline.](../../../solutions/observability/logs/parse-route-logs.md#logs-stream-log-level-pipeline) -2. [Test the pipeline with the simulate API.](../../../solutions/observability/logs/parse-route-logs.md#logs-stream-log-level-simulate) -3. 
[Query your logs based on the `log.level` field.](../../../solutions/observability/logs/parse-route-logs.md#logs-stream-log-level-query) - - -#### Add `log.level` to your ingest pipeline [logs-stream-log-level-pipeline] - -Add the `%{log.level}` option to the dissect processor pattern in the ingest pipeline you created in the [Extract the `@timestamp` field](../../../solutions/observability/logs/parse-route-logs.md#logs-stream-ingest-pipeline) section with this command: - -```console -PUT _ingest/pipeline/logs-example-default -{ - "description": "Extracts the timestamp and log level", - "processors": [ - { - "dissect": { - "field": "message", - "pattern": "%{@timestamp} %{log.level} %{message}" - } - } - ] -} -``` - -Now your pipeline will extract these fields: - -* The `@timestamp` field: `2023-08-08T13:45:12.123Z` -* The `log.level` field: `WARN` -* The `message` field: `192.168.1.101 Disk usage exceeds 90%.` - -In addition to setting an ingest pipeline, you need to set an index template. You can use the index template created in the [Extract the `@timestamp` field](../../../solutions/observability/logs/parse-route-logs.md#logs-stream-index-template) section. - - -#### Test the pipeline with the simulate API [logs-stream-log-level-simulate] - -Test that your ingest pipeline works as expected with the [simulate pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-simulate): - -```console -POST _ingest/pipeline/logs-example-default/_simulate -{ - "docs": [ - { - "_source": { - "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." - } - } - ] -} -``` - -The results should show the `@timestamp` and the `log.level` fields extracted from the `message` field: - -```JSON -{ - "docs": [ - { - "doc": { - "_index": "_index", - "_id": "_id", - "_version": "-3", - "_source": { - "message": "192.168.1.101 Disk usage exceeds 90%.", - "log": { - "level": "WARN" - }, - "@timestamp": "2023-8-08T13:45:12.123Z", - }, - ... - } - } - ] -} -``` - - -#### Query logs based on `log.level` [logs-stream-log-level-query] - -Once you’ve extracted the `log.level` field, you can query for high-severity logs like `WARN` and `ERROR`, which may need immediate attention, and filter out less critical `INFO` and `DEBUG` logs. - -Let’s say you have the following logs with varying severities: - -```txt -2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%. -2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed. -2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue. -2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture. -``` - -Add them to your data stream using this command: - -```console -POST logs-example-default/_bulk -{ "create": {} } -{ "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." } -{ "create": {} } -{ "message": "2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed." } -{ "create": {} } -{ "message": "2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue." } -{ "create": {} } -{ "message": "2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture." } -``` - -Then, query for documents with a log level of `WARN` or `ERROR` with this command: - -```console -GET logs-example-default/_search -{ - "query": { - "terms": { - "log.level": ["WARN", "ERROR"] - } - } -} -``` - -The results should show only the high-severity logs: - -```JSON -{ -... - }, - "hits": { - ... 
- "hits": [ - { - "_index": ".ds-logs-example-default-2023.08.14-000001", - "_id": "3TcZ-4kB3FafvEVY4yKx", - "_score": 1, - "_source": { - "message": "192.168.1.101 Disk usage exceeds 90%.", - "log": { - "level": "WARN" - }, - "@timestamp": "2023-08-08T13:45:12.123Z" - } - }, - { - "_index": ".ds-logs-example-default-2023.08.14-000001", - "_id": "3jcZ-4kB3FafvEVY4yKx", - "_score": 1, - "_source": { - "message": "192.168.1.103 Database connection failed.", - "log": { - "level": "ERROR" - }, - "@timestamp": "2023-08-08T13:45:14.003Z" - } - } - ] - } -} -``` - - -### Extract the `host.ip` field [logs-stream-extract-host-ip] - -Extracting the `host.ip` field lets you filter logs by host IP addresses allowing you to focus on specific hosts that you’re having issues with or find disparities between hosts. - -The `host.ip` field is part of the [Elastic Common Schema (ECS)](asciidocalypse://docs/ecs/docs/reference/ecs/index.md). Through the ECS, the `host.ip` field is mapped as an [`ip` field type](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/mapping-reference/ip.md). `ip` field types allow range queries so you can find logs with IP addresses in a specific range. You can also query `ip` field types using Classless Inter-Domain Routing (CIDR) notation to find logs from a particular network or subnet. - -This section shows you how to extract the `host.ip` field from the following example logs and query based on the extracted fields: - -```txt -2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%. -2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed. -2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue. -2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture. -``` - -To extract and use the `host.ip` field: - -1. [Add the `host.ip` field to your dissect processor in your ingest pipeline.](../../../solutions/observability/logs/parse-route-logs.md#logs-stream-host-ip-pipeline) -2. [Test the pipeline with the simulate API.](../../../solutions/observability/logs/parse-route-logs.md#logs-stream-host-ip-simulate) -3. [Query your logs based on the `host.ip` field.](../../../solutions/observability/logs/parse-route-logs.md#logs-stream-host-ip-query) - - -#### Add `host.ip` to your ingest pipeline [logs-stream-host-ip-pipeline] - -Add the `%{host.ip}` option to the dissect processor pattern in the ingest pipeline you created in the [Extract the `@timestamp` field](../../../solutions/observability/logs/parse-route-logs.md#logs-stream-ingest-pipeline) section: - -```console -PUT _ingest/pipeline/logs-example-default -{ - "description": "Extracts the timestamp log level and host ip", - "processors": [ - { - "dissect": { - "field": "message", - "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}" - } - } - ] -} -``` - -Your pipeline will extract these fields: - -* The `@timestamp` field: `2023-08-08T13:45:12.123Z` -* The `log.level` field: `WARN` -* The `host.ip` field: `192.168.1.101` -* The `message` field: `Disk usage exceeds 90%.` - -In addition to setting an ingest pipeline, you need to set an index template. You can use the index template created in the [Extract the `@timestamp` field](../../../solutions/observability/logs/parse-route-logs.md#logs-stream-index-template) section. 
- - -#### Test the pipeline with the simulate API [logs-stream-host-ip-simulate] - -Test that your ingest pipeline works as expected with the [simulate pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-simulate): - -```console -POST _ingest/pipeline/logs-example-default/_simulate -{ - "docs": [ - { - "_source": { - "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." - } - } - ] -} -``` - -The results should show the `host.ip`, `@timestamp`, and `log.level` fields extracted from the `message` field: - -```JSON -{ - "docs": [ - { - "doc": { - ... - "_source": { - "host": { - "ip": "192.168.1.101" - }, - "@timestamp": "2023-08-08T13:45:12.123Z", - "message": "Disk usage exceeds 90%.", - "log": { - "level": "WARN" - } - }, - ... - } - } - ] -} -``` - - -#### Query logs based on `host.ip` [logs-stream-host-ip-query] - -You can query your logs based on the `host.ip` field in different ways, including using CIDR notation and range queries. - -Before querying your logs, add them to your data stream using this command: - -```console -POST logs-example-default/_bulk -{ "create": {} } -{ "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." } -{ "create": {} } -{ "message": "2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed." } -{ "create": {} } -{ "message": "2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue." } -{ "create": {} } -{ "message": "2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture." } -``` - - -##### CIDR notation [logs-stream-ip-cidr] - -You can use [CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation) to query your log data using a block of IP addresses that fall within a certain network segment. CIDR notations uses the format of `[IP address]/[prefix length]`. The following command queries IP addresses in the `192.168.1.0/24` subnet meaning IP addresses from `192.168.1.0` to `192.168.1.255`. - -```console -GET logs-example-default/_search -{ - "query": { - "term": { - "host.ip": "192.168.1.0/24" - } - } -} -``` - -Because all of the example logs are in this range, you’ll get the following results: - -```JSON -{ - ... - }, - "hits": { - ... 
- { - "_index": ".ds-logs-example-default-2023.08.16-000001", - "_id": "ak4oAIoBl7fe5ItIixuB", - "_score": 1, - "_source": { - "host": { - "ip": "192.168.1.101" - }, - "@timestamp": "2023-08-08T13:45:12.123Z", - "message": "Disk usage exceeds 90%.", - "log": { - "level": "WARN" - } - } - }, - { - "_index": ".ds-logs-example-default-2023.08.16-000001", - "_id": "a04oAIoBl7fe5ItIixuC", - "_score": 1, - "_source": { - "host": { - "ip": "192.168.1.103" - }, - "@timestamp": "2023-08-08T13:45:14.003Z", - "message": "Database connection failed.", - "log": { - "level": "ERROR" - } - } - }, - { - "_index": ".ds-logs-example-default-2023.08.16-000001", - "_id": "bE4oAIoBl7fe5ItIixuC", - "_score": 1, - "_source": { - "host": { - "ip": "192.168.1.104" - }, - "@timestamp": "2023-08-08T13:45:15.004Z", - "message": "Debugging connection issue.", - "log": { - "level": "DEBUG" - } - } - }, - { - "_index": ".ds-logs-example-default-2023.08.16-000001", - "_id": "bU4oAIoBl7fe5ItIixuC", - "_score": 1, - "_source": { - "host": { - "ip": "192.168.1.102" - }, - "@timestamp": "2023-08-08T13:45:16.005Z", - "message": "User changed profile picture.", - "log": { - "level": "INFO" - } - } - } - ] - } -} -``` - - -##### Range queries [logs-stream-range-query] - -Use [range queries](asciidocalypse://docs/elasticsearch/docs/reference/query-languages/query-dsl-range-query.md) to query logs in a specific range. - -The following command searches for IP addresses greater than or equal to `192.168.1.100` and less than or equal to `192.168.1.102`. - -```console -GET logs-example-default/_search -{ - "query": { - "range": { - "host.ip": { - "gte": "192.168.1.100",<1> - "lte": "192.168.1.102"<2> - } - } - } -} -``` - -1. Greater than or equal to `192.168.1.100`. -2. Less than or equal to `192.168.1.102`. - - -You’ll get the following results only showing logs in the range you’ve set: - -```JSON -{ - ... - }, - "hits": { - ... - { - "_index": ".ds-logs-example-default-2023.08.16-000001", - "_id": "ak4oAIoBl7fe5ItIixuB", - "_score": 1, - "_source": { - "host": { - "ip": "192.168.1.101" - }, - "@timestamp": "2023-08-08T13:45:12.123Z", - "message": "Disk usage exceeds 90%.", - "log": { - "level": "WARN" - } - } - }, - { - "_index": ".ds-logs-example-default-2023.08.16-000001", - "_id": "bU4oAIoBl7fe5ItIixuC", - "_score": 1, - "_source": { - "host": { - "ip": "192.168.1.102" - }, - "@timestamp": "2023-08-08T13:45:16.005Z", - "message": "User changed profile picture.", - "log": { - "level": "INFO" - } - } - } - ] - } -} -``` - - -## Reroute log data to specific data streams [logs-stream-reroute] - -By default, an ingest pipeline sends your log data to a single data stream. To simplify log data management, use a [reroute processor](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/enrich-processor/reroute-processor.md) to route data from the generic data stream to a target data stream. For example, you might want to send high-severity logs to a specific data stream to help with categorization. - -This section shows you how to use a reroute processor to send the high-severity logs (`WARN` or `ERROR`) from the following example logs to a specific data stream and keep the regular logs (`DEBUG` and `INFO`) in the default data stream: - -```txt -2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%. -2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed. -2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue. 
-2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture. -``` - -::::{note} -When routing data to different data streams, we recommend picking a field with a limited number of distinct values to prevent an excessive increase in the number of data streams. For more details, refer to the [Size your shards](../../../deploy-manage/production-guidance/optimize-performance/size-shards.md) documentation. -:::: - - -To use a reroute processor: - -1. [Add a reroute processor to your ingest pipeline.](../../../solutions/observability/logs/parse-route-logs.md#logs-stream-reroute-pipeline) -2. [Add the example logs to your data stream.](../../../solutions/observability/logs/parse-route-logs.md#logs-stream-reroute-add-logs) -3. [Query your logs and verify the high-severity logs were routed to the new data stream.](../../../solutions/observability/logs/parse-route-logs.md#logs-stream-reroute-verify) - - -### Add a reroute processor to the ingest pipeline [logs-stream-reroute-pipeline] - -Add a reroute processor to your ingest pipeline with the following command: - -```console -PUT _ingest/pipeline/logs-example-default -{ - "description": "Extracts fields and reroutes WARN", - "processors": [ - { - "dissect": { - "field": "message", - "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}" - }, - "reroute": { - "tag": "high_severity_logs",<1> - "if" : "ctx.log?.level == 'WARN' || ctx.log?.level == 'ERROR'",<2> - "dataset": "critical"<3> - } - } - ] -} -``` - -1. `tag`: Identifier for the processor that you can use for debugging and metrics. In the example, the tag is set to `high_severity_logs`. -2. `if`: Conditionally runs the processor. In the example, `"ctx.log?.level == 'WARN' || ctx.log?.level == 'ERROR'",` means the processor runs when the `log.level` field is `WARN` or `ERROR`. -3. `dataset`: the data stream dataset to route your document to if the previous condition is `true`. In the example, logs with a `log.level` of `WARN` or `ERROR` are routed to the `logs-critical-default` data stream. - - -In addition to setting an ingest pipeline, you need to set an index template. You can use the index template created in the [Extract the `@timestamp` field](../../../solutions/observability/logs/parse-route-logs.md#logs-stream-index-template) section. - - -### Add logs to a data stream [logs-stream-reroute-add-logs] - -Add the example logs to your data stream with this command: - -```console -POST logs-example-default/_bulk -{ "create": {} } -{ "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." } -{ "create": {} } -{ "message": "2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed." } -{ "create": {} } -{ "message": "2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue." } -{ "create": {} } -{ "message": "2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture." } -``` - - -### Verify the reroute processor worked [logs-stream-reroute-verify] - -The reroute processor should route any logs with a `log.level` of `WARN` or `ERROR` to the `logs-critical-default` data stream. Query the the data stream using the following command to verify the log data was routed as intended: - -```console -GET logs-critical-default/_search -``` - -Your should see similar results to the following showing that the high-severity logs are now in the `critical` dataset: - -```JSON -{ - ... - "hits": { - ... - "hits": [ - ... 
- "_source": { - "host": { - "ip": "192.168.1.101" - }, - "@timestamp": "2023-08-08T13:45:12.123Z", - "message": "Disk usage exceeds 90%.", - "log": { - "level": "WARN" - }, - "data_stream": { - "namespace": "default", - "type": "logs", - "dataset": "critical" - }, - { - ... - "_source": { - "host": { - "ip": "192.168.1.103" - }, - "@timestamp": "2023-08-08T13:45:14.003Z", - "message": "Database connection failed.", - "log": { - "level": "ERROR" - }, - "data_stream": { - "namespace": "default", - "type": "logs", - "dataset": "critical" - } - } - } - ] - } -} -``` diff --git a/raw-migrated-files/toc.yml b/raw-migrated-files/toc.yml index f9b6b88496..0db5203b4a 100644 --- a/raw-migrated-files/toc.yml +++ b/raw-migrated-files/toc.yml @@ -228,7 +228,6 @@ toc: - file: docs-content/serverless/observability-get-started.md - file: docs-content/serverless/observability-log-monitoring.md - file: docs-content/serverless/observability-monitor-datasets.md - - file: docs-content/serverless/observability-parse-log-data.md - file: docs-content/serverless/observability-plaintext-application-logs.md - file: docs-content/serverless/observability-stream-log-files.md - file: docs-content/serverless/project-and-management-settings.md @@ -464,7 +463,6 @@ toc: - file: observability-docs/observability/index.md - file: observability-docs/observability/infrastructure-and-host-monitoring-intro.md - file: observability-docs/observability/logs-checklist.md - - file: observability-docs/observability/logs-parse.md - file: observability-docs/observability/logs-plaintext.md - file: observability-docs/observability/logs-stream.md - file: observability-docs/observability/monitor-datasets.md diff --git a/solutions/observability/logs/parse-route-logs.md b/solutions/observability/logs/parse-route-logs.md index 0fa178f167..936038a813 100644 --- a/solutions/observability/logs/parse-route-logs.md +++ b/solutions/observability/logs/parse-route-logs.md @@ -4,71 +4,872 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/observability-parse-log-data.html --- -# Parse and route logs +# Parse and route logs [observability-parse-log-data] -% What needs to be done: Align serverless/stateful +::::{admonition} Required role +:class: note -% Use migrated content from existing pages that map to this page: +**For Observability serverless projects**, the **Admin** role or higher is required to create ingest pipelines that parse and route logs. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles). -% - [ ] ./raw-migrated-files/observability-docs/observability/logs-parse.md -% - [ ] ./raw-migrated-files/docs-content/serverless/observability-parse-log-data.md +:::: -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): -$$$logs-stream-create-data-stream$$$ +If your log data is unstructured or semi-structured, you can parse it and break it into meaningful fields. You can use those fields to explore and analyze your data. For example, you can find logs within a specific timestamp range or filter logs by log level to focus on potential issues. -$$$logs-stream-host-ip-pipeline$$$ +After parsing, you can use the structured fields to further organize your logs by configuring a reroute processor to send specific logs to different target data streams. 
-$$$logs-stream-host-ip-query$$$ +Refer to the following sections for more on parsing and organizing your log data: -$$$logs-stream-host-ip-simulate$$$ +* [Extract structured fields](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-extract-structured-fields): Extract structured fields like timestamps, log levels, or IP addresses to make querying and filtering your data easier. +* [Reroute log data to specific data streams](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-reroute-log-data-to-specific-data-streams): Route data from the generic data stream to a target data stream for more granular control over data retention, permissions, and processing. -$$$logs-stream-index-template$$$ -$$$logs-stream-ingest-pipeline$$$ +## Extract structured fields [observability-parse-log-data-extract-structured-fields] -$$$logs-stream-log-level-pipeline$$$ +Make your logs more useful by extracting structured fields from your unstructured log data. Extracting structured fields makes it easier to search, analyze, and filter your log data. -$$$logs-stream-log-level-query$$$ +Follow the steps below to see how the following unstructured log data is indexed by default: -$$$logs-stream-log-level-simulate$$$ +```txt +2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%. +``` -$$$logs-stream-parse$$$ +Start by storing the document in the `logs-example-default` data stream: -$$$logs-stream-reroute-add-logs$$$ +1. To open **Console**, find `Dev Tools` in the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). +2. In the **Console** tab, add the example log to Elastic using the following command: -$$$logs-stream-reroute-pipeline$$$ + ```console + POST logs-example-default/_doc + { + "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." + } + ``` -$$$logs-stream-reroute-verify$$$ +3. Then, you can retrieve the document with the following search: -$$$logs-stream-reroute$$$ + ```console + GET /logs-example-default/_search + ``` -$$$logs-stream-simulate-api$$$ -$$$observability-parse-log-data-add-a-reroute-processor-to-the-ingest-pipeline$$$ +The results should look like this: -$$$observability-parse-log-data-add-hostip-to-your-ingest-pipeline$$$ +```json +{ + ... + "hits": { + ... + "hits": [ + { + "_index": ".ds-logs-example-default-2023.08.09-000001", + ... + "_source": { + "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.", + "@timestamp": "2023-08-09T17:19:27.73312243Z" + } + } + ] + } +} +``` -$$$observability-parse-log-data-add-loglevel-to-your-ingest-pipeline$$$ +Elastic indexes the `message` field by default and adds a `@timestamp` field. Since there was no timestamp set, it’s set to `now`. At this point, you can search for phrases in the `message` field like `WARN` or `Disk usage exceeds`. For example, run the following command to search for the phrase `WARN` in the log’s `message` field: -$$$observability-parse-log-data-add-logs-to-a-data-stream$$$ +```console +GET logs-example-default/_search +{ + "query": { + "match": { + "message": { + "query": "WARN" + } + } + } +} +``` -$$$observability-parse-log-data-configure-a-data-stream-with-an-index-template$$$ +While you can search for phrases in the `message` field, you can’t use this field to filter log data. 
Your message, however, contains all of the following potential fields you can extract and use to filter and aggregate your log data: -$$$observability-parse-log-data-create-a-data-stream$$$ +* **@timestamp** (`2023-08-08T13:45:12.123Z`): Extracting this field lets you sort logs by date and time. This is helpful when you want to view your logs in the order that they occurred or identify when issues happened. +* **log.level** (`WARN`): Extracting this field lets you filter logs by severity. This is helpful if you want to focus on high-severity WARN or ERROR-level logs, and reduce noise by filtering out low-severity INFO-level logs. +* **host.ip** (`192.168.1.101`): Extracting this field lets you filter logs by the host IP addresses. This is helpful if you want to focus on specific hosts that you’re having issues with or if you want to find disparities between hosts. +* **message** (`Disk usage exceeds 90%.`): You can search for phrases or words in the message field. -$$$observability-parse-log-data-extract-structured-fields$$$ +::::{note} +These fields are part of the [Elastic Common Schema (ECS)](asciidocalypse://docs/ecs/docs/reference/ecs/index.md). The ECS defines a common set of fields that you can use across Elastic when storing data, including log and metric data. -$$$observability-parse-log-data-query-logs-based-on-hostip$$$ +:::: -$$$observability-parse-log-data-query-logs-based-on-loglevel$$$ -$$$observability-parse-log-data-reroute-log-data-to-specific-data-streams$$$ -$$$observability-parse-log-data-test-the-pipeline-with-the-simulate-api$$$ +### Extract the `@timestamp` field [observability-parse-log-data-extract-the-timestamp-field] -$$$observability-parse-log-data-test-the-pipeline-with-the-simulate-pipeline-api$$$ +When you added the log to Elastic in the previous section, the `@timestamp` field showed when the log was added. The timestamp showing when the log actually occurred was in the unstructured `message` field: -$$$observability-parse-log-data-use-an-ingest-pipeline-to-extract-the-timestamp-field$$$ +```json + ... + "_source": { + "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%.", <1> + "@timestamp": "2023-08-09T17:19:27.73312243Z" <2> + } + ... +``` -$$$observability-parse-log-data-verify-the-reroute-processor-worked$$$ \ No newline at end of file +1. The timestamp in the `message` field shows when the log occurred. +2. The timestamp in the `@timestamp` field shows when the log was added to Elastic. + + +When looking into issues, you want to filter for logs by when the issue occurred not when the log was added to Elastic. To do this, extract the timestamp from the unstructured `message` field to the structured `@timestamp` field by completing the following: + +1. [Use an ingest pipeline to extract the `@timestamp` field](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-use-an-ingest-pipeline-to-extract-the-timestamp-field) +2. [Test the pipeline with the simulate pipeline API](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-test-the-pipeline-with-the-simulate-pipeline-api) +3. [Configure a data stream with an index template](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-configure-a-data-stream-with-an-index-template) +4. 
[Create a data stream](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-create-a-data-stream) + + +#### Use an ingest pipeline to extract the `@timestamp` field [observability-parse-log-data-use-an-ingest-pipeline-to-extract-the-timestamp-field] + +Ingest pipelines consist of a series of processors that perform common transformations on incoming documents before they are indexed. To extract the `@timestamp` field from the example log, use an ingest pipeline with a [dissect processor](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/enrich-processor/dissect-processor.md). The dissect processor extracts structured fields from unstructured log messages based on a pattern you set. + +Elastic can parse string timestamps that are in `yyyy-MM-dd'T'HH:mm:ss.SSSZ` and `yyyy-MM-dd` formats into date fields. Since the log example’s timestamp is in one of these formats, you don’t need additional processors. More complex or nonstandard timestamps require a [date processor](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/enrich-processor/date-processor.md) to parse the timestamp into a date field. + +Use the following command to extract the timestamp from the `message` field into the `@timestamp` field: + +```console +PUT _ingest/pipeline/logs-example-default +{ + "description": "Extracts the timestamp", + "processors": [ + { + "dissect": { + "field": "message", + "pattern": "%{@timestamp} %{message}" + } + } + ] +} +``` + +The previous command sets the following values for your ingest pipeline: + +* `_ingest/pipeline/logs-example-default`: The name of the pipeline,`logs-example-default`, needs to match the name of your data stream. You’ll set up your data stream in the next section. For more information, refer to the [data stream naming scheme](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/data-streams.md#data-streams-naming-scheme). +* `field`: The field you’re extracting data from, `message` in this case. +* `pattern`: The pattern of the elements in your log data. The `%{@timestamp} %{{message}}` pattern extracts the timestamp, `2023-08-08T13:45:12.123Z`, to the `@timestamp` field, while the rest of the message, `WARN 192.168.1.101 Disk usage exceeds 90%.`, stays in the `message` field. The dissect processor looks for the space as a separator defined by the pattern. + + +#### Test the pipeline with the simulate pipeline API [observability-parse-log-data-test-the-pipeline-with-the-simulate-pipeline-api] + +The [simulate pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-simulate) runs the ingest pipeline without storing any documents. This lets you verify your pipeline works using multiple documents. + +Run the following command to test your ingest pipeline with the simulate pipeline API. + +```console +POST _ingest/pipeline/logs-example-default/_simulate +{ + "docs": [ + { + "_source": { + "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." + } + } + ] +} +``` + +The results should show the `@timestamp` field extracted from the `message` field: + +```console +{ + "docs": [ + { + "doc": { + "_index": "_index", + "_id": "_id", + "_version": "-3", + "_source": { + "message": "WARN 192.168.1.101 Disk usage exceeds 90%.", + "@timestamp": "2023-08-08T13:45:12.123Z" + }, + ... 
+      }
+    }
+  ]
+}
+```
+
+::::{note}
+Make sure you’ve created the ingest pipeline using the `PUT` command in the previous section before using the simulate pipeline API.
+
+::::
+
+
+
+#### Configure a data stream with an index template [observability-parse-log-data-configure-a-data-stream-with-an-index-template]
+
+After creating your ingest pipeline, run the following command to create an index template to configure your data stream’s backing indices:
+
+```console
+PUT _index_template/logs-example-default-template
+{
+  "index_patterns": [ "logs-example-*" ],
+  "data_stream": { },
+  "priority": 500,
+  "template": {
+    "settings": {
+      "index.default_pipeline":"logs-example-default"
+    }
+  },
+  "composed_of": [
+    "logs@mappings",
+    "logs@settings",
+    "logs@custom",
+    "ecs@mappings"
+  ],
+  "ignore_missing_component_templates": ["logs@custom"]
+}
+```
+
+The previous command sets the following values for your index template:
+
+* `index_patterns`: Needs to match your log data stream. Naming conventions for data streams are `<type>-<dataset>-<namespace>`. In this example, your logs data stream is named `logs-example-*`. Data that matches this pattern will go through your pipeline.
+* `data_stream`: Enables data streams.
+* `priority`: Sets the priority of your index templates. Index templates with a higher priority take precedence. If a data stream matches multiple index templates, Elastic uses the template with the higher priority. Built-in templates have a priority of `200`, so use a priority higher than `200` for custom templates.
+* `index.default_pipeline`: The name of your ingest pipeline, `logs-example-default` in this case.
+* `composed_of`: Here you can set component templates. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases. Elastic has several built-in templates to help when ingesting your log data.
+
+The example index template above sets the following component templates:
+
+* `logs@mappings`: general mappings for log data streams that include disabling automatic date detection from `string` fields and specifying mappings for [`data_stream` ECS fields](asciidocalypse://docs/ecs/docs/reference/ecs/ecs-data_stream.md).
+* `logs@settings`: general settings for log data streams including the following:
+
+    * The default lifecycle policy that rolls over when the primary shard reaches 50 GB or after 30 days.
+    * The default pipeline uses the ingest timestamp if there is no specified `@timestamp` and places a hook for the `logs@custom` pipeline. If a `logs@custom` pipeline is installed, it’s applied to logs ingested into this data stream.
+    * Sets the [`ignore_malformed`](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/mapping-reference/ignore-malformed.md) flag to `true`. When ingesting a large batch of log data, a single malformed field like an IP address can cause the entire batch to fail. When set to `true`, malformed fields with a mapping type that supports this flag are still processed.
+
+* `logs@custom`: a predefined component template that is not installed by default. Use this name to install a custom component template to override or extend any of the default mappings or settings.
+* `ecs@mappings`: dynamic templates that automatically ensure your data stream mappings comply with the [Elastic Common Schema (ECS)](asciidocalypse://docs/ecs/docs/reference/ecs/index.md).
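+
+If you want to confirm how these settings and component templates resolve before indexing any data, you can optionally simulate the template. The following request is a quick sanity check, assuming you created the index template above with the name shown; it returns the settings and mappings that an index matching `logs-example-*`, such as `logs-example-default`, would receive:
+
+```console
+POST _index_template/_simulate_index/logs-example-default
+```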
+ + + +#### Create a data stream [observability-parse-log-data-create-a-data-stream] + +Create your data stream using the [data stream naming scheme](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/data-streams.md#data-streams-naming-scheme). Name your data stream to match the name of your ingest pipeline, `logs-example-default` in this case. Post the example log to your data stream with this command: + +```console +POST logs-example-default/_doc +{ + "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." +} +``` + +View your documents using this command: + +```console +GET /logs-example-default/_search +``` + +You should see the pipeline has extracted the `@timestamp` field: + +```json +{ + ... + { + ... + "hits": { + ... + "hits": [ + { + "_index": ".ds-logs-example-default-2023.08.09-000001", + "_id": "RsWy3IkB8yCtA5VGOKLf", + "_score": 1, + "_source": { + "message": "WARN 192.168.1.101 Disk usage exceeds 90%.", + "@timestamp": "2023-08-08T13:45:12.123Z" <1> + } + } + ] + } + } +} +``` + +1. The extracted `@timestamp` field. + + +You can now use the `@timestamp` field to sort your logs by the date and time they happened. + + +#### Troubleshoot the `@timestamp` field [observability-parse-log-data-troubleshoot-the-timestamp-field] + +Check the following common issues and solutions with timestamps: + +* **Timestamp failure:** If your data has inconsistent date formats, set `ignore_failure` to `true` for your date processor. This processes logs with correctly formatted dates and ignores those with issues. +* **Incorrect timezone:** Set your timezone using the `timezone` option on the [date processor](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/enrich-processor/date-processor.md). +* **Incorrect timestamp format:** Your timestamp can be a Java time pattern or one of the following formats: ISO8601, UNIX, UNIX_MS, or TAI64N. For more information on timestamp formats, refer to the [mapping date format](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/mapping-reference/mapping-date-format.md). + + +### Extract the `log.level` field [observability-parse-log-data-extract-the-loglevel-field] + +Extracting the `log.level` field lets you filter by severity and focus on critical issues. This section shows you how to extract the `log.level` field from this example log: + +```txt +2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%. +``` + +To extract and use the `log.level` field: + +1. [Add the `log.level` field to the dissect processor pattern in your ingest pipeline.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-add-loglevel-to-your-ingest-pipeline) +2. [Test the pipeline with the simulate API.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-test-the-pipeline-with-the-simulate-api) +3. 
[Query your logs based on the `log.level` field.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-query-logs-based-on-loglevel) + + +#### Add `log.level` to your ingest pipeline [observability-parse-log-data-add-loglevel-to-your-ingest-pipeline] + +Add the `%{log.level}` option to the dissect processor pattern in the ingest pipeline you created in the [Extract the `@timestamp` field](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-use-an-ingest-pipeline-to-extract-the-timestamp-field) section with this command: + +```console +PUT _ingest/pipeline/logs-example-default +{ + "description": "Extracts the timestamp and log level", + "processors": [ + { + "dissect": { + "field": "message", + "pattern": "%{@timestamp} %{log.level} %{message}" + } + } + ] +} +``` + +Now your pipeline will extract these fields: + +* The `@timestamp` field: `2023-08-08T13:45:12.123Z` +* The `log.level` field: `WARN` +* The `message` field: `192.168.1.101 Disk usage exceeds 90%.` + +In addition to setting an ingest pipeline, you need to set an index template. Use the index template created in the [Extract the `@timestamp` field](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-configure-a-data-stream-with-an-index-template) section. + + +#### Test the pipeline with the simulate API [observability-parse-log-data-test-the-pipeline-with-the-simulate-api] + +Test that your ingest pipeline works as expected with the [simulate pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-simulate): + +```console +POST _ingest/pipeline/logs-example-default/_simulate +{ + "docs": [ + { + "_source": { + "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." + } + } + ] +} +``` + +The results should show the `@timestamp` and the `log.level` fields extracted from the `message` field: + +```json +{ + "docs": [ + { + "doc": { + "_index": "_index", + "_id": "_id", + "_version": "-3", + "_source": { + "message": "192.168.1.101 Disk usage exceeds 90%.", + "log": { + "level": "WARN" + }, + "@timestamp": "2023-8-08T13:45:12.123Z", + }, + ... + } + } + ] +} +``` + + +#### Query logs based on `log.level` [observability-parse-log-data-query-logs-based-on-loglevel] + +Once you’ve extracted the `log.level` field, you can query for high-severity logs like `WARN` and `ERROR`, which may need immediate attention, and filter out less critical `INFO` and `DEBUG` logs. + +Let’s say you have the following logs with varying severities: + +```txt +2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%. +2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed. +2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue. +2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture. +``` + +Add them to your data stream using this command: + +```console +POST logs-example-default/_bulk +{ "create": {} } +{ "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." } +{ "create": {} } +{ "message": "2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed." } +{ "create": {} } +{ "message": "2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue." } +{ "create": {} } +{ "message": "2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture." 
} +``` + +Then, query for documents with a log level of `WARN` or `ERROR` with this command: + +```console +GET logs-example-default/_search +{ + "query": { + "terms": { + "log.level": ["WARN", "ERROR"] + } + } +} +``` + +The results should show only the high-severity logs: + +```json +{ +... + }, + "hits": { + ... + "hits": [ + { + "_index": ".ds-logs-example-default-2023.08.14-000001", + "_id": "3TcZ-4kB3FafvEVY4yKx", + "_score": 1, + "_source": { + "message": "192.168.1.101 Disk usage exceeds 90%.", + "log": { + "level": "WARN" + }, + "@timestamp": "2023-08-08T13:45:12.123Z" + } + }, + { + "_index": ".ds-logs-example-default-2023.08.14-000001", + "_id": "3jcZ-4kB3FafvEVY4yKx", + "_score": 1, + "_source": { + "message": "192.168.1.103 Database connection failed.", + "log": { + "level": "ERROR" + }, + "@timestamp": "2023-08-08T13:45:14.003Z" + } + } + ] + } +} +``` + + +### Extract the `host.ip` field [observability-parse-log-data-extract-the-hostip-field] + +Extracting the `host.ip` field lets you filter logs by host IP addresses allowing you to focus on specific hosts that you’re having issues with or find disparities between hosts. + +The `host.ip` field is part of the [Elastic Common Schema (ECS)](asciidocalypse://docs/ecs/docs/reference/ecs/index.md). Through the ECS, the `host.ip` field is mapped as an [`ip` field type](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/mapping-reference/ip.md). `ip` field types allow range queries so you can find logs with IP addresses in a specific range. You can also query `ip` field types using Classless Inter-Domain Routing (CIDR) notation to find logs from a particular network or subnet. + +This section shows you how to extract the `host.ip` field from the following example logs and query based on the extracted fields: + +```txt +2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%. +2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed. +2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue. +2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture. +``` + +To extract and use the `host.ip` field: + +1. [Add the `host.ip` field to your dissect processor in your ingest pipeline.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-add-hostip-to-your-ingest-pipeline) +2. [Test the pipeline with the simulate API.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-test-the-pipeline-with-the-simulate-api) +3. 
[Query your logs based on the `host.ip` field.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-query-logs-based-on-hostip) + + +#### Add `host.ip` to your ingest pipeline [observability-parse-log-data-add-hostip-to-your-ingest-pipeline] + +Add the `%{host.ip}` option to the dissect processor pattern in the ingest pipeline you created in the [Extract the `@timestamp` field](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-use-an-ingest-pipeline-to-extract-the-timestamp-field) section: + +```console +PUT _ingest/pipeline/logs-example-default +{ + "description": "Extracts the timestamp log level and host ip", + "processors": [ + { + "dissect": { + "field": "message", + "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}" + } + } + ] +} +``` + +Your pipeline will extract these fields: + +* The `@timestamp` field: `2023-08-08T13:45:12.123Z` +* The `log.level` field: `WARN` +* The `host.ip` field: `192.168.1.101` +* The `message` field: `Disk usage exceeds 90%.` + +In addition to setting an ingest pipeline, you need to set an index template. Use the index template created in the [Extract the `@timestamp` field](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-configure-a-data-stream-with-an-index-template) section. + + +#### Test the pipeline with the simulate API [observability-parse-log-data-test-the-pipeline-with-the-simulate-api-1] + +Test that your ingest pipeline works as expected with the [simulate pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-simulate): + +```console +POST _ingest/pipeline/logs-example-default/_simulate +{ + "docs": [ + { + "_source": { + "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." + } + } + ] +} +``` + +The results should show the `host.ip`, `@timestamp`, and `log.level` fields extracted from the `message` field: + +```json +{ + "docs": [ + { + "doc": { + ... + "_source": { + "host": { + "ip": "192.168.1.101" + }, + "@timestamp": "2023-08-08T13:45:12.123Z", + "message": "Disk usage exceeds 90%.", + "log": { + "level": "WARN" + } + }, + ... + } + } + ] +} +``` + + +#### Query logs based on `host.ip` [observability-parse-log-data-query-logs-based-on-hostip] + +You can query your logs based on the `host.ip` field in different ways, including using CIDR notation and range queries. + +Before querying your logs, add them to your data stream using this command: + +```console +POST logs-example-default/_bulk +{ "create": {} } +{ "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." } +{ "create": {} } +{ "message": "2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed." } +{ "create": {} } +{ "message": "2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue." } +{ "create": {} } +{ "message": "2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture." } +``` + + +##### CIDR notation [observability-parse-log-data-cidr-notation] + +You can use [CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation) to query your log data using a block of IP addresses that fall within a certain network segment. CIDR notations uses the format of `[IP address]/[prefix length]`. The following command queries IP addresses in the `192.168.1.0/24` subnet meaning IP addresses from `192.168.1.0` to `192.168.1.255`. 
+ +```console +GET logs-example-default/_search +{ + "query": { + "term": { + "host.ip": "192.168.1.0/24" + } + } +} +``` + +Because all of the example logs are in this range, you’ll get the following results: + +```json +{ + ... + }, + "hits": { + ... + { + "_index": ".ds-logs-example-default-2023.08.16-000001", + "_id": "ak4oAIoBl7fe5ItIixuB", + "_score": 1, + "_source": { + "host": { + "ip": "192.168.1.101" + }, + "@timestamp": "2023-08-08T13:45:12.123Z", + "message": "Disk usage exceeds 90%.", + "log": { + "level": "WARN" + } + } + }, + { + "_index": ".ds-logs-example-default-2023.08.16-000001", + "_id": "a04oAIoBl7fe5ItIixuC", + "_score": 1, + "_source": { + "host": { + "ip": "192.168.1.103" + }, + "@timestamp": "2023-08-08T13:45:14.003Z", + "message": "Database connection failed.", + "log": { + "level": "ERROR" + } + } + }, + { + "_index": ".ds-logs-example-default-2023.08.16-000001", + "_id": "bE4oAIoBl7fe5ItIixuC", + "_score": 1, + "_source": { + "host": { + "ip": "192.168.1.104" + }, + "@timestamp": "2023-08-08T13:45:15.004Z", + "message": "Debugging connection issue.", + "log": { + "level": "DEBUG" + } + } + }, + { + "_index": ".ds-logs-example-default-2023.08.16-000001", + "_id": "bU4oAIoBl7fe5ItIixuC", + "_score": 1, + "_source": { + "host": { + "ip": "192.168.1.102" + }, + "@timestamp": "2023-08-08T13:45:16.005Z", + "message": "User changed profile picture.", + "log": { + "level": "INFO" + } + } + } + ] + } +} +``` + + +##### Range queries [observability-parse-log-data-range-queries] + +Use [range queries](asciidocalypse://docs/elasticsearch/docs/reference/query-languages/query-dsl-range-query.md) to query logs in a specific range. + +The following command searches for IP addresses greater than or equal to `192.168.1.100` and less than or equal to `192.168.1.102`. + +```console +GET logs-example-default/_search +{ + "query": { + "range": { + "host.ip": { + "gte": "192.168.1.100", <1> + "lte": "192.168.1.102" <2> + } + } + } +} +``` + +1. Greater than or equal to `192.168.1.100`. +2. Less than or equal to `192.168.1.102`. + + +You’ll get the following results only showing logs in the range you’ve set: + +```json +{ + ... + }, + "hits": { + ... + { + "_index": ".ds-logs-example-default-2023.08.16-000001", + "_id": "ak4oAIoBl7fe5ItIixuB", + "_score": 1, + "_source": { + "host": { + "ip": "192.168.1.101" + }, + "@timestamp": "2023-08-08T13:45:12.123Z", + "message": "Disk usage exceeds 90%.", + "log": { + "level": "WARN" + } + } + }, + { + "_index": ".ds-logs-example-default-2023.08.16-000001", + "_id": "bU4oAIoBl7fe5ItIixuC", + "_score": 1, + "_source": { + "host": { + "ip": "192.168.1.102" + }, + "@timestamp": "2023-08-08T13:45:16.005Z", + "message": "User changed profile picture.", + "log": { + "level": "INFO" + } + } + } + ] + } +} +``` + + +## Reroute log data to specific data streams [observability-parse-log-data-reroute-log-data-to-specific-data-streams] + +By default, an ingest pipeline sends your log data to a single data stream. To simplify log data management, use a [reroute processor](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/enrich-processor/reroute-processor.md) to route data from the generic data stream to a target data stream. For example, you might want to send high-severity logs to a specific data stream to help with categorization. 
+ +This section shows you how to use a reroute processor to send the high-severity logs (`WARN` or `ERROR`) from the following example logs to a specific data stream and keep the regular logs (`DEBUG` and `INFO`) in the default data stream: + +```txt +2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%. +2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed. +2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue. +2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture. +``` + +::::{note} +When routing data to different data streams, we recommend picking a field with a limited number of distinct values to prevent an excessive increase in the number of data streams. For more details, refer to the [Size your shards](../../../deploy-manage/production-guidance/optimize-performance/size-shards.md) documentation. + +:::: + + +To use a reroute processor: + +1. [Add a reroute processor to your ingest pipeline.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-add-a-reroute-processor-to-the-ingest-pipeline) +2. [Add the example logs to your data stream.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-add-logs-to-a-data-stream) +3. [Query your logs and verify the high-severity logs were routed to the new data stream.](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-verify-the-reroute-processor-worked) + + +### Add a reroute processor to the ingest pipeline [observability-parse-log-data-add-a-reroute-processor-to-the-ingest-pipeline] + +Add a reroute processor to your ingest pipeline with the following command: + +```console +PUT _ingest/pipeline/logs-example-default +{ + "description": "Extracts fields and reroutes WARN", + "processors": [ + { + "dissect": { + "field": "message", + "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}" + } + }, + { + "reroute": { + "tag": "high_severity_logs", + "if" : "ctx.log?.level == 'WARN' || ctx.log?.level == 'ERROR'", + "dataset": "critical" + } + } + ] +} +``` + +The previous command sets the following values for your reroute processor: + +* `tag`: Identifier for the processor that you can use for debugging and metrics. In the example, the tag is set to `high_severity_logs`. +* `if`: Conditionally runs the processor. In the example, `"ctx.log?.level == 'WARN' || ctx.log?.level == 'ERROR'",` means the processor runs when the `log.level` field is `WARN` or `ERROR`. +* `dataset`: the data stream dataset to route your document to if the previous condition is `true`. In the example, logs with a `log.level` of `WARN` or `ERROR` are routed to the `logs-critical-default` data stream. + +In addition to setting an ingest pipeline, you need to set an index template. Use the index template created in the [Extract the `@timestamp` field](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-configure-a-data-stream-with-an-index-template) section. + + +### Add logs to a data stream [observability-parse-log-data-add-logs-to-a-data-stream] + +Add the example logs to your data stream with this command: + +```console +POST logs-example-default/_bulk +{ "create": {} } +{ "message": "2023-08-08T13:45:12.123Z WARN 192.168.1.101 Disk usage exceeds 90%." } +{ "create": {} } +{ "message": "2023-08-08T13:45:14.003Z ERROR 192.168.1.103 Database connection failed." 
} +{ "create": {} } +{ "message": "2023-08-08T13:45:15.004Z DEBUG 192.168.1.104 Debugging connection issue." } +{ "create": {} } +{ "message": "2023-08-08T13:45:16.005Z INFO 192.168.1.102 User changed profile picture." } +``` + + +### Verify the reroute processor worked [observability-parse-log-data-verify-the-reroute-processor-worked] + +The reroute processor should route any logs with a `log.level` of `WARN` or `ERROR` to the `logs-critical-default` data stream. Query the data stream using the following command to verify the log data was routed as intended: + +```console +GET logs-critical-default/_search +``` + +Your should see similar results to the following showing that the high-severity logs are now in the `critical` dataset: + +```json +{ + ... + "hits": { + ... + "hits": [ + ... + "_source": { + "host": { + "ip": "192.168.1.101" + }, + "@timestamp": "2023-08-08T13:45:12.123Z", + "message": "Disk usage exceeds 90%.", + "log": { + "level": "WARN" + }, + "data_stream": { + "namespace": "default", + "type": "logs", + "dataset": "critical" + }, + { + ... + "_source": { + "host": { + "ip": "192.168.1.103" + }, + "@timestamp": "2023-08-08T13:45:14.003Z", + "message": "Database connection failed.", + "log": { + "level": "ERROR" + }, + "data_stream": { + "namespace": "default", + "type": "logs", + "dataset": "critical" + } + } + } + ] + } +} +``` \ No newline at end of file From 8c8e666aff41e74f2cdb99430eac8e94819348e1 Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Thu, 20 Feb 2025 15:09:35 -0600 Subject: [PATCH 07/23] add plaintext logs --- .../observability/logs-plaintext.md | 347 ----------------- raw-migrated-files/toc.yml | 1 - .../logs/plaintext-application-logs.md | 350 +++++++++++++++++- 3 files changed, 348 insertions(+), 350 deletions(-) delete mode 100644 raw-migrated-files/observability-docs/observability/logs-plaintext.md diff --git a/raw-migrated-files/observability-docs/observability/logs-plaintext.md b/raw-migrated-files/observability-docs/observability/logs-plaintext.md deleted file mode 100644 index c83517942f..0000000000 --- a/raw-migrated-files/observability-docs/observability/logs-plaintext.md +++ /dev/null @@ -1,347 +0,0 @@ -# Plaintext application logs [logs-plaintext] - -Ingest and parse plaintext logs, including existing logs, from any programming language or framework without modifying your application or its configuration. - -Plaintext logs require some additional setup that structured logs do not require: - -* To search, filter, and aggregate effectively, you need to parse plaintext logs using an ingest pipeline to extract structured fields. Parsing is based on log format, so you might have to maintain different settings for different applications. -* To [correlate plaintext logs](../../../solutions/observability/logs/plaintext-application-logs.md#correlate-plaintext-logs), you need to inject IDs into log messages and parse them using an ingest pipeline. - -To ingest, parse, and correlate plaintext logs: - -1. Ingest plaintext logs with [{{filebeat}}](../../../solutions/observability/logs/plaintext-application-logs.md#ingest-plaintext-logs-with-filebeat) or [{{agent}}](../../../solutions/observability/logs/plaintext-application-logs.md#ingest-plaintext-logs-with-the-agent) and parse them before indexing with an ingest pipeline. -2. [Correlate plaintext logs with an {{apm-agent}}.](../../../solutions/observability/logs/plaintext-application-logs.md#correlate-plaintext-logs) -3. 
[View logs in Logs Explorer](../../../solutions/observability/logs/plaintext-application-logs.md#view-plaintext-logs) - - -## Ingest logs [ingest-plaintext-logs] - -Send application logs to {{es}} using one of the following shipping tools: - -* [{{filebeat}}](../../../solutions/observability/logs/plaintext-application-logs.md#ingest-plaintext-logs-with-filebeat) A lightweight data shipper that sends log data to {{es}}. -* [{{agent}}](../../../solutions/observability/logs/plaintext-application-logs.md#ingest-plaintext-logs-with-the-agent) A single agent for logs, metrics, security data, and threat prevention. Combined with Fleet, you can centrally manage {{agent}} policies and lifecycles directly from {{kib}}. - - -### Ingest logs with {{filebeat}} [ingest-plaintext-logs-with-filebeat] - -Follow these steps to ingest application logs with {{filebeat}}. - - -#### Step 1: Install {{filebeat}} [step-1-plaintext-install-filebeat] - -Install {{filebeat}} on the server you want to monitor by running the commands that align with your system: - -:::::::{tab-set} - -::::::{tab-item} DEB -```sh -curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-darwin-x86_64.tar.gz -tar xzvf filebeat-9.0.0-beta1-darwin-x86_64.tar.gz -``` -:::::: - -::::::{tab-item} RPM -```sh -curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-linux-x86_64.tar.gz -tar xzvf filebeat-9.0.0-beta1-linux-x86_64.tar.gz -``` -:::::: - -::::::{tab-item} macOS -1. Download the {{filebeat}} Windows zip file: https\://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-windows-x86_64.zip[https\://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-windows-x86_64.zip] -2. Extract the contents of the zip file into `C:\Program Files`. -3. Rename the `filebeat-{{version}}-windows-x86_64` directory to `{{filebeat}}`. -4. Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select **Run As Administrator**). -5. From the PowerShell prompt, run the following commands to install {{filebeat}} as a Windows service: - - ```powershell - PS > cd 'C:\Program Files\{filebeat}' - PS C:\Program Files\{filebeat}> .\install-service-filebeat.ps1 - ``` - - -If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example: `PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-filebeat.ps1`. -:::::: - -::::::{tab-item} Linux -```sh -curl -L -O https\://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-amd64.deb -sudo dpkg -i filebeat-9.0.0-beta1-amd64.deb -``` -:::::: - -::::::{tab-item} Windows -```sh -curl -L -O https\://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-x86_64.rpm -sudo rpm -vi filebeat-9.0.0-beta1-x86_64.rpm -``` -:::::: - -::::::: - -#### Step 2: Connect to {{es}} [step-2-plaintext-connect-to-your-project] - -Connect to {{es}} using an API key to set up {{filebeat}}. Set the following information in the `filebeat.yml` file: - -```yaml -output.elasticsearch: - hosts: ["your-projects-elasticsearch-endpoint"] - api_key: "id:api_key" -``` - -1. Set the `hosts` to your deployment’s {{es}} endpoint. Copy the {{es}} endpoint from **Help menu (![help icon](../../../images/observability-help-icon.png "")) → Connection details**. For example, `https://my-deployment.es.us-central1.gcp.cloud.es.io:443`. -2. 
From **Developer tools**, run the following command to create an API key that grants `manage` permissions for the `cluster` and the `filebeat-*` indices using: - - ```console - POST /_security/api_key - { - "name": "filebeat_host001", - "role_descriptors": { - "filebeat_writer": { - "cluster": ["manage"], - "index": [ - { - "names": ["filebeat-*"], - "privileges": ["manage", "create_doc"] - } - ] - } - } - } - ``` - - Refer to [Grant access using API keys](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/beats-api-keys.md) for more information. - - - -#### Step 3: Configure {{filebeat}} [step-3-plaintext-configure-filebeat] - -Add the following configuration to your `filebeat.yaml` file to start collecting log data. - -```yaml -filebeat.inputs: -- type: filestream <1> - enabled: true - paths: /path/to/logs.log <2> -``` - -1. Reads lines from an active log file. -2. Paths that you want {{filebeat}} to crawl and fetch logs from. - - - -#### Step 4: Set up and start {{filebeat}} [step-4-plaintext-set-up-and-start-filebeat] - -{{filebeat}} comes with predefined assets for parsing, indexing, and visualizing your data. To load these assets: - -From the {{filebeat}} installation directory, set the [index template](../../../manage-data/data-store/templates.md) by running the command that aligns with your system: - -:::::::{tab-set} - -::::::{tab-item} DEB -```sh -./filebeat setup -e -``` -:::::: - -::::::{tab-item} RPM -```sh -./filebeat setup -e -``` -:::::: - -::::::{tab-item} MacOS -```sh -PS > .\filebeat.exe setup -e -``` -:::::: - -::::::{tab-item} Linux -```sh -filebeat setup -e -``` -:::::: - -::::::{tab-item} Windows -```sh -filebeat setup -e -``` -:::::: - -::::::: -From the {{filebeat}} installation directory, start filebeat by running the command that aligns with your system: - -:::::::{tab-set} - -::::::{tab-item} DEB -```sh -sudo service filebeat start -``` - -::::{note} -If you use an `init.d` script to start Filebeat, you can’t specify command line flags (see [Command reference](https://www.elastic.co/guide/en/beats/filebeat/master/command-line-options.html)). To specify flags, start Filebeat in the foreground. -:::: - - -Also see [Filebeat and systemd](https://www.elastic.co/guide/en/beats/filebeat/master/running-with-systemd.html). -:::::: - -::::::{tab-item} RPM -```sh -sudo service filebeat start -``` - -::::{note} -If you use an `init.d` script to start Filebeat, you can’t specify command line flags (see [Command reference](https://www.elastic.co/guide/en/beats/filebeat/master/command-line-options.html)). To specify flags, start Filebeat in the foreground. -:::: - - -Also see [Filebeat and systemd](https://www.elastic.co/guide/en/beats/filebeat/master/running-with-systemd.html). -:::::: - -::::::{tab-item} MacOS -```sh -./filebeat -e -``` -:::::: - -::::::{tab-item} Linux -```sh -./filebeat -e -``` -:::::: - -::::::{tab-item} Windows -```sh -PS C:\Program Files\filebeat> Start-Service filebeat -``` - -By default, Windows log files are stored in `C:\ProgramData\filebeat\Logs`. -:::::: - -::::::: - -#### Step 5: Parse logs with an ingest pipeline [step-5-plaintext-parse-logs-with-an-ingest-pipeline] - -Use an ingest pipeline to parse the contents of your logs into structured, [Elastic Common Schema (ECS)](asciidocalypse://docs/ecs/docs/reference/ecs/index.md)-compatible fields. 
- -Create an ingest pipeline that defines a [dissect processor](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/enrich-processor/dissect-processor.md) to extract structured ECS fields from your log messages. In your project, navigate to **Developer Tools** and using a command similar to the following example: - -```console -PUT _ingest/pipeline/filebeat* <1> -{ - "description": "Extracts the timestamp log level and host ip", - "processors": [ - { - "dissect": { <2> - "field": "message", <3> - "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}" <4> - } - } - ] -} -``` - -1. `_ingest/pipeline/filebeat*`: The name of the pipeline. Update the pipeline name to match the name of your data stream. For more information, refer to [Data stream naming scheme](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/data-streams.md#data-streams-naming-scheme). -2. `processors.dissect`: Adds a [dissect processor](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/enrich-processor/dissect-processor.md) to extract structured fields from your log message. -3. `field`: The field you’re extracting data from, `message` in this case. -4. `pattern`: The pattern of the elements in your log data. The pattern varies depending on your log format. `%{@timestamp}` is required. `%{log.level}`, `%{host.ip}`, and `%{{message}}` are common [ECS](asciidocalypse://docs/ecs/docs/reference/ecs/index.md) fields. This pattern would match a log file in this format: `2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected.` - - -Refer to [Extract structured fields](../../../solutions/observability/logs/parse-route-logs.md#logs-stream-parse) for more on using ingest pipelines to parse your log data. - -After creating your pipeline, specify the pipeline for filebeat in the `filebeat.yml` file: - -```yaml -output.elasticsearch: - hosts: ["your-projects-elasticsearch-endpoint"] - api_key: "id:api_key" - pipeline: "your-pipeline" <1> -``` - -1. Add the `pipeline` output and the name of your pipeline to the output. - - - -### Ingest logs with the {{agent}} [ingest-plaintext-logs-with-the-agent] - -Follow these steps to ingest and centrally manage your logs using {{agent}} and {{fleet}}. - - -#### Step 1: Add the custom logs integration to your project [step-1-plaintext-add-the-custom-logs-integration-to-your-project] - -To add the custom logs integration to your project: - -1. Find **Integrations** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). -2. Type `custom` in the search bar and select **Custom Logs**. -3. Click **Add Custom Logs**. -4. Click **Install {{agent}}** at the bottom of the page, and follow the instructions for your system to install the {{agent}}. -5. After installing the {{agent}}, configure the integration from the **Add Custom Logs integration** page. -6. Give your integration a meaningful name and description. -7. Add the **Log file path**. For example, `/var/log/your-logs.log`. -8. Give your agent policy a name. The agent policy defines the data your {{agent}} collects. -9. Save your integration to add it to your deployment. 
- - -#### Step 2: Add an ingest pipeline to your integration [step-2-plaintext-add-an-ingest-pipeline-to-your-integration] - -To aggregate or search for information in plaintext logs, use an ingest pipeline with your integration to parse the contents of your logs into structured, [Elastic Common Schema (ECS)](asciidocalypse://docs/ecs/docs/reference/ecs/index.md)-compatible fields. - -1. From the custom logs integration, select **Integration policies** tab. -2. Select the integration policy you created in the previous section. -3. Click **Change defaults → Advanced options**. -4. Under **Ingest pipelines**, click **Add custom pipeline**. -5. Create an ingest pipeline with a [dissect processor](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/enrich-processor/dissect-processor.md) to extract structured fields from your log messages. - - Click **Import processors** and add a similar JSON to the following example: - - ```JSON - { - "description": "Extracts the timestamp log level and host ip", - "processors": [ - { - "dissect": { <1> - "field": "message", <2> - "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}" <3> - } - } - ] - } - ``` - - 1. `processors.dissect`: Adds a [dissect processor](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/enrich-processor/dissect-processor.md) to extract structured fields from your log message. - 2. `field`: The field you’re extracting data from, `message` in this case. - 3. `pattern`: The pattern of the elements in your log data. The pattern varies depending on your log format. `%{@timestamp}`, `%{log.level}`, `%{host.ip}`, and `%{{message}}` are common [ECS](asciidocalypse://docs/ecs/docs/reference/ecs/index.md) fields. This pattern would match a log file in this format: `2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected.` - -6. Click **Create pipeline**. -7. Save and deploy your integration. - - -## Correlate logs [correlate-plaintext-logs] - -Correlate your application logs with trace events to: - -* view the context of a log and the parameters provided by a user -* view all logs belonging to a particular trace -* easily move between logs and traces when debugging application issues - -Log correlation works on two levels: - -* at service level: annotation with `service.name`, `service.version`, and `service.environment` allow you to link logs with APM services -* at trace level: annotation with `trace.id` and `transaction.id` allow you to link logs with traces - -Learn about correlating plaintext logs in the agent-specific ingestion guides: - -* [Go](asciidocalypse://docs/apm-agent-go/docs/reference/ingestion-tools/apm-agent-go/logs.md) -* [Java](asciidocalypse://docs/apm-agent-java/docs/reference/ingestion-tools/apm-agent-java/logs.md#log-correlation-ids) -* [.NET](asciidocalypse://docs/apm-agent-dotnet/docs/reference/ingestion-tools/apm-agent-dotnet/logs.md) -* [Node.js](asciidocalypse://docs/apm-agent-nodejs/docs/reference/ingestion-tools/apm-agent-nodejs/logs.md) -* [Python](asciidocalypse://docs/apm-agent-python/docs/reference/ingestion-tools/apm-agent-python/logs.md#log-correlation-ids) -* [Ruby](asciidocalypse://docs/apm-agent-ruby/docs/reference/ingestion-tools/apm-agent-ruby/logs.md) - - -## View logs [view-plaintext-logs] - -To view logs ingested by {{filebeat}}, go to **Discover** from the main menu and create a data view based on the `filebeat-*` index pattern. Refer to [Create a data view](../../../explore-analyze/find-and-organize/data-views.md) for more information. 
- -To view logs ingested by {{agent}}, go to Logs Explorer by clicking **Explorer** under **Logs** from the {{observability}} main menu. Refer to the [Filter and aggregate logs](../../../solutions/observability/logs/filter-aggregate-logs.md) documentation for more information on viewing and filtering your logs in {{kib}}. diff --git a/raw-migrated-files/toc.yml b/raw-migrated-files/toc.yml index 0db5203b4a..64623f0a78 100644 --- a/raw-migrated-files/toc.yml +++ b/raw-migrated-files/toc.yml @@ -463,7 +463,6 @@ toc: - file: observability-docs/observability/index.md - file: observability-docs/observability/infrastructure-and-host-monitoring-intro.md - file: observability-docs/observability/logs-checklist.md - - file: observability-docs/observability/logs-plaintext.md - file: observability-docs/observability/logs-stream.md - file: observability-docs/observability/monitor-datasets.md - file: observability-docs/observability/obs-ai-assistant.md diff --git a/solutions/observability/logs/plaintext-application-logs.md b/solutions/observability/logs/plaintext-application-logs.md index de5f34d311..724a23a563 100644 --- a/solutions/observability/logs/plaintext-application-logs.md +++ b/solutions/observability/logs/plaintext-application-logs.md @@ -4,13 +4,359 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/observability-plaintext-application-logs.html --- -# Plaintext application logs +# Plaintext application logs [logs-plaintext] + +Ingest and parse plaintext logs, including existing logs, from any programming language or framework without modifying your application or its configuration. + +Plaintext logs require some additional setup that structured logs do not require: + +* To search, filter, and aggregate effectively, you need to parse plaintext logs using an ingest pipeline to extract structured fields. Parsing is based on log format, so you might have to maintain different settings for different applications. +* To [correlate plaintext logs](../../../solutions/observability/logs/plaintext-application-logs.md#correlate-plaintext-logs), you need to inject IDs into log messages and parse them using an ingest pipeline. + +To ingest, parse, and correlate plaintext logs: + +1. Ingest plaintext logs with [{{filebeat}}](../../../solutions/observability/logs/plaintext-application-logs.md#ingest-plaintext-logs-with-filebeat) or [{{agent}}](../../../solutions/observability/logs/plaintext-application-logs.md#ingest-plaintext-logs-with-the-agent) and parse them before indexing with an ingest pipeline. +2. [Correlate plaintext logs with an {{apm-agent}}.](../../../solutions/observability/logs/plaintext-application-logs.md#correlate-plaintext-logs) +3. [View logs in Logs Explorer](../../../solutions/observability/logs/plaintext-application-logs.md#view-plaintext-logs) + + +## Ingest logs [ingest-plaintext-logs] + +Send application logs to {{es}} using one of the following shipping tools: + +* [{{filebeat}}](../../../solutions/observability/logs/plaintext-application-logs.md#ingest-plaintext-logs-with-filebeat) A lightweight data shipper that sends log data to {{es}}. +* [{{agent}}](../../../solutions/observability/logs/plaintext-application-logs.md#ingest-plaintext-logs-with-the-agent) A single agent for logs, metrics, security data, and threat prevention. Combined with Fleet, you can centrally manage {{agent}} policies and lifecycles directly from {{kib}}. 
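+
+Whichever shipper you choose, the parsing goal is the same: turn an unstructured line into ECS fields you can filter and aggregate on. As a rough sketch, using the same example format shown later on this page, a dissect pattern such as `%{@timestamp} %{log.level} %{host.ip} %{message}` splits the following line into `@timestamp`, `log.level`, `host.ip`, and `message` fields:
+
+```txt
+2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected.
+```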
+
+
+### Ingest logs with {{filebeat}} [ingest-plaintext-logs-with-filebeat]
+
+Follow these steps to ingest application logs with {{filebeat}}.
+
+
+#### Step 1: Install {{filebeat}} [step-1-plaintext-install-filebeat]
+
+Install {{filebeat}} on the server you want to monitor by running the commands that align with your system:
+
+:::::::{tab-set}
+
+::::::{tab-item} DEB
+```sh
+curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-amd64.deb
+sudo dpkg -i filebeat-9.0.0-beta1-amd64.deb
+```
+::::::
+
+::::::{tab-item} RPM
+```sh
+curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-x86_64.rpm
+sudo rpm -vi filebeat-9.0.0-beta1-x86_64.rpm
+```
+::::::
+
+::::::{tab-item} macOS
+```sh
+curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-darwin-x86_64.tar.gz
+tar xzvf filebeat-9.0.0-beta1-darwin-x86_64.tar.gz
+```
+::::::
+
+::::::{tab-item} Linux
+```sh
+curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-linux-x86_64.tar.gz
+tar xzvf filebeat-9.0.0-beta1-linux-x86_64.tar.gz
+```
+::::::
+
+::::::{tab-item} Windows
+1. Download the {{filebeat}} Windows zip file: [https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-windows-x86_64.zip](https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.0-beta1-windows-x86_64.zip).
+2. Extract the contents of the zip file into `C:\Program Files`.
+3. Rename the `filebeat-{{version}}-windows-x86_64` directory to `{{filebeat}}`.
+4. Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select **Run As Administrator**).
+5. From the PowerShell prompt, run the following commands to install {{filebeat}} as a Windows service:
+
+    ```powershell
+    PS > cd 'C:\Program Files\{filebeat}'
+    PS C:\Program Files\{filebeat}> .\install-service-filebeat.ps1
+    ```
+
+
+If script execution is disabled on your system, you need to set the execution policy for the current session to allow the script to run. For example: `PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-filebeat.ps1`.
+::::::
+
+:::::::
+
+#### Step 2: Connect to {{es}} [step-2-plaintext-connect-to-your-project]
+
+Connect to {{es}} using an API key to set up {{filebeat}}. Set the following information in the `filebeat.yml` file:
+
+```yaml
+output.elasticsearch:
+  hosts: ["your-projects-elasticsearch-endpoint"]
+  api_key: "id:api_key"
+```
+
+1. Set the `hosts` to your deployment’s {{es}} endpoint. Copy the {{es}} endpoint from **Help menu (![help icon](../../../images/observability-help-icon.png "")) → Connection details**. For example, `https://my-deployment.es.us-central1.gcp.cloud.es.io:443`.
+2. From **Developer tools**, run the following command to create an API key that grants `manage` permissions for the `cluster` and the `filebeat-*` indices:
+
+    ```console
+    POST /_security/api_key
+    {
+      "name": "filebeat_host001",
+      "role_descriptors": {
+        "filebeat_writer": {
+          "cluster": ["manage"],
+          "index": [
+            {
+              "names": ["filebeat-*"],
+              "privileges": ["manage", "create_doc"]
+            }
+          ]
+        }
+      }
+    }
+    ```
+
+    Refer to [Grant access using API keys](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/beats-api-keys.md) for more information.
+
+
+
+#### Step 3: Configure {{filebeat}} [step-3-plaintext-configure-filebeat]
+
+Add the following configuration to your `filebeat.yml` file to start collecting log data.
+
+```yaml
+filebeat.inputs:
+- type: filestream <1>
+  enabled: true
+  paths: /path/to/logs.log <2>
+```
+
+1. Reads lines from an active log file.
+2. Paths that you want {{filebeat}} to crawl and fetch logs from.
+
+
+
+#### Step 4: Set up and start {{filebeat}} [step-4-plaintext-set-up-and-start-filebeat]
+
+{{filebeat}} comes with predefined assets for parsing, indexing, and visualizing your data. To load these assets:
+
+From the {{filebeat}} installation directory, set the [index template](../../../manage-data/data-store/templates.md) by running the command that aligns with your system:
+
+:::::::{tab-set}
+
+::::::{tab-item} DEB
+```sh
+filebeat setup -e
+```
+::::::
+
+::::::{tab-item} RPM
+```sh
+filebeat setup -e
+```
+::::::
+
+::::::{tab-item} macOS
+```sh
+./filebeat setup -e
+```
+::::::
+
+::::::{tab-item} Linux
+```sh
+./filebeat setup -e
+```
+::::::
+
+::::::{tab-item} Windows
+```sh
+PS > .\filebeat.exe setup -e
+```
+::::::
+
+:::::::
+From the {{filebeat}} installation directory, start {{filebeat}} by running the command that aligns with your system:
+
+:::::::{tab-set}
+
+::::::{tab-item} DEB
+```sh
+sudo service filebeat start
+```
+
+::::{note}
+If you use an `init.d` script to start Filebeat, you can’t specify command line flags (see [Command reference](https://www.elastic.co/guide/en/beats/filebeat/master/command-line-options.html)). To specify flags, start Filebeat in the foreground.
+::::
+
+
+Also see [Filebeat and systemd](https://www.elastic.co/guide/en/beats/filebeat/master/running-with-systemd.html).
+::::::
+
+::::::{tab-item} RPM
+```sh
+sudo service filebeat start
+```
+
+::::{note}
+If you use an `init.d` script to start Filebeat, you can’t specify command line flags (see [Command reference](https://www.elastic.co/guide/en/beats/filebeat/master/command-line-options.html)). To specify flags, start Filebeat in the foreground.
+::::
+
+
+Also see [Filebeat and systemd](https://www.elastic.co/guide/en/beats/filebeat/master/running-with-systemd.html).
+::::::
+
+::::::{tab-item} macOS
+```sh
+./filebeat -e
+```
+::::::
+
+::::::{tab-item} Linux
+```sh
+./filebeat -e
+```
+::::::
+
+::::::{tab-item} Windows
+```sh
+PS C:\Program Files\filebeat> Start-Service filebeat
+```
+
+By default, Windows log files are stored in `C:\ProgramData\filebeat\Logs`.
+::::::
+
+:::::::
+
+#### Step 5: Parse logs with an ingest pipeline [step-5-plaintext-parse-logs-with-an-ingest-pipeline]
+
+Use an ingest pipeline to parse the contents of your logs into structured, [Elastic Common Schema (ECS)](asciidocalypse://docs/ecs/docs/reference/ecs/index.md)-compatible fields.
+
+Create an ingest pipeline that defines a [dissect processor](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/enrich-processor/dissect-processor.md) to extract structured ECS fields from your log messages. In your project, navigate to **Developer Tools** and use a command similar to the following example:
+
+```console
+PUT _ingest/pipeline/filebeat* <1>
+{
+  "description": "Extracts the timestamp log level and host ip",
+  "processors": [
+    {
+      "dissect": { <2>
+        "field": "message", <3>
+        "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}" <4>
+      }
+    }
+  ]
+}
+```
+
+1. `_ingest/pipeline/filebeat*`: The name of the pipeline. Update the pipeline name to match the name of your data stream. For more information, refer to [Data stream naming scheme](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/data-streams.md#data-streams-naming-scheme).
+2. 
`processors.dissect`: Adds a [dissect processor](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/enrich-processor/dissect-processor.md) to extract structured fields from your log message. +3. `field`: The field you’re extracting data from, `message` in this case. +4. `pattern`: The pattern of the elements in your log data. The pattern varies depending on your log format. `%{@timestamp}` is required. `%{log.level}`, `%{host.ip}`, and `%{{message}}` are common [ECS](asciidocalypse://docs/ecs/docs/reference/ecs/index.md) fields. This pattern would match a log file in this format: `2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected.` + + +Refer to [Extract structured fields](../../../solutions/observability/logs/parse-route-logs.md#logs-stream-parse) for more on using ingest pipelines to parse your log data. + +After creating your pipeline, specify the pipeline for filebeat in the `filebeat.yml` file: + +```yaml +output.elasticsearch: + hosts: ["your-projects-elasticsearch-endpoint"] + api_key: "id:api_key" + pipeline: "your-pipeline" <1> +``` + +1. Add the `pipeline` output and the name of your pipeline to the output. + + + +### Ingest logs with the {{agent}} [ingest-plaintext-logs-with-the-agent] + +Follow these steps to ingest and centrally manage your logs using {{agent}} and {{fleet}}. + + +#### Step 1: Add the custom logs integration to your project [step-1-plaintext-add-the-custom-logs-integration-to-your-project] + +To add the custom logs integration to your project: + +1. Find **Integrations** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). +2. Type `custom` in the search bar and select **Custom Logs**. +3. Click **Add Custom Logs**. +4. Click **Install {{agent}}** at the bottom of the page, and follow the instructions for your system to install the {{agent}}. +5. After installing the {{agent}}, configure the integration from the **Add Custom Logs integration** page. +6. Give your integration a meaningful name and description. +7. Add the **Log file path**. For example, `/var/log/your-logs.log`. +8. Give your agent policy a name. The agent policy defines the data your {{agent}} collects. +9. Save your integration to add it to your deployment. + + +#### Step 2: Add an ingest pipeline to your integration [step-2-plaintext-add-an-ingest-pipeline-to-your-integration] + +To aggregate or search for information in plaintext logs, use an ingest pipeline with your integration to parse the contents of your logs into structured, [Elastic Common Schema (ECS)](asciidocalypse://docs/ecs/docs/reference/ecs/index.md)-compatible fields. + +1. From the custom logs integration, select **Integration policies** tab. +2. Select the integration policy you created in the previous section. +3. Click **Change defaults → Advanced options**. +4. Under **Ingest pipelines**, click **Add custom pipeline**. +5. Create an ingest pipeline with a [dissect processor](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/enrich-processor/dissect-processor.md) to extract structured fields from your log messages. + + Click **Import processors** and add a similar JSON to the following example: + + ```JSON + { + "description": "Extracts the timestamp log level and host ip", + "processors": [ + { + "dissect": { <1> + "field": "message", <2> + "pattern": "%{@timestamp} %{log.level} %{host.ip} %{message}" <3> + } + } + ] + } + ``` + + 1. 
`processors.dissect`: Adds a [dissect processor](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/enrich-processor/dissect-processor.md) to extract structured fields from your log message. + 2. `field`: The field you’re extracting data from, `message` in this case. + 3. `pattern`: The pattern of the elements in your log data. The pattern varies depending on your log format. `%{@timestamp}`, `%{log.level}`, `%{host.ip}`, and `%{{message}}` are common [ECS](asciidocalypse://docs/ecs/docs/reference/ecs/index.md) fields. This pattern would match a log file in this format: `2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected.` + +6. Click **Create pipeline**. +7. Save and deploy your integration. + + +## Correlate logs [correlate-plaintext-logs] + +Correlate your application logs with trace events to: + +* view the context of a log and the parameters provided by a user +* view all logs belonging to a particular trace +* easily move between logs and traces when debugging application issues + +Log correlation works on two levels: + +* at service level: annotation with `service.name`, `service.version`, and `service.environment` allow you to link logs with APM services +* at trace level: annotation with `trace.id` and `transaction.id` allow you to link logs with traces + +Learn about correlating plaintext logs in the agent-specific ingestion guides: + +* [Go](asciidocalypse://docs/apm-agent-go/docs/reference/ingestion-tools/apm-agent-go/logs.md) +* [Java](asciidocalypse://docs/apm-agent-java/docs/reference/ingestion-tools/apm-agent-java/logs.md#log-correlation-ids) +* [.NET](asciidocalypse://docs/apm-agent-dotnet/docs/reference/ingestion-tools/apm-agent-dotnet/logs.md) +* [Node.js](asciidocalypse://docs/apm-agent-nodejs/docs/reference/ingestion-tools/apm-agent-nodejs/logs.md) +* [Python](asciidocalypse://docs/apm-agent-python/docs/reference/ingestion-tools/apm-agent-python/logs.md#log-correlation-ids) +* [Ruby](asciidocalypse://docs/apm-agent-ruby/docs/reference/ingestion-tools/apm-agent-ruby/logs.md) + + +## View logs [view-plaintext-logs] + +To view logs ingested by {{filebeat}}, go to **Discover** from the main menu and create a data view based on the `filebeat-*` index pattern. Refer to [Create a data view](../../../explore-analyze/find-and-organize/data-views.md) for more information. + +To view logs ingested by {{agent}}, go to Logs Explorer by clicking **Explorer** under **Logs** from the {{observability}} main menu. Refer to the [Filter and aggregate logs](../../../solutions/observability/logs/filter-aggregate-logs.md) documentation for more information on viewing and filtering your logs in {{kib}}. + % What needs to be done: Align serverless/stateful % Use migrated content from existing pages that map to this page: -% - [ ] ./raw-migrated-files/observability-docs/observability/logs-plaintext.md % - [ ] ./raw-migrated-files/docs-content/serverless/observability-plaintext-application-logs.md % Internal links rely on the following IDs being on this page (e.g. 
as a heading ID, paragraph ID, etc): From 1b0c1bdaa1214fc30108ff58e6eb7fcb2acba4ec Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Thu, 20 Feb 2025 15:17:24 -0600 Subject: [PATCH 08/23] add stream logs --- .../observability-stream-log-files.md | 477 ------------------ .../observability/logs-stream.md | 369 -------------- raw-migrated-files/toc.yml | 2 - .../observability/logs/stream-any-log-file.md | 390 +++++++++++++- 4 files changed, 379 insertions(+), 859 deletions(-) delete mode 100644 raw-migrated-files/docs-content/serverless/observability-stream-log-files.md delete mode 100644 raw-migrated-files/observability-docs/observability/logs-stream.md diff --git a/raw-migrated-files/docs-content/serverless/observability-stream-log-files.md b/raw-migrated-files/docs-content/serverless/observability-stream-log-files.md deleted file mode 100644 index a334c67a80..0000000000 --- a/raw-migrated-files/docs-content/serverless/observability-stream-log-files.md +++ /dev/null @@ -1,477 +0,0 @@ -# Stream any log file [observability-stream-log-files] - -::::{admonition} Required role -:class: note - -The **Admin** role or higher is required to onboard log data. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles). - -:::: - - -
-This guide shows you how to send a log file to your Observability project using a standalone {{agent}} and configure the {{agent}} and your data streams using the `elastic-agent.yml` file, and query your logs using the data streams you’ve set up. - -The quickest way to get started is using the **Monitor hosts with {{agent}}** quickstart. Refer to the [quickstart documentation](../../../solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md) for more information. - -To install and configure the {{agent}} manually, refer to [Manually install and configure the standalone {{agent}}](../../../solutions/observability/logs/stream-any-log-file.md#manually-install-agent-logs). - - -## Manually install and configure the standalone {{agent}} [manually-install-agent-logs] - -If you’re not using the guided instructions, follow these steps to manually install and configure your the {{agent}}. - - -### Step 1: Download and extract the {{agent}} installation package [observability-stream-log-files-step-1-download-and-extract-the-agent-installation-package] - -On your host, download and extract the installation package that corresponds with your system: - -:::::::{tab-set} - -::::::{tab-item} macOS -```sh -curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-8.16.1-darwin-x86_64.tar.gz -tar xzvf elastic-agent-8.16.1-darwin-x86_64.tar.gz -``` -:::::: - -::::::{tab-item} Linux -```sh -curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-8.16.1-linux-x86_64.tar.gz -tar xzvf elastic-agent-8.16.1-linux-x86_64.tar.gz -``` -:::::: - -::::::{tab-item} Windows -```powershell -# PowerShell 5.0+ -wget https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-8.16.1-windows-x86_64.zip -OutFile elastic-agent-8.16.1-windows-x86_64.zip -Expand-Archive .\elastic-agent-8.16.1-windows-x86_64.zip -``` - -Or manually: - -1. Download the {{agent}} Windows zip file from the [download page](https://www.elastic.co/downloads/beats/elastic-agent). -2. Extract the contents of the zip file. -:::::: - -::::::{tab-item} DEB -::::{important} -To simplify upgrading to future versions of {{agent}}, we recommended that you use the tarball distribution instead of the DEB distribution. - -:::: - - -```sh -curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-8.16.1-amd64.deb -sudo dpkg -i elastic-agent-8.16.1-amd64.deb -``` -:::::: - -::::::{tab-item} RPM -::::{important} -To simplify upgrading to future versions of {{agent}}, we recommended that you use the tarball distribution instead of the RPM distribution. - -:::: - - -```sh -curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-8.16.1-x86_64.rpm -sudo rpm -vi elastic-agent-8.16.1-x86_64.rpm -``` -:::::: - -::::::: - -### Step 2: Install and start the {{agent}} [observability-stream-log-files-step-2-install-and-start-the-agent] - -After downloading and extracting the installation package, you’re ready to install the {{agent}}. From the agent directory, run the install command that corresponds with your system: - -::::{note} -On macOS, Linux (tar package), and Windows, run the `install` command to install and start {{agent}} as a managed service and start the service. The DEB and RPM packages include a service unit for Linux systems with systemd, For these systems, you must enable and start the service. 
- -:::: - - -:::::::{tab-set} - -::::::{tab-item} macOS -::::{tip} -You must run this command as the root user because some integrations require root privileges to collect sensitive data. - -:::: - - -```shell -sudo ./elastic-agent install -``` -:::::: - -::::::{tab-item} Linux -::::{tip} -You must run this command as the root user because some integrations require root privileges to collect sensitive data. - -:::: - - -```shell -sudo ./elastic-agent install -``` -:::::: - -::::::{tab-item} Windows -Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select **Run As Administrator**). - -From the PowerShell prompt, change to the directory where you installed {{agent}}, and run: - -```shell -.\elastic-agent.exe install -``` -:::::: - -::::::{tab-item} DEB -::::{tip} -You must run this command as the root user because some integrations require root privileges to collect sensitive data. - -:::: - - -```shell -sudo systemctl enable elastic-agent <1> -sudo systemctl start elastic-agent -``` - -1. The DEB package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. If you don’t have systemd, run `sudo service elastic-agent start`. -:::::: - -::::::{tab-item} RPM -::::{tip} -You must run this command as the root user because some integrations require root privileges to collect sensitive data. - -:::: - - -```shell -sudo systemctl enable elastic-agent <1> -sudo systemctl start elastic-agent -``` - -1. The RPM package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. If you don’t have systemd, run `sudo service elastic-agent start`. -:::::: - -::::::: -During installation, you’ll be prompted with some questions: - -1. When asked if you want to install the agent as a service, enter `Y`. -2. When asked if you want to enroll the agent in Fleet, enter `n`. - - -### Step 3: Configure the {{agent}} [observability-stream-log-files-step-3-configure-the-agent] - -After your agent is installed, configure it by updating the `elastic-agent.yml` file. - - -#### Locate your configuration file [observability-stream-log-files-locate-your-configuration-file] - -You’ll find the `elastic-agent.yml` in one of the following locations according to your system: - -:::::::{tab-set} - -::::::{tab-item} macOS -Main {{agent}} configuration file location: - -`/Library/Elastic/Agent/elastic-agent.yml` -:::::: - -::::::{tab-item} Linux -Main {{agent}} configuration file location: - -`/opt/Elastic/Agent/elastic-agent.yml` -:::::: - -::::::{tab-item} Windows -Main {{agent}} configuration file location: - -`C:\Program Files\Elastic\Agent\elastic-agent.yml` -:::::: - -::::::{tab-item} DEB -Main {{agent}} configuration file location: - -`/etc/elastic-agent/elastic-agent.yml` -:::::: - -::::::{tab-item} RPM -Main {{agent}} configuration file location: - -`/etc/elastic-agent/elastic-agent.yml` -:::::: - -::::::: - -#### Update your configuration file [observability-stream-log-files-update-your-configuration-file] - -Update the default configuration in the `elastic-agent.yml` file manually. 
It should look something like this: - -```yaml -outputs: - default: - type: elasticsearch - hosts: ':' - api_key: 'your-api-key' -inputs: - - id: your-log-id - type: filestream - streams: - - id: your-log-stream-id - data_stream: - dataset: example - paths: - - /var/log/your-logs.log -``` - -You need to set the values for the following fields: - -`hosts` -: Copy the {{es}} endpoint from your project’s page and add the port (the default port is `443`). For example, `https://my-deployment.es.us-central1.gcp.cloud.es.io:443`. - - If you’re following the guided instructions in your project, the {{es}} endpoint will be prepopulated in the configuration file. - - :::::{tip} - If you need to find your project’s {{es}} endpoint outside the guided instructions: - - 1. Go to the **Projects** page that lists all your projects. - 2. Click **Manage** next to the project you want to connect to. - 3. Click **View** next to *Endpoints*. - 4. Copy the *Elasticsearch endpoint*. - - :::{image} ../../../images/serverless-log-copy-es-endpoint.png - :alt: Copy a project's Elasticsearch endpoint - :class: screenshot - ::: - - ::::: - - -`api-key` -: Use an API key to grant the agent access to your project. The API key format should be `:`. - - If you’re following the guided instructions in your project, an API key will be autogenerated and will be prepopulated in the downloadable configuration file. - - If configuring the {{agent}} manually, create an API key: - - 1. Navigate to **Project settings** → **Management*** → ***API keys** and click **Create API key**. - 2. Select **Restrict privileges** and add the following JSON to give privileges for ingesting logs. - - ```json - { - "standalone_agent": { - "cluster": [ - "monitor" - ], - "indices": [ - { - "names": [ - "logs-*-*" - ], - "privileges": [ - "auto_configure", "create_doc" - ] - } - ] - } - } - ``` - - 3. You *must* set the API key to configure {{beats}}. Immediately after the API key is generated and while it is still being displayed, click the **Encoded** button next to the API key and select **Beats** from the list in the tooltip. Base64 encoded API keys are not currently supported in this configuration. - - :::{image} ../../../images/serverless-logs-stream-logs-api-key-beats.png - :alt: logs stream logs api key beats - :class: screenshot - ::: - - -`inputs.id` -: A unique identifier for your input. - -`type` -: The type of input. For collecting logs, set this to `filestream`. - -`streams.id` -: A unique identifier for your stream of log data. - -`data_stream.dataset` -: The name for your dataset data stream. Name this data stream anything that signifies the source of the data. In this configuration, the dataset is set to `example`. The default value is `generic`. - -`paths` -: The path to your log files. You can also use a pattern like `/var/log/your-logs.log*`. - - -#### Restart the {{agent}} [observability-stream-log-files-restart-the-agent] - -After updating your configuration file, you need to restart the {{agent}}. - -First, stop the {{agent}} and its related executables using the command that works with your system: - -:::::::{tab-set} - -::::::{tab-item} macOS -```shell -sudo launchctl unload /Library/LaunchDaemons/co.elastic.elastic-agent.plist -``` - -::::{note} -{{agent}} will restart automatically if the system is rebooted. - -:::: -:::::: - -::::::{tab-item} Linux -```shell -sudo service elastic-agent stop -``` - -::::{note} -{{agent}} will restart automatically if the system is rebooted. 
- -:::: -:::::: - -::::::{tab-item} Windows -```shell -Stop-Service Elastic Agent -``` - -If necessary, use Task Manager on Windows to stop {{agent}}. This will kill the `elastic-agent` process and any sub-processes it created (such as {{beats}}). - -::::{note} -{{agent}} will restart automatically if the system is rebooted. - -:::: -:::::: - -::::::{tab-item} DEB -The DEB package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. - -Use `systemctl` to stop the agent: - -```shell -sudo systemctl stop elastic-agent -``` - -Otherwise, use: - -```shell -sudo service elastic-agent stop -``` - -::::{note} -{{agent}} will restart automatically if the system is rebooted. - -:::: -:::::: - -::::::{tab-item} RPM -The RPM package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. - -Use `systemctl` to stop the agent: - -```shell -sudo systemctl stop elastic-agent -``` - -Otherwise, use: - -```shell -sudo service elastic-agent stop -``` - -::::{note} -{{agent}} will restart automatically if the system is rebooted. - -:::: -:::::: - -::::::: -Next, restart the {{agent}} using the command that works with your system: - -:::::::{tab-set} - -::::::{tab-item} macOS -```shell -sudo launchctl load /Library/LaunchDaemons/co.elastic.elastic-agent.plist -``` -:::::: - -::::::{tab-item} Linux -```shell -sudo service elastic-agent start -``` -:::::: - -::::::{tab-item} Windows -```shell -Start-Service Elastic Agent -``` -:::::: - -::::::{tab-item} DEB -The DEB package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. - -Use `systemctl` to start the agent: - -```shell -sudo systemctl start elastic-agent -``` - -Otherwise, use: - -```shell -sudo service elastic-agent start -``` -:::::: - -::::::{tab-item} RPM -The RPM package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. - -Use `systemctl` to start the agent: - -```shell -sudo systemctl start elastic-agent -``` - -Otherwise, use: - -```shell -sudo service elastic-agent start -``` -:::::: - -::::::: - -## Troubleshoot your {{agent}} configuration [observability-stream-log-files-troubleshoot-your-agent-configuration] - -If you’re not seeing your log files in your project, verify the following in the `elastic-agent.yml` file: - -* The path to your logs file under `paths` is correct. -* Your API key is in `:` format. If not, your API key may be in an unsupported format, and you’ll need to create an API key in **Beats** format. - -If you’re still running into issues, refer to [{{agent}} troubleshooting](../../../troubleshoot/ingest/fleet/common-problems.md) and [Configure standalone Elastic Agents](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/configure-standalone-elastic-agents.md). - - -## Next steps [observability-stream-log-files-next-steps] - -After you have your agent configured and are streaming log data to your project: - -* Refer to the [Parse and organize logs](../../../solutions/observability/logs/parse-route-logs.md) documentation for information on extracting structured fields from your log data, rerouting your logs to different data streams, and filtering and aggregating your log data. 
-* Refer to the [Filter and aggregate logs](../../../solutions/observability/logs/filter-aggregate-logs.md) documentation for information on filtering and aggregating your log data to find specific information, gain insight, and monitor your systems more efficiently. diff --git a/raw-migrated-files/observability-docs/observability/logs-stream.md b/raw-migrated-files/observability-docs/observability/logs-stream.md deleted file mode 100644 index 10c95f42b9..0000000000 --- a/raw-migrated-files/observability-docs/observability/logs-stream.md +++ /dev/null @@ -1,369 +0,0 @@ -# Stream any log file [logs-stream] - -This guide shows you how to manually configure a standalone {{agent}} to send your log data to {{es}} using the `elastic-agent.yml` file. - -If you don’t want to manually configure the {{agent}}, you can use the **Monitor hosts with {{agent}}** quickstart. Refer to the [quickstart documentation](../../../solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md) for more information. - -Continue with this guide for instructions on manual configuration. - - -## Prerequisites [logs-stream-prereq] - -To follow the steps in this guide, you need an {{stack}} deployment that includes: - -* {{es}} for storing and searching data -* {{kib}} for visualizing and managing data -* Kibana user with `All` privileges on {{fleet}} and Integrations. Since many Integrations assets are shared across spaces, users need the Kibana privileges in all spaces. -* Integrations Server (included by default in every {{ess}} deployment) - -To get started quickly, spin up a deployment of our hosted {{ess}}. The {{ess}} is available on AWS, GCP, and Azure. [Try it out for free](https://cloud.elastic.co/registration?page=docs&placement=docs-body). - - -## Install and configure the standalone {{agent}} [logs-stream-install-config-agent] - -Complete these steps to install and configure the standalone {{agent}} and send your log data to {{es}}: - -1. [Download and extract the {{agent}} installation package.](../../../solutions/observability/logs/stream-any-log-file.md#logs-stream-extract-agent) -2. [Install and start the {{agent}}.](../../../solutions/observability/logs/stream-any-log-file.md#logs-stream-install-agent) -3. [Configure the {{agent}}.](../../../solutions/observability/logs/stream-any-log-file.md#logs-stream-agent-config) - - -### Step 1: Download and extract the {{agent}} installation package [logs-stream-extract-agent] - -On your host, download and extract the installation package that corresponds with your system: - -:::::::{tab-set} - -::::::{tab-item} macOS -Version 9.0.0-beta1 of {{agent}} has not yet been released. -:::::: - -::::::{tab-item} Linux -Version 9.0.0-beta1 of {{agent}} has not yet been released. -:::::: - -::::::{tab-item} Windows -Version 9.0.0-beta1 of {{agent}} has not yet been released. -:::::: - -::::::{tab-item} DEB -Version 9.0.0-beta1 of {{agent}} has not yet been released. -:::::: - -::::::{tab-item} RPM -Version 9.0.0-beta1 of {{agent}} has not yet been released. -:::::: - -::::::: - -### Step 2: Install and start the {{agent}} [logs-stream-install-agent] - -After downloading and extracting the installation package, you’re ready to install the {{agent}}. From the agent directory, run the install command that corresponds with your system: - -::::{note} -On macOS, Linux (tar package), and Windows, run the `install` command to install and start {{agent}} as a managed service and start the service. 
The DEB and RPM packages include a service unit for Linux systems with systemd, For these systems, you must enable and start the service. -:::: - - -:::::::{tab-set} - -::::::{tab-item} macOS -::::{tip} -You must run this command as the root user because some integrations require root privileges to collect sensitive data. -:::: - - -```shell -sudo ./elastic-agent install -``` -:::::: - -::::::{tab-item} Linux -::::{tip} -You must run this command as the root user because some integrations require root privileges to collect sensitive data. -:::: - - -```shell -sudo ./elastic-agent install -``` -:::::: - -::::::{tab-item} Windows -Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select **Run As Administrator**). - -From the PowerShell prompt, change to the directory where you installed {{agent}}, and run: - -```shell -.\elastic-agent.exe install -``` -:::::: - -::::::{tab-item} DEB -::::{tip} -You must run this command as the root user because some integrations require root privileges to collect sensitive data. -:::: - - -```shell -sudo systemctl enable elastic-agent <1> -sudo systemctl start elastic-agent -``` - -1. The DEB package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. If you don’t have systemd, run `sudo service elastic-agent start`. -:::::: - -::::::{tab-item} RPM -::::{tip} -You must run this command as the root user because some integrations require root privileges to collect sensitive data. -:::: - - -```shell -sudo systemctl enable elastic-agent <1> -sudo systemctl start elastic-agent -``` - -1. The RPM package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. If you don’t have systemd, run `sudo service elastic-agent start`. -:::::: - -::::::: -During installation, you’re prompted with some questions: - -1. When asked if you want to install the agent as a service, enter `Y`. -2. When asked if you want to enroll the agent in Fleet, enter `n`. - - -### Step 3: Configure the {{agent}} [logs-stream-agent-config] - -With your agent installed, configure it by updating the `elastic-agent.yml` file. - - -#### Locate your configuration file [logs-stream-yml-location] - -After installing the agent, you’ll find the `elastic-agent.yml` in one of the following locations according to your system: - -:::::::{tab-set} - -::::::{tab-item} macOS -Main {{agent}} configuration file location: - -`/Library/Elastic/Agent/elastic-agent.yml` -:::::: - -::::::{tab-item} Linux -Main {{agent}} configuration file location: - -`/opt/Elastic/Agent/elastic-agent.yml` -:::::: - -::::::{tab-item} Windows -Main {{agent}} configuration file location: - -`C:\Program Files\Elastic\Agent\elastic-agent.yml` -:::::: - -::::::{tab-item} DEB -Main {{agent}} configuration file location: - -`/etc/elastic-agent/elastic-agent.yml` -:::::: - -::::::{tab-item} RPM -Main {{agent}} configuration file location: - -`/etc/elastic-agent/elastic-agent.yml` -:::::: - -::::::: - -#### Update your configuration file [logs-stream-example-config] - -The following is an example of a standalone {{agent}} configuration. 
To configure your {{agent}}, replace the contents of the `elastic-agent.yml` file with this configuration: - -```yaml -outputs: - default: - type: elasticsearch - hosts: ':' - api_key: 'your-api-key' -inputs: - - id: your-log-id - type: filestream - streams: - - id: your-log-stream-id - data_stream: - dataset: example - paths: - - /var/log/your-logs.log -``` - -Next, set the values for these fields: - -* `hosts` – Copy the {{es}} endpoint from **Help menu (![help icon](../../../images/observability-help-icon.png "")) → Connection details**. For example, `https://my-deployment.es.us-central1.gcp.cloud.es.io:443`. -* `api-key` – Use an API key to grant the agent access to {{es}}. To create an API key for your agent, refer to the [Create API keys for standalone agents](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/grant-access-to-elasticsearch.md#create-api-key-standalone-agent) documentation. - - ::::{note} - The API key format should be `:`. Make sure you selected **Beats** when you created your API key. Base64 encoded API keys are not currently supported in this configuration. - :::: - -* `inputs.id` – A unique identifier for your input. -* `type` – The type of input. For collecting logs, set this to `filestream`. -* `streams.id` – A unique identifier for your stream of log data. -* `data_stream.dataset` – The name for your dataset data stream. Name this data stream anything that signifies the source of the data. In this configuration, the dataset is set to `example`. The default value is `generic`. -* `paths` – The path to your log files. You can also use a pattern like `/var/log/your-logs.log*`. - - -#### Restart the {{agent}} [logs-stream-restart-agent] - -After updating your configuration file, you need to restart the {{agent}}: - -First, stop the {{agent}} and its related executables using the command that works with your system: - -:::::::{tab-set} - -::::::{tab-item} macOS -```shell -sudo launchctl unload /Library/LaunchDaemons/co.elastic.elastic-agent.plist -``` - -::::{note} -{{agent}} will restart automatically if the system is rebooted. -:::: -:::::: - -::::::{tab-item} Linux -```shell -sudo service elastic-agent stop -``` - -::::{note} -{{agent}} will restart automatically if the system is rebooted. -:::: -:::::: - -::::::{tab-item} Windows -```shell -Stop-Service Elastic Agent -``` - -If necessary, use Task Manager on Windows to stop {{agent}}. This will kill the `elastic-agent` process and any sub-processes it created (such as {{beats}}). - -::::{note} -{{agent}} will restart automatically if the system is rebooted. -:::: -:::::: - -::::::{tab-item} DEB -The DEB package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. - -Use `systemctl` to stop the agent: - -```shell -sudo systemctl stop elastic-agent -``` - -Otherwise, use: - -```shell -sudo service elastic-agent stop -``` - -::::{note} -{{agent}} will restart automatically if the system is rebooted. -:::: -:::::: - -::::::{tab-item} RPM -The RPM package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. - -Use `systemctl` to stop the agent: - -```shell -sudo systemctl stop elastic-agent -``` - -Otherwise, use: - -```shell -sudo service elastic-agent stop -``` - -::::{note} -{{agent}} will restart automatically if the system is rebooted. 
-:::: -:::::: - -::::::: -Next, restart the {{agent}} using the command that works with your system: - -:::::::{tab-set} - -::::::{tab-item} macOS -```shell -sudo launchctl load /Library/LaunchDaemons/co.elastic.elastic-agent.plist -``` -:::::: - -::::::{tab-item} Linux -```shell -sudo service elastic-agent start -``` -:::::: - -::::::{tab-item} Windows -```shell -Start-Service Elastic Agent -``` -:::::: - -::::::{tab-item} DEB -The DEB package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. - -Use `systemctl` to start the agent: - -```shell -sudo systemctl start elastic-agent -``` - -Otherwise, use: - -```shell -sudo service elastic-agent start -``` -:::::: - -::::::{tab-item} RPM -The RPM package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. - -Use `systemctl` to start the agent: - -```shell -sudo systemctl start elastic-agent -``` - -Otherwise, use: - -```shell -sudo service elastic-agent start -``` -:::::: - -::::::: - -## Troubleshoot your {{agent}} configuration [logs-stream-troubleshooting] - -If you’re not seeing your log files in {{kib}}, verify the following in the `elastic-agent.yml` file: - -* The path to your logs file under `paths` is correct. -* Your API key is in `:` format. If not, your API key may be in an unsupported format, and you’ll need to create an API key in **Beats** format. - -If you’re still running into issues, see [{{agent}} troubleshooting](../../../troubleshoot/ingest/fleet/common-problems.md) and [Configure standalone Elastic Agents](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/configure-standalone-elastic-agents.md). - - -## Next steps [logs-stream-next-steps] - -After you have your agent configured and are streaming log data to {{es}}: - -* Refer to the [Parse and organize logs](../../../solutions/observability/logs/parse-route-logs.md) documentation for information on extracting structured fields from your log data, rerouting your logs to different data streams, and filtering and aggregating your log data. -* Refer to the [Filter and aggregate logs](../../../solutions/observability/logs/filter-aggregate-logs.md) documentation for information on filtering and aggregating your log data to find specific information, gain insight, and monitor your systems more efficiently. 
diff --git a/raw-migrated-files/toc.yml b/raw-migrated-files/toc.yml index 64623f0a78..93dc3b8a39 100644 --- a/raw-migrated-files/toc.yml +++ b/raw-migrated-files/toc.yml @@ -229,7 +229,6 @@ toc: - file: docs-content/serverless/observability-log-monitoring.md - file: docs-content/serverless/observability-monitor-datasets.md - file: docs-content/serverless/observability-plaintext-application-logs.md - - file: docs-content/serverless/observability-stream-log-files.md - file: docs-content/serverless/project-and-management-settings.md - file: docs-content/serverless/project-setting-data.md - file: docs-content/serverless/project-settings-alerts.md @@ -463,7 +462,6 @@ toc: - file: observability-docs/observability/index.md - file: observability-docs/observability/infrastructure-and-host-monitoring-intro.md - file: observability-docs/observability/logs-checklist.md - - file: observability-docs/observability/logs-stream.md - file: observability-docs/observability/monitor-datasets.md - file: observability-docs/observability/obs-ai-assistant.md - file: observability-docs/observability/observability-get-started.md diff --git a/solutions/observability/logs/stream-any-log-file.md b/solutions/observability/logs/stream-any-log-file.md index 90b2ad9161..07a0eb465a 100644 --- a/solutions/observability/logs/stream-any-log-file.md +++ b/solutions/observability/logs/stream-any-log-file.md @@ -4,23 +4,391 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/observability-stream-log-files.html --- -# Stream any log file +# Stream any log file [logs-stream] -% What needs to be done: Align serverless/stateful +This guide shows you how to manually configure a standalone {{agent}} to send your log data to {{es}} using the `elastic-agent.yml` file. -% Use migrated content from existing pages that map to this page: +To get started quickly without manually configuring the {{agent}}, you can use the **Monitor hosts with {{agent}}** quickstart. Refer to the [quickstart documentation](../../../solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md) for more information. -% - [ ] ./raw-migrated-files/observability-docs/observability/logs-stream.md -% - [ ] ./raw-migrated-files/docs-content/serverless/observability-stream-log-files.md +Continue with this guide for instructions on manual configuration. -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): -$$$logs-stream-agent-config$$$ +## Prerequisites [logs-stream-prereq] -$$$observability-stream-log-files-step-3-configure-the-agent$$$ +::::{tab-set} +:group: stack-serverless -$$$logs-stream-extract-agent$$$ +:::{tab-item} Elastic Stack v9 +:sync: stack -$$$logs-stream-install-agent$$$ +To follow the steps in this guide, you need an {{stack}} deployment that includes: -$$$manually-install-agent-logs$$$ \ No newline at end of file +* {{es}} for storing and searching data +* {{kib}} for visualizing and managing data +* Kibana user with `All` privileges on {{fleet}} and Integrations. Since many Integrations assets are shared across spaces, users need the Kibana privileges in all spaces. +* Integrations Server (included by default in every {{ess}} deployment) + +To get started quickly, spin up a deployment of our hosted {{ess}}. The {{ess}} is available on AWS, GCP, and Azure. [Try it out for free](https://cloud.elastic.co/registration?page=docs&placement=docs-body). 
+ + +::: + +:::{tab-item} Serverless +:sync: serverless + +The **Admin** role or higher is required to onboard log data. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles). + +::: + +:::: + +## Install and configure the standalone {{agent}} [logs-stream-install-config-agent] + +Complete these steps to install and configure the standalone {{agent}} and send your log data to {{es}}: + +1. [Download and extract the {{agent}} installation package.](../../../solutions/observability/logs/stream-any-log-file.md#logs-stream-extract-agent) +2. [Install and start the {{agent}}.](../../../solutions/observability/logs/stream-any-log-file.md#logs-stream-install-agent) +3. [Configure the {{agent}}.](../../../solutions/observability/logs/stream-any-log-file.md#logs-stream-agent-config) + + +### Step 1: Download and extract the {{agent}} installation package [logs-stream-extract-agent] + +On your host, download and extract the installation package that corresponds with your system: + +% Stateful and Serverless Need to fix these tabs. + +:::::::{tab-set} + +::::::{tab-item} macOS +Version 9.0.0-beta1 of {{agent}} has not yet been released. +:::::: + +::::::{tab-item} Linux +Version 9.0.0-beta1 of {{agent}} has not yet been released. +:::::: + +::::::{tab-item} Windows +Version 9.0.0-beta1 of {{agent}} has not yet been released. +:::::: + +::::::{tab-item} DEB +Version 9.0.0-beta1 of {{agent}} has not yet been released. +:::::: + +::::::{tab-item} RPM +Version 9.0.0-beta1 of {{agent}} has not yet been released. +:::::: + +::::::: + +### Step 2: Install and start the {{agent}} [logs-stream-install-agent] + +After downloading and extracting the installation package, you’re ready to install the {{agent}}. From the agent directory, run the install command that corresponds with your system: + +::::{note} +On macOS, Linux (tar package), and Windows, run the `install` command to install and start {{agent}} as a managed service and start the service. The DEB and RPM packages include a service unit for Linux systems with systemd, For these systems, you must enable and start the service. +:::: + + +:::::::{tab-set} + +::::::{tab-item} macOS +::::{tip} +You must run this command as the root user because some integrations require root privileges to collect sensitive data. +:::: + + +```shell +sudo ./elastic-agent install +``` +:::::: + +::::::{tab-item} Linux +::::{tip} +You must run this command as the root user because some integrations require root privileges to collect sensitive data. +:::: + + +```shell +sudo ./elastic-agent install +``` +:::::: + +::::::{tab-item} Windows +Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select **Run As Administrator**). + +From the PowerShell prompt, change to the directory where you installed {{agent}}, and run: + +```shell +.\elastic-agent.exe install +``` +:::::: + +::::::{tab-item} DEB +::::{tip} +You must run this command as the root user because some integrations require root privileges to collect sensitive data. +:::: + + +```shell +sudo systemctl enable elastic-agent <1> +sudo systemctl start elastic-agent +``` + +1. The DEB package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. If you don’t have systemd, run `sudo service elastic-agent start`. 
+:::::: + +::::::{tab-item} RPM +::::{tip} +You must run this command as the root user because some integrations require root privileges to collect sensitive data. +:::: + + +```shell +sudo systemctl enable elastic-agent <1> +sudo systemctl start elastic-agent +``` + +1. The RPM package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. If you don’t have systemd, run `sudo service elastic-agent start`. +:::::: + +::::::: +During installation, you’re prompted with some questions: + +1. When asked if you want to install the agent as a service, enter `Y`. +2. When asked if you want to enroll the agent in Fleet, enter `n`. + + +### Step 3: Configure the {{agent}} [logs-stream-agent-config] + +With your agent installed, configure it by updating the `elastic-agent.yml` file. + + +#### Locate your configuration file [logs-stream-yml-location] + +After installing the agent, you’ll find the `elastic-agent.yml` in one of the following locations according to your system: + +:::::::{tab-set} + +::::::{tab-item} macOS +Main {{agent}} configuration file location: + +`/Library/Elastic/Agent/elastic-agent.yml` +:::::: + +::::::{tab-item} Linux +Main {{agent}} configuration file location: + +`/opt/Elastic/Agent/elastic-agent.yml` +:::::: + +::::::{tab-item} Windows +Main {{agent}} configuration file location: + +`C:\Program Files\Elastic\Agent\elastic-agent.yml` +:::::: + +::::::{tab-item} DEB +Main {{agent}} configuration file location: + +`/etc/elastic-agent/elastic-agent.yml` +:::::: + +::::::{tab-item} RPM +Main {{agent}} configuration file location: + +`/etc/elastic-agent/elastic-agent.yml` +:::::: + +::::::: + +#### Update your configuration file [logs-stream-example-config] + +The following is an example of a standalone {{agent}} configuration. To configure your {{agent}}, replace the contents of the `elastic-agent.yml` file with this configuration: + +```yaml +outputs: + default: + type: elasticsearch + hosts: ':' + api_key: 'your-api-key' +inputs: + - id: your-log-id + type: filestream + streams: + - id: your-log-stream-id + data_stream: + dataset: example + paths: + - /var/log/your-logs.log +``` + +Next, set the values for these fields: + +* `hosts` – Copy the {{es}} endpoint from **Help menu (![help icon](../../../images/observability-help-icon.png "")) → Connection details**. For example, `https://my-deployment.es.us-central1.gcp.cloud.es.io:443`. +* `api-key` – Use an API key to grant the agent access to {{es}}. To create an API key for your agent, refer to the [Create API keys for standalone agents](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/grant-access-to-elasticsearch.md#create-api-key-standalone-agent) documentation. + + ::::{note} + The API key format should be `:`. Make sure you selected **Beats** when you created your API key. Base64 encoded API keys are not currently supported in this configuration. + :::: + +* `inputs.id` – A unique identifier for your input. +* `type` – The type of input. For collecting logs, set this to `filestream`. +* `streams.id` – A unique identifier for your stream of log data. +* `data_stream.dataset` – The name for your dataset data stream. Name this data stream anything that signifies the source of the data. In this configuration, the dataset is set to `example`. The default value is `generic`. +* `paths` – The path to your log files. You can also use a pattern like `/var/log/your-logs.log*`. 
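+
+Putting these values together, a completed file might look something like the following sketch. The endpoint, API key, identifiers, and path are placeholder values for illustration only; substitute the values from your own deployment:
+
+```yaml
+outputs:
+  default:
+    type: elasticsearch
+    hosts: 'https://my-deployment.es.us-central1.gcp.cloud.es.io:443'  # Elasticsearch endpoint plus port
+    api_key: 'your-api-key-id:your-api-key-secret'  # Beats format (id:key), not Base64 encoded
+inputs:
+  - id: my-app-logs  # unique identifier for this input
+    type: filestream
+    streams:
+      - id: my-app-log-stream  # unique identifier for this stream
+        data_stream:
+          dataset: example  # dataset name; defaults to `generic` if omitted
+        paths:
+          - /var/log/your-logs.log*  # glob patterns are supported
+```
+
+With these example values, the agent typically writes documents to a data stream named `logs-example-default`, following the data stream naming scheme.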
+ + +#### Restart the {{agent}} [logs-stream-restart-agent] + +After updating your configuration file, you need to restart the {{agent}}: + +First, stop the {{agent}} and its related executables using the command that works with your system: + +:::::::{tab-set} + +::::::{tab-item} macOS +```shell +sudo launchctl unload /Library/LaunchDaemons/co.elastic.elastic-agent.plist +``` + +::::{note} +{{agent}} will restart automatically if the system is rebooted. +:::: +:::::: + +::::::{tab-item} Linux +```shell +sudo service elastic-agent stop +``` + +::::{note} +{{agent}} will restart automatically if the system is rebooted. +:::: +:::::: + +::::::{tab-item} Windows +```shell +Stop-Service Elastic Agent +``` + +If necessary, use Task Manager on Windows to stop {{agent}}. This will kill the `elastic-agent` process and any sub-processes it created (such as {{beats}}). + +::::{note} +{{agent}} will restart automatically if the system is rebooted. +:::: +:::::: + +::::::{tab-item} DEB +The DEB package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. + +Use `systemctl` to stop the agent: + +```shell +sudo systemctl stop elastic-agent +``` + +Otherwise, use: + +```shell +sudo service elastic-agent stop +``` + +::::{note} +{{agent}} will restart automatically if the system is rebooted. +:::: +:::::: + +::::::{tab-item} RPM +The RPM package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. + +Use `systemctl` to stop the agent: + +```shell +sudo systemctl stop elastic-agent +``` + +Otherwise, use: + +```shell +sudo service elastic-agent stop +``` + +::::{note} +{{agent}} will restart automatically if the system is rebooted. +:::: +:::::: + +::::::: +Next, restart the {{agent}} using the command that works with your system: + +:::::::{tab-set} + +::::::{tab-item} macOS +```shell +sudo launchctl load /Library/LaunchDaemons/co.elastic.elastic-agent.plist +``` +:::::: + +::::::{tab-item} Linux +```shell +sudo service elastic-agent start +``` +:::::: + +::::::{tab-item} Windows +```shell +Start-Service Elastic Agent +``` +:::::: + +::::::{tab-item} DEB +The DEB package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. + +Use `systemctl` to start the agent: + +```shell +sudo systemctl start elastic-agent +``` + +Otherwise, use: + +```shell +sudo service elastic-agent start +``` +:::::: + +::::::{tab-item} RPM +The RPM package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. + +Use `systemctl` to start the agent: + +```shell +sudo systemctl start elastic-agent +``` + +Otherwise, use: + +```shell +sudo service elastic-agent start +``` +:::::: + +::::::: + +## Troubleshoot your {{agent}} configuration [logs-stream-troubleshooting] + +If you’re not seeing your log files in the UI, verify the following in the `elastic-agent.yml` file: + +* The path to your logs file under `paths` is correct. +* Your API key is in `:` format. If not, your API key may be in an unsupported format, and you’ll need to create an API key in **Beats** format. 
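+
+If you installed the agent as a service as described above, you can also check the agent's health and print the configuration it actually loaded from the command line. The following is a quick sketch for Linux and macOS hosts:
+
+```shell
+# Show whether the agent and its components are healthy
+sudo elastic-agent status
+
+# Print the configuration the running agent is using, including outputs and inputs
+sudo elastic-agent inspect
+```
+
+If the output doesn't reflect your latest edits to `elastic-agent.yml`, restart the agent as described in the previous section.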
+ +If you’re still running into issues, see [{{agent}} troubleshooting](../../../troubleshoot/ingest/fleet/common-problems.md) and [Configure standalone Elastic Agents](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/configure-standalone-elastic-agents.md). + + +## Next steps [logs-stream-next-steps] + +After you have your agent configured and are streaming log data to {{es}}: + +* Refer to the [Parse and organize logs](../../../solutions/observability/logs/parse-route-logs.md) documentation for information on extracting structured fields from your log data, rerouting your logs to different data streams, and filtering and aggregating your log data. +* Refer to the [Filter and aggregate logs](../../../solutions/observability/logs/filter-aggregate-logs.md) documentation for information on filtering and aggregating your log data to find specific information, gain insight, and monitor your systems more efficiently. \ No newline at end of file From 229398b8f41b8da5c586ae8515dd6c0a1dc108a3 Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Thu, 20 Feb 2025 15:28:48 -0600 Subject: [PATCH 09/23] add log correlation --- ...bservability-correlate-application-logs.md | 94 ------------------ .../observability/application-logs.md | 91 ------------------ raw-migrated-files/toc.yml | 2 - .../logs/stream-application-logs.md | 96 +++++++++++++++++-- 4 files changed, 88 insertions(+), 195 deletions(-) delete mode 100644 raw-migrated-files/docs-content/serverless/observability-correlate-application-logs.md delete mode 100644 raw-migrated-files/observability-docs/observability/application-logs.md diff --git a/raw-migrated-files/docs-content/serverless/observability-correlate-application-logs.md b/raw-migrated-files/docs-content/serverless/observability-correlate-application-logs.md deleted file mode 100644 index b1f4c61e6d..0000000000 --- a/raw-migrated-files/docs-content/serverless/observability-correlate-application-logs.md +++ /dev/null @@ -1,94 +0,0 @@ -# Stream application logs [observability-correlate-application-logs] - -Application logs provide valuable insight into events that have occurred within your services and applications. - -The format of your logs (structured or plaintext) influences your log ingestion strategy. - - -## Plaintext logs vs. structured Elastic Common Schema (ECS) logs [observability-correlate-application-logs-plaintext-logs-vs-structured-elastic-common-schema-ecs-logs] - -Logs are typically produced as either plaintext or structured. Plaintext logs contain only text and have no special formatting, for example: - -```txt -2019-08-06T12:09:12.375Z INFO:spring-petclinic: Tomcat started on port(s): 8080 (http) with context path, org.springframework.boot.web.embedded.tomcat.TomcatWebServer -2019-08-06T12:09:12.379Z INFO:spring-petclinic: Started PetClinicApplication in 7.095 seconds (JVM running for 9.082), org.springframework.samples.petclinic.PetClinicApplication -2019-08-06T14:08:40.199Z DEBUG:spring-petclinic: init find form, org.springframework.samples.petclinic.owner.OwnerController -``` - -Structured logs follow a predefined, repeatable pattern or structure. This structure is applied at write time — preventing the need for parsing at ingest time. The Elastic Common Schema (ECS) defines a common set of fields to use when structuring logs. This structure allows logs to be easily ingested, and provides the ability to correlate, search, and aggregate on individual fields within your logs. 
- -For example, the previous example logs might look like this when structured with ECS-compatible JSON: - -```json -{"@timestamp":"2019-08-06T12:09:12.375Z", "log.level": "INFO", "message":"Tomcat started on port(s): 8080 (http) with context path ''", "service.name":"spring-petclinic","process.thread.name":"restartedMain","log.logger":"org.springframework.boot.web.embedded.tomcat.TomcatWebServer"} -{"@timestamp":"2019-08-06T12:09:12.379Z", "log.level": "INFO", "message":"Started PetClinicApplication in 7.095 seconds (JVM running for 9.082)", "service.name":"spring-petclinic","process.thread.name":"restartedMain","log.logger":"org.springframework.samples.petclinic.PetClinicApplication"} -{"@timestamp":"2019-08-06T14:08:40.199Z", "log.level":"DEBUG", "message":"init find form", "service.name":"spring-petclinic","process.thread.name":"http-nio-8080-exec-8","log.logger":"org.springframework.samples.petclinic.owner.OwnerController","transaction.id":"28b7fb8d5aba51f1","trace.id":"2869b25b5469590610fea49ac04af7da"} -``` - - -## Ingesting logs [observability-correlate-application-logs-ingesting-logs] - -There are several ways to ingest application logs into your project. Your specific situation helps determine the method that’s right for you. - - -### Plaintext logs [observability-correlate-application-logs-plaintext-logs] - -With {{filebeat}} or {{agent}}, you can ingest plaintext logs, including existing logs, from any programming language or framework without modifying your application or its configuration. - -For plaintext logs to be useful, you need to use {{filebeat}} or {{agent}} to parse the log data. - -**![documentation icon](../../../images/serverless-documentation.svg "") Learn more in [Plaintext logs](../../../solutions/observability/logs/plaintext-application-logs.md)** - - -### ECS formatted logs [observability-correlate-application-logs-ecs-formatted-logs] - -Logs formatted in ECS don’t require manual parsing and the configuration can be reused across applications. They also include log correlation. You can format your logs in ECS by using ECS logging plugins or {{apm-agent}} ECS reformatting. - - -#### ECS logging plugins [observability-correlate-application-logs-ecs-logging-plugins] - -Add ECS logging plugins to your logging libraries to format your logs into ECS-compatible JSON that doesn’t require parsing. - -To use ECS logging, you need to modify your application and its log configuration. - -**![documentation icon](../../../images/serverless-documentation.svg "") Learn more in [ECS formatted logs](../../../solutions/observability/logs/ecs-formatted-application-logs.md)** - - -#### {{apm-agent}} log reformatting [observability-correlate-application-logs-apm-agent-log-reformatting] - -Some Elastic {{apm-agent}}s can automatically reformat application logs to ECS format without adding an ECS logger dependency or modifying the application. 
- -This feature is supported for the following {{apm-agent}}s: - -* [Ruby](asciidocalypse://docs/apm-agent-ruby/docs/reference/ingestion-tools/apm-agent-ruby/configuration.md#config-log-ecs-formatting) -* [Python](asciidocalypse://docs/apm-agent-python/docs/reference/ingestion-tools/apm-agent-python/logs.md#log-reformatting) -* [Java](asciidocalypse://docs/apm-agent-java/docs/reference/ingestion-tools/apm-agent-java/logs.md#log-reformatting) - -**![documentation icon](../../../images/serverless-documentation.svg "") Learn more in [ECS formatted logs](../../../solutions/observability/logs/ecs-formatted-application-logs.md)** - - -### {{apm-agent}} log sending [observability-correlate-application-logs-apm-agent-log-sending] - -Automatically capture and send logs directly to the managed intake service using the {{apm-agent}} without using {{filebeat}} or {{agent}}. - -Log sending is supported in the Java {{apm-agent}}. - -**![documentation icon](../../../images/serverless-documentation.svg "") Learn more in [{{apm-agent}} log sending](../../../solutions/observability/logs/apm-agent-log-sending.md)** - - -## Log correlation [observability-correlate-application-logs-log-correlation] - -Correlate your application logs with trace events to: - -* view the context of a log and the parameters provided by a user -* view all logs belonging to a particular trace -* easily move between logs and traces when debugging application issues - -Learn more about log correlation in the agent-specific ingestion guides: - -* [Go](asciidocalypse://docs/apm-agent-go/docs/reference/ingestion-tools/apm-agent-go/logs.md) -* [Java](asciidocalypse://docs/apm-agent-java/docs/reference/ingestion-tools/apm-agent-java/logs.md#log-correlation-ids) -* [.NET](asciidocalypse://docs/apm-agent-dotnet/docs/reference/ingestion-tools/apm-agent-dotnet/logs.md) -* [Node.js](asciidocalypse://docs/apm-agent-nodejs/docs/reference/ingestion-tools/apm-agent-nodejs/logs.md) -* [Python](asciidocalypse://docs/apm-agent-python/docs/reference/ingestion-tools/apm-agent-python/logs.md#log-correlation-ids) -* [Ruby](asciidocalypse://docs/apm-agent-ruby/docs/reference/ingestion-tools/apm-agent-ruby/logs.md) diff --git a/raw-migrated-files/observability-docs/observability/application-logs.md b/raw-migrated-files/observability-docs/observability/application-logs.md deleted file mode 100644 index 97ccc75c7f..0000000000 --- a/raw-migrated-files/observability-docs/observability/application-logs.md +++ /dev/null @@ -1,91 +0,0 @@ -# Stream application logs [application-logs] - -Application logs provide valuable insight into events that have occurred within your services and applications. - -The format of your logs (structured or plaintext) influences your log ingestion strategy. - - -## Plaintext logs vs. structured Elastic Common Schema (ECS) logs [plaintext-logs-vs-structured-elastic-common-schema-ecs-logs] - -Logs are typically produced as either plaintext or structured. 
Plaintext logs contain only text and have no special formatting, for example: - -```txt -2019-08-06T12:09:12.375Z INFO:spring-petclinic: Tomcat started on port(s): 8080 (http) with context path, org.springframework.boot.web.embedded.tomcat.TomcatWebServer -2019-08-06T12:09:12.379Z INFO:spring-petclinic: Started PetClinicApplication in 7.095 seconds (JVM running for 9.082), org.springframework.samples.petclinic.PetClinicApplication -2019-08-06T14:08:40.199Z DEBUG:spring-petclinic: init find form, org.springframework.samples.petclinic.owner.OwnerController -``` - -Structured logs follow a predefined, repeatable pattern or structure. This structure is applied at write time — preventing the need for parsing at ingest time. The Elastic Common Schema (ECS) defines a common set of fields to use when structuring logs. This structure allows logs to be easily ingested, and provides the ability to correlate, search, and aggregate on individual fields within your logs. - -For example, the previous example logs might look like this when structured with ECS-compatible JSON: - -```json -{"@timestamp":"2019-08-06T12:09:12.375Z", "log.level": "INFO", "message":"Tomcat started on port(s): 8080 (http) with context path ''", "service.name":"spring-petclinic","process.thread.name":"restartedMain","log.logger":"org.springframework.boot.web.embedded.tomcat.TomcatWebServer"} -{"@timestamp":"2019-08-06T12:09:12.379Z", "log.level": "INFO", "message":"Started PetClinicApplication in 7.095 seconds (JVM running for 9.082)", "service.name":"spring-petclinic","process.thread.name":"restartedMain","log.logger":"org.springframework.samples.petclinic.PetClinicApplication"} -{"@timestamp":"2019-08-06T14:08:40.199Z", "log.level":"DEBUG", "message":"init find form", "service.name":"spring-petclinic","process.thread.name":"http-nio-8080-exec-8","log.logger":"org.springframework.samples.petclinic.owner.OwnerController","transaction.id":"28b7fb8d5aba51f1","trace.id":"2869b25b5469590610fea49ac04af7da"} -``` - - -## Ingesting logs [ingesting-application-logs] - -There are several ways to ingest application logs into your project. Your specific situation helps determine the method that’s right for you. - - -### Plaintext logs [plaintext-logs-intro] - -With {{filebeat}} or {{agent}}, you can ingest plaintext logs, including existing logs, from any programming language or framework without modifying your application or its configuration. - -For plaintext logs to be useful, you need to use {{filebeat}} or {{agent}} to parse the log data. - -**[Plaintext application logs](../../../solutions/observability/logs/plaintext-application-logs.md)** - - -### ECS formatted logs [ecs-formatted-logs-intro] - -Logs formatted in ECS don’t require manual parsing and the configuration can be reused across applications. They also include log correlation. You can format your logs in ECS by using ECS logging plugins or {{apm-agent}} ECS reformatting. - -* ECS logging plugins - - Add ECS logging plugins to your logging libraries to format your logs into ECS-compatible JSON that doesn’t require parsing. - - To use ECS logging, you need to modify your application and its log configuration. - -* {{apm-agent}} log reformatting - - Some Elastic {{apm-agent}}s can automatically reformat application logs to ECS format without adding an ECS logger dependency or modifying the application. 
- - This feature is supported for the following {{apm-agent}}s: - - * [Ruby](asciidocalypse://docs/apm-agent-ruby/docs/reference/ingestion-tools/apm-agent-ruby/configuration.md#config-log-ecs-formatting) - * [Python](asciidocalypse://docs/apm-agent-python/docs/reference/ingestion-tools/apm-agent-python/logs.md#log-reformatting) - * [Java](asciidocalypse://docs/apm-agent-java/docs/reference/ingestion-tools/apm-agent-java/logs.md#log-reformatting) - - -**[ECS formatted application logs](../../../solutions/observability/logs/ecs-formatted-application-logs.md)** - - -### {{apm-agent}} log sending [apm-agent-log-sending-intro] - -Automatically capture and send logs directly to the managed intake service using the {{apm-agent}} without using {{filebeat}} or {{agent}}. - -Log sending is supported in the Java {{apm-agent}}. - -**[{{apm-agent}} log sending](../../../solutions/observability/logs/apm-agent-log-sending.md)** - - -## Log correlation [log-correlation-intro] - -Correlate your application logs with trace events to: - -* view the context of a log and the parameters provided by a user -* view all logs belonging to a particular trace -* easily move between logs and traces when debugging application issues - -Learn more about log correlation in the agent-specific ingestion guides: - -* [Go](asciidocalypse://docs/apm-agent-go/docs/reference/ingestion-tools/apm-agent-go/logs.md) -* [Java](asciidocalypse://docs/apm-agent-java/docs/reference/ingestion-tools/apm-agent-java/logs.md#log-correlation-ids) -* [.NET](asciidocalypse://docs/apm-agent-dotnet/docs/reference/ingestion-tools/apm-agent-dotnet/logs.md) -* [Node.js](asciidocalypse://docs/apm-agent-nodejs/docs/reference/ingestion-tools/apm-agent-nodejs/logs.md) -* [Python](asciidocalypse://docs/apm-agent-python/docs/reference/ingestion-tools/apm-agent-python/logs.md#log-correlation-ids) -* [Ruby](asciidocalypse://docs/apm-agent-ruby/docs/reference/ingestion-tools/apm-agent-ruby/logs.md) diff --git a/raw-migrated-files/toc.yml b/raw-migrated-files/toc.yml index 93dc3b8a39..eae0d20167 100644 --- a/raw-migrated-files/toc.yml +++ b/raw-migrated-files/toc.yml @@ -223,7 +223,6 @@ toc: - file: docs-content/serverless/observability-apm-agents-elastic-apm-agents.md - file: docs-content/serverless/observability-apm-get-started.md - file: docs-content/serverless/observability-apm-traces.md - - file: docs-content/serverless/observability-correlate-application-logs.md - file: docs-content/serverless/observability-ecs-application-logs.md - file: docs-content/serverless/observability-get-started.md - file: docs-content/serverless/observability-log-monitoring.md @@ -457,7 +456,6 @@ toc: - file: observability-docs/observability/apm-getting-started-apm-server.md - file: observability-docs/observability/apm-traces.md - file: observability-docs/observability/application-and-service-monitoring.md - - file: observability-docs/observability/application-logs.md - file: observability-docs/observability/incident-management.md - file: observability-docs/observability/index.md - file: observability-docs/observability/infrastructure-and-host-monitoring-intro.md diff --git a/solutions/observability/logs/stream-application-logs.md b/solutions/observability/logs/stream-application-logs.md index 6a438769d4..e199931ef1 100644 --- a/solutions/observability/logs/stream-application-logs.md +++ b/solutions/observability/logs/stream-application-logs.md @@ -4,17 +4,97 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/observability-correlate-application-logs.html --- 
-# Stream application logs +# Stream application logs [observability-correlate-application-logs] -% What needs to be done: Align serverless/stateful +Application logs provide valuable insight into events that have occurred within your services and applications. -% Use migrated content from existing pages that map to this page: +The format of your logs (structured or plaintext) influences your log ingestion strategy. -% - [ ] ./raw-migrated-files/observability-docs/observability/application-logs.md -% - [ ] ./raw-migrated-files/docs-content/serverless/observability-correlate-application-logs.md -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): +## Plaintext logs vs. structured Elastic Common Schema (ECS) logs [observability-correlate-application-logs-plaintext-logs-vs-structured-elastic-common-schema-ecs-logs] -$$$observability-correlate-application-logs-log-correlation$$$ +Logs are typically produced as either plaintext or structured. Plaintext logs contain only text and have no special formatting, for example: -$$$log-correlation-intro$$$ \ No newline at end of file +```txt +2019-08-06T12:09:12.375Z INFO:spring-petclinic: Tomcat started on port(s): 8080 (http) with context path, org.springframework.boot.web.embedded.tomcat.TomcatWebServer +2019-08-06T12:09:12.379Z INFO:spring-petclinic: Started PetClinicApplication in 7.095 seconds (JVM running for 9.082), org.springframework.samples.petclinic.PetClinicApplication +2019-08-06T14:08:40.199Z DEBUG:spring-petclinic: init find form, org.springframework.samples.petclinic.owner.OwnerController +``` + +Structured logs follow a predefined, repeatable pattern or structure. This structure is applied at write time — preventing the need for parsing at ingest time. The Elastic Common Schema (ECS) defines a common set of fields to use when structuring logs. This structure allows logs to be easily ingested, and provides the ability to correlate, search, and aggregate on individual fields within your logs. + +For example, the previous example logs might look like this when structured with ECS-compatible JSON: + +```json +{"@timestamp":"2019-08-06T12:09:12.375Z", "log.level": "INFO", "message":"Tomcat started on port(s): 8080 (http) with context path ''", "service.name":"spring-petclinic","process.thread.name":"restartedMain","log.logger":"org.springframework.boot.web.embedded.tomcat.TomcatWebServer"} +{"@timestamp":"2019-08-06T12:09:12.379Z", "log.level": "INFO", "message":"Started PetClinicApplication in 7.095 seconds (JVM running for 9.082)", "service.name":"spring-petclinic","process.thread.name":"restartedMain","log.logger":"org.springframework.samples.petclinic.PetClinicApplication"} +{"@timestamp":"2019-08-06T14:08:40.199Z", "log.level":"DEBUG", "message":"init find form", "service.name":"spring-petclinic","process.thread.name":"http-nio-8080-exec-8","log.logger":"org.springframework.samples.petclinic.owner.OwnerController","transaction.id":"28b7fb8d5aba51f1","trace.id":"2869b25b5469590610fea49ac04af7da"} +``` + + +## Ingesting logs [observability-correlate-application-logs-ingesting-logs] + +There are several ways to ingest application logs into your project. Your specific situation helps determine the method that’s right for you. 
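+
+For context on the options that follow, ECS-formatted JSON like the example above can usually be shipped as-is, while plaintext logs need parsing before their fields become useful. A minimal, illustrative {{filebeat}} `filestream` sketch for the JSON case might look like the following (the input `id`, log path, and parser options are placeholders rather than a recommended production setup):
+
+```yaml
+filebeat.inputs:
+  - type: filestream
+    id: ecs-json-logs                # any unique ID for this input
+    paths:
+      - /var/log/your-app/*.json     # placeholder path to your ECS JSON log files
+    parsers:
+      - ndjson:                      # decode each line as JSON
+          overwrite_keys: true       # let parsed fields such as @timestamp take precedence
+          add_error_key: true        # flag lines that fail to decode
+          expand_keys: true          # expand dotted keys into nested objects
+```
+
+The sections below describe each ingestion option in more detail.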
+ + +### Plaintext logs [observability-correlate-application-logs-plaintext-logs] + +With {{filebeat}} or {{agent}}, you can ingest plaintext logs, including existing logs, from any programming language or framework without modifying your application or its configuration. + +For plaintext logs to be useful, you need to use {{filebeat}} or {{agent}} to parse the log data. + +**![documentation icon](../../../images/serverless-documentation.svg "") Learn more in [Plaintext logs](../../../solutions/observability/logs/plaintext-application-logs.md)** + + +### ECS formatted logs [observability-correlate-application-logs-ecs-formatted-logs] + +Logs formatted in ECS don’t require manual parsing and the configuration can be reused across applications. They also include log correlation. You can format your logs in ECS by using ECS logging plugins or {{apm-agent}} ECS reformatting. + + +#### ECS logging plugins [observability-correlate-application-logs-ecs-logging-plugins] + +Add ECS logging plugins to your logging libraries to format your logs into ECS-compatible JSON that doesn’t require parsing. + +To use ECS logging, you need to modify your application and its log configuration. + +**![documentation icon](../../../images/serverless-documentation.svg "") Learn more in [ECS formatted logs](../../../solutions/observability/logs/ecs-formatted-application-logs.md)** + + +#### {{apm-agent}} log reformatting [observability-correlate-application-logs-apm-agent-log-reformatting] + +Some Elastic {{apm-agent}}s can automatically reformat application logs to ECS format without adding an ECS logger dependency or modifying the application. + +This feature is supported for the following {{apm-agent}}s: + +* [Ruby](asciidocalypse://docs/apm-agent-ruby/docs/reference/ingestion-tools/apm-agent-ruby/configuration.md#config-log-ecs-formatting) +* [Python](asciidocalypse://docs/apm-agent-python/docs/reference/ingestion-tools/apm-agent-python/logs.md#log-reformatting) +* [Java](asciidocalypse://docs/apm-agent-java/docs/reference/ingestion-tools/apm-agent-java/logs.md#log-reformatting) + +**![documentation icon](../../../images/serverless-documentation.svg "") Learn more in [ECS formatted logs](../../../solutions/observability/logs/ecs-formatted-application-logs.md)** + + +### {{apm-agent}} log sending [observability-correlate-application-logs-apm-agent-log-sending] + +Automatically capture and send logs directly to the managed intake service using the {{apm-agent}} without using {{filebeat}} or {{agent}}. + +Log sending is supported in the Java {{apm-agent}}. 
+ +**![documentation icon](../../../images/serverless-documentation.svg "") Learn more in [{{apm-agent}} log sending](../../../solutions/observability/logs/apm-agent-log-sending.md)** + + +## Log correlation [observability-correlate-application-logs-log-correlation] + +Correlate your application logs with trace events to: + +* view the context of a log and the parameters provided by a user +* view all logs belonging to a particular trace +* easily move between logs and traces when debugging application issues + +Learn more about log correlation in the agent-specific ingestion guides: + +* [Go](asciidocalypse://docs/apm-agent-go/docs/reference/ingestion-tools/apm-agent-go/logs.md) +* [Java](asciidocalypse://docs/apm-agent-java/docs/reference/ingestion-tools/apm-agent-java/logs.md#log-correlation-ids) +* [.NET](asciidocalypse://docs/apm-agent-dotnet/docs/reference/ingestion-tools/apm-agent-dotnet/logs.md) +* [Node.js](asciidocalypse://docs/apm-agent-nodejs/docs/reference/ingestion-tools/apm-agent-nodejs/logs.md) +* [Python](asciidocalypse://docs/apm-agent-python/docs/reference/ingestion-tools/apm-agent-python/logs.md#log-correlation-ids) +* [Ruby](asciidocalypse://docs/apm-agent-ruby/docs/reference/ingestion-tools/apm-agent-ruby/logs.md) \ No newline at end of file From d5ffbf08754d9094109d8cfb121a7008f33d7990 Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Thu, 20 Feb 2025 15:39:05 -0600 Subject: [PATCH 10/23] fix links --- .../observability-stream-log-files.md | 477 ++++++++++++++++++ .../logs/plaintext-application-logs.md | 2 +- 2 files changed, 478 insertions(+), 1 deletion(-) create mode 100644 raw-migrated-files/docs-content/serverless/observability-stream-log-files.md diff --git a/raw-migrated-files/docs-content/serverless/observability-stream-log-files.md b/raw-migrated-files/docs-content/serverless/observability-stream-log-files.md new file mode 100644 index 0000000000..a334c67a80 --- /dev/null +++ b/raw-migrated-files/docs-content/serverless/observability-stream-log-files.md @@ -0,0 +1,477 @@ +# Stream any log file [observability-stream-log-files] + +::::{admonition} Required role +:class: note + +The **Admin** role or higher is required to onboard log data. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles). + +:::: + + +
+This guide shows you how to send a log file to your Observability project using a standalone {{agent}} and configure the {{agent}} and your data streams using the `elastic-agent.yml` file, and query your logs using the data streams you’ve set up. + +The quickest way to get started is using the **Monitor hosts with {{agent}}** quickstart. Refer to the [quickstart documentation](../../../solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md) for more information. + +To install and configure the {{agent}} manually, refer to [Manually install and configure the standalone {{agent}}](../../../solutions/observability/logs/stream-any-log-file.md#manually-install-agent-logs). + + +## Manually install and configure the standalone {{agent}} [manually-install-agent-logs] + +If you’re not using the guided instructions, follow these steps to manually install and configure your the {{agent}}. + + +### Step 1: Download and extract the {{agent}} installation package [observability-stream-log-files-step-1-download-and-extract-the-agent-installation-package] + +On your host, download and extract the installation package that corresponds with your system: + +:::::::{tab-set} + +::::::{tab-item} macOS +```sh +curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-8.16.1-darwin-x86_64.tar.gz +tar xzvf elastic-agent-8.16.1-darwin-x86_64.tar.gz +``` +:::::: + +::::::{tab-item} Linux +```sh +curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-8.16.1-linux-x86_64.tar.gz +tar xzvf elastic-agent-8.16.1-linux-x86_64.tar.gz +``` +:::::: + +::::::{tab-item} Windows +```powershell +# PowerShell 5.0+ +wget https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-8.16.1-windows-x86_64.zip -OutFile elastic-agent-8.16.1-windows-x86_64.zip +Expand-Archive .\elastic-agent-8.16.1-windows-x86_64.zip +``` + +Or manually: + +1. Download the {{agent}} Windows zip file from the [download page](https://www.elastic.co/downloads/beats/elastic-agent). +2. Extract the contents of the zip file. +:::::: + +::::::{tab-item} DEB +::::{important} +To simplify upgrading to future versions of {{agent}}, we recommended that you use the tarball distribution instead of the DEB distribution. + +:::: + + +```sh +curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-8.16.1-amd64.deb +sudo dpkg -i elastic-agent-8.16.1-amd64.deb +``` +:::::: + +::::::{tab-item} RPM +::::{important} +To simplify upgrading to future versions of {{agent}}, we recommended that you use the tarball distribution instead of the RPM distribution. + +:::: + + +```sh +curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-8.16.1-x86_64.rpm +sudo rpm -vi elastic-agent-8.16.1-x86_64.rpm +``` +:::::: + +::::::: + +### Step 2: Install and start the {{agent}} [observability-stream-log-files-step-2-install-and-start-the-agent] + +After downloading and extracting the installation package, you’re ready to install the {{agent}}. From the agent directory, run the install command that corresponds with your system: + +::::{note} +On macOS, Linux (tar package), and Windows, run the `install` command to install and start {{agent}} as a managed service and start the service. The DEB and RPM packages include a service unit for Linux systems with systemd, For these systems, you must enable and start the service. 
+ +:::: + + +:::::::{tab-set} + +::::::{tab-item} macOS +::::{tip} +You must run this command as the root user because some integrations require root privileges to collect sensitive data. + +:::: + + +```shell +sudo ./elastic-agent install +``` +:::::: + +::::::{tab-item} Linux +::::{tip} +You must run this command as the root user because some integrations require root privileges to collect sensitive data. + +:::: + + +```shell +sudo ./elastic-agent install +``` +:::::: + +::::::{tab-item} Windows +Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select **Run As Administrator**). + +From the PowerShell prompt, change to the directory where you installed {{agent}}, and run: + +```shell +.\elastic-agent.exe install +``` +:::::: + +::::::{tab-item} DEB +::::{tip} +You must run this command as the root user because some integrations require root privileges to collect sensitive data. + +:::: + + +```shell +sudo systemctl enable elastic-agent <1> +sudo systemctl start elastic-agent +``` + +1. The DEB package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. If you don’t have systemd, run `sudo service elastic-agent start`. +:::::: + +::::::{tab-item} RPM +::::{tip} +You must run this command as the root user because some integrations require root privileges to collect sensitive data. + +:::: + + +```shell +sudo systemctl enable elastic-agent <1> +sudo systemctl start elastic-agent +``` + +1. The RPM package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. If you don’t have systemd, run `sudo service elastic-agent start`. +:::::: + +::::::: +During installation, you’ll be prompted with some questions: + +1. When asked if you want to install the agent as a service, enter `Y`. +2. When asked if you want to enroll the agent in Fleet, enter `n`. + + +### Step 3: Configure the {{agent}} [observability-stream-log-files-step-3-configure-the-agent] + +After your agent is installed, configure it by updating the `elastic-agent.yml` file. + + +#### Locate your configuration file [observability-stream-log-files-locate-your-configuration-file] + +You’ll find the `elastic-agent.yml` in one of the following locations according to your system: + +:::::::{tab-set} + +::::::{tab-item} macOS +Main {{agent}} configuration file location: + +`/Library/Elastic/Agent/elastic-agent.yml` +:::::: + +::::::{tab-item} Linux +Main {{agent}} configuration file location: + +`/opt/Elastic/Agent/elastic-agent.yml` +:::::: + +::::::{tab-item} Windows +Main {{agent}} configuration file location: + +`C:\Program Files\Elastic\Agent\elastic-agent.yml` +:::::: + +::::::{tab-item} DEB +Main {{agent}} configuration file location: + +`/etc/elastic-agent/elastic-agent.yml` +:::::: + +::::::{tab-item} RPM +Main {{agent}} configuration file location: + +`/etc/elastic-agent/elastic-agent.yml` +:::::: + +::::::: + +#### Update your configuration file [observability-stream-log-files-update-your-configuration-file] + +Update the default configuration in the `elastic-agent.yml` file manually. 
It should look something like this: + +```yaml +outputs: + default: + type: elasticsearch + hosts: ':' + api_key: 'your-api-key' +inputs: + - id: your-log-id + type: filestream + streams: + - id: your-log-stream-id + data_stream: + dataset: example + paths: + - /var/log/your-logs.log +``` + +You need to set the values for the following fields: + +`hosts` +: Copy the {{es}} endpoint from your project’s page and add the port (the default port is `443`). For example, `https://my-deployment.es.us-central1.gcp.cloud.es.io:443`. + + If you’re following the guided instructions in your project, the {{es}} endpoint will be prepopulated in the configuration file. + + :::::{tip} + If you need to find your project’s {{es}} endpoint outside the guided instructions: + + 1. Go to the **Projects** page that lists all your projects. + 2. Click **Manage** next to the project you want to connect to. + 3. Click **View** next to *Endpoints*. + 4. Copy the *Elasticsearch endpoint*. + + :::{image} ../../../images/serverless-log-copy-es-endpoint.png + :alt: Copy a project's Elasticsearch endpoint + :class: screenshot + ::: + + ::::: + + +`api-key` +: Use an API key to grant the agent access to your project. The API key format should be `:`. + + If you’re following the guided instructions in your project, an API key will be autogenerated and will be prepopulated in the downloadable configuration file. + + If configuring the {{agent}} manually, create an API key: + + 1. Navigate to **Project settings** → **Management*** → ***API keys** and click **Create API key**. + 2. Select **Restrict privileges** and add the following JSON to give privileges for ingesting logs. + + ```json + { + "standalone_agent": { + "cluster": [ + "monitor" + ], + "indices": [ + { + "names": [ + "logs-*-*" + ], + "privileges": [ + "auto_configure", "create_doc" + ] + } + ] + } + } + ``` + + 3. You *must* set the API key to configure {{beats}}. Immediately after the API key is generated and while it is still being displayed, click the **Encoded** button next to the API key and select **Beats** from the list in the tooltip. Base64 encoded API keys are not currently supported in this configuration. + + :::{image} ../../../images/serverless-logs-stream-logs-api-key-beats.png + :alt: logs stream logs api key beats + :class: screenshot + ::: + + +`inputs.id` +: A unique identifier for your input. + +`type` +: The type of input. For collecting logs, set this to `filestream`. + +`streams.id` +: A unique identifier for your stream of log data. + +`data_stream.dataset` +: The name for your dataset data stream. Name this data stream anything that signifies the source of the data. In this configuration, the dataset is set to `example`. The default value is `generic`. + +`paths` +: The path to your log files. You can also use a pattern like `/var/log/your-logs.log*`. + + +#### Restart the {{agent}} [observability-stream-log-files-restart-the-agent] + +After updating your configuration file, you need to restart the {{agent}}. + +First, stop the {{agent}} and its related executables using the command that works with your system: + +:::::::{tab-set} + +::::::{tab-item} macOS +```shell +sudo launchctl unload /Library/LaunchDaemons/co.elastic.elastic-agent.plist +``` + +::::{note} +{{agent}} will restart automatically if the system is rebooted. + +:::: +:::::: + +::::::{tab-item} Linux +```shell +sudo service elastic-agent stop +``` + +::::{note} +{{agent}} will restart automatically if the system is rebooted. 
+ +:::: +:::::: + +::::::{tab-item} Windows +```shell +Stop-Service Elastic Agent +``` + +If necessary, use Task Manager on Windows to stop {{agent}}. This will kill the `elastic-agent` process and any sub-processes it created (such as {{beats}}). + +::::{note} +{{agent}} will restart automatically if the system is rebooted. + +:::: +:::::: + +::::::{tab-item} DEB +The DEB package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. + +Use `systemctl` to stop the agent: + +```shell +sudo systemctl stop elastic-agent +``` + +Otherwise, use: + +```shell +sudo service elastic-agent stop +``` + +::::{note} +{{agent}} will restart automatically if the system is rebooted. + +:::: +:::::: + +::::::{tab-item} RPM +The RPM package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. + +Use `systemctl` to stop the agent: + +```shell +sudo systemctl stop elastic-agent +``` + +Otherwise, use: + +```shell +sudo service elastic-agent stop +``` + +::::{note} +{{agent}} will restart automatically if the system is rebooted. + +:::: +:::::: + +::::::: +Next, restart the {{agent}} using the command that works with your system: + +:::::::{tab-set} + +::::::{tab-item} macOS +```shell +sudo launchctl load /Library/LaunchDaemons/co.elastic.elastic-agent.plist +``` +:::::: + +::::::{tab-item} Linux +```shell +sudo service elastic-agent start +``` +:::::: + +::::::{tab-item} Windows +```shell +Start-Service Elastic Agent +``` +:::::: + +::::::{tab-item} DEB +The DEB package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. + +Use `systemctl` to start the agent: + +```shell +sudo systemctl start elastic-agent +``` + +Otherwise, use: + +```shell +sudo service elastic-agent start +``` +:::::: + +::::::{tab-item} RPM +The RPM package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. + +Use `systemctl` to start the agent: + +```shell +sudo systemctl start elastic-agent +``` + +Otherwise, use: + +```shell +sudo service elastic-agent start +``` +:::::: + +::::::: + +## Troubleshoot your {{agent}} configuration [observability-stream-log-files-troubleshoot-your-agent-configuration] + +If you’re not seeing your log files in your project, verify the following in the `elastic-agent.yml` file: + +* The path to your logs file under `paths` is correct. +* Your API key is in `:` format. If not, your API key may be in an unsupported format, and you’ll need to create an API key in **Beats** format. + +If you’re still running into issues, refer to [{{agent}} troubleshooting](../../../troubleshoot/ingest/fleet/common-problems.md) and [Configure standalone Elastic Agents](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/configure-standalone-elastic-agents.md). + + +## Next steps [observability-stream-log-files-next-steps] + +After you have your agent configured and are streaming log data to your project: + +* Refer to the [Parse and organize logs](../../../solutions/observability/logs/parse-route-logs.md) documentation for information on extracting structured fields from your log data, rerouting your logs to different data streams, and filtering and aggregating your log data. 
+* Refer to the [Filter and aggregate logs](../../../solutions/observability/logs/filter-aggregate-logs.md) documentation for information on filtering and aggregating your log data to find specific information, gain insight, and monitor your systems more efficiently. diff --git a/solutions/observability/logs/plaintext-application-logs.md b/solutions/observability/logs/plaintext-application-logs.md index 724a23a563..b744c1ebc9 100644 --- a/solutions/observability/logs/plaintext-application-logs.md +++ b/solutions/observability/logs/plaintext-application-logs.md @@ -254,7 +254,7 @@ PUT _ingest/pipeline/filebeat* <1> 4. `pattern`: The pattern of the elements in your log data. The pattern varies depending on your log format. `%{@timestamp}` is required. `%{log.level}`, `%{host.ip}`, and `%{{message}}` are common [ECS](asciidocalypse://docs/ecs/docs/reference/ecs/index.md) fields. This pattern would match a log file in this format: `2023-11-07T09:39:01.012Z ERROR 192.168.1.110 Server hardware failure detected.` -Refer to [Extract structured fields](../../../solutions/observability/logs/parse-route-logs.md#logs-stream-parse) for more on using ingest pipelines to parse your log data. +Refer to [Extract structured fields](../../../solutions/observability/logs/parse-route-logs.md#observability-parse-log-data-extract-structured-fields) for more on using ingest pipelines to parse your log data. After creating your pipeline, specify the pipeline for filebeat in the `filebeat.yml` file: From f83c676e77adf49030267fc04fdb44e51409a48b Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Thu, 20 Feb 2025 15:49:48 -0600 Subject: [PATCH 11/23] fix toc --- raw-migrated-files/toc.yml | 1 + 1 file changed, 1 insertion(+) diff --git a/raw-migrated-files/toc.yml b/raw-migrated-files/toc.yml index eae0d20167..d0b6a32799 100644 --- a/raw-migrated-files/toc.yml +++ b/raw-migrated-files/toc.yml @@ -228,6 +228,7 @@ toc: - file: docs-content/serverless/observability-log-monitoring.md - file: docs-content/serverless/observability-monitor-datasets.md - file: docs-content/serverless/observability-plaintext-application-logs.md + - file: docs-content/serverless/observability-stream-log-files.md - file: docs-content/serverless/project-and-management-settings.md - file: docs-content/serverless/project-setting-data.md - file: docs-content/serverless/project-settings-alerts.md From edb50b666c3982490c078bc0ed4603c21cb6193e Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Thu, 20 Feb 2025 16:10:40 -0600 Subject: [PATCH 12/23] fix link --- .../docs-content/serverless/observability-stream-log-files.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/raw-migrated-files/docs-content/serverless/observability-stream-log-files.md b/raw-migrated-files/docs-content/serverless/observability-stream-log-files.md index a334c67a80..51a15b22e3 100644 --- a/raw-migrated-files/docs-content/serverless/observability-stream-log-files.md +++ b/raw-migrated-files/docs-content/serverless/observability-stream-log-files.md @@ -24,7 +24,7 @@ This guide shows you how to send a log file to your Observability project using The quickest way to get started is using the **Monitor hosts with {{agent}}** quickstart. Refer to the [quickstart documentation](../../../solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md) for more information. 
-To install and configure the {{agent}} manually, refer to [Manually install and configure the standalone {{agent}}](../../../solutions/observability/logs/stream-any-log-file.md#manually-install-agent-logs). +To install and configure the {{agent}} manually, refer to [Manually install and configure the standalone {{agent}}](../../../solutions/observability/logs/stream-any-log-file.md). ## Manually install and configure the standalone {{agent}} [manually-install-agent-logs] From 7e77d353a6c20a976d88c28bc5c5968c1e3bd2c9 Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Thu, 20 Feb 2025 16:40:47 -0600 Subject: [PATCH 13/23] add apps --- .../application-and-service-monitoring.md | 16 ---- .../application-and-service-monitoring.md | 22 ------ raw-migrated-files/toc.yml | 2 - solutions/observability/apps.md | 19 +++-- .../monitor-aws-with-amazon-data-firehose.md | 78 ------------------- .../serverless-observability-limitations.md | 11 --- .../unknown-bucket/view-monitor-status.md | 67 ---------------- solutions/toc.yml | 5 +- 8 files changed, 14 insertions(+), 206 deletions(-) delete mode 100644 raw-migrated-files/docs-content/serverless/application-and-service-monitoring.md delete mode 100644 raw-migrated-files/observability-docs/observability/application-and-service-monitoring.md delete mode 100644 solutions/observability/unknown-bucket/monitor-aws-with-amazon-data-firehose.md delete mode 100644 solutions/observability/unknown-bucket/serverless-observability-limitations.md delete mode 100644 solutions/observability/unknown-bucket/view-monitor-status.md diff --git a/raw-migrated-files/docs-content/serverless/application-and-service-monitoring.md b/raw-migrated-files/docs-content/serverless/application-and-service-monitoring.md deleted file mode 100644 index 80b211fcda..0000000000 --- a/raw-migrated-files/docs-content/serverless/application-and-service-monitoring.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -navigation_title: "Applications and services" ---- - -# Application and service monitoring [application-and-service-monitoring] - - -Explore the topics in this section to learn how to observe and monitor software applications and services running in your environment. - -| | | -| --- | --- | -| [Application performance monitoring (APM)](../../../solutions/observability/apps/application-performance-monitoring-apm.md) | Monitor software services and applications in real time, by collecting detailed performance information on response time for incoming requests, database queries, calls to caches, external HTTP requests, and more. | -| [Synthetic monitoring](../../../solutions/observability/apps/synthetic-monitoring.md) | Monitor the availability of network endpoints and services. | - - - diff --git a/raw-migrated-files/observability-docs/observability/application-and-service-monitoring.md b/raw-migrated-files/observability-docs/observability/application-and-service-monitoring.md deleted file mode 100644 index 3deb5d9d76..0000000000 --- a/raw-migrated-files/observability-docs/observability/application-and-service-monitoring.md +++ /dev/null @@ -1,22 +0,0 @@ ---- -navigation_title: "Applications and services" ---- - -# Application and service monitoring [application-and-service-monitoring] - - -Explore the topics in this section to learn how to observe and monitor software applications and services running in your environment. 
- -| | | -| --- | --- | -| [Application performance monitoring (APM)](../../../solutions/observability/apps/application-performance-monitoring-apm.md) | Monitor software services and applications in real time, by collecting detailed performance information on response time for incoming requests, database queries, calls to caches, external HTTP requests, and more. | -| [Synthetic monitoring](../../../solutions/observability/apps/synthetic-monitoring.md) | Monitor the availability of network endpoints and services. | -| [Real user monitoring](../../../solutions/observability/apps/real-user-monitoring-user-experience.md) | Quantify and analyze the perceived performance of your web application using real-world user experiences. | -| [Uptime monitoring (deprecated)](../../../solutions/observability/apps/uptime-monitoring-deprecated.md) | Periodically check the status of your services and applications. | -| [Tutorial: Monitor a Java application](../../../solutions/observability/apps/tutorial-monitor-java-application.md) | Monitor a Java application using Elastic Observability: Logs, Infrastructure metrics, APM, and Uptime. | - - - - - - diff --git a/raw-migrated-files/toc.yml b/raw-migrated-files/toc.yml index d0b6a32799..f07447757a 100644 --- a/raw-migrated-files/toc.yml +++ b/raw-migrated-files/toc.yml @@ -193,7 +193,6 @@ toc: children: - file: docs-content/serverless/_cloud_native_vulnerability_management_dashboard.md - file: docs-content/serverless/ai-assistant-knowledge-base.md - - file: docs-content/serverless/application-and-service-monitoring.md - file: docs-content/serverless/attack-discovery.md - file: docs-content/serverless/connect-to-byo-llm.md - file: docs-content/serverless/cspm-required-permissions.md @@ -456,7 +455,6 @@ toc: - file: observability-docs/observability/apm-agents.md - file: observability-docs/observability/apm-getting-started-apm-server.md - file: observability-docs/observability/apm-traces.md - - file: observability-docs/observability/application-and-service-monitoring.md - file: observability-docs/observability/incident-management.md - file: observability-docs/observability/index.md - file: observability-docs/observability/infrastructure-and-host-monitoring-intro.md diff --git a/solutions/observability/apps.md b/solutions/observability/apps.md index d51037b732..e7fca6ee2a 100644 --- a/solutions/observability/apps.md +++ b/solutions/observability/apps.md @@ -2,14 +2,21 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/application-and-service-monitoring.html - https://www.elastic.co/guide/en/observability/current/application-and-service-monitoring.html + +navigation_title: "Applications and services" --- -# Applications and services +# Application and service monitoring [application-and-service-monitoring] + -% What needs to be done: Refine +Explore the topics in this section to learn how to observe and monitor software applications and services running in your environment. 
-% Use migrated content from existing pages that map to this page: +% Stateful only for RUM and Uptime and Tutorial -% - [ ] ./raw-migrated-files/docs-content/serverless/application-and-service-monitoring.md -% Notes: Needs quickstart install steps (local,cloud,serverless) -% - [ ] ./raw-migrated-files/observability-docs/observability/application-and-service-monitoring.md \ No newline at end of file +| | | +| --- | --- | +| [Application performance monitoring (APM)](../../../solutions/observability/apps/application-performance-monitoring-apm.md) | Monitor software services and applications in real time, by collecting detailed performance information on response time for incoming requests, database queries, calls to caches, external HTTP requests, and more. | +| [Synthetic monitoring](../../../solutions/observability/apps/synthetic-monitoring.md) | Monitor the availability of network endpoints and services. | +| [Real user monitoring](../../../solutions/observability/apps/real-user-monitoring-user-experience.md) | Quantify and analyze the perceived performance of your web application using real-world user experiences. | +| [Uptime monitoring (deprecated)](../../../solutions/observability/apps/uptime-monitoring-deprecated.md) | Periodically check the status of your services and applications. | +| [Tutorial: Monitor a Java application](../../../solutions/observability/apps/tutorial-monitor-java-application.md) | Monitor a Java application using Elastic Observability: Logs, Infrastructure metrics, APM, and Uptime. | diff --git a/solutions/observability/unknown-bucket/monitor-aws-with-amazon-data-firehose.md b/solutions/observability/unknown-bucket/monitor-aws-with-amazon-data-firehose.md deleted file mode 100644 index f518ab7c81..0000000000 --- a/solutions/observability/unknown-bucket/monitor-aws-with-amazon-data-firehose.md +++ /dev/null @@ -1,78 +0,0 @@ ---- -navigation_title: "Monitor {{aws}} with Amazon Data Firehose" -mapped_pages: - - https://www.elastic.co/guide/en/observability/current/ingest-aws-firehose.html ---- - - - -# Monitor AWS with Amazon Data Firehose [ingest-aws-firehose] - - -Amazon Data Firehose is a popular service that allows you to send your service logs and monitoring metrics to Elastic in minutes without a single line of code and without building or managing your own data ingestion and delivery infrastructure. - - -## What you’ll learn [aws-elastic-firehose-what-you-learn] - -In this tutorial, you’ll learn how to: - -* Install AWS integration in {{kib}} -* Create a delivery stream in Amazon Data Firehose -* Specify the destination settings for your Firehose stream -* Send data to the Firehose delivery stream - - -## Before you begin [aws-elastic-firehose-before-you-begin] - -Create a deployment in AWS regions (including gov cloud) using our hosted {{ess}} on [{{ecloud}}](https://cloud.elastic.co/registration?page=docs&placement=docs-body). The deployment includes an {{es}} cluster for storing and searching your data, and {{kib}} for visualizing and managing your data. - - -## Step 1: Install AWS integration in {{kib}} [firehose-step-one] - -1. Install AWS integrations to load index templates, ingest pipelines, and dashboards into {{kib}}. Find **Integrations** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). Find the AWS Integration by browsing the catalog. -2. Navigate to the **Settings** tab and click **Install AWS assets**. Confirm by clicking **Install AWS** in the popup. -3. 
Install Amazon Data Firehose integration assets in Kibana. - - -## Step 2: Create a delivery stream in Amazon Data Firehose [firehose-step-two] - -1. Go to the [AWS console](https://console.aws.amazon.com/) and navigate to Amazon Data Firehose. -2. Click **Create Firehose stream** and choose the source and destination of your Firehose stream. Unless you are streaming data from Kinesis Data Streams, set source to `Direct PUT` and destination to `Elastic`. -3. Provide a meaningful **Firehose stream name** that will allow you to identify this delivery stream later. - -::::{note} -For advanced use cases, source records can be transformed by invoking a custom Lambda function. When using Elastic integrations, this should not be required. -:::: - - - -## Step 3: Specify the destination settings for your Firehose stream [firehose-step-three] - -1. From the **Destination settings** panel, specify the following settings: - - * **Elastic endpoint URL**: Enter the Elastic endpoint URL of your Elasticsearch cluster. To find the Elasticsearch endpoint, go to the Elastic Cloud console and select **Connection details**. Here is an example of how it looks like: `https://my-deployment.es.us-east-1.aws.elastic-cloud.com`. - * **API key**: Enter the encoded Elastic API key. To create an API key, go to the Elastic Cloud console, select **Connection details** and click **Create and manage API keys**. If you are using an API key with **Restrict privileges**, make sure to review the Indices privileges to provide at least "auto_configure" & "write" permissions for the indices you will be using with this delivery stream. - * **Content encoding**: For a better network efficiency, leave content encoding set to GZIP. - * **Retry duration**: Determines how long Firehose continues retrying the request in the event of an error. A duration of 60-300s should be suitable for most use cases. - * **Parameters**: - - * `es_datastream_name`: This parameter is optional and can be used to set which data stream documents will be stored. If this parameter is not specified, data is sent to the `logs-awsfirehose-default` data stream by default. - * `include_cw_extracted_fields`: This parameter is optional and can be set when using a CloudWatch logs subscription filter as the Firehose data source. When set to true, extracted fields generated by the filter pattern in the subscription filter will be collected. Setting this parameter can add many fields into each record and may significantly increase data volume in Elasticsearch. As such, use of this parameter should be carefully considered and used only when the extracted fields are required for specific filtering and/or aggregation. - * `set_es_document_id`: This parameter is optional and can be set to allow Elasticsearch to assign each document a random ID or use a calculated unique ID for each document. Default is true. When set to false, a random ID will be used for each document which will help indexing performance. - - 1. In the **Backup settings** panel, it is recommended to configure S3 backup for failed records. It’s then possible to configure workflows to automatically retry failed records, for example by using [Elastic Serverless Forwarder](asciidocalypse://docs/elastic-serverless-forwarder/docs/reference/ingestion-tools/esf/index.md). - - - -## Step 4: Send data to the Firehose delivery stream [firehose-step-four] - -You can configure a variety of log sources to send data to Firehose streams directly for example VPC flow logs. 
Some services don’t support publishing logs directly to Firehose but they do support publishing logs to CloudWatch logs, such as CloudTrail and Lambda. Refer to the [AWS documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.md) for more information. - -For example, a typical workflow for sending CloudTrail logs to Firehose would be the following: - -* Publish CloudTrail logs to a Cloudwatch log group. Refer to the AWS documentation [about publishing CloudTrail logs](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/monitor-cloudtrail-log-files-with-cloudwatch-logs.md). -* Create a subscription filter in the CloudWatch log group to the Firehose stream. Refer to the AWS documentation [about using subscription filters](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.md#FirehoseExample). - -We also added support for sending CloudWatch monitoring metrics to Elastic using Firehose. For example, you can configure metrics ingestion by creating a metric stream through CloudWatch. You can select an existing Firehose stream by choosing the option **Custom setup with Firehose**. For more information, refer to the AWS documentation [about the custom setup with Firehose](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-metric-streams-setup-datalake.md). - -For more information on Amazon Data Firehose, you can also check the [Amazon Data Firehose Integrations documentation](https://docs.elastic.co/integrations/awsfirehose). diff --git a/solutions/observability/unknown-bucket/serverless-observability-limitations.md b/solutions/observability/unknown-bucket/serverless-observability-limitations.md deleted file mode 100644 index f32e87f86d..0000000000 --- a/solutions/observability/unknown-bucket/serverless-observability-limitations.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/serverless/current/observability-limitations.html ---- - -# Serverless observability limitations [observability-limitations] - -Currently, the maximum ingestion rate for the Managed Intake Service (APM and OpenTelemetry ingest) is 11.5 MB/s of uncompressed data (roughly 1TB/d uncompressed equivalent). Ingestion at a higher rate may experience rate limiting or ingest failures. - -If you believe you are experiencing rate limiting or other ingest-based failures, please [contact Elastic Support](../../../troubleshoot/index.md) for assistance. - diff --git a/solutions/observability/unknown-bucket/view-monitor-status.md b/solutions/observability/unknown-bucket/view-monitor-status.md deleted file mode 100644 index 2f46b01da6..0000000000 --- a/solutions/observability/unknown-bucket/view-monitor-status.md +++ /dev/null @@ -1,67 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/observability/current/view-monitor-status.html ---- - -# View monitor status [view-monitor-status] - -The **Monitors** page provides you with a high-level view of all the services you are monitoring to help you quickly diagnose outages and other connectivity issues within your network. - -To access this page, go to **{{observability}} > Uptime > Monitors**. - -::::{important} -Each endpoint, URL, and service represents a *monitor*. - -:::: - - - -## Filter monitors [filter-monitors] - -To get started with your analysis, use the automated filter options, such as location, port, scheme, and tags, or define a custom filter by field, URL, monitor ID, and other attributes. 
- -:::{image} ../../../images/observability-uptime-filter-bar.png -:alt: Uptime filter bar -:class: screenshot -::: - - -## Monitor availability [monitor-availability] - -The snapshot panel displays the overall status of the environment you’re monitoring or a subset of those monitors. You can see the total number of detected monitors within the selected date range, based on the last check reported by {{heartbeat}}, along with the number of monitors in an `up` or `down` state. - -Next to the counts, a histogram shows a count of **Pings over time** with a breakdown of `Up` and `Down` counts per time bucket. - -:::{image} ../../../images/observability-monitors-chart.png -:alt: Monitors chart -:class: screenshot -::: - -Information about individual monitors is displayed in the monitor list and provides a quick way to navigate to a detailed visualization for hosts or endpoints. - -The information displayed includes the recent status of a host or endpoint, when the monitor was last checked, its URL, and, if applicable, the TLS certificate expiration time. There is also a sparkline showing downtime history. - -::::{tip} -Use monitor tags to display a custom assortment of monitors; for example, consider assigning tags based on a monitor’s hosted cloud provider – making it easy to quickly see all monitors hosted on GCP, AWS, etc. - -:::: - - -Expand the table row for a specific monitor on the list to view additional information such as which alerts are configured for the monitor, a recent error and when it occurred, the date and time of any recent test runs, and it’s URL. - -:::{image} ../../../images/observability-monitors-list.png -:alt: Monitors list -:class: screenshot -::: - - -## Integrate with other Observability apps [observability-integrations] - -The Monitor list also contains a menu of available integrations. Expand the table row for a specific monitor on the list, and then click **Investigate**. - -Depending on the features you have installed and configured, you can view logs, metrics, or APM data relating to that monitor. You can choose: - -* Show host, pod, or container logs in the [{{logs-app}}](../logs/explore-logs.md). -* Show APM data in the [Applications UI](../apps/traces-2.md). -* Show host, pod, or container metrics in the [{{infrastructure-app}}](/solutions/observability/infra-and-hosts/analyze-infrastructure-host-metrics.md). 
- diff --git a/solutions/toc.yml b/solutions/toc.yml index d32d9fa8e0..aee610db8a 100644 --- a/solutions/toc.yml +++ b/solutions/toc.yml @@ -370,9 +370,6 @@ toc: - file: observability/tools-and-apis.md - file: observability/unknown-bucket.md children: - - file: observability/unknown-bucket/view-monitor-status.md - - file: observability/unknown-bucket/monitor-aws-with-amazon-data-firehose.md - - file: observability/unknown-bucket/serverless-observability-limitations.md - file: observability/unknown-bucket/host-metrics.md - file: observability/unknown-bucket/container-metrics.md - file: observability/unknown-bucket/kubernetes-pod-metrics.md @@ -649,7 +646,7 @@ toc: - file: search/rag/playground-query.md - file: search/rag/playground-troubleshooting.md - file: search/hybrid-search.md - children: + children: - file: search/hybrid-semantic-text.md - file: search/ranking.md children: From d8cc25a1ebf955767c9e11858d3d1687428f9252 Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Thu, 20 Feb 2025 16:44:37 -0600 Subject: [PATCH 14/23] add monitor datasets --- .../observability-monitor-datasets.md | 63 ------------------ .../observability/monitor-datasets.md | 63 ------------------ raw-migrated-files/toc.yml | 2 - .../data-set-quality-monitoring.md | 65 +++++++++++++++++-- 4 files changed, 60 insertions(+), 133 deletions(-) delete mode 100644 raw-migrated-files/docs-content/serverless/observability-monitor-datasets.md delete mode 100644 raw-migrated-files/observability-docs/observability/monitor-datasets.md diff --git a/raw-migrated-files/docs-content/serverless/observability-monitor-datasets.md b/raw-migrated-files/docs-content/serverless/observability-monitor-datasets.md deleted file mode 100644 index 35279f1ec1..0000000000 --- a/raw-migrated-files/docs-content/serverless/observability-monitor-datasets.md +++ /dev/null @@ -1,63 +0,0 @@ ---- -navigation_title: "Data set quality" ---- - -# Data set quality monitoring [observability-monitor-datasets] - - -[beta] - -The **Data Set Quality** page provides an overview of your log, metric, trace, and synthetic data sets. Use this information to get an idea of your overall data set quality and find data sets that contain incorrectly parsed documents. - -Access the Data Set Quality page from the main menu at **Project settings** → **Management*** → ***Data Set Quality**. By default, the page only shows log data sets. To see other data set types, select them from the **Type** menu. - -::::{admonition} Requirements -:class: note - -Users with the `viewer` role can view the Data Sets Quality summary. To view the Active Data Sets and Estimated Data summaries, users need the `monitor` [index privilege](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md#privileges-list-indices) for the `logs-*-*` index. - -:::: - - -The quality of your data sets is based on the percentage of degraded documents in each data set. A degraded document in a data set contains the [`_ignored`](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/mapping-reference/mapping-ignored-field.md) property because one or more of its fields were ignored during indexing. Fields are ignored for a variety of reasons. For example, when the [`ignore_malformed`](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/mapping-reference/mapping-ignored-field.md) parameter is set to true, if a document field contains the wrong data type, the malformed field is ignored and the rest of the document is indexed. 
- -From the data set table, you’ll find information for each data set such as its namespace, when the data set was last active, and the percentage of degraded docs. The percentage of degraded documents determines the data set’s quality according to the following scale: - -* Good (![Good icon](../../../images/serverless-green-dot-icon.png "")): 0% of the documents in the data set are degraded. -* Degraded (![Degraded icon](../../../images/serverless-yellow-dot-icon.png "")): Greater than 0% and up to 3% of the documents in the data set are degraded. -* Poor (![Poor icon](../../../images/serverless-red-dot-icon.png "")): Greater than 3% of the documents in the data set are degraded. - -Opening the details of a specific data set shows the degraded documents history, a summary for the data set, and other details that can help you determine if you need to investigate any issues. - - -## Investigate issues [observability-monitor-datasets-investigate-issues] - -The Data Set Quality page has a couple of different ways to help you find ignored fields and investigate issues. From the data set table, you can open the data set’s details page, and view commonly ignored fields and information about those fields. Open a logs data set in Logs Explorer or other data set types in Discover to find ignored fields in individual documents. - - -### Find ignored fields in data sets [observability-monitor-datasets-find-ignored-fields-in-data-sets] - -To open the details page for a data set with poor or degraded quality and view ignored fields: - -1. From the data set table, click ![expand icon](../../../images/serverless-expand.svg "") next to a data set with poor or degraded quality. -2. From the details, scroll down to **Quality issues**. - -The **Quality issues** section shows fields that have been ignored, the number of documents that contain ignored fields, and the timestamp of last occurrence of the field being ignored. - - -### Find ignored fields in individual logs [observability-monitor-datasets-find-ignored-fields-in-individual-logs] - -To use Logs Explorer or Discover to find ignored fields in individual logs: - -1. Find data sets with degraded documents using the **Degraded Docs** column of the data sets table. -2. Click the percentage in the **Degraded Docs** column to open the data set in Logs Explorer or Discover. - -The **Documents** table in Logs Explorer or Discover is automatically filtered to show documents that were not parsed correctly. Under the **actions** column, you’ll find the degraded document icon (![degraded document icon](../../../images/serverless-indexClose.svg "")). - -Now that you know which documents contain ignored fields, examine them more closely to find the origin of the issue: - -1. Under the **actions** column, click ![expand icon](../../../images/serverless-expand.svg "") to open the document details. -2. Select the **JSON** tab. -3. Scroll towards the end of the JSON to find the `ignored_field_values`. - -Here, you’ll find all of the `_ignored` fields in the document and their values, which should provide some clues as to why the fields were ignored. 
diff --git a/raw-migrated-files/observability-docs/observability/monitor-datasets.md b/raw-migrated-files/observability-docs/observability/monitor-datasets.md deleted file mode 100644 index 31b605510b..0000000000 --- a/raw-migrated-files/observability-docs/observability/monitor-datasets.md +++ /dev/null @@ -1,63 +0,0 @@ -# Data set quality [monitor-datasets] - -[beta] - -The **Data Set Quality** page provides an overview of your log, metric, trace, and synthetic data sets. Use this information to get an idea of your overall data set quality and find data sets that contain incorrectly parsed documents. - -To open **Data Set Quality**, find **Stack Management** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). By default, the page only shows log data sets. To see other data set types, select them from the **Type** menu. - -:::{image} ../../../images/observability-data-set-quality-overview.png -:alt: Screen capture of the data set overview -:class: screenshot -::: - -::::{admonition} Requirements -:class: note - -Users with the `viewer` role can view the Data Sets Quality summary. To view the Active Data Sets and Estimated Data summaries, users need the `monitor` [index privilege](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md#privileges-list-indices) for the `logs-*-*` index. - -:::: - - -The quality of your data sets is based on the percentage of degraded documents in each data set. A degraded document in a data set contains the [_ignored](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/mapping-reference/mapping-ignored-field.md) property because one or more of its fields were ignored during indexing. Fields are ignored for a variety of reasons. For example, when the [ignore_malformed](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/mapping-reference/mapping-ignored-field.md) parameter is set to true, if a document field contains the wrong data type, the malformed field is ignored and the rest of the document is indexed. - -From the data set table, you’ll find information for each data set such as its namespace, size, when the data set was last active, and the percentage of degraded docs. The percentage of degraded documents determines the data set’s quality according to the following scale: - -* Good (![Good icon](../../../images/observability-green-dot-icon.png "")): 0% of the documents in the data set are degraded. -* Degraded (![Degraded icon](../../../images/observability-yellow-dot-icon.png "")): Greater than 0% and up to 3% of the documents in the data set are degraded. -* Poor (![Poor icon](../../../images/observability-red-dot-icon.png "")): Greater than 3% of the documents in the data set are degraded. - -Opening the details of a specific data set shows the degraded documents history, a summary for the data set, and other details that can help you determine if you need to investigate any issues. - - -## Investigate issues [investigate-issues] - -The Data Set Quality page has a couple of different ways to help you find ignored fields and investigate issues. From the data set table, you can open the data set’s details page, and view commonly ignored fields and information about those fields. Open a logs data set in Logs Explorer or other data set types in Discover to find ignored fields in individual documents. 
- - -### Find ignored fields in data sets [find-ignored-fields-in-data-sets] - -To open the details page for a data set with poor or degraded quality and view ignored fields: - -1. From the data set table, click ![expand icon](../../../images/observability-expand-icon.png "") next to a data set with poor or degraded quality. -2. From the details page, scroll down to **Quality issues**. - -The **Quality issues** section shows fields that were ignored during ingest, the number of documents that contain ignored fields, and the timestamp of last occurrence of the field being ignored. - - -### Find ignored fields in individual documents [find-ignored-fields-in-individual-logs] - -To use Logs Explorer or Discover to find ignored fields in individual documents: - -1. Find data sets with degraded documents using the **Degraded Docs** column of the data sets table. -2. Click the percentage in the **Degraded Docs** column to open the data set in Logs Explorer or Discover. - -The **Documents** table in Logs Explorer or Discover is automatically filtered to show documents that were not parsed correctly. Under the **actions** column, you’ll find the degraded document icon. - -Now that you know which documents contain ignored fields, examine them more closely to find the origin of the issue: - -1. Under the **actions** column, click ![expand icon](../../../images/observability-expand-icon.png "") to open the document details. -2. Select the **JSON** tab. -3. Scroll towards the end of the JSON to find the `ignored_field_values`. - -Here, you’ll find all of the `_ignored` fields in the document and their values, which should provide some clues as to why the fields were ignored. diff --git a/raw-migrated-files/toc.yml b/raw-migrated-files/toc.yml index f07447757a..3dd318ac38 100644 --- a/raw-migrated-files/toc.yml +++ b/raw-migrated-files/toc.yml @@ -225,7 +225,6 @@ toc: - file: docs-content/serverless/observability-ecs-application-logs.md - file: docs-content/serverless/observability-get-started.md - file: docs-content/serverless/observability-log-monitoring.md - - file: docs-content/serverless/observability-monitor-datasets.md - file: docs-content/serverless/observability-plaintext-application-logs.md - file: docs-content/serverless/observability-stream-log-files.md - file: docs-content/serverless/project-and-management-settings.md @@ -459,7 +458,6 @@ toc: - file: observability-docs/observability/index.md - file: observability-docs/observability/infrastructure-and-host-monitoring-intro.md - file: observability-docs/observability/logs-checklist.md - - file: observability-docs/observability/monitor-datasets.md - file: observability-docs/observability/obs-ai-assistant.md - file: observability-docs/observability/observability-get-started.md - file: security-docs/security/index.md diff --git a/solutions/observability/data-set-quality-monitoring.md b/solutions/observability/data-set-quality-monitoring.md index 020e9fce36..887b14b72b 100644 --- a/solutions/observability/data-set-quality-monitoring.md +++ b/solutions/observability/data-set-quality-monitoring.md @@ -4,11 +4,66 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/observability-monitor-datasets.html --- -# Data set quality monitoring +--- +navigation_title: "Data set quality" +--- + +# Data set quality monitoring [observability-monitor-datasets] + + +[beta] + +The **Data Set Quality** page provides an overview of your log, metric, trace, and synthetic data sets. 
Use this information to get an idea of your overall data set quality and find data sets that contain incorrectly parsed documents. + +To open **Data Set Quality**, find **Stack Management** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). By default, the page only shows log data sets. To see other data set types, select them from the **Type** menu. + +::::{admonition} Requirements +:class: note + +Users with the `viewer` role can view the Data Sets Quality summary. To view the Active Data Sets and Estimated Data summaries, users need the `monitor` [index privilege](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md#privileges-list-indices) for the `logs-*-*` index. + +:::: + + +The quality of your data sets is based on the percentage of degraded documents in each data set. A degraded document in a data set contains the [`_ignored`](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/mapping-reference/mapping-ignored-field.md) property because one or more of its fields were ignored during indexing. Fields are ignored for a variety of reasons. For example, when the [`ignore_malformed`](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/mapping-reference/mapping-ignored-field.md) parameter is set to true, if a document field contains the wrong data type, the malformed field is ignored and the rest of the document is indexed. + +From the data set table, you’ll find information for each data set such as its namespace, when the data set was last active, and the percentage of degraded docs. The percentage of degraded documents determines the data set’s quality according to the following scale: + +* Good (![Good icon](../../../images/serverless-green-dot-icon.png "")): 0% of the documents in the data set are degraded. +* Degraded (![Degraded icon](../../../images/serverless-yellow-dot-icon.png "")): Greater than 0% and up to 3% of the documents in the data set are degraded. +* Poor (![Poor icon](../../../images/serverless-red-dot-icon.png "")): Greater than 3% of the documents in the data set are degraded. + +Opening the details of a specific data set shows the degraded documents history, a summary for the data set, and other details that can help you determine if you need to investigate any issues. + + +## Investigate issues [observability-monitor-datasets-investigate-issues] + +The Data Set Quality page has a couple of different ways to help you find ignored fields and investigate issues. From the data set table, you can open the data set’s details page, and view commonly ignored fields and information about those fields. Open a logs data set in Logs Explorer or other data set types in Discover to find ignored fields in individual documents. + + +### Find ignored fields in data sets [observability-monitor-datasets-find-ignored-fields-in-data-sets] + +To open the details page for a data set with poor or degraded quality and view ignored fields: + +1. From the data set table, click ![expand icon](../../../images/serverless-expand.svg "") next to a data set with poor or degraded quality. +2. From the details, scroll down to **Quality issues**. + +The **Quality issues** section shows fields that have been ignored, the number of documents that contain ignored fields, and the timestamp of last occurrence of the field being ignored. 
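You can also list a data set's degraded documents directly from Dev Tools with an `exists` query on the `_ignored` metadata field. The following is a minimal sketch that assumes your data follows the `logs-*-*` index pattern:

```console
GET logs-*-*/_search
{
  "query": {
    "exists": {
      "field": "_ignored"
    }
  }
}
```

Every hit returned by this query is a degraded document, and its `_ignored` metadata lists the fields that were skipped during indexing.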
+ + +### Find ignored fields in individual logs [observability-monitor-datasets-find-ignored-fields-in-individual-logs] + +To use Logs Explorer or Discover to find ignored fields in individual logs: + +1. Find data sets with degraded documents using the **Degraded Docs** column of the data sets table. +2. Click the percentage in the **Degraded Docs** column to open the data set in Logs Explorer or Discover. + +The **Documents** table in Logs Explorer or Discover is automatically filtered to show documents that were not parsed correctly. Under the **actions** column, you’ll find the degraded document icon (![degraded document icon](../../../images/serverless-indexClose.svg "")). -% What needs to be done: Align serverless/stateful +Now that you know which documents contain ignored fields, examine them more closely to find the origin of the issue: -% Use migrated content from existing pages that map to this page: +1. Under the **actions** column, click ![expand icon](../../../images/serverless-expand.svg "") to open the document details. +2. Select the **JSON** tab. +3. Scroll towards the end of the JSON to find the `ignored_field_values`. -% - [ ] ./raw-migrated-files/observability-docs/observability/monitor-datasets.md -% - [ ] ./raw-migrated-files/docs-content/serverless/observability-monitor-datasets.md \ No newline at end of file +Here, you’ll find all of the `_ignored` fields in the document and their values, which should provide some clues as to why the fields were ignored. \ No newline at end of file From c3729ca014e8ea763ef6588d396d9515a38ea0e2 Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Thu, 20 Feb 2025 16:50:56 -0600 Subject: [PATCH 15/23] add obs get started --- .../serverless/observability-get-started.md | 67 ---------------- .../observability-get-started.md | 72 ----------------- raw-migrated-files/toc.yml | 2 - solutions/observability/get-started.md | 77 +++++++++++++++++-- 4 files changed, 70 insertions(+), 148 deletions(-) delete mode 100644 raw-migrated-files/docs-content/serverless/observability-get-started.md delete mode 100644 raw-migrated-files/observability-docs/observability/observability-get-started.md diff --git a/raw-migrated-files/docs-content/serverless/observability-get-started.md b/raw-migrated-files/docs-content/serverless/observability-get-started.md deleted file mode 100644 index 9d2b34c58a..0000000000 --- a/raw-migrated-files/docs-content/serverless/observability-get-started.md +++ /dev/null @@ -1,67 +0,0 @@ ---- -navigation_title: "Get started" ---- - -# Get started with {{obs-serverless}} [observability-get-started] - - -New to Elastic {{observability}}? Discover more about our observability features and how to get started. - - -## Learn about Elastic {{observability}} [_learn_about_elastic_observability] - -Learn about key features available to help you get value from your observability data and what it will cost you: - -* [Observability overview](../../../solutions/observability/get-started/what-is-elastic-observability.md) -* [{{obs-serverless}} billing dimensions](../../../deploy-manage/cloud-organization/billing/elastic-observability-billing-dimensions.md) - - -## Get started with your use case [get-started-with-use-case] - -Learn how to create an Observability project and use Elastic Observability to gain deeper insight into the behavior of your applications and systems. - -:::{image} ../../../images/serverless-get-started.svg -:alt: get started -::: - -1. 
**Choose your source.** Elastic integrates with hundreds of data sources for unified visibility across all your applications and systems. -2. **Ingest your data.** Turn-key integrations provide a repeatable workflow to ingest data from all your sources: you install an integration, configure it, and deploy an agent to collect your data. -3. **View your data.** Navigate seamlessly between Observabilty UIs and dashboards to identify and resolve problems quickly. -4. **Customize.** Expand your deployment and add features like alerting and anomaly detection. - -To get started, [create an Observability project](../../../solutions/observability/get-started/create-an-observability-project.md), then follow one of our [quickstarts](../../../solutions/observability/get-started.md#quickstarts-overview) to learn how to ingest and visualize your observability data. - - -### Quickstarts [quickstarts-overview] - -Our quickstarts dramatically reduce your time-to-value by offering a fast path to ingest and visualize your Observability data. Each quickstart provides: - -* A highly opinionated, fast path to data ingestion -* Sensible configuration defaults with minimal configuration required -* Auto-detection of logs and metrics for monitoring hosts -* Quick access to related dashboards and visualizations - -Follow the steps in these guides to get started quickly: - -* [Quickstart: Monitor hosts with {{agent}}](../../../solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md) -* [Quickstart: Monitor your Kubernetes cluster with Elastic Agent](../../../solutions/observability/get-started/quickstart-monitor-kubernetes-cluster-with-elastic-agent.md) -* [Quickstart: Monitor hosts with OpenTelemetry](../../../solutions/observability/get-started/quickstart-monitor-hosts-with-opentelemetry.md) -* [Quickstart: Unified Kubernetes Observability with Elastic Distributions of OpenTelemetry (EDOT)](../../../solutions/observability/get-started/quickstart-unified-kubernetes-observability-with-elastic-distributions-of-opentelemetry-edot.md) -* [Quickstart: Collect data with AWS Firehose](../../../solutions/observability/get-started/quickstart-collect-data-with-aws-firehose.md) - - -### Get started with other features [_get_started_with_other_features] - -Want to use {{fleet}} or some other feature not covered in the quickstarts? Follow the steps in these guides to get started: - -* [Get started with system metrics](../../../solutions/observability/logs/get-started-with-system-logs.md) -* [Get started with application traces and APM](../../../solutions/observability/apps/get-started-with-apm.md) -* [Get started with synthetic monitoring](../../../solutions/observability/apps/get-started.md) - - -## Additional guides [_additional_guides] - -Ready to dig into more features of {{obs-serverless}}? 
See these guides: - -* [Alerting](../../../solutions/observability/incident-management/alerting.md) -* [Service-level objectives (SLOs)](../../../solutions/observability/incident-management/service-level-objectives-slos.md) diff --git a/raw-migrated-files/observability-docs/observability/observability-get-started.md b/raw-migrated-files/observability-docs/observability/observability-get-started.md deleted file mode 100644 index 5dfe1fb344..0000000000 --- a/raw-migrated-files/observability-docs/observability/observability-get-started.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -navigation_title: "Get started" ---- - -# Get started with Elastic Observability [observability-get-started] - - -New to Elastic {{observability}}? Discover more about our observability features and how to get started. - - -## Learn about Elastic {{observability}} [_learn_about_elastic_observability] - -Learn about key features available to help you get value from your observability data: - -* [What is Elastic {{observability}}?](../../../solutions/observability/get-started/what-is-elastic-observability.md) -* [What’s new in 9.0](https://www.elastic.co/guide/en/observability/current/whats-new.html) - - -## Get started with your use case [get-started-with-use-case] - -Learn how to spin up a deployment of our hosted {{ess}} and use Elastic Observability to gain deeper insight into the behavior of your applications and systems. - -:::{image} ../../../images/observability-get-started.svg -:alt: get started -::: - -1. **Choose your source.** Elastic integrates with hundreds of data sources for unified visibility across all your applications and systems. -2. **Ingest your data.** Turn-key integrations provide a repeatable workflow to ingest data from all your sources: you install an integration, configure it, and deploy an agent to collect your data. -3. **View your data.** Navigate seamlessly between Observabilty UIs and dashboards to identify and resolve problems quickly. -4. **Customize.** Expand your deployment and add features like alerting and anomaly detection. - - -### Quickstarts [quickstarts-overview] - -Our quickstarts dramatically reduce your time-to-value by offering a fast path to ingest and visualize your Observability data. 
Each quickstart provides: - -* A highly opinionated, fast path to data ingestion -* Sensible configuration defaults with minimal configuration required -* Auto-detection of logs and metrics for monitoring hosts -* Quick access to related dashboards and visualizations - -Follow the steps in these guides to get started quickly: - -* [Quickstart: Monitor hosts with {{agent}}](../../../solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md) -* [Quickstart: Monitor your Kubernetes cluster with {{agent}}](../../../solutions/observability/get-started/quickstart-monitor-kubernetes-cluster-with-elastic-agent.md) -* [Quickstart: Monitor hosts with OpenTelemetry](../../../solutions/observability/get-started/quickstart-monitor-hosts-with-opentelemetry.md) -* [Quickstart: Unified Kubernetes Observability with Elastic Distributions of OpenTelemetry (EDOT)](../../../solutions/observability/get-started/quickstart-unified-kubernetes-observability-with-elastic-distributions-of-opentelemetry-edot.md) -* [Quickstart: Collect data with AWS Firehose](../../../solutions/observability/get-started/quickstart-collect-data-with-aws-firehose.md) - - -### Get started with other features [_get_started_with_other_features] - -Want to use {{fleet}} or some other feature not covered in the quickstarts? Follow the steps in these guides to get started: - -* [Get started with system metrics](../../../solutions/observability/infra-and-hosts/get-started-with-system-metrics.md) -* [Get started with application traces and APM](../../../solutions/observability/apps/fleet-managed-apm-server.md) -* [Get started with synthetic monitoring](../../../solutions/observability/apps/synthetic-monitoring.md) -* [Get started with Universal Profiling](../../../solutions/observability/infra-and-hosts/get-started-with-universal-profiling.md) - - -## Additional guides [_additional_guides] - -Ready to dig into more features of Elastic Observability? 
See these guides: - -* [Create an alert](../../../solutions/observability/incident-management/alerting.md) -* [Create a service-level objective (SLO)](../../../solutions/observability/incident-management/create-an-slo.md) - - -## Related content [_related_content] - -* [Starting with the {{es}} Platform and its Solutions](/get-started/index.md) for new users -* [Adding data to {{es}}](../../../manage-data/ingest.md) for other ways to ingest data diff --git a/raw-migrated-files/toc.yml b/raw-migrated-files/toc.yml index 3dd318ac38..82179fc404 100644 --- a/raw-migrated-files/toc.yml +++ b/raw-migrated-files/toc.yml @@ -223,7 +223,6 @@ toc: - file: docs-content/serverless/observability-apm-get-started.md - file: docs-content/serverless/observability-apm-traces.md - file: docs-content/serverless/observability-ecs-application-logs.md - - file: docs-content/serverless/observability-get-started.md - file: docs-content/serverless/observability-log-monitoring.md - file: docs-content/serverless/observability-plaintext-application-logs.md - file: docs-content/serverless/observability-stream-log-files.md @@ -459,7 +458,6 @@ toc: - file: observability-docs/observability/infrastructure-and-host-monitoring-intro.md - file: observability-docs/observability/logs-checklist.md - file: observability-docs/observability/obs-ai-assistant.md - - file: observability-docs/observability/observability-get-started.md - file: security-docs/security/index.md - file: stack-docs/elastic-stack/index.md children: diff --git a/solutions/observability/get-started.md b/solutions/observability/get-started.md index 5710e27b7f..12e263b25a 100644 --- a/solutions/observability/get-started.md +++ b/solutions/observability/get-started.md @@ -5,15 +5,78 @@ mapped_urls: - https://www.elastic.co/guide/en/observability/current/index.html --- -# Get started +--- +navigation_title: "Get started" +--- + +# Get started with Elastic Observability [observability-get-started] + + +New to Elastic {{observability}}? Discover more about our observability features and how to get started. + + +## Learn about Elastic {{observability}} [_learn_about_elastic_observability] + +Learn about key features available to help you get value from your observability data: + +* [What is Elastic {{observability}}?](../../../solutions/observability/get-started/what-is-elastic-observability.md) +* [What’s new in Elastic Stack v9.0](https://www.elastic.co/guide/en/observability/current/whats-new.html) +* [{{obs-serverless}} billing dimensions](../../../deploy-manage/cloud-organization/billing/elastic-observability-billing-dimensions.md) + + +## Get started with your use case [get-started-with-use-case] + +Learn how to spin up a deployment of our hosted {{ess}} or create an Observability serverless project and use Elastic Observability to gain deeper insight into the behavior of your applications and systems. + +:::{image} ../../../images/observability-get-started.svg +:alt: get started +::: + +1. **Choose your source.** Elastic integrates with hundreds of data sources for unified visibility across all your applications and systems. +2. **Ingest your data.** Turn-key integrations provide a repeatable workflow to ingest data from all your sources: you install an integration, configure it, and deploy an agent to collect your data. +3. **View your data.** Navigate seamlessly between Observabilty UIs and dashboards to identify and resolve problems quickly. +4. **Customize.** Expand your deployment and add features like alerting and anomaly detection. 
+ +To get started with on serverless, [create an Observability project](../../../solutions/observability/get-started/create-an-observability-project.md), then follow one of our [quickstarts](../../../solutions/observability/get-started.md#quickstarts-overview) to learn how to ingest and visualize your observability data. + +### Quickstarts [quickstarts-overview] + +Our quickstarts dramatically reduce your time-to-value by offering a fast path to ingest and visualize your Observability data. Each quickstart provides: + +* A highly opinionated, fast path to data ingestion +* Sensible configuration defaults with minimal configuration required +* Auto-detection of logs and metrics for monitoring hosts +* Quick access to related dashboards and visualizations + +Follow the steps in these guides to get started quickly: + +* [Quickstart: Monitor hosts with {{agent}}](../../../solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md) +* [Quickstart: Monitor your Kubernetes cluster with {{agent}}](../../../solutions/observability/get-started/quickstart-monitor-kubernetes-cluster-with-elastic-agent.md) +* [Quickstart: Monitor hosts with OpenTelemetry](../../../solutions/observability/get-started/quickstart-monitor-hosts-with-opentelemetry.md) +* [Quickstart: Unified Kubernetes Observability with Elastic Distributions of OpenTelemetry (EDOT)](../../../solutions/observability/get-started/quickstart-unified-kubernetes-observability-with-elastic-distributions-of-opentelemetry-edot.md) +* [Quickstart: Collect data with AWS Firehose](../../../solutions/observability/get-started/quickstart-collect-data-with-aws-firehose.md) + + +### Get started with other features [_get_started_with_other_features] + +Want to use {{fleet}} or some other feature not covered in the quickstarts? Follow the steps in these guides to get started: + +% Stateful only for Universal profiling + +* [Get started with system metrics](../../../solutions/observability/infra-and-hosts/get-started-with-system-metrics.md) +* [Get started with application traces and APM](../../../solutions/observability/apps/fleet-managed-apm-server.md) +* [Get started with synthetic monitoring](../../../solutions/observability/apps/synthetic-monitoring.md) +* [Get started with Universal Profiling](../../../solutions/observability/infra-and-hosts/get-started-with-universal-profiling.md) + -% What needs to be done: Align serverless/stateful +## Additional guides [_additional_guides] -% Use migrated content from existing pages that map to this page: +Ready to dig into more features of Elastic Observability? See these guides: -% - [ ] ./raw-migrated-files/docs-content/serverless/observability-get-started.md -% - [ ] ./raw-migrated-files/observability-docs/observability/observability-get-started.md +* [Create an alert](../../../solutions/observability/incident-management/alerting.md) +* [Create a service-level objective (SLO)](../../../solutions/observability/incident-management/create-an-slo.md) -% Internal links rely on the following IDs being on this page (e.g. 
as a heading ID, paragraph ID, etc): +## Related content for Elastic Stack v9.0 [_related_content] -$$$quickstarts-overview$$$ \ No newline at end of file +* [Starting with the {{es}} Platform and its Solutions](/get-started/index.md) for new users +* [Adding data to {{es}}](../../../manage-data/ingest.md) for other ways to ingest data \ No newline at end of file From 259ebf794e7e599948192992510b310de2278e8b Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Thu, 20 Feb 2025 16:55:15 -0600 Subject: [PATCH 16/23] add incident management --- .../docs-content/serverless/incident-management.md | 13 ------------- .../observability/incident-management.md | 13 ------------- raw-migrated-files/toc.yml | 2 -- solutions/observability/incident-management.md | 13 +++++++------ 4 files changed, 7 insertions(+), 34 deletions(-) delete mode 100644 raw-migrated-files/docs-content/serverless/incident-management.md delete mode 100644 raw-migrated-files/observability-docs/observability/incident-management.md diff --git a/raw-migrated-files/docs-content/serverless/incident-management.md b/raw-migrated-files/docs-content/serverless/incident-management.md deleted file mode 100644 index eb52a28187..0000000000 --- a/raw-migrated-files/docs-content/serverless/incident-management.md +++ /dev/null @@ -1,13 +0,0 @@ -# Incident management [incident-management] - -Explore the topics in this section to learn how to respond to incidents detected in your {{observability}} data. - -| | | -| --- | --- | -| [Alerting](../../../solutions/observability/incident-management/alerting.md) | Trigger alerts when incidents occur, and use built-in connectors to send the alerts to email, slack, or other third-party systems, such as your external incident management application. | -| [Cases](../../../solutions/observability/incident-management/cases.md) | Collect and share information about {{observability}} issues by opening cases and optionally sending them to your external incident management application. | -| [Service-level objectives (SLOs)](../../../solutions/observability/incident-management/service-level-objectives-slos.md) | Set clear, measurable targets for your service performance, based on factors like availability, response times, error rates, and other key metrics. | - - - - diff --git a/raw-migrated-files/observability-docs/observability/incident-management.md b/raw-migrated-files/observability-docs/observability/incident-management.md deleted file mode 100644 index eb52a28187..0000000000 --- a/raw-migrated-files/observability-docs/observability/incident-management.md +++ /dev/null @@ -1,13 +0,0 @@ -# Incident management [incident-management] - -Explore the topics in this section to learn how to respond to incidents detected in your {{observability}} data. - -| | | -| --- | --- | -| [Alerting](../../../solutions/observability/incident-management/alerting.md) | Trigger alerts when incidents occur, and use built-in connectors to send the alerts to email, slack, or other third-party systems, such as your external incident management application. | -| [Cases](../../../solutions/observability/incident-management/cases.md) | Collect and share information about {{observability}} issues by opening cases and optionally sending them to your external incident management application. 
| -| [Service-level objectives (SLOs)](../../../solutions/observability/incident-management/service-level-objectives-slos.md) | Set clear, measurable targets for your service performance, based on factors like availability, response times, error rates, and other key metrics. | - - - - diff --git a/raw-migrated-files/toc.yml b/raw-migrated-files/toc.yml index 82179fc404..8615df6c2e 100644 --- a/raw-migrated-files/toc.yml +++ b/raw-migrated-files/toc.yml @@ -209,7 +209,6 @@ toc: - file: docs-content/serverless/general-ml-nlp-auto-scale.md - file: docs-content/serverless/general-serverless-status.md - file: docs-content/serverless/general-sign-up-trial.md - - file: docs-content/serverless/incident-management.md - file: docs-content/serverless/index-management.md - file: docs-content/serverless/infrastructure-and-host-monitoring-intro.md - file: docs-content/serverless/ingest-aws-securityhub-data.md @@ -453,7 +452,6 @@ toc: - file: observability-docs/observability/apm-agents.md - file: observability-docs/observability/apm-getting-started-apm-server.md - file: observability-docs/observability/apm-traces.md - - file: observability-docs/observability/incident-management.md - file: observability-docs/observability/index.md - file: observability-docs/observability/infrastructure-and-host-monitoring-intro.md - file: observability-docs/observability/logs-checklist.md diff --git a/solutions/observability/incident-management.md b/solutions/observability/incident-management.md index 9e3994ab73..939a4a6c69 100644 --- a/solutions/observability/incident-management.md +++ b/solutions/observability/incident-management.md @@ -4,11 +4,12 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/incident-management.html --- -# Incident management +# Incident management [incident-management] -% What needs to be done: Align serverless/stateful +Explore the topics in this section to learn how to respond to incidents detected in your {{observability}} data. -% Use migrated content from existing pages that map to this page: - -% - [ ] ./raw-migrated-files/observability-docs/observability/incident-management.md -% - [ ] ./raw-migrated-files/docs-content/serverless/incident-management.md \ No newline at end of file +| | | +| --- | --- | +| [Alerting](../../../solutions/observability/incident-management/alerting.md) | Trigger alerts when incidents occur, and use built-in connectors to send the alerts to email, slack, or other third-party systems, such as your external incident management application. | +| [Cases](../../../solutions/observability/incident-management/cases.md) | Collect and share information about {{observability}} issues by opening cases and optionally sending them to your external incident management application. | +| [Service-level objectives (SLOs)](../../../solutions/observability/incident-management/service-level-objectives-slos.md) | Set clear, measurable targets for your service performance, based on factors like availability, response times, error rates, and other key metrics. 
| \ No newline at end of file From acab52c45aae6c60ebcf1411fe227632f77a44ad Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Thu, 20 Feb 2025 16:58:08 -0600 Subject: [PATCH 17/23] and infra and host intro --- ...nfrastructure-and-host-monitoring-intro.md | 18 -------------- ...nfrastructure-and-host-monitoring-intro.md | 24 ------------------- raw-migrated-files/toc.yml | 2 -- solutions/observability/infra-and-hosts.md | 18 ++++++++++---- 4 files changed, 13 insertions(+), 49 deletions(-) delete mode 100644 raw-migrated-files/docs-content/serverless/infrastructure-and-host-monitoring-intro.md delete mode 100644 raw-migrated-files/observability-docs/observability/infrastructure-and-host-monitoring-intro.md diff --git a/raw-migrated-files/docs-content/serverless/infrastructure-and-host-monitoring-intro.md b/raw-migrated-files/docs-content/serverless/infrastructure-and-host-monitoring-intro.md deleted file mode 100644 index 8a5ae66d5a..0000000000 --- a/raw-migrated-files/docs-content/serverless/infrastructure-and-host-monitoring-intro.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -navigation_title: "Infrastructure and hosts" ---- - -# Infrastructure and host monitoring [infrastructure-and-host-monitoring-intro] - - -Explore the topics in this section to learn how to observe and monitor hosts and other systems running in your environment. - -| | | -| --- | --- | -| [Analyze infrastructure and host metrics](../../../solutions/observability/infra-and-hosts/analyze-infrastructure-host-metrics.md) | Visualize infrastructure metrics to help diagnose problematic spikes, identify high resource utilization, automatically discover and track pods, and unify your metrics with other observability data. | -| [Troubleshooting](../../../troubleshoot/observability/troubleshooting-infrastructure-monitoring.md) | Troubleshoot common issues on your own or ask for help. | -| [Metrics reference](asciidocalypse://docs/docs-content/docs/reference/data-analysis/observability/metrics-reference-serverless.md) | Learn about the key metrics displayed in the Infrastructure UI and how they are calculated. | - - - - diff --git a/raw-migrated-files/observability-docs/observability/infrastructure-and-host-monitoring-intro.md b/raw-migrated-files/observability-docs/observability/infrastructure-and-host-monitoring-intro.md deleted file mode 100644 index 62cc2e40ff..0000000000 --- a/raw-migrated-files/observability-docs/observability/infrastructure-and-host-monitoring-intro.md +++ /dev/null @@ -1,24 +0,0 @@ ---- -navigation_title: "Infrastructure and hosts" ---- - -# Infrastructure and host monitoring [infrastructure-and-host-monitoring-intro] - - -Explore the topics in this section to learn how to observe and monitor hosts and other systems running in your environment. - -| | | -| --- | --- | -| [Analyze infrastructure and host metrics](../../../solutions/observability/infra-and-hosts/analyze-infrastructure-host-metrics.md) | Visualize infrastructure metrics to help diagnose problematic spikes, identify high resource utilization, automatically discover and track pods, and unify your metrics with other observability data. | -| [Universal Profiling](../../../solutions/observability/infra-and-hosts/universal-profiling.md) | Profile all the code running on a machine, including application code, kernel, and third-party libraries. 
| -| [Tutorial: Observe your Kubernetes deployments](../../../solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md) | Observe all layers of your application, including the orchestration software itself. | -| [Tutorial: Observe your nginx instances](../../../solutions/observability/infra-and-hosts/tutorial-observe-nginx-instances.md) | Collect valuable metrics and logs from your nginx instances. | -| [Troubleshooting](../../../troubleshoot/observability/troubleshooting-infrastructure-monitoring.md) | Troubleshoot common issues on your own or ask for help. | -| [Metrics reference](asciidocalypse://docs/docs-content/docs/reference/data-analysis/observability/metrics-reference.md) | Learn about the key metrics displayed in the Infrastructure UI and how they are calculated. | - - - - - - - diff --git a/raw-migrated-files/toc.yml b/raw-migrated-files/toc.yml index 8615df6c2e..77eff47f7f 100644 --- a/raw-migrated-files/toc.yml +++ b/raw-migrated-files/toc.yml @@ -210,7 +210,6 @@ toc: - file: docs-content/serverless/general-serverless-status.md - file: docs-content/serverless/general-sign-up-trial.md - file: docs-content/serverless/index-management.md - - file: docs-content/serverless/infrastructure-and-host-monitoring-intro.md - file: docs-content/serverless/ingest-aws-securityhub-data.md - file: docs-content/serverless/ingest-falco.md - file: docs-content/serverless/ingest-third-party-cloud-security-data.md @@ -453,7 +452,6 @@ toc: - file: observability-docs/observability/apm-getting-started-apm-server.md - file: observability-docs/observability/apm-traces.md - file: observability-docs/observability/index.md - - file: observability-docs/observability/infrastructure-and-host-monitoring-intro.md - file: observability-docs/observability/logs-checklist.md - file: observability-docs/observability/obs-ai-assistant.md - file: security-docs/security/index.md diff --git a/solutions/observability/infra-and-hosts.md b/solutions/observability/infra-and-hosts.md index 80466b1a00..f5205a9fa5 100644 --- a/solutions/observability/infra-and-hosts.md +++ b/solutions/observability/infra-and-hosts.md @@ -2,13 +2,21 @@ mapped_urls: - https://www.elastic.co/guide/en/observability/current/infrastructure-and-host-monitoring-intro.html - https://www.elastic.co/guide/en/serverless/current/infrastructure-and-host-monitoring-intro.html + +navigation_title: "Infrastructure and hosts" --- -# Infrastructure and hosts +# Infrastructure and host monitoring [infrastructure-and-host-monitoring-intro] -% What needs to be done: Align serverless/stateful +% Stateful only for Profiling, Tutorials, Metrics reference. -% Use migrated content from existing pages that map to this page: +Explore the topics in this section to learn how to observe and monitor hosts and other systems running in your environment. -% - [ ] ./raw-migrated-files/observability-docs/observability/infrastructure-and-host-monitoring-intro.md -% - [ ] ./raw-migrated-files/docs-content/serverless/infrastructure-and-host-monitoring-intro.md \ No newline at end of file +| | | +| --- | --- | +| [Analyze infrastructure and host metrics](../../../solutions/observability/infra-and-hosts/analyze-infrastructure-host-metrics.md) | Visualize infrastructure metrics to help diagnose problematic spikes, identify high resource utilization, automatically discover and track pods, and unify your metrics with other observability data. 
| +| [Universal Profiling](../../../solutions/observability/infra-and-hosts/universal-profiling.md) | Profile all the code running on a machine, including application code, kernel, and third-party libraries. | +| [Tutorial: Observe your Kubernetes deployments](../../../solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md) | Observe all layers of your application, including the orchestration software itself. | +| [Tutorial: Observe your nginx instances](../../../solutions/observability/infra-and-hosts/tutorial-observe-nginx-instances.md) | Collect valuable metrics and logs from your nginx instances. | +| [Troubleshooting](../../../troubleshoot/observability/troubleshooting-infrastructure-monitoring.md) | Troubleshoot common issues on your own or ask for help. | +| [Metrics reference](asciidocalypse://docs/docs-content/docs/reference/data-analysis/observability/metrics-reference.md) | Learn about the key metrics displayed in the Infrastructure UI and how they are calculated. | \ No newline at end of file From 07789eca4ceb7102f2af5b87b416fd4147537795 Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Thu, 20 Feb 2025 17:08:28 -0600 Subject: [PATCH 18/23] add logs intro --- .../observability-log-monitoring.md | 94 ------------- .../observability/logs-checklist.md | 130 ------------------ solutions/observability/logs.md | 106 +++++++++++++- .../logs-index-template-reference.md | 0 solutions/toc.yml | 2 +- 5 files changed, 102 insertions(+), 230 deletions(-) delete mode 100644 raw-migrated-files/docs-content/serverless/observability-log-monitoring.md delete mode 100644 raw-migrated-files/observability-docs/observability/logs-checklist.md rename solutions/observability/{unknown-bucket => logs}/logs-index-template-reference.md (100%) diff --git a/raw-migrated-files/docs-content/serverless/observability-log-monitoring.md b/raw-migrated-files/docs-content/serverless/observability-log-monitoring.md deleted file mode 100644 index 2e31584a88..0000000000 --- a/raw-migrated-files/docs-content/serverless/observability-log-monitoring.md +++ /dev/null @@ -1,94 +0,0 @@ ---- -navigation_title: "Logs" ---- - -# Log monitoring [observability-log-monitoring] - - -{{obs-serverless}} allows you to deploy and manage logs at a petabyte scale, giving you insights into your logs in minutes. You can also search across your logs in one place, troubleshoot in real time, and detect patterns and outliers with categorization and anomaly detection. For more information, refer to the following links: - -* [Get started with system logs](../../../solutions/observability/logs/get-started-with-system-logs.md): Onboard system log data from a machine or server. -* [Stream any log file](../../../solutions/observability/logs/stream-any-log-file.md): Send log files to your Observability project using a standalone {{agent}}. -* [Parse and route logs](../../../solutions/observability/logs/parse-route-logs.md): Parse your log data and extract structured fields that you can use to analyze your data. -* [Filter and aggregate logs](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter): Filter and aggregate your log data to find specific information, gain insight, and monitor your systems more efficiently. -* [Explore logs](../../../solutions/observability/logs/logs-explorer.md): Find information on visualizing and analyzing logs. 
-* [Run pattern analysis on log data](../../../solutions/observability/logs/run-pattern-analysis-on-log-data.md): Find patterns in unstructured log messages and make it easier to examine your data. -* [Troubleshoot logs](../../../troubleshoot/observability/troubleshoot-logs.md): Find solutions for errors you might encounter while onboarding your logs. - - -## Send logs data to your project [observability-log-monitoring-send-logs-data-to-your-project] - -You can send logs data to your project in different ways depending on your needs: - -* {agent} -* {filebeat} - -When choosing between {{agent}} and {{filebeat}}, consider the different features and functionalities between the two options. See [{{beats}} and {{agent}} capabilities](../../../manage-data/ingest/tools.md) for more information on which option best fits your situation. - - -### {{agent}} [observability-log-monitoring-agent] - -{{agent}} uses [integrations](https://www.elastic.co/integrations/data-integrations) to ingest logs from Kubernetes, MySQL, and many more data sources. You have the following options when installing and managing an {{agent}}: - - -#### {{fleet}}-managed {{agent}} [observability-log-monitoring-fleet-managed-agent] - -Install an {{agent}} and use {{fleet}} to define, configure, and manage your agents in a central location. - -See [install {{fleet}}-managed {{agent}}](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/install-fleet-managed-elastic-agent.md). - - -#### Standalone {{agent}} [observability-log-monitoring-standalone-agent] - -Install an {{agent}} and manually configure it locally on the system where it’s installed. You are responsible for managing and upgrading the agents. - -See [install standalone {{agent}}](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/install-standalone-elastic-agent.md). - - -#### {{agent}} in a containerized environment [observability-log-monitoring-agent-in-a-containerized-environment] - -Run an {{agent}} inside of a container — either with {{fleet-server}} or standalone. - -See [install {{agent}} in containers](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/install-elastic-agents-in-containers.md). - - -### {{filebeat}} [observability-log-monitoring-filebeat] - -{{filebeat}} is a lightweight shipper for forwarding and centralizing log data. Installed as a service on your servers, {{filebeat}} monitors the log files or locations that you specify, collects log events, and forwards them to your Observability project for indexing. - -* [{{filebeat}} overview](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-overview.md): General information on {{filebeat}} and how it works. -* [{{filebeat}} quick start](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-installation-configuration.md): Basic installation instructions to get you started. -* [Set up and run {{filebeat}}](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/setting-up-running.md): Information on how to install, set up, and run {{filebeat}}. - - -## Configure logs [observability-log-monitoring-configure-logs] - -The following resources provide information on configuring your logs: - -* [Data streams](../../../manage-data/data-store/data-streams.md): Efficiently store append-only time series data in multiple backing indices partitioned by time and size. 
-* [Data views](../../../explore-analyze/find-and-organize/data-views.md): Query log entries from the data streams of specific datasets or namespaces. -* [Index lifecycle management](../../../manage-data/lifecycle/index-lifecycle-management/tutorial-customize-built-in-policies.md): Configure the built-in logs policy based on your application’s performance, resilience, and retention requirements. -* [Ingest pipeline](../../../manage-data/ingest/transform-enrich/ingest-pipelines.md): Parse and transform log entries into a suitable format before indexing. -* [Mapping](../../../manage-data/data-store/mapping.md): Define how data is stored and indexed. - - -## View and monitor logs [observability-log-monitoring-view-and-monitor-logs] - -Use **Logs Explorer** to search, filter, and tail all your logs ingested into your project in one place. - -The following resources provide information on viewing and monitoring your logs: - -* [Discover and explore](../../../solutions/observability/logs/logs-explorer.md): Discover and explore all of the log events flowing in from your servers, virtual machines, and containers in a centralized view. -* [Detect log anomalies](../../../explore-analyze/machine-learning/anomaly-detection.md): Use {{ml}} to detect log anomalies automatically. - - -## Monitor data sets [observability-log-monitoring-monitor-data-sets] - -The **Data Set Quality** page provides an overview of your data sets and their quality. Use this information to get an idea of your overall data set quality, and find data sets that contain incorrectly parsed documents. - -[Monitor data sets](../../../solutions/observability/data-set-quality-monitoring.md) - - -## Application logs [observability-log-monitoring-application-logs] - -Application logs provide valuable insight into events that have occurred within your services and applications. See [Application logs](../../../solutions/observability/logs/stream-application-logs.md). diff --git a/raw-migrated-files/observability-docs/observability/logs-checklist.md b/raw-migrated-files/observability-docs/observability/logs-checklist.md deleted file mode 100644 index 0d6cb96404..0000000000 --- a/raw-migrated-files/observability-docs/observability/logs-checklist.md +++ /dev/null @@ -1,130 +0,0 @@ ---- -navigation_title: "Logs" ---- - -# Log monitoring [logs-checklist] - - -Logs are an important tool for ensuring the performance and reliability of your applications and infrastructure. They provide important information for debugging, analyzing performance, and managing compliance. - -On this page, you’ll find resources for sending log data to {{es}}, configuring your logs, and analyzing your logs. - - -## Get started with logs [logs-getting-started-checklist] - -For a high-level overview on ingesting, viewing, and analyzing logs with Elastic, refer to [Get started with logs and metrics](../../../solutions/observability/infra-and-hosts/get-started-with-system-metrics.md). - -To get started ingesting, parsing, and filtering your own data, refer to these pages: - -* **[Stream any log file](../../../solutions/observability/logs/stream-any-log-file.md)**: send log files from your system to {{es}} using a standalone {{agent}} and configure the {{agent}} and your data streams using the `elastic-agent.yml` file. -* **[Parse and organize logs](../../../solutions/observability/logs/parse-route-logs.md)**: break your log messages into meaningful fields that you can use to filter and analyze your data. 
-* **[Filter and aggregate logs](../../../solutions/observability/logs/filter-aggregate-logs.md)**: find specific information in your log data to gain insight and monitor your systems. - -The following sections provide resources to important concepts or advanced use cases for working with your logs. - - -## Send log data to {{es}} [logs-send-data-checklist] - -You can send log data to {{es}} in different ways depending on your needs: - -* **{{agent}}**: a single agent for logs, metrics, security data, and threat prevention. It can be deployed either standalone or managed by {{fleet}}: - - * **Standalone**: Manually configure, deploy and update an {{agent}} on each host. - * **Fleet**: Centrally manage and update {{agent}} policies and lifecycles in {{kib}}. - -* **{{filebeat}}**: a lightweight, logs-specific shipper for forwarding and centralizing log data. - -Refer to the [{{agent}} and {{beats}} capabilities comparison](../../../manage-data/ingest/tools.md) for more information on which option best fits your situation. - - -### Install {{agent}} [agent-ref-guide] - -The following pages detail installing and managing the {{agent}} in different modes. - -* **Standalone {{agent}}** - - Install an {{agent}} and manually configure it locally on the system where it’s installed. You are responsible for managing and upgrading the agents. - - Refer to [Stream any log file](../../../solutions/observability/logs/stream-any-log-file.md) to learn how to send a log file to {{es}} using a standalone {{agent}} and configure the {{agent}} and your data streams using the `elastic-agent.yml` file. - -* **{{fleet}}-managed {{agent}}** - - Install an {{agent}} and use {{fleet}} in {{kib}} to define, configure, and manage your agents in a central location. - - Refer to [install {{fleet}}-managed {{agent}}](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/install-fleet-managed-elastic-agent.md). - -* **{{agent}} in a containerized environment** - - Run an {{agent}} inside of a container—either with {{fleet-server}} or standalone. - - Refer to [install {{agent}} in a containerized environment](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/install-elastic-agents-in-containers.md). - - - -### Install {{filebeat}} [beats-ref-guide] - -{{filebeat}} is a lightweight shipper for forwarding and centralizing log data. Installed as a service on your servers, {{filebeat}} monitors the log files or locations that you specify, collects log events, and forwards them either to [{{es}}](https://www.elastic.co/guide/en/elasticsearch/reference/current) or [Logstash](https://www.elastic.co/guide/en/logstash/current) for indexing. - -* [{{filebeat}} overview](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-overview.md): general information on {{filebeat}} and how it works. -* [{{filebeat}} quick start](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-installation-configuration.md): basic installation instructions to get you started. -* [Set up and run {{filebeat}}](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/setting-up-running.md): information on how to install, set up, and run {{filebeat}}. 
- - -## Parse and organize your logs [logs-configure-data-checklist] - -To get started parsing and organizing your logs, refer to [Parse and organize logs](../../../solutions/observability/logs/parse-route-logs.md) for information on breaking unstructured log data into meaningful fields you can use to filter and aggregate your data. - -The following resources provide information on important concepts related to parsing and organizing your logs: - -* [Data streams](../../../manage-data/data-store/data-streams.md): Efficiently store append-only time series data in multiple backing indices partitioned by time and size. -* [Data views](../../../explore-analyze/find-and-organize/data-views.md): Query log entries from the data streams of specific datasets or namespaces. -* [Index lifecycle management](../../../manage-data/lifecycle/index-lifecycle-management/tutorial-customize-built-in-policies.md): Configure the built-in logs policy based on your application’s performance, resilience, and retention requirements. -* [Ingest pipeline](../../../manage-data/ingest/transform-enrich/ingest-pipelines.md): Parse and transform log entries into a suitable format before indexing. -* [Mapping](../../../manage-data/data-store/mapping.md): define how data is stored and indexed. - - -## View and monitor logs [logs-monitor-checklist] - -With the {{logs-app}} in {{kib}} you can search, filter, and tail all your logs ingested into {{es}} in one place. - -The following resources provide information on viewing and monitoring your logs: - -* [Logs Explorer](../../../solutions/observability/logs/logs-explorer.md): monitor all of your log events flowing in from your servers, virtual machines, and containers in a centralized view. -* [Inspect log anomalies](../../../solutions/observability/logs/inspect-log-anomalies.md): use {{ml}} to detect log anomalies automatically. -* [Categorize log entries](../../../solutions/observability/logs/categorize-log-entries.md): use {{ml}} to categorize log messages to quickly identify patterns in your log events. -* [Configure data sources](../../../solutions/observability/logs/configure-data-sources.md): Specify the source configuration for logs in the Logs app settings in the Kibana configuration file. - - -## Monitor Kubernetes logs [logs-checklist-k8s] - -You can use the {{agent}} with the Kubernetes integration to collect and parse Kubernetes logs. Refer to [Monitor Kubernetes](../../../solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md). - - -## View and monitor application logs [logs-app-checklist] - -Application logs provide valuable insight into events that have occurred within your services and applications. - -Refer to [Stream application logs](../../../solutions/observability/logs/stream-application-logs.md). - - -## Create a log threshold alert [logs-alerts-checklist] - -You can create a rule to send an alert when the log aggregation exceeds a threshold. - -Refer to [Log threshold](../../../solutions/observability/incident-management/create-log-threshold-rule.md). - - -## Configure the default logs template [logs-template-checklist] - -Configure the default `logs` template using the `logs@custom` component template. - -Refer to the [Logs index template reference](../../../solutions/observability/unknown-bucket/logs-index-template-reference.md). 
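As a minimal sketch of that customization (the lifecycle policy name below is a placeholder), you could create a `logs@custom` component template that points the built-in `logs` index template at your own ILM policy:

```console
PUT _component_template/logs@custom
{
  "template": {
    "settings": {
      "index.lifecycle.name": "my-custom-logs-policy"
    }
  }
}
```

Because the built-in `logs` index template already lists `logs@custom` as an optional component, these settings apply to new backing indices without editing the managed template itself.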
- - - - - - - - - diff --git a/solutions/observability/logs.md b/solutions/observability/logs.md index 75b5e51ba9..e1b3e97aee 100644 --- a/solutions/observability/logs.md +++ b/solutions/observability/logs.md @@ -2,13 +2,109 @@ mapped_urls: - https://www.elastic.co/guide/en/observability/current/logs-checklist.html - https://www.elastic.co/guide/en/serverless/current/observability-log-monitoring.html + +navigation_title: "Logs" --- -# Logs +# Log monitoring [logs-checklist] + +Elastic Observability allows you to deploy and manage logs at a petabyte scale, giving you insights into your logs in minutes. You can also search across your logs in one place, troubleshoot in real time, and detect patterns and outliers with categorization and anomaly detection. For more information, refer to the following links: + +* [Get started with system logs](../../../solutions/observability/logs/get-started-with-system-logs.md): Onboard system log data from a machine or server. +* [Stream any log file](../../../solutions/observability/logs/stream-any-log-file.md): Send log files to your Observability project using a standalone {{agent}}. +* [Parse and route logs](../../../solutions/observability/logs/parse-route-logs.md): Parse your log data and extract structured fields that you can use to analyze your data. +* [Filter and aggregate logs](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter): Filter and aggregate your log data to find specific information, gain insight, and monitor your systems more efficiently. +* [Explore logs](../../../solutions/observability/logs/logs-explorer.md): Find information on visualizing and analyzing logs. +* [Run pattern analysis on log data](../../../solutions/observability/logs/run-pattern-analysis-on-log-data.md): Find patterns in unstructured log messages and make it easier to examine your data. +* [Troubleshoot logs](../../../troubleshoot/observability/troubleshoot-logs.md): Find solutions for errors you might encounter while onboarding your logs. + + +## Send logs data to your project [observability-log-monitoring-send-logs-data-to-your-project] + +You can send logs data to your project in different ways depending on your needs: + +* {agent} +* {filebeat} + +When choosing between {{agent}} and {{filebeat}}, consider the different features and functionalities between the two options. See [{{beats}} and {{agent}} capabilities](../../../manage-data/ingest/tools.md) for more information on which option best fits your situation. + + +### {{agent}} [observability-log-monitoring-agent] + +{{agent}} uses [integrations](https://www.elastic.co/integrations/data-integrations) to ingest logs from Kubernetes, MySQL, and many more data sources. You have the following options when installing and managing an {{agent}}: + + +#### {{fleet}}-managed {{agent}} [observability-log-monitoring-fleet-managed-agent] + +Install an {{agent}} and use {{fleet}} to define, configure, and manage your agents in a central location. + +See [install {{fleet}}-managed {{agent}}](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/install-fleet-managed-elastic-agent.md). + + +#### Standalone {{agent}} [observability-log-monitoring-standalone-agent] + +Install an {{agent}} and manually configure it locally on the system where it’s installed. You are responsible for managing and upgrading the agents. + +See [install standalone {{agent}}](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/install-standalone-elastic-agent.md). 
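For a rough idea of what the relevant part of a standalone `elastic-agent.yml` can look like, here is a sketch of a `filestream` input (the input IDs, dataset name, and log path are placeholders to adapt to your environment):

```yaml
inputs:
  - id: example-logs            # placeholder input ID
    type: filestream            # collects lines from log files
    streams:
      - id: example-logs-stream # placeholder stream ID
        data_stream:
          dataset: example.logs # placeholder dataset name
        paths:
          - /var/log/example/*.log
```

The guide linked above walks through the full configuration, including the `outputs` section that sends the collected logs to your deployment or project.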
+ + +#### {{agent}} in a containerized environment [observability-log-monitoring-agent-in-a-containerized-environment] + +Run an {{agent}} inside of a container — either with {{fleet-server}} or standalone. + +See [install {{agent}} in containers](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/install-elastic-agents-in-containers.md). + + +### {{filebeat}} [observability-log-monitoring-filebeat] + +{{filebeat}} is a lightweight shipper for forwarding and centralizing log data. Installed as a service on your servers, {{filebeat}} monitors the log files or locations that you specify, collects log events, and forwards them to your Observability project for indexing. + +* [{{filebeat}} overview](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-overview.md): General information on {{filebeat}} and how it works. +* [{{filebeat}} quick start](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-installation-configuration.md): Basic installation instructions to get you started. +* [Set up and run {{filebeat}}](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/setting-up-running.md): Information on how to install, set up, and run {{filebeat}}. + + +## Configure logs [observability-log-monitoring-configure-logs] + +The following resources provide information on configuring your logs: + +* [Data streams](../../../manage-data/data-store/data-streams.md): Efficiently store append-only time series data in multiple backing indices partitioned by time and size. +* [Data views](../../../explore-analyze/find-and-organize/data-views.md): Query log entries from the data streams of specific datasets or namespaces. +* [Index lifecycle management](../../../manage-data/lifecycle/index-lifecycle-management/tutorial-customize-built-in-policies.md): Configure the built-in logs policy based on your application’s performance, resilience, and retention requirements. +* [Ingest pipeline](../../../manage-data/ingest/transform-enrich/ingest-pipelines.md): Parse and transform log entries into a suitable format before indexing. +* [Mapping](../../../manage-data/data-store/mapping.md): Define how data is stored and indexed. + + +## View and monitor logs [observability-log-monitoring-view-and-monitor-logs] + +Use **Logs Explorer** to search, filter, and tail all your logs ingested into your project in one place. + +The following resources provide information on viewing and monitoring your logs: + +* [Discover and explore](../../../solutions/observability/logs/logs-explorer.md): Discover and explore all of the log events flowing in from your servers, virtual machines, and containers in a centralized view. +* [Detect log anomalies](../../../explore-analyze/machine-learning/anomaly-detection.md): Use {{ml}} to detect log anomalies automatically. + + +## Monitor data sets [observability-log-monitoring-monitor-data-sets] + +The **Data Set Quality** page provides an overview of your data sets and their quality. Use this information to get an idea of your overall data set quality, and find data sets that contain incorrectly parsed documents. + +[Monitor data sets](../../../solutions/observability/data-set-quality-monitoring.md) + + +## Application logs [observability-log-monitoring-application-logs] + +Application logs provide valuable insight into events that have occurred within your services and applications. See [Application logs](../../../solutions/observability/logs/stream-application-logs.md). 
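+
+As a purely illustrative example (the service name, message, and timestamp are invented), an application log line emitted in a structured, ECS-style JSON shape keeps fields such as `service.name` and `log.level` searchable without extra parsing:
+
+```json
+{"@timestamp": "2025-02-20T14:38:07.000Z", "log.level": "error", "message": "payment declined", "service.name": "checkout"}
+```
+
+Plain-text application logs can still be ingested; they simply need more parsing before fields like these become available for filtering.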
+ +## Log threshold alert [logs-alerts-checklist] + +You can create a rule to send an alert when the log aggregation exceeds a threshold. + +Refer to [Log threshold](../../../solutions/observability/incident-management/create-log-threshold-rule.md). + -% What needs to be done: Align serverless/stateful +## Default logs template [logs-template-checklist] -% Use migrated content from existing pages that map to this page: +Configure the default `logs` template using the `logs@custom` component template. -% - [ ] ./raw-migrated-files/observability-docs/observability/logs-checklist.md -% - [ ] ./raw-migrated-files/docs-content/serverless/observability-log-monitoring.md \ No newline at end of file +Refer to the [Logs index template reference](../../../solutions/observability/logs/logs-index-template-reference.md). \ No newline at end of file diff --git a/solutions/observability/unknown-bucket/logs-index-template-reference.md b/solutions/observability/logs/logs-index-template-reference.md similarity index 100% rename from solutions/observability/unknown-bucket/logs-index-template-reference.md rename to solutions/observability/logs/logs-index-template-reference.md diff --git a/solutions/toc.yml b/solutions/toc.yml index aee610db8a..39b1fcf815 100644 --- a/solutions/toc.yml +++ b/solutions/toc.yml @@ -328,6 +328,7 @@ toc: - file: observability/logs/logs-stream.md - file: observability/logs/run-pattern-analysis-on-log-data.md - file: observability/logs/add-service-name-to-logs.md + - file: observability/unknown-bucket/logs-index-template-reference.md - file: observability/incident-management.md children: - file: observability/incident-management/alerting.md @@ -374,7 +375,6 @@ toc: - file: observability/unknown-bucket/container-metrics.md - file: observability/unknown-bucket/kubernetes-pod-metrics.md - file: observability/unknown-bucket/aws-metrics.md - - file: observability/unknown-bucket/logs-index-template-reference.md - file: security.md children: - file: security/elastic-security-serverless.md From eeeb1cb0efcc736c833db9055fbd9d86bad3da71 Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Thu, 20 Feb 2025 17:15:47 -0600 Subject: [PATCH 19/23] fix headings --- solutions/observability/data-set-quality-monitoring.md | 2 -- solutions/observability/get-started.md | 2 -- 2 files changed, 4 deletions(-) diff --git a/solutions/observability/data-set-quality-monitoring.md b/solutions/observability/data-set-quality-monitoring.md index 887b14b72b..05a86d92b5 100644 --- a/solutions/observability/data-set-quality-monitoring.md +++ b/solutions/observability/data-set-quality-monitoring.md @@ -2,9 +2,7 @@ mapped_urls: - https://www.elastic.co/guide/en/observability/current/monitor-datasets.html - https://www.elastic.co/guide/en/serverless/current/observability-monitor-datasets.html ---- ---- navigation_title: "Data set quality" --- diff --git a/solutions/observability/get-started.md b/solutions/observability/get-started.md index 12e263b25a..cd4968cee8 100644 --- a/solutions/observability/get-started.md +++ b/solutions/observability/get-started.md @@ -3,9 +3,7 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/observability-get-started.html - https://www.elastic.co/guide/en/observability/current/observability-get-started.html - https://www.elastic.co/guide/en/observability/current/index.html ---- ---- navigation_title: "Get started" --- From 5cf8fc3fd05cdcb7b182fbc49000e9d8c23cc5ed Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Thu, 20 Feb 2025 17:24:03 -0600 Subject: [PATCH 20/23] fix 
toc --- raw-migrated-files/toc.yml | 2 -- solutions/toc.yml | 2 +- 2 files changed, 1 insertion(+), 3 deletions(-) diff --git a/raw-migrated-files/toc.yml b/raw-migrated-files/toc.yml index 32c1c1647d..644830e616 100644 --- a/raw-migrated-files/toc.yml +++ b/raw-migrated-files/toc.yml @@ -219,7 +219,6 @@ toc: - file: docs-content/serverless/observability-apm-get-started.md - file: docs-content/serverless/observability-apm-traces.md - file: docs-content/serverless/observability-ecs-application-logs.md - - file: docs-content/serverless/observability-log-monitoring.md - file: docs-content/serverless/observability-plaintext-application-logs.md - file: docs-content/serverless/observability-stream-log-files.md - file: docs-content/serverless/project-and-management-settings.md @@ -448,7 +447,6 @@ toc: - file: observability-docs/observability/apm-getting-started-apm-server.md - file: observability-docs/observability/apm-traces.md - file: observability-docs/observability/index.md - - file: observability-docs/observability/logs-checklist.md - file: observability-docs/observability/obs-ai-assistant.md - file: security-docs/security/index.md - file: stack-docs/elastic-stack/index.md diff --git a/solutions/toc.yml b/solutions/toc.yml index 39b1fcf815..f523b5a425 100644 --- a/solutions/toc.yml +++ b/solutions/toc.yml @@ -328,7 +328,7 @@ toc: - file: observability/logs/logs-stream.md - file: observability/logs/run-pattern-analysis-on-log-data.md - file: observability/logs/add-service-name-to-logs.md - - file: observability/unknown-bucket/logs-index-template-reference.md + - file: observability/logs/logs-index-template-reference.md - file: observability/incident-management.md children: - file: observability/incident-management/alerting.md From 8cb6b182566219a2b2b7cc80f5aae5ce3500565d Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Thu, 20 Feb 2025 17:32:33 -0600 Subject: [PATCH 21/23] fix links --- solutions/observability/apps.md | 10 ++--- .../data-set-quality-monitoring.md | 10 ++--- solutions/observability/get-started.md | 32 ++++++++-------- .../observability/incident-management.md | 6 +-- solutions/observability/infra-and-hosts.md | 10 ++--- solutions/observability/logs.md | 38 +++++++++---------- 6 files changed, 53 insertions(+), 53 deletions(-) diff --git a/solutions/observability/apps.md b/solutions/observability/apps.md index e7fca6ee2a..407fe157a2 100644 --- a/solutions/observability/apps.md +++ b/solutions/observability/apps.md @@ -15,8 +15,8 @@ Explore the topics in this section to learn how to observe and monitor software | | | | --- | --- | -| [Application performance monitoring (APM)](../../../solutions/observability/apps/application-performance-monitoring-apm.md) | Monitor software services and applications in real time, by collecting detailed performance information on response time for incoming requests, database queries, calls to caches, external HTTP requests, and more. | -| [Synthetic monitoring](../../../solutions/observability/apps/synthetic-monitoring.md) | Monitor the availability of network endpoints and services. | -| [Real user monitoring](../../../solutions/observability/apps/real-user-monitoring-user-experience.md) | Quantify and analyze the perceived performance of your web application using real-world user experiences. | -| [Uptime monitoring (deprecated)](../../../solutions/observability/apps/uptime-monitoring-deprecated.md) | Periodically check the status of your services and applications. 
| -| [Tutorial: Monitor a Java application](../../../solutions/observability/apps/tutorial-monitor-java-application.md) | Monitor a Java application using Elastic Observability: Logs, Infrastructure metrics, APM, and Uptime. | +| [Application performance monitoring (APM)](../../solutions/observability/apps/application-performance-monitoring-apm.md) | Monitor software services and applications in real time, by collecting detailed performance information on response time for incoming requests, database queries, calls to caches, external HTTP requests, and more. | +| [Synthetic monitoring](../../solutions/observability/apps/synthetic-monitoring.md) | Monitor the availability of network endpoints and services. | +| [Real user monitoring](../../solutions/observability/apps/real-user-monitoring-user-experience.md) | Quantify and analyze the perceived performance of your web application using real-world user experiences. | +| [Uptime monitoring (deprecated)](../../solutions/observability/apps/uptime-monitoring-deprecated.md) | Periodically check the status of your services and applications. | +| [Tutorial: Monitor a Java application](../../solutions/observability/apps/tutorial-monitor-java-application.md) | Monitor a Java application using Elastic Observability: Logs, Infrastructure metrics, APM, and Uptime. | diff --git a/solutions/observability/data-set-quality-monitoring.md b/solutions/observability/data-set-quality-monitoring.md index 05a86d92b5..b9581cb7db 100644 --- a/solutions/observability/data-set-quality-monitoring.md +++ b/solutions/observability/data-set-quality-monitoring.md @@ -27,9 +27,9 @@ The quality of your data sets is based on the percentage of degraded documents i From the data set table, you’ll find information for each data set such as its namespace, when the data set was last active, and the percentage of degraded docs. The percentage of degraded documents determines the data set’s quality according to the following scale: -* Good (![Good icon](../../../images/serverless-green-dot-icon.png "")): 0% of the documents in the data set are degraded. -* Degraded (![Degraded icon](../../../images/serverless-yellow-dot-icon.png "")): Greater than 0% and up to 3% of the documents in the data set are degraded. -* Poor (![Poor icon](../../../images/serverless-red-dot-icon.png "")): Greater than 3% of the documents in the data set are degraded. +* Good (![Good icon](../../images/serverless-green-dot-icon.png "")): 0% of the documents in the data set are degraded. +* Degraded (![Degraded icon](../../images/serverless-yellow-dot-icon.png "")): Greater than 0% and up to 3% of the documents in the data set are degraded. +* Poor (![Poor icon](../../images/serverless-red-dot-icon.png "")): Greater than 3% of the documents in the data set are degraded. Opening the details of a specific data set shows the degraded documents history, a summary for the data set, and other details that can help you determine if you need to investigate any issues. @@ -43,7 +43,7 @@ The Data Set Quality page has a couple of different ways to help you find ignore To open the details page for a data set with poor or degraded quality and view ignored fields: -1. From the data set table, click ![expand icon](../../../images/serverless-expand.svg "") next to a data set with poor or degraded quality. +1. From the data set table, click ![expand icon](../../images/serverless-expand.svg "") next to a data set with poor or degraded quality. 2. From the details, scroll down to **Quality issues**. 
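+
+If you prefer to check from the API instead of the UI, documents whose fields were ignored at index time are also flagged in the `_ignored` metadata field, so a query along these lines surfaces them (the `logs-*-*` pattern is only an example):
+
+```console
+GET logs-*-*/_search
+{
+  "query": {
+    "exists": { "field": "_ignored" }
+  }
+}
+```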
The **Quality issues** section shows fields that have been ignored, the number of documents that contain ignored fields, and the timestamp of last occurrence of the field being ignored. @@ -60,7 +60,7 @@ The **Documents** table in Logs Explorer or Discover is automatically filtered t Now that you know which documents contain ignored fields, examine them more closely to find the origin of the issue: -1. Under the **actions** column, click ![expand icon](../../../images/serverless-expand.svg "") to open the document details. +1. Under the **actions** column, click ![expand icon](../../images/serverless-expand.svg "") to open the document details. 2. Select the **JSON** tab. 3. Scroll towards the end of the JSON to find the `ignored_field_values`. diff --git a/solutions/observability/get-started.md b/solutions/observability/get-started.md index cd4968cee8..0dce4663a7 100644 --- a/solutions/observability/get-started.md +++ b/solutions/observability/get-started.md @@ -17,16 +17,16 @@ New to Elastic {{observability}}? Discover more about our observability features Learn about key features available to help you get value from your observability data: -* [What is Elastic {{observability}}?](../../../solutions/observability/get-started/what-is-elastic-observability.md) +* [What is Elastic {{observability}}?](../../solutions/observability/get-started/what-is-elastic-observability.md) * [What’s new in Elastic Stack v9.0](https://www.elastic.co/guide/en/observability/current/whats-new.html) -* [{{obs-serverless}} billing dimensions](../../../deploy-manage/cloud-organization/billing/elastic-observability-billing-dimensions.md) +* [{{obs-serverless}} billing dimensions](../../deploy-manage/cloud-organization/billing/elastic-observability-billing-dimensions.md) ## Get started with your use case [get-started-with-use-case] Learn how to spin up a deployment of our hosted {{ess}} or create an Observability serverless project and use Elastic Observability to gain deeper insight into the behavior of your applications and systems. -:::{image} ../../../images/observability-get-started.svg +:::{image} ../../images/observability-get-started.svg :alt: get started ::: @@ -35,7 +35,7 @@ Learn how to spin up a deployment of our hosted {{ess}} or create an Observabili 3. **View your data.** Navigate seamlessly between Observabilty UIs and dashboards to identify and resolve problems quickly. 4. **Customize.** Expand your deployment and add features like alerting and anomaly detection. -To get started with on serverless, [create an Observability project](../../../solutions/observability/get-started/create-an-observability-project.md), then follow one of our [quickstarts](../../../solutions/observability/get-started.md#quickstarts-overview) to learn how to ingest and visualize your observability data. +To get started with on serverless, [create an Observability project](../../solutions/observability/get-started/create-an-observability-project.md), then follow one of our [quickstarts](../../solutions/observability/get-started.md#quickstarts-overview) to learn how to ingest and visualize your observability data. 
### Quickstarts [quickstarts-overview] @@ -48,11 +48,11 @@ Our quickstarts dramatically reduce your time-to-value by offering a fast path t Follow the steps in these guides to get started quickly: -* [Quickstart: Monitor hosts with {{agent}}](../../../solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md) -* [Quickstart: Monitor your Kubernetes cluster with {{agent}}](../../../solutions/observability/get-started/quickstart-monitor-kubernetes-cluster-with-elastic-agent.md) -* [Quickstart: Monitor hosts with OpenTelemetry](../../../solutions/observability/get-started/quickstart-monitor-hosts-with-opentelemetry.md) -* [Quickstart: Unified Kubernetes Observability with Elastic Distributions of OpenTelemetry (EDOT)](../../../solutions/observability/get-started/quickstart-unified-kubernetes-observability-with-elastic-distributions-of-opentelemetry-edot.md) -* [Quickstart: Collect data with AWS Firehose](../../../solutions/observability/get-started/quickstart-collect-data-with-aws-firehose.md) +* [Quickstart: Monitor hosts with {{agent}}](../../solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md) +* [Quickstart: Monitor your Kubernetes cluster with {{agent}}](../../solutions/observability/get-started/quickstart-monitor-kubernetes-cluster-with-elastic-agent.md) +* [Quickstart: Monitor hosts with OpenTelemetry](../../solutions/observability/get-started/quickstart-monitor-hosts-with-opentelemetry.md) +* [Quickstart: Unified Kubernetes Observability with Elastic Distributions of OpenTelemetry (EDOT)](../../solutions/observability/get-started/quickstart-unified-kubernetes-observability-with-elastic-distributions-of-opentelemetry-edot.md) +* [Quickstart: Collect data with AWS Firehose](../../solutions/observability/get-started/quickstart-collect-data-with-aws-firehose.md) ### Get started with other features [_get_started_with_other_features] @@ -61,20 +61,20 @@ Want to use {{fleet}} or some other feature not covered in the quickstarts? Foll % Stateful only for Universal profiling -* [Get started with system metrics](../../../solutions/observability/infra-and-hosts/get-started-with-system-metrics.md) -* [Get started with application traces and APM](../../../solutions/observability/apps/fleet-managed-apm-server.md) -* [Get started with synthetic monitoring](../../../solutions/observability/apps/synthetic-monitoring.md) -* [Get started with Universal Profiling](../../../solutions/observability/infra-and-hosts/get-started-with-universal-profiling.md) +* [Get started with system metrics](../../solutions/observability/infra-and-hosts/get-started-with-system-metrics.md) +* [Get started with application traces and APM](../../solutions/observability/apps/fleet-managed-apm-server.md) +* [Get started with synthetic monitoring](../../solutions/observability/apps/synthetic-monitoring.md) +* [Get started with Universal Profiling](../../solutions/observability/infra-and-hosts/get-started-with-universal-profiling.md) ## Additional guides [_additional_guides] Ready to dig into more features of Elastic Observability? 
See these guides: -* [Create an alert](../../../solutions/observability/incident-management/alerting.md) -* [Create a service-level objective (SLO)](../../../solutions/observability/incident-management/create-an-slo.md) +* [Create an alert](../../solutions/observability/incident-management/alerting.md) +* [Create a service-level objective (SLO)](../../solutions/observability/incident-management/create-an-slo.md) ## Related content for Elastic Stack v9.0 [_related_content] * [Starting with the {{es}} Platform and its Solutions](/get-started/index.md) for new users -* [Adding data to {{es}}](../../../manage-data/ingest.md) for other ways to ingest data \ No newline at end of file +* [Adding data to {{es}}](../../manage-data/ingest.md) for other ways to ingest data \ No newline at end of file diff --git a/solutions/observability/incident-management.md b/solutions/observability/incident-management.md index 939a4a6c69..0b77224d5e 100644 --- a/solutions/observability/incident-management.md +++ b/solutions/observability/incident-management.md @@ -10,6 +10,6 @@ Explore the topics in this section to learn how to respond to incidents detected | | | | --- | --- | -| [Alerting](../../../solutions/observability/incident-management/alerting.md) | Trigger alerts when incidents occur, and use built-in connectors to send the alerts to email, slack, or other third-party systems, such as your external incident management application. | -| [Cases](../../../solutions/observability/incident-management/cases.md) | Collect and share information about {{observability}} issues by opening cases and optionally sending them to your external incident management application. | -| [Service-level objectives (SLOs)](../../../solutions/observability/incident-management/service-level-objectives-slos.md) | Set clear, measurable targets for your service performance, based on factors like availability, response times, error rates, and other key metrics. | \ No newline at end of file +| [Alerting](../../solutions/observability/incident-management/alerting.md) | Trigger alerts when incidents occur, and use built-in connectors to send the alerts to email, slack, or other third-party systems, such as your external incident management application. | +| [Cases](../../solutions/observability/incident-management/cases.md) | Collect and share information about {{observability}} issues by opening cases and optionally sending them to your external incident management application. | +| [Service-level objectives (SLOs)](../../solutions/observability/incident-management/service-level-objectives-slos.md) | Set clear, measurable targets for your service performance, based on factors like availability, response times, error rates, and other key metrics. | \ No newline at end of file diff --git a/solutions/observability/infra-and-hosts.md b/solutions/observability/infra-and-hosts.md index f5205a9fa5..121c2022b9 100644 --- a/solutions/observability/infra-and-hosts.md +++ b/solutions/observability/infra-and-hosts.md @@ -14,9 +14,9 @@ Explore the topics in this section to learn how to observe and monitor hosts and | | | | --- | --- | -| [Analyze infrastructure and host metrics](../../../solutions/observability/infra-and-hosts/analyze-infrastructure-host-metrics.md) | Visualize infrastructure metrics to help diagnose problematic spikes, identify high resource utilization, automatically discover and track pods, and unify your metrics with other observability data. 
| -| [Universal Profiling](../../../solutions/observability/infra-and-hosts/universal-profiling.md) | Profile all the code running on a machine, including application code, kernel, and third-party libraries. | -| [Tutorial: Observe your Kubernetes deployments](../../../solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md) | Observe all layers of your application, including the orchestration software itself. | -| [Tutorial: Observe your nginx instances](../../../solutions/observability/infra-and-hosts/tutorial-observe-nginx-instances.md) | Collect valuable metrics and logs from your nginx instances. | -| [Troubleshooting](../../../troubleshoot/observability/troubleshooting-infrastructure-monitoring.md) | Troubleshoot common issues on your own or ask for help. | +| [Analyze infrastructure and host metrics](../../solutions/observability/infra-and-hosts/analyze-infrastructure-host-metrics.md) | Visualize infrastructure metrics to help diagnose problematic spikes, identify high resource utilization, automatically discover and track pods, and unify your metrics with other observability data. | +| [Universal Profiling](../../solutions/observability/infra-and-hosts/universal-profiling.md) | Profile all the code running on a machine, including application code, kernel, and third-party libraries. | +| [Tutorial: Observe your Kubernetes deployments](../../solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md) | Observe all layers of your application, including the orchestration software itself. | +| [Tutorial: Observe your nginx instances](../../solutions/observability/infra-and-hosts/tutorial-observe-nginx-instances.md) | Collect valuable metrics and logs from your nginx instances. | +| [Troubleshooting](../../troubleshoot/observability/troubleshooting-infrastructure-monitoring.md) | Troubleshoot common issues on your own or ask for help. | | [Metrics reference](asciidocalypse://docs/docs-content/docs/reference/data-analysis/observability/metrics-reference.md) | Learn about the key metrics displayed in the Infrastructure UI and how they are calculated. | \ No newline at end of file diff --git a/solutions/observability/logs.md b/solutions/observability/logs.md index e1b3e97aee..48103ab165 100644 --- a/solutions/observability/logs.md +++ b/solutions/observability/logs.md @@ -10,13 +10,13 @@ navigation_title: "Logs" Elastic Observability allows you to deploy and manage logs at a petabyte scale, giving you insights into your logs in minutes. You can also search across your logs in one place, troubleshoot in real time, and detect patterns and outliers with categorization and anomaly detection. For more information, refer to the following links: -* [Get started with system logs](../../../solutions/observability/logs/get-started-with-system-logs.md): Onboard system log data from a machine or server. -* [Stream any log file](../../../solutions/observability/logs/stream-any-log-file.md): Send log files to your Observability project using a standalone {{agent}}. -* [Parse and route logs](../../../solutions/observability/logs/parse-route-logs.md): Parse your log data and extract structured fields that you can use to analyze your data. -* [Filter and aggregate logs](../../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter): Filter and aggregate your log data to find specific information, gain insight, and monitor your systems more efficiently. 
-* [Explore logs](../../../solutions/observability/logs/logs-explorer.md): Find information on visualizing and analyzing logs. -* [Run pattern analysis on log data](../../../solutions/observability/logs/run-pattern-analysis-on-log-data.md): Find patterns in unstructured log messages and make it easier to examine your data. -* [Troubleshoot logs](../../../troubleshoot/observability/troubleshoot-logs.md): Find solutions for errors you might encounter while onboarding your logs. +* [Get started with system logs](../../solutions/observability/logs/get-started-with-system-logs.md): Onboard system log data from a machine or server. +* [Stream any log file](../../solutions/observability/logs/stream-any-log-file.md): Send log files to your Observability project using a standalone {{agent}}. +* [Parse and route logs](../../solutions/observability/logs/parse-route-logs.md): Parse your log data and extract structured fields that you can use to analyze your data. +* [Filter and aggregate logs](../../solutions/observability/logs/filter-aggregate-logs.md#logs-filter): Filter and aggregate your log data to find specific information, gain insight, and monitor your systems more efficiently. +* [Explore logs](../../solutions/observability/logs/logs-explorer.md): Find information on visualizing and analyzing logs. +* [Run pattern analysis on log data](../../solutions/observability/logs/run-pattern-analysis-on-log-data.md): Find patterns in unstructured log messages and make it easier to examine your data. +* [Troubleshoot logs](../../troubleshoot/observability/troubleshoot-logs.md): Find solutions for errors you might encounter while onboarding your logs. ## Send logs data to your project [observability-log-monitoring-send-logs-data-to-your-project] @@ -26,7 +26,7 @@ You can send logs data to your project in different ways depending on your needs * {agent} * {filebeat} -When choosing between {{agent}} and {{filebeat}}, consider the different features and functionalities between the two options. See [{{beats}} and {{agent}} capabilities](../../../manage-data/ingest/tools.md) for more information on which option best fits your situation. +When choosing between {{agent}} and {{filebeat}}, consider the different features and functionalities between the two options. See [{{beats}} and {{agent}} capabilities](../../manage-data/ingest/tools.md) for more information on which option best fits your situation. ### {{agent}} [observability-log-monitoring-agent] @@ -68,11 +68,11 @@ See [install {{agent}} in containers](asciidocalypse://docs/docs-content/docs/re The following resources provide information on configuring your logs: -* [Data streams](../../../manage-data/data-store/data-streams.md): Efficiently store append-only time series data in multiple backing indices partitioned by time and size. -* [Data views](../../../explore-analyze/find-and-organize/data-views.md): Query log entries from the data streams of specific datasets or namespaces. -* [Index lifecycle management](../../../manage-data/lifecycle/index-lifecycle-management/tutorial-customize-built-in-policies.md): Configure the built-in logs policy based on your application’s performance, resilience, and retention requirements. -* [Ingest pipeline](../../../manage-data/ingest/transform-enrich/ingest-pipelines.md): Parse and transform log entries into a suitable format before indexing. -* [Mapping](../../../manage-data/data-store/mapping.md): Define how data is stored and indexed. 
+* [Data streams](../../manage-data/data-store/data-streams.md): Efficiently store append-only time series data in multiple backing indices partitioned by time and size. +* [Data views](../../explore-analyze/find-and-organize/data-views.md): Query log entries from the data streams of specific datasets or namespaces. +* [Index lifecycle management](../../manage-data/lifecycle/index-lifecycle-management/tutorial-customize-built-in-policies.md): Configure the built-in logs policy based on your application’s performance, resilience, and retention requirements. +* [Ingest pipeline](../../manage-data/ingest/transform-enrich/ingest-pipelines.md): Parse and transform log entries into a suitable format before indexing. +* [Mapping](../../manage-data/data-store/mapping.md): Define how data is stored and indexed. ## View and monitor logs [observability-log-monitoring-view-and-monitor-logs] @@ -81,30 +81,30 @@ Use **Logs Explorer** to search, filter, and tail all your logs ingested into yo The following resources provide information on viewing and monitoring your logs: -* [Discover and explore](../../../solutions/observability/logs/logs-explorer.md): Discover and explore all of the log events flowing in from your servers, virtual machines, and containers in a centralized view. -* [Detect log anomalies](../../../explore-analyze/machine-learning/anomaly-detection.md): Use {{ml}} to detect log anomalies automatically. +* [Discover and explore](../../solutions/observability/logs/logs-explorer.md): Discover and explore all of the log events flowing in from your servers, virtual machines, and containers in a centralized view. +* [Detect log anomalies](../../explore-analyze/machine-learning/anomaly-detection.md): Use {{ml}} to detect log anomalies automatically. ## Monitor data sets [observability-log-monitoring-monitor-data-sets] The **Data Set Quality** page provides an overview of your data sets and their quality. Use this information to get an idea of your overall data set quality, and find data sets that contain incorrectly parsed documents. -[Monitor data sets](../../../solutions/observability/data-set-quality-monitoring.md) +[Monitor data sets](../../solutions/observability/data-set-quality-monitoring.md) ## Application logs [observability-log-monitoring-application-logs] -Application logs provide valuable insight into events that have occurred within your services and applications. See [Application logs](../../../solutions/observability/logs/stream-application-logs.md). +Application logs provide valuable insight into events that have occurred within your services and applications. See [Application logs](../../solutions/observability/logs/stream-application-logs.md). ## Log threshold alert [logs-alerts-checklist] You can create a rule to send an alert when the log aggregation exceeds a threshold. -Refer to [Log threshold](../../../solutions/observability/incident-management/create-log-threshold-rule.md). +Refer to [Log threshold](../../solutions/observability/incident-management/create-log-threshold-rule.md). ## Default logs template [logs-template-checklist] Configure the default `logs` template using the `logs@custom` component template. -Refer to the [Logs index template reference](../../../solutions/observability/logs/logs-index-template-reference.md). \ No newline at end of file +Refer to the [Logs index template reference](../../solutions/observability/logs/logs-index-template-reference.md). 
\ No newline at end of file From f6a543880829a788ef9303b2afaf3a3ed235b060 Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Thu, 20 Feb 2025 17:39:08 -0600 Subject: [PATCH 22/23] fix links --- solutions/observability/data-set-quality-monitoring.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/solutions/observability/data-set-quality-monitoring.md b/solutions/observability/data-set-quality-monitoring.md index b9581cb7db..6836924b6b 100644 --- a/solutions/observability/data-set-quality-monitoring.md +++ b/solutions/observability/data-set-quality-monitoring.md @@ -18,7 +18,7 @@ To open **Data Set Quality**, find **Stack Management** in the main menu or use ::::{admonition} Requirements :class: note -Users with the `viewer` role can view the Data Sets Quality summary. To view the Active Data Sets and Estimated Data summaries, users need the `monitor` [index privilege](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md#privileges-list-indices) for the `logs-*-*` index. +Users with the `viewer` role can view the Data Sets Quality summary. To view the Active Data Sets and Estimated Data summaries, users need the `monitor` [index privilege](../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md#privileges-list-indices) for the `logs-*-*` index. :::: @@ -56,7 +56,7 @@ To use Logs Explorer or Discover to find ignored fields in individual logs: 1. Find data sets with degraded documents using the **Degraded Docs** column of the data sets table. 2. Click the percentage in the **Degraded Docs** column to open the data set in Logs Explorer or Discover. -The **Documents** table in Logs Explorer or Discover is automatically filtered to show documents that were not parsed correctly. Under the **actions** column, you’ll find the degraded document icon (![degraded document icon](../../../images/serverless-indexClose.svg "")). +The **Documents** table in Logs Explorer or Discover is automatically filtered to show documents that were not parsed correctly. Under the **actions** column, you’ll find the degraded document icon (![degraded document icon](../../images/serverless-indexClose.svg "")). Now that you know which documents contain ignored fields, examine them more closely to find the origin of the issue: From d4b429a7243b16ae1a3c24da017dd563fe3bbeb7 Mon Sep 17 00:00:00 2001 From: Mike Birnstiehl Date: Thu, 20 Feb 2025 17:44:38 -0600 Subject: [PATCH 23/23] add traces --- .../serverless/observability-apm-traces.md | 30 ---------------- .../observability/apm-traces.md | 34 ------------------ raw-migrated-files/toc.yml | 2 -- solutions/observability/apps/traces-2.md | 36 ++++++++++++++++--- 4 files changed, 31 insertions(+), 71 deletions(-) delete mode 100644 raw-migrated-files/docs-content/serverless/observability-apm-traces.md delete mode 100644 raw-migrated-files/observability-docs/observability/apm-traces.md diff --git a/raw-migrated-files/docs-content/serverless/observability-apm-traces.md b/raw-migrated-files/docs-content/serverless/observability-apm-traces.md deleted file mode 100644 index d62889a4f1..0000000000 --- a/raw-migrated-files/docs-content/serverless/observability-apm-traces.md +++ /dev/null @@ -1,30 +0,0 @@ -# Traces [observability-apm-traces] - -::::{tip} -Traces link together related transactions to show an end-to-end performance of how a request was served and which services were part of it. 
In addition to the Traces overview, you can view your application traces in the [trace sample timeline waterfall](../../../solutions/observability/apps/trace-sample-timeline.md). - -:::: - - -**Traces** displays your application’s entry (root) transactions. Transactions with the same name are grouped together and only shown once in this table. If you’re using [distributed tracing](../../../solutions/observability/apps/trace-sample-timeline.md), this view is key to finding the critical paths within your application. - -By default, transactions are sorted by *Impact*. Impact helps show the most used and slowest endpoints in your service — in other words, it’s the collective amount of pain a specific endpoint is causing your users. If there’s a particular endpoint you’re worried about, select it to view its [transaction details](../../../solutions/observability/apps/transactions-2.md#transaction-details). - -You can also use queries to filter and search the transactions shown on this page. Note that only properties available on root transactions are searchable. For example, you can’t search for `label.tier: 'high'`, as that field is only available on non-root transactions. - -:::{image} ../../../images/serverless-apm-traces.png -:alt: Example view of the Traces overview in the Applications UI -:class: screenshot -::: - - -## Trace explorer [observability-apm-traces-trace-explorer] - -**Trace explorer** is an experimental top-level search tool that allows you to query your traces using [Kibana Query Language (KQL)](../../../explore-analyze/query-filter/languages/kql.md) or [Event Query Language (EQL)](../../../explore-analyze/query-filter/languages/eql.md). - -Curate your own custom queries, or use the [Service map](../../../solutions/observability/apps/service-map.md) to find and select edges to automatically generate queries based on your selection: - -:::{image} ../../../images/serverless-trace-explorer.png -:alt: Trace explorer -:class: screenshot -::: diff --git a/raw-migrated-files/observability-docs/observability/apm-traces.md b/raw-migrated-files/observability-docs/observability/apm-traces.md deleted file mode 100644 index 1aa1a2e2d5..0000000000 --- a/raw-migrated-files/observability-docs/observability/apm-traces.md +++ /dev/null @@ -1,34 +0,0 @@ -# Traces [apm-traces] - -::::{tip} -Traces link together related transactions to show an end-to-end performance of how a request was served and which services were part of it. In addition to the Traces overview, you can view your application traces in the [trace sample timeline waterfall](../../../solutions/observability/apps/trace-sample-timeline.md). -:::: - - -**Traces** displays your application’s entry (root) transactions. Transactions with the same name are grouped together and only shown once in this table. If you’re using [distributed tracing](../../../solutions/observability/apps/trace-sample-timeline.md#distributed-tracing), this view is key to finding the critical paths within your application. - -By default, transactions are sorted by *Impact*. Impact helps show the most used and slowest endpoints in your service — in other words, it’s the collective amount of pain a specific endpoint is causing your users. If there’s a particular endpoint you’re worried about, select it to view its [transaction details](../../../solutions/observability/apps/transactions-2.md#transaction-details). - -You can also use queries to filter and search the transactions shown on this page. 
Note that only properties available on root transactions are searchable. For example, you can’t search for `label.tier: 'high'`, as that field is only available on non-root transactions. - -:::{image} ../../../images/observability-apm-traces.png -:alt: Example view of the Traces overview in Applications UI in Kibana -:class: screenshot -::: - - -## Trace explorer [trace-explorer] - -::::{warning} -This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features. -:::: - - -**Trace explorer** is an experimental top-level search tool that allows you to query your traces using [{{kib}} Query Language (KQL)](../../../explore-analyze/query-filter/languages/kql.md) or [Event Query Language (EQL)](../../../explore-analyze/query-filter/languages/eql.md). - -Curate your own custom queries, or use the [**Service Map**](../../../solutions/observability/apps/service-map.md) to find and select edges to automatically generate queries based on your selection: - -:::{image} ../../../images/observability-trace-explorer.png -:alt: Trace explorer -:class: screenshot -::: diff --git a/raw-migrated-files/toc.yml b/raw-migrated-files/toc.yml index 644830e616..5f8477b7ba 100644 --- a/raw-migrated-files/toc.yml +++ b/raw-migrated-files/toc.yml @@ -217,7 +217,6 @@ toc: - file: docs-content/serverless/observability-apm-act-on-data.md - file: docs-content/serverless/observability-apm-agents-elastic-apm-agents.md - file: docs-content/serverless/observability-apm-get-started.md - - file: docs-content/serverless/observability-apm-traces.md - file: docs-content/serverless/observability-ecs-application-logs.md - file: docs-content/serverless/observability-plaintext-application-logs.md - file: docs-content/serverless/observability-stream-log-files.md @@ -445,7 +444,6 @@ toc: - file: observability-docs/observability/apm-act-on-data.md - file: observability-docs/observability/apm-agents.md - file: observability-docs/observability/apm-getting-started-apm-server.md - - file: observability-docs/observability/apm-traces.md - file: observability-docs/observability/index.md - file: observability-docs/observability/obs-ai-assistant.md - file: security-docs/security/index.md diff --git a/solutions/observability/apps/traces-2.md b/solutions/observability/apps/traces-2.md index 6bc74913c1..322cb17feb 100644 --- a/solutions/observability/apps/traces-2.md +++ b/solutions/observability/apps/traces-2.md @@ -4,11 +4,37 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/observability-apm-traces.html --- -# Traces +# Traces [apm-traces] -% What needs to be done: Align serverless/stateful +::::{tip} +Traces link together related transactions to show an end-to-end performance of how a request was served and which services were part of it. In addition to the Traces overview, you can view your application traces in the [trace sample timeline waterfall](../../../solutions/observability/apps/trace-sample-timeline.md). +:::: -% Use migrated content from existing pages that map to this page: -% - [ ] ./raw-migrated-files/observability-docs/observability/apm-traces.md -% - [ ] ./raw-migrated-files/docs-content/serverless/observability-apm-traces.md \ No newline at end of file +**Traces** displays your application’s entry (root) transactions. Transactions with the same name are grouped together and only shown once in this table. 
If you’re using [distributed tracing](../../../solutions/observability/apps/trace-sample-timeline.md#distributed-tracing), this view is key to finding the critical paths within your application. + +By default, transactions are sorted by *Impact*. Impact helps show the most used and slowest endpoints in your service — in other words, it’s the collective amount of pain a specific endpoint is causing your users. If there’s a particular endpoint you’re worried about, select it to view its [transaction details](../../../solutions/observability/apps/transactions-2.md#transaction-details). + +You can also use queries to filter and search the transactions shown on this page. Note that only properties available on root transactions are searchable. For example, you can’t search for `label.tier: 'high'`, as that field is only available on non-root transactions. + +:::{image} ../../../images/observability-apm-traces.png +:alt: Example view of the Traces overview in Applications UI in Kibana +:class: screenshot +::: + + +## Trace explorer [trace-explorer] + +::::{warning} +This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features. +:::: + + +**Trace explorer** is an experimental top-level search tool that allows you to query your traces using [{{kib}} Query Language (KQL)](../../../explore-analyze/query-filter/languages/kql.md) or [Event Query Language (EQL)](../../../explore-analyze/query-filter/languages/eql.md). + +Curate your own custom queries, or use the [**Service Map**](../../../solutions/observability/apps/service-map.md) to find and select edges to automatically generate queries based on your selection: + +:::{image} ../../../images/observability-trace-explorer.png +:alt: Trace explorer +:class: screenshot +::: \ No newline at end of file
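+
+As a hedged illustration (the service name is a placeholder, and which fields exist depends on the APM data you have ingested), a KQL query typed into Trace explorer could look like this:
+
+```
+service.name : "checkout-service" and transaction.duration.us > 500000
+```
+
+Alternatively, start from the **Service Map** selection flow described above and let it generate the query for you.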