From 9345aebb8b11053624a20dafe27099c7cd414cc0 Mon Sep 17 00:00:00 2001 From: Fabrizio Ferri Benedetti Date: Wed, 27 Aug 2025 11:03:02 +0200 Subject: [PATCH 1/7] Get started edit --- .../logs/get-started-with-system-logs.md | 148 ++++++++++++++++-- 1 file changed, 138 insertions(+), 10 deletions(-) diff --git a/solutions/observability/logs/get-started-with-system-logs.md b/solutions/observability/logs/get-started-with-system-logs.md index 91e003b8f2..7b0254f0e9 100644 --- a/solutions/observability/logs/get-started-with-system-logs.md +++ b/solutions/observability/logs/get-started-with-system-logs.md @@ -10,33 +10,161 @@ products: # Get started with system logs [observability-get-started-with-logs] -::::{note} +In this guide you can learn how to onboard system log data from a machine or server, then explore the data in **Discover**. -**For Observability Serverless projects**, the **Admin** role or higher is required to onboard log data. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/manage-users.md#general-assign-user-roles). +## Prerequisites [logs-prereqs] + +::::{tab-set} +:group: stack-serverless + +:::{tab-item} Elastic Stack +:sync: stack + +To follow the steps in this guide, you need an {{stack}} deployment that includes: + +* {{es}} for storing and searching data +* {{kib}} for visualizing and managing data +* Kibana user with `All` privileges on {{fleet}} and Integrations. Because many Integrations assets are shared across spaces, users need the Kibana privileges in all spaces. + +To get started quickly, create an {{ech}} deployment and host it on AWS, GCP, or Azure. [Try it out for free](https://cloud.elastic.co/registration?page=docs&placement=docs-body). + +::: + +:::{tab-item} Serverless +:sync: serverless + +The **Admin** role or higher is required to onboard log data. 
To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/manage-users.md#general-assign-user-roles). + +::: :::: +## Onboard system log data [onboard-system-log-data] + +Follow these steps to onboard system log data. + +::::::{stepper} + +:::::{step} Open your project + +Open an [{{obs-serverless}} project](/solutions/observability/get-started.md) or Elastic Stack deployment. + +::::: + +:::::{step} Select data collection method -In this guide you’ll learn how to onboard system log data from a machine or server, then observe the data in **Discover**. +From the Observability UI, go to **Add data**. Under **What do you want to monitor?**, select **Host**, then select one of these options: -To onboard system log data: +::::{tab-set} +:::{tab-item} OpenTelemetry: Full Observability -1. Open an [{{obs-serverless}} project](/solutions/observability/get-started.md) or Elastic Stack deployment. -2. From the Observability UI, go to **Add data**. -3. Under **What do you want to monitor?**, select **Host** → **Elastic Agent: Logs & Metrics**. -4. Follow the in-product steps to auto-detect your logs and install and configure the {{agent}}. +Collect native OpenTelemetry metrics and logs using the Elastic Distribution of OpenTelemetry Collector (EDOT). + +**Recommended for**: Users who want to collect native OpenTelemetry data or are already using OpenTelemetry in their environment. + +::: + +:::{tab-item} Elastic Agent: Logs & Metrics + +Bring data from Elastic integrations using the Elastic Agent. + +**Recommended for**: Users who want to leverage Elastic's pre-built integrations and centralized management through Fleet. + +::: + +:::: +::::: + +:::::{step} Follow setup instructions + +Follow the in-product steps to auto-detect your logs and install and configure your chosen data collector. 
+ +::::: + +:::::{step} Verify data collection After the agent is installed and successfully streaming log data, you can view the data in the UI: 1. From the navigation menu, go to **Discover**. -1. Select **All logs** from the **Data views** menu. The view shows all log datasets. Notice you can add fields, change the view, expand a document to see details, and perform other actions to explore your data. +2. Select **All logs** from the **Data views** menu. The view shows all log datasets. Notice you can add fields, change the view, expand a document to see details, and perform other actions to explore your data. + +::::: + +:::::{step} Explore and analyze your data +Now that you have logs flowing into Elasticsearch, you can start exploring and analyzing your data: + +* **[Explore logs in Discover](/solutions/observability/logs/explore-logs.md)**: Search, filter, and tail all your logs from a central location +* **[Parse and route logs](/solutions/observability/logs/parse-route-logs.md)**: Extract structured fields from unstructured logs and route them to specific data streams +* **[Filter and aggregate logs](/solutions/observability/logs/filter-aggregate-logs.md)**: Filter logs by specific criteria and aggregate data to find patterns and gain insights + +::::: + +:::::: + +## Other ways to collect log data [other-data-collection-methods] + +While the Elastic Agent and OpenTelemetry Collector are the recommended approaches for most users, Elastic provides additional tools for specific use cases: + +::::{tab-set} + +:::{tab-item} Filebeat + +Filebeat is a lightweight data shipper that sends log data to Elasticsearch. It's ideal for: + +* Simple log collection: When you need to collect logs from specific files or directories. +* Custom parsing: When you need to parse logs using ingest pipelines before indexing. +* Legacy systems: When you can't install the Elastic Agent or OpenTelemetry Collector. 
+ +For more information, refer to [Collecting log data with Filebeat](/deploy-manage/monitor/stack-monitoring/collecting-log-data-with-filebeat.md) and [Ingest logs from applications using Filebeat](/solutions/observability/logs/plaintext-application-logs.md). + +::: + +:::{tab-item} Winlogbeat + +Winlogbeat is specifically designed for collecting Windows event logs. It's ideal for: + +* Windows environments: When you need to collect Windows security, application, and system event logs. +* Security monitoring: When you need detailed Windows security event data. +* Compliance requirements: When you need to capture specific Windows event IDs. + +For more information, refer to the [Winlogbeat documentation](beats://reference/winlogbeat/index.md). + +::: + +:::{tab-item} Logstash + +Logstash is a powerful data processing pipeline that can collect, transform, and enrich log data before sending it to Elasticsearch. It's ideal for: + +* Complex data processing: When you need to parse, filter, and transform logs before indexing. +* Multiple data sources: When you need to collect logs from various sources and normalize them. +* Advanced use cases: When you need data enrichment, aggregation, or routing to multiple destinations. +* Extending Elastic integrations: When you want to add custom processing to data collected by Elastic Agent or Beats. + +For more information, refer to [Logstash](logstash://reference/index.md) and [Using Logstash with Elastic integrations](logstash://reference/using-logstash-with-elastic-integrations.md). + +::: + +:::{tab-item} REST APIs + +You can use Elasticsearch REST APIs to send log data directly to Elasticsearch. This approach is ideal for: + +* Custom applications: When you want to send logs directly from your application code. +* Programmatic collection: When you need to collect logs using custom scripts or tools. +* Real-time streaming: When you need to send logs as they're generated. 
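
To sketch the idea (the endpoint, data stream name, and API key below are hypothetical placeholders, not values from this guide), you can batch log documents as newline-delimited JSON and send them to a data stream with the `_bulk` API:

```shell
# Build an NDJSON payload of two log documents (placeholder values throughout).
# Data streams require the "create" action and an @timestamp field.
cat > bulk.ndjson <<'EOF'
{ "create": {} }
{ "@timestamp": "2025-08-27T10:00:00Z", "message": "User logged in", "log.level": "info" }
{ "create": {} }
{ "@timestamp": "2025-08-27T10:00:01Z", "message": "Disk usage at 91%", "log.level": "warn" }
EOF

# Send it to a data stream named logs-myapp-default (hypothetical name
# following the logs-<dataset>-<namespace> convention); uncomment to run:
# curl -X POST "$ELASTIC_ENDPOINT/logs-myapp-default/_bulk" \
#   -H "Authorization: ApiKey $ELASTIC_API_KEY" \
#   -H "Content-Type: application/x-ndjson" \
#   --data-binary @bulk.ndjson
```

Batching documents this way is preferable to indexing log lines one request at a time.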
+ +For more information, refer to [Elasticsearch REST APIs](elasticsearch://reference/elasticsearch/rest-apis/index.md). + +::: + +:::: ## Next steps [observability-get-started-with-logs-next-steps] -Now that you’ve added logs and explored your data, learn how to onboard other types of data: +Now that you've added logs and explored your data, learn how to onboard other types of data: * [Stream any log file](stream-any-log-file.md) +* [Stream application logs](stream-application-logs.md) * [Get started with traces and APM](/solutions/observability/apm/get-started.md) To onboard other types of data, select **Add Data** from the main menu. From f1ff36d0477f8cd2da7e51fcf5c5590e96ee952a Mon Sep 17 00:00:00 2001 From: Fabrizio Ferri Benedetti Date: Wed, 27 Aug 2025 19:09:53 +0200 Subject: [PATCH 2/7] Edit logs onboarding --- manage-data/ingest.md | 4 +- manage-data/ingest/tools.md | 9 ++-- solutions/observability/logs.md | 44 +++++++++++++++---- solutions/observability/logs/discover-logs.md | 2 +- .../observability/logs/stream-any-log-file.md | 8 ++-- .../logs/stream-application-logs.md | 37 +++++++++++++--- 6 files changed, 79 insertions(+), 25 deletions(-) diff --git a/manage-data/ingest.md b/manage-data/ingest.md index 0a6eb391f5..b30c682608 100644 --- a/manage-data/ingest.md +++ b/manage-data/ingest.md @@ -16,9 +16,9 @@ products: - id: elasticsearch --- -# Ingestion +# Bring your data to Elastic -Bring your data! Whether you call it *adding*, *indexing*, or *ingesting* data, you have to get the data into {{es}} before you can search it, visualize it, and use it for insights. +Whether you call it *adding*, *indexing*, or *ingesting* data, you have to get the data into {{es}} before you can search it, visualize it, and use it for insights. Our ingest tools are flexible, and support a wide range of scenarios. 
We can help you with everything from popular and straightforward use cases, all the way to advanced use cases that require additional processing in order to modify or reshape your data before it goes to {{es}}. diff --git a/manage-data/ingest/tools.md b/manage-data/ingest/tools.md index 88013df8be..fc1656f252 100644 --- a/manage-data/ingest/tools.md +++ b/manage-data/ingest/tools.md @@ -40,7 +40,9 @@ $$$supported-outputs-beats-and-agent$$$ $$$additional-capabilities-beats-and-agent$$$ -Depending on the type of data you want to ingest, you have a number of methods and tools available for use in your ingestion process. The table below provides more information about the available tools. Refer to our [Ingestion](/manage-data/ingest.md) overview for some guidelines to help you select the optimal tool for your use case. +Depending on the type of data you want to ingest, you have a number of methods and tools available for use in your ingestion process. The table below provides more information about the available tools. + +Refer to our [Ingestion](/manage-data/ingest.md) overview for some guidelines to help you select the optimal tool for your use case.
@@ -49,7 +51,7 @@ Depending on the type of data you want to ingest, you have a number of methods a | Integrations | Ingest data using a variety of Elastic integrations. | [Elastic Integrations](integration-docs://reference/index.md) | | File upload | Upload data from a file and inspect it before importing it into {{es}}. | [Upload data files](/manage-data/ingest/upload-data-files.md) | | APIs | Ingest data through code by using the APIs of one of the language clients or the {{es}} HTTP APIs. | [Document APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-document) | -| OpenTelemetry | Collect and send your telemetry data to Elastic Observability | [Elastic Distributions of OpenTelemetry](opentelemetry://reference/index.md) | +| OpenTelemetry | Collect and send your telemetry data to Elastic Observability | [Elastic Distributions of OpenTelemetry](opentelemetry://reference/index.md). | | Fleet and Elastic Agent | Add monitoring for logs, metrics, and other types of data to a host using Elastic Agent, and centrally manage it using Fleet. | [Fleet and {{agent}} overview](/reference/fleet/index.md)
[{{fleet}} and {{agent}} restrictions (Serverless)](/reference/fleet/fleet-agent-serverless-restrictions.md)
[{{beats}} and {{agent}} capabilities](/manage-data/ingest/tools.md)|
| {{elastic-defend}} | {{elastic-defend}} provides organizations with prevention, detection, and response capabilities with deep visibility for EPP, EDR, SIEM, and Security Analytics use cases across Windows, macOS, and Linux operating systems running on both traditional endpoints and public cloud environments. | [Configure endpoint protection with {{elastic-defend}}](/solutions/security/configure-elastic-defend.md) |
| {{ls}} | Dynamically unify data from a wide variety of data sources and normalize it into destinations of your choice with {{ls}}. | [Logstash](logstash://reference/index.md) |
@@ -57,6 +59,5 @@
| APM | Collect detailed performance information on response time for incoming requests, database queries, calls to caches, external HTTP requests, and more. | [Application performance monitoring (APM)](/solutions/observability/apm/index.md) |
| Application logs | Ingest application logs using Filebeat, {{agent}}, or the APM agent, or reformat application logs into Elastic Common Schema (ECS) logs and then ingest them using Filebeat or {{agent}}. | [Stream application logs](/solutions/observability/logs/stream-application-logs.md)
[ECS formatted application logs](/solutions/observability/logs/ecs-formatted-application-logs.md) | | Elastic Serverless forwarder for AWS | Ship logs from your AWS environment to cloud-hosted, self-managed Elastic environments, or {{ls}}. | [Elastic Serverless Forwarder](elastic-serverless-forwarder://reference/index.md) | -| Connectors | Use connectors to extract data from an original data source and sync it to an {{es}} index. | [Ingest content with Elastic connectors -](elasticsearch://reference/search-connectors/index.md)
[Connector clients](elasticsearch://reference/search-connectors/index.md) | +| Connectors | Use connectors to extract data from an original data source and sync it to an {{es}} index. | [Ingest content with Elastic connectors](elasticsearch://reference/search-connectors/index.md)
[Connector clients](elasticsearch://reference/search-connectors/index.md) | | Web crawler | Discover, extract, and index searchable content from websites and knowledge bases using the web crawler. | [Elastic Open Web Crawler](https://github.com/elastic/crawler#readme) | diff --git a/solutions/observability/logs.md b/solutions/observability/logs.md index 8e8e8c4720..3358631284 100644 --- a/solutions/observability/logs.md +++ b/solutions/observability/logs.md @@ -20,18 +20,29 @@ Elastic Observability allows you to deploy and manage logs at a petabyte scale, * [Run pattern analysis on log data](/solutions/observability/logs/run-pattern-analysis-on-log-data.md): Find patterns in unstructured log messages and make it easier to examine your data. * [Troubleshoot logs](/troubleshoot/observability/troubleshoot-logs.md): Find solutions for errors you might encounter while onboarding your logs. - ## Send logs data to your project [observability-log-monitoring-send-logs-data-to-your-project] -You can send logs data to your project in different ways depending on your needs: +You can send logs data to your project in different ways depending on your needs. When choosing between these options, consider the different features and functionalities between them. + +Refer to [Ingest tools overview](/manage-data/ingest/tools.md) for more information on which option best fits your situation. + + +::::{tab-set} + +:::{tab-item} {{edot}} + +The Elastic Distribution of OpenTelemetry (EDOT) Collector and SDKs provide native OpenTelemetry support for collecting logs, metrics, and traces. This approach is ideal for: -* {{agent}} -* {{filebeat}} +* Native OpenTelemetry: When you want to use OpenTelemetry standards and are already using OpenTelemetry in your environment. +* Full observability: When you need to collect logs, metrics, and traces from a single collector. +* Modern applications: When building new applications with OpenTelemetry instrumentation. 
+* Standards compliance: When you need to follow OpenTelemetry specifications. -When choosing between {{agent}} and {{filebeat}}, consider the different features and functionalities between the two options. See [{{beats}} and {{agent}} capabilities](/manage-data/ingest/tools.md) for more information on which option best fits your situation. +For more information, refer to [Elastic Distribution of OpenTelemetry](opentelemetry://reference/index.md). +::: -### {{agent}} [observability-log-monitoring-agent] +:::{tab-item} {{agent}} {{agent}} uses [integrations](https://www.elastic.co/integrations/data-integrations) to ingest logs from Kubernetes, MySQL, and many more data sources. You have the following options when installing and managing an {{agent}}: @@ -45,7 +56,7 @@ See [install {{fleet}}-managed {{agent}}](/reference/fleet/install-fleet-managed #### Standalone {{agent}} [observability-log-monitoring-standalone-agent] -Install an {{agent}} and manually configure it locally on the system where it’s installed. You are responsible for managing and upgrading the agents. +Install an {{agent}} and manually configure it locally on the system where it's installed. You are responsible for managing and upgrading the agents. See [install standalone {{agent}}](/reference/fleet/install-standalone-elastic-agent.md). @@ -56,8 +67,9 @@ Run an {{agent}} inside of a container — either with {{fleet-server}} or stand See [install {{agent}} in containers](/reference/fleet/install-elastic-agents-in-containers.md). +::: -### {{filebeat}} [observability-log-monitoring-filebeat] +:::{tab-item} {{filebeat}} {{filebeat}} is a lightweight shipper for forwarding and centralizing log data. Installed as a service on your servers, {{filebeat}} monitors the log files or locations that you specify, collects log events, and forwards them to your Observability project for indexing. 
@@ -65,6 +77,22 @@ See [install {{agent}} in containers](/reference/fleet/install-elastic-agents-in * [{{filebeat}} quick start](beats://reference/filebeat/filebeat-installation-configuration.md): Basic installation instructions to get you started. * [Set up and run {{filebeat}}](beats://reference/filebeat/setting-up-running.md): Information on how to install, set up, and run {{filebeat}}. +::: + +:::{tab-item} {{ls}} + +{{ls}} is a powerful data processing pipeline that can collect, transform, and enrich log data before sending it to Elasticsearch. It's ideal for: + +* Complex data processing: When you need to parse, filter, and transform logs before indexing. +* Multiple data sources: When you need to collect logs from various sources and normalize them. +* Advanced use cases: When you need data enrichment, aggregation, or routing to multiple destinations. +* Extending Elastic integrations: When you want to add custom processing to data collected by Elastic Agent or Beats. + +For more information, refer to [Logstash](logstash://reference/index.md) and [Using Logstash with Elastic integrations](logstash://reference/using-logstash-with-elastic-integrations.md). 
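
As a rough sketch of such a pipeline (the file path, grok pattern, endpoint, and API key are illustrative placeholders), a minimal {{ls}} configuration could look like:

```conf
input {
  # Tail an application log file (placeholder path)
  file {
    path => "/var/log/myapp/app.log"
    start_position => "beginning"
  }
}

filter {
  # Parse a leading timestamp and level out of each line (illustrative pattern)
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:log_message}" }
  }
}

output {
  # Write to Elasticsearch data streams (placeholder endpoint and API key)
  elasticsearch {
    hosts => ["https://my-deployment.es.example.com:443"]
    api_key => "id:api_key"
    data_stream => true
  }
}
```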
+ +::: + +:::: ## Configure logs [observability-log-monitoring-configure-logs] diff --git a/solutions/observability/logs/discover-logs.md b/solutions/observability/logs/discover-logs.md index 4045fb9776..d4cb8d557d 100644 --- a/solutions/observability/logs/discover-logs.md +++ b/solutions/observability/logs/discover-logs.md @@ -22,7 +22,7 @@ For a contextual logs experience, set the **Solution view** for your space to ** :::{image} ../../images/observability-log-explorer.png :alt: Screen capture of Discover -:class: screenshot +:screenshot: ::: ## Required {{kib}} privileges [logs-explorer-privileges] diff --git a/solutions/observability/logs/stream-any-log-file.md b/solutions/observability/logs/stream-any-log-file.md index 6e8a63445b..6548d09043 100644 --- a/solutions/observability/logs/stream-any-log-file.md +++ b/solutions/observability/logs/stream-any-log-file.md @@ -10,7 +10,7 @@ products: - id: cloud-serverless --- -# Stream any log file [logs-stream] +# Stream any log file using {{agent}} [logs-stream] This guide shows you how to manually configure a standalone {{agent}} to send your log data to {{es}} using the `elastic-agent.yml` file. @@ -97,7 +97,7 @@ Expand-Archive .\elastic-agent-{{version.stack}}-windows-x86_64.zip ::::::{tab-item} DEB :::{tip} -To simplify upgrading to future versions of Elastic Agent, we recommended that you use the tarball distribution instead of the RPM distribution. +To simplify upgrading to future versions of Elastic Agent, use the tarball distribution instead of the RPM distribution. You can install Elastic Agent in an unprivileged mode that does not require root privileges. ::: @@ -110,7 +110,7 @@ sudo dpkg -i elastic-agent-{{version.stack}}-amd64.deb ::::::{tab-item} RPM :::{tip} -To simplify upgrading to future versions of Elastic Agent, we recommended that you use the tarball distribution instead of the RPM distribution. 
+To simplify upgrading to future versions of Elastic Agent, use the tarball distribution instead of the RPM distribution. You can install Elastic Agent in an unprivileged mode that does not require root privileges. ::: @@ -124,7 +124,7 @@ sudo rpm -vi elastic-agent-{{version.stack}}-x86_64.rpm ### Step 2: Install and start the {{agent}} [logs-stream-install-agent] -After downloading and extracting the installation package, you’re ready to install the {{agent}}. From the agent directory, run the install command that corresponds with your system: +After downloading and extracting the installation package, you're ready to install the {{agent}}. From the agent directory, run the install command that corresponds with your system: ::::{note} On macOS, Linux (tar package), and Windows, run the `install` command to install and start {{agent}} as a managed service and start the service. The DEB and RPM packages include a service unit for Linux systems with systemd. For these systems, you must enable and start the service. diff --git a/solutions/observability/logs/stream-application-logs.md b/solutions/observability/logs/stream-application-logs.md index fa7a2e480b..2a640ebb2d 100644 --- a/solutions/observability/logs/stream-application-logs.md +++ b/solutions/observability/logs/stream-application-logs.md @@ -17,7 +17,7 @@ Application logs provide valuable insight into events that have occurred within The format of your logs (structured or plaintext) influences your log ingestion strategy. -## Plaintext logs vs. structured Elastic Common Schema (ECS) logs [observability-correlate-application-logs-plaintext-logs-vs-structured-elastic-common-schema-ecs-logs] +## Plaintext logs versus structured Elastic Common Schema (ECS) logs [observability-correlate-application-logs-plaintext-logs-vs-structured-elastic-common-schema-ecs-logs] Logs are typically produced as either plaintext or structured. 
Plaintext logs contain only text and have no special formatting, for example: @@ -27,7 +27,7 @@ Logs are typically produced as either plaintext or structured. Plaintext logs co 2019-08-06T14:08:40.199Z DEBUG:spring-petclinic: init find form, org.springframework.samples.petclinic.owner.OwnerController ``` -Structured logs follow a predefined, repeatable pattern or structure. This structure is applied at write time — preventing the need for parsing at ingest time. The Elastic Common Schema (ECS) defines a common set of fields to use when structuring logs. This structure allows logs to be easily ingested, and provides the ability to correlate, search, and aggregate on individual fields within your logs. +Structured logs follow a predefined, repeatable pattern or structure. This structure is applied at write time, preventing the need for parsing at ingest time. The Elastic Common Schema (ECS) defines a common set of fields to use when structuring logs. This structure allows logs to be ingested, and provides the ability to correlate, search, and aggregate on individual fields within your logs. For example, the previous example logs might look like this when structured with ECS-compatible JSON: @@ -92,15 +92,40 @@ Log sending is supported in the Java {{apm-agent}}. 
Correlate your application logs with trace events to: -* view the context of a log and the parameters provided by a user -* view all logs belonging to a particular trace -* easily move between logs and traces when debugging application issues +* See the context of a log and the parameters provided by a user +* See all logs belonging to a particular trace +* Move between logs and traces when debugging application issues Learn more about log correlation in the agent-specific ingestion guides: +::::{tab-set} + +:::{tab-item} OpenTelemetry (EDOT) + +The {{edot}} (EDOT) provides SDKs for multiple programming languages with built-in support for log correlation: + +* [Java](opentelemetry://reference/edot-sdks/java/index.md) +* [.NET](opentelemetry://reference/edot-sdks/dotnet/index.md) +* [Node.js](opentelemetry://reference/edot-sdks/nodejs/index.md) +* [PHP](opentelemetry://reference/edot-sdks/php/index.md) +* [Python](opentelemetry://reference/edot-sdks/python/index.md) + +For more information about EDOT, refer to [Elastic Distribution of OpenTelemetry (EDOT)](opentelemetry://reference/index.md). 
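
To illustrate what correlation looks like in the data itself (the values below are made up, and exact field names vary by agent and mapping mode), a correlated log document carries the trace context alongside the message:

```json
{
  "@timestamp": "2019-08-06T14:08:40.199Z",
  "log.level": "DEBUG",
  "message": "init find form",
  "service.name": "spring-petclinic",
  "trace.id": "0af7651916cd43dd8448eb211c80319c",
  "span.id": "b7ad6b7169203331"
}
```

Because `trace.id` matches the identifier recorded on the trace, the UI can move between the log document and the trace that produced it.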
+ +::: + +:::{tab-item} APM Agents +:name: apm-agents + +Elastic APM agents provide log correlation capabilities for the following languages: + * [Go](apm-agent-go://reference/logs.md) * [Java](apm-agent-java://reference/logs.md#log-correlation-ids) * [.NET](apm-agent-dotnet://reference/logs.md) * [Node.js](apm-agent-nodejs://reference/logs.md) * [Python](apm-agent-python://reference/logs.md#log-correlation-ids) -* [Ruby](apm-agent-ruby://reference/logs.md) \ No newline at end of file +* [Ruby](apm-agent-ruby://reference/logs.md) + +::: + +:::: \ No newline at end of file From 579d9dfa048082228f8c04a74ba2e2b7124d56e8 Mon Sep 17 00:00:00 2001 From: Fabrizio Ferri Benedetti Date: Wed, 27 Aug 2025 19:54:17 +0200 Subject: [PATCH 3/7] Add EDOT tutorial --- solutions/observability/get-started.md | 4 +- ...tream-any-log-file-using-edot-collector.md | 374 ++++++++++++++++++ .../observability/logs/stream-any-log-file.md | 190 +++++---- solutions/toc.yml | 1 + 4 files changed, 470 insertions(+), 99 deletions(-) create mode 100644 solutions/observability/logs/stream-any-log-file-using-edot-collector.md diff --git a/solutions/observability/get-started.md b/solutions/observability/get-started.md index bf4bb11f77..1e3f732fd0 100644 --- a/solutions/observability/get-started.md +++ b/solutions/observability/get-started.md @@ -105,7 +105,7 @@ Elastic provides a powerful LLM observability framework including key metrics, l Refer to [LLM observability](/solutions/observability/applications/llm-observability.md) for more information. 
::: - +:::: ::::: :::::: @@ -178,5 +178,5 @@ Many [{{observability}} integrations](https://www.elastic.co/integrations/data-i ### Other resources * [What's Elastic {{observability}}](/solutions/observability/get-started/what-is-elastic-observability.md) -* [What’s new in Elastic Stack](/release-notes/elastic-observability/index.md) +* [What's new in Elastic Stack](/release-notes/elastic-observability/index.md) * [{{obs-serverless}} billing dimensions](/deploy-manage/cloud-organization/billing/elastic-observability-billing-dimensions.md) diff --git a/solutions/observability/logs/stream-any-log-file-using-edot-collector.md b/solutions/observability/logs/stream-any-log-file-using-edot-collector.md new file mode 100644 index 0000000000..83fec71534 --- /dev/null +++ b/solutions/observability/logs/stream-any-log-file-using-edot-collector.md @@ -0,0 +1,374 @@ +--- +navigation_title: Stream any log file using OTel Collector +mapped_pages: + - https://www.elastic.co/guide/en/observability/current/logs-stream-edot.html + - https://www.elastic.co/guide/en/serverless/current/observability-stream-log-files-edot.html +applies_to: + stack: all + serverless: all +products: + - id: observability + - id: cloud-serverless +--- + +# Stream any log file using OTel Collector [logs-stream-edot] + +This guide shows you how to manually configure the {{edot}} (EDOT) Collector to send your log data to {{es}} by configuring the `otel.yml` file. For an Elastic Agent equivalent, refer to [Stream any log file using {{agent}}](/solutions/observability/logs/stream-any-log-file.md). + +For more OpenTelemetry quickstarts, refer to [EDOT quickstarts](opentelemetry://reference/quickstart/index.md). 
+ +## Prerequisites [logs-stream-edot-prereq] + +::::{tab-set} +:group: stack-serverless + +:::{tab-item} Elastic Stack +:sync: stack + +To follow the steps in this guide, you need an {{stack}} deployment that includes: + +* {{es}} for storing and searching data +* {{kib}} for visualizing and managing data +* Kibana user with `All` privileges on {{fleet}} and Integrations. Because many Integrations assets are shared across spaces, users need the Kibana privileges in all spaces. +* Integrations Server (included by default in every {{ech}} deployment) + +To get started quickly, create an {{ech}} deployment and host it on AWS, GCP, or Azure. [Try it out for free](https://cloud.elastic.co/registration?page=docs&placement=docs-body). + + +::: + +:::{tab-item} Serverless +:sync: serverless + +The **Admin** role or higher is required to onboard log data. To learn more, refer to [Assign user roles and privileges](/deploy-manage/users-roles/cloud-organization/user-roles.md#general-assign-user-roles). + +::: + +:::: + +## Install the EDOT Collector [logs-stream-edot-install-config] + +Complete these steps to install and configure the EDOT Collector and send your log data to Elastic Observability. 
+ +::::::{stepper} + +:::::{step} Download and install the EDOT Collector + +On your host, download the EDOT Collector installation package that corresponds with your system: + +::::{tab-set} + +:::{tab-item} Linux + +```shell subs=true +curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{{version.stack}}-linux-x86_64.tar.gz +tar xzvf elastic-agent-{{version.stack}}-linux-x86_64.tar.gz +``` +::: + +:::{tab-item} macOS + +```shell subs=true +curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{{version.stack}}-darwin-x86_64.tar.gz +tar xzvf elastic-agent-{{version.stack}}-darwin-x86_64.tar.gz +``` +::: + +:::{tab-item} Windows + +```powershell subs=true +# PowerShell 5.0+ +wget https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{{version.stack}}-windows-x86_64.zip -OutFile elastic-agent-{{version.stack}}-windows-x86_64.zip +Expand-Archive .\elastic-agent-{{version.stack}}-windows-x86_64.zip +``` +::: + +:::: +::::: + +:::::{step} Configure the EDOT Collector + +Follow these steps to retrieve the managed OTLP endpoint URL for your Serverless project: + +1. In Elastic Cloud Serverless, open your Observability project. +2. Go to **Add data** → **Application** → **OpenTelemetry**. +3. Select **Managed OTLP Endpoint** in the second step. +4. Copy the OTLP endpoint configuration value. +5. Select **Create API Key** to generate an API key. 

Replace `<your-otlp-endpoint>` and `<your-api-key>` before applying the following commands:

::::{tab-set}

:::{tab-item} Linux

```bash
ELASTIC_OTLP_ENDPOINT=<your-otlp-endpoint> && \
ELASTIC_API_KEY=<your-api-key> && \
cp ./otel_samples/managed_otlp/logs_metrics_traces.yml ./otel.yml && \
mkdir -p ./data/otelcol && \
sed -i "s#\${env:STORAGE_DIR}#${PWD}/data/otelcol#g" ./otel.yml && \
sed -i "s#\${env:ELASTIC_OTLP_ENDPOINT}#${ELASTIC_OTLP_ENDPOINT}#g" ./otel.yml && \
sed -i "s#\${env:ELASTIC_API_KEY}#${ELASTIC_API_KEY}#g" ./otel.yml
```
:::

:::{tab-item} macOS

```bash
ELASTIC_OTLP_ENDPOINT=<your-otlp-endpoint> && \
ELASTIC_API_KEY=<your-api-key> && \
cp ./otel_samples/managed_otlp/logs_metrics_traces.yml ./otel.yml && \
mkdir -p ./data/otelcol && \
sed -i '' "s#\${env:STORAGE_DIR}#${PWD}/data/otelcol#g" ./otel.yml && \
sed -i '' "s#\${env:ELASTIC_OTLP_ENDPOINT}#${ELASTIC_OTLP_ENDPOINT}#g" ./otel.yml && \
sed -i '' "s#\${env:ELASTIC_API_KEY}#${ELASTIC_API_KEY}#g" ./otel.yml
```
:::

:::{tab-item} Windows

```powershell
Remove-Item -Path .\otel.yml -ErrorAction SilentlyContinue
Copy-Item .\otel_samples\managed_otlp\logs_metrics_traces.yml .\otel.yml
New-Item -ItemType Directory -Force -Path .\data\otelcol | Out-Null

$content = Get-Content .\otel.yml
$content = $content -replace '\${env:STORAGE_DIR}', "$PWD\data\otelcol"
$content = $content -replace '\${env:ELASTIC_OTLP_ENDPOINT}', "<your-otlp-endpoint>"
$content = $content -replace '\${env:ELASTIC_API_KEY}', "<your-api-key>"
$content | Set-Content .\otel.yml
```
:::
::::
:::::

:::::{step} Configure log file collection

To collect logs from specific log files, you need to modify the `otel.yml` configuration file. The configuration includes receivers, processors, and exporters that handle log data. 
+ +::::{tab-set} +:group: stack-serverless + +:::{tab-item} Elastic Stack +:sync: stack + +Here's an example configuration for collecting log files with Elastic Stack: + +:::{dropdown} otel.yml for logs collection (Elastic Stack) + +```yaml +receivers: + # Receiver for platform specific log files + filelog/platformlogs: + include: [ /var/log/*.log ] + retry_on_failure: + enabled: true + start_at: end + storage: file_storage +# start_at: beginning + +extensions: + file_storage: + directory: ${env:STORAGE_DIR} + +processors: + resourcedetection: + detectors: ["system"] + system: + hostname_sources: ["os"] + resource_attributes: + host.name: + enabled: true + host.id: + enabled: false + host.arch: + enabled: true + host.ip: + enabled: true + host.mac: + enabled: true + host.cpu.vendor.id: + enabled: true + host.cpu.family: + enabled: true + host.cpu.model.id: + enabled: true + host.cpu.model.name: + enabled: true + host.cpu.stepping: + enabled: true + host.cpu.cache.l2.size: + enabled: true + os.description: + enabled: true + os.type: + enabled: true + +exporters: + # Exporter to print the first 5 logs/metrics and then every 1000th + debug: + verbosity: detailed + sampling_initial: 5 + sampling_thereafter: 1000 + + # Exporter to send logs and metrics to Elasticsearch + elasticsearch/otel: + endpoints: ["${env:ELASTIC_ENDPOINT}"] + api_key: ${env:ELASTIC_API_KEY} + mapping: + mode: otel + +service: + extensions: [file_storage] + pipelines: + logs/platformlogs: + receivers: [filelog/platformlogs] + processors: [resourcedetection] + exporters: [debug, elasticsearch/otel] +``` + +::: +::: + +:::{tab-item} Serverless +:sync: serverless + +Here's an example configuration for collecting log files with Elastic Cloud Serverless: + +:::{dropdown} otel.yml for logs collection (Serverless) + +```yaml +receivers: + # Receiver for platform specific log files + filelog/platformlogs: + include: [/var/log/*.log] + retry_on_failure: + enabled: true + start_at: end + storage: 
file_storage +# start_at: beginning + +extensions: + file_storage: + directory: ${env:STORAGE_DIR} + +processors: + resourcedetection: + detectors: ["system"] + system: + hostname_sources: ["os"] + resource_attributes: + host.name: + enabled: true + host.id: + enabled: false + host.arch: + enabled: true + host.ip: + enabled: true + host.mac: + enabled: true + host.cpu.vendor.id: + enabled: true + host.cpu.family: + enabled: true + host.cpu.model.id: + enabled: true + host.cpu.model.name: + enabled: true + host.cpu.stepping: + enabled: true + host.cpu.cache.l2.size: + enabled: true + os.description: + enabled: true + os.type: + enabled: true + +exporters: + # Exporter to print the first 5 logs/metrics and then every 1000th + debug: + verbosity: detailed + sampling_initial: 5 + sampling_thereafter: 1000 + + # Exporter to send logs and metrics to Elasticsearch Managed OTLP Input + otlp/ingest: + endpoint: ${env:ELASTIC_OTLP_ENDPOINT} + headers: + Authorization: ApiKey ${env:ELASTIC_API_KEY} + +service: + extensions: [file_storage] + pipelines: + logs/platformlogs: + receivers: [filelog/platformlogs] + processors: [resourcedetection] + exporters: [debug, otlp/ingest] +``` +::: +::: +:::: + +Key configuration elements: + +* `receivers.filelog/platformlogs.include`: Specifies the path to your log files. You can use patterns like `/var/log/*.log`. +* `processors.resourcedetection`: Automatically detects and adds host system information to your logs. +* `extensions.file_storage`: Provides persistent storage for the collector's state. +* `exporters`: Configures how data is sent to Elasticsearch (Elastic Stack) or OTLP endpoint (Serverless). 
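
As a concrete illustration of how these elements fit together, collecting one more log source only requires an extra receiver and a pipeline entry. The sketch below is hypothetical — the `filelog/myapp` name and `/var/log/myapp/*.log` path are made up, and the `otlp/ingest` exporter matches the Serverless example above (for {{stack}}, reference `elasticsearch/otel` instead):

```yaml
receivers:
  # Hypothetical extra receiver for an application log (name and path are illustrative)
  filelog/myapp:
    include: [ /var/log/myapp/*.log ]
    retry_on_failure:
      enabled: true
    start_at: end
    storage: file_storage

service:
  pipelines:
    # Reuse the same processors and exporters for the new source
    logs/myapp:
      receivers: [filelog/myapp]
      processors: [resourcedetection]
      exporters: [debug, otlp/ingest]
```

Because each receiver checkpoints its read offsets through the `file_storage` extension, the Collector resumes where it left off after a restart instead of re-reading the files.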
+
+:::::
+
+:::::{step} Run the EDOT Collector
+
+Run the following command to start the EDOT Collector:
+
+::::{tab-set}
+
+:::{tab-item} Linux and macOS
+
+```bash
+sudo ./otelcol --config otel.yml
+```
+:::
+
+:::{tab-item} Windows
+
+```powershell
+.\elastic-agent.exe otel --config otel.yml
+```
+:::
+
+::::
+
+:::{note}
+The Collector opens ports `4317` and `4318` to receive application data from locally running OTel SDKs without authentication. This allows the SDKs to send data without further configuration, as they use these endpoints by default.
+:::
+:::::
+::::::
+
+## Troubleshoot your EDOT Collector configuration [logs-stream-edot-troubleshooting]
+
+If you're not seeing your log files in the UI, verify the following:
+
+* The path to your log files under `include` is correct.
+* Your API key is properly set in the environment variables.
+* The OTLP endpoint URL is correct and accessible.
+* The Collector is running without errors (check the console output).
+
+If you're still running into issues, see [EDOT Collector troubleshooting](/troubleshoot/ingest/opentelemetry/edot-collector/index.md) and [Configure EDOT Collector](opentelemetry://reference/edot-collector/config/index.md).
+
+## Next steps [logs-stream-edot-next-steps]
+
+After you have your EDOT Collector configured and are streaming log data to {{es}}:
+
+* Refer to the [Parse and organize logs](/solutions/observability/logs/parse-route-logs.md) documentation for information on extracting structured fields from your log data, rerouting your logs to different data streams, and filtering and aggregating your log data.
+* Refer to the [Filter and aggregate logs](/solutions/observability/logs/filter-aggregate-logs.md) documentation for information on filtering and aggregating your log data to find specific information, gain insight, and monitor your systems more efficiently.
+* To collect telemetry from applications and use the EDOT Collector as a gateway, instrument your target applications following the setup instructions: + - [Android](https://www.elastic.co/docs/reference/opentelemetry/edot-sdks/android/) + - [.NET](https://www.elastic.co/docs/reference/opentelemetry/edot-sdks/dotnet/setup/) + - [iOS](https://www.elastic.co/docs/reference/opentelemetry/edot-sdks/ios/) + - [Java](https://www.elastic.co/docs/reference/opentelemetry/edot-sdks/java/setup/) + - [Node.js](https://www.elastic.co/docs/reference/opentelemetry/edot-sdks/nodejs/setup/) + - [PHP](https://www.elastic.co/docs/reference/opentelemetry/edot-sdks/php/setup/) + - [Python](https://www.elastic.co/docs/reference/opentelemetry/edot-sdks/python/setup/) \ No newline at end of file diff --git a/solutions/observability/logs/stream-any-log-file.md b/solutions/observability/logs/stream-any-log-file.md index 6548d09043..761712e8ce 100644 --- a/solutions/observability/logs/stream-any-log-file.md +++ b/solutions/observability/logs/stream-any-log-file.md @@ -12,13 +12,12 @@ products: # Stream any log file using {{agent}} [logs-stream] -This guide shows you how to manually configure a standalone {{agent}} to send your log data to {{es}} using the `elastic-agent.yml` file. +This guide shows you how to manually configure a standalone {{agent}} to send your log data to {{es}} using the `elastic-agent.yml` file. For an {{edot}} (EDOT) Collector equivalent, refer to [Stream any log file using OTel Collector](/solutions/observability/logs/stream-any-log-file-using-edot-collector.md). To get started quickly without manually configuring the {{agent}}, you can use the **Monitor hosts with {{agent}}** quickstart. Refer to the [quickstart documentation](/solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md) for more information. Continue with this guide for instructions on manual configuration. 
- ## Prerequisites [logs-stream-prereq] ::::{tab-set} @@ -56,33 +55,30 @@ Complete these steps to install and configure the standalone {{agent}} and send 2. [Install and start the {{agent}}.](/solutions/observability/logs/stream-any-log-file.md#logs-stream-install-agent) 3. [Configure the {{agent}}.](/solutions/observability/logs/stream-any-log-file.md#logs-stream-agent-config) - ### Step 1: Download and extract the {{agent}} installation package [logs-stream-extract-agent] On your host, download and extract the installation package that corresponds with your system: -% Stateful and Serverless Need to fix these tabs. - -:::::::{tab-set} +:::::{tab-set} -::::::{tab-item} macOS +::::{tab-item} macOS ```shell subs=true curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{{version.stack}}-darwin-x86_64.tar.gz tar xzvf elastic-agent-{{version.stack}}-darwin-x86_64.tar.gz ``` -:::::: +:::: -::::::{tab-item} Linux +::::{tab-item} Linux ```shell subs=true curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{{version.stack}}-linux-x86_64.tar.gz tar xzvf elastic-agent-{{version.stack}}-linux-x86_64.tar.gz ``` -:::::: +:::: -::::::{tab-item} Windows +::::{tab-item} Windows ```powershell subs=true # PowerShell 5.0+ @@ -91,10 +87,9 @@ Expand-Archive .\elastic-agent-{{version.stack}}-windows-x86_64.zip ``` +:::: -:::::: - -::::::{tab-item} DEB +::::{tab-item} DEB :::{tip} To simplify upgrading to future versions of Elastic Agent, use the tarball distribution instead of the RPM distribution. 
@@ -105,9 +100,9 @@ You can install Elastic Agent in an unprivileged mode that does not require root curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{{version.stack}}-amd64.deb sudo dpkg -i elastic-agent-{{version.stack}}-amd64.deb ``` -:::::: +:::: -::::::{tab-item} RPM +::::{tab-item} RPM :::{tip} To simplify upgrading to future versions of Elastic Agent, use the tarball distribution instead of the RPM distribution. @@ -118,44 +113,44 @@ You can install Elastic Agent in an unprivileged mode that does not require root curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{{version.stack}}-x86_64.rpm sudo rpm -vi elastic-agent-{{version.stack}}-x86_64.rpm ``` -:::::: +:::: -::::::: +::::: ### Step 2: Install and start the {{agent}} [logs-stream-install-agent] After downloading and extracting the installation package, you're ready to install the {{agent}}. From the agent directory, run the install command that corresponds with your system: -::::{note} +:::{note} On macOS, Linux (tar package), and Windows, run the `install` command to install and start {{agent}} as a managed service and start the service. The DEB and RPM packages include a service unit for Linux systems with systemd. For these systems, you must enable and start the service. -:::: +::: -:::::::{tab-set} +:::::{tab-set} -::::::{tab-item} macOS -::::{tip} +::::{tab-item} macOS +:::{tip} You must run this command as the root user because some integrations require root privileges to collect sensitive data. -:::: +::: ```shell sudo ./elastic-agent install ``` -:::::: +:::: -::::::{tab-item} Linux -::::{tip} +::::{tab-item} Linux +:::{tip} You must run this command as the root user because some integrations require root privileges to collect sensitive data. 
-:::: +::: ```shell sudo ./elastic-agent install ``` -:::::: +:::: -::::::{tab-item} Windows +::::{tab-item} Windows Open a PowerShell prompt as an Administrator (right-click the PowerShell icon and select **Run As Administrator**). From the PowerShell prompt, change to the directory where you installed {{agent}}, and run: @@ -163,12 +158,12 @@ From the PowerShell prompt, change to the directory where you installed {{agent} ```shell .\elastic-agent.exe install ``` -:::::: +:::: -::::::{tab-item} DEB -::::{tip} +::::{tab-item} DEB +:::{tip} You must run this command as the root user because some integrations require root privileges to collect sensitive data. -:::: +::: ```shell @@ -176,13 +171,13 @@ sudo systemctl enable elastic-agent <1> sudo systemctl start elastic-agent ``` -1. The DEB package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. If you don’t have systemd, run `sudo service elastic-agent start`. -:::::: +1. The DEB package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. If you don't have systemd, run `sudo service elastic-agent start`. +:::: -::::::{tab-item} RPM -::::{tip} +::::{tab-item} RPM +:::{tip} You must run this command as the root user because some integrations require root privileges to collect sensitive data. -:::: +::: ```shell @@ -190,11 +185,12 @@ sudo systemctl enable elastic-agent <1> sudo systemctl start elastic-agent ``` -1. The RPM package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. If you don’t have systemd, run `sudo service elastic-agent start`. -:::::: +1. The RPM package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. 
If you don't have systemd, run `sudo service elastic-agent start`. +:::: + +::::: -::::::: -During installation, you’re prompted with some questions: +During installation, you're prompted with some questions: 1. When asked if you want to install the agent as a service, enter `Y`. 2. When asked if you want to enroll the agent in Fleet, enter `n`. @@ -207,41 +203,41 @@ With your agent installed, configure it by updating the `elastic-agent.yml` file #### Locate your configuration file [logs-stream-yml-location] -After installing the agent, you’ll find the `elastic-agent.yml` in one of the following locations according to your system: +After installing the agent, you'll find the `elastic-agent.yml` in one of the following locations according to your system: -:::::::{tab-set} +::::{tab-set} -::::::{tab-item} macOS +:::{tab-item} macOS Main {{agent}} configuration file location: `/Library/Elastic/Agent/elastic-agent.yml` -:::::: +::: -::::::{tab-item} Linux +:::{tab-item} Linux Main {{agent}} configuration file location: `/opt/Elastic/Agent/elastic-agent.yml` -:::::: +::: -::::::{tab-item} Windows +:::{tab-item} Windows Main {{agent}} configuration file location: `C:\Program Files\Elastic\Agent\elastic-agent.yml` -:::::: +::: -::::::{tab-item} DEB +:::{tab-item} DEB Main {{agent}} configuration file location: `/etc/elastic-agent/elastic-agent.yml` -:::::: +::: -::::::{tab-item} RPM +:::{tab-item} RPM Main {{agent}} configuration file location: `/etc/elastic-agent/elastic-agent.yml` -:::::: +::: -::::::: +:::: #### Update your configuration file [logs-stream-example-config] @@ -266,18 +262,18 @@ inputs: Next, set the values for these fields: -* `hosts` – Copy the {{es}} endpoint from **Help menu (![help icon](/solutions/images/observability-help-icon.svg "")) → Connection details**. For example, `https://my-deployment.es.us-central1.gcp.cloud.es.io:443`. -* `api-key` – Use an API key to grant the agent access to {{es}}. 
To create an API key for your agent, refer to the [Create API keys for standalone agents](/reference/fleet/grant-access-to-elasticsearch.md#create-api-key-standalone-agent) documentation. +* `hosts`: Copy the {{es}} endpoint from **Help menu (![help icon](/solutions/images/observability-help-icon.svg "")) → Connection details**. For example, `https://my-deployment.es.us-central1.gcp.cloud.es.io:443`. +* `api-key`: Use an API key to grant the agent access to {{es}}. To create an API key for your agent, refer to the [Create API keys for standalone agents](/reference/fleet/grant-access-to-elasticsearch.md#create-api-key-standalone-agent) documentation. - ::::{note} + :::{note} The API key format should be `:`. Make sure you selected **Beats** when you created your API key. Base64 encoded API keys are not currently supported in this configuration. - :::: + ::: -* `inputs.id` – A unique identifier for your input. -* `type` – The type of input. For collecting logs, set this to `filestream`. -* `streams.id` – A unique identifier for your stream of log data. -* `data_stream.dataset` – The name for your dataset data stream. Name this data stream anything that signifies the source of the data. In this configuration, the dataset is set to `example`. The default value is `generic`. -* `paths` – The path to your log files. You can also use a pattern like `/var/log/your-logs.log*`. +* `inputs.id`: A unique identifier for your input. +* `type`: The type of input. For collecting logs, set this to `filestream`. +* `streams.id`: A unique identifier for your stream of log data. +* `data_stream.dataset`: The name for your dataset data stream. Name this data stream anything that signifies the source of the data. In this configuration, the dataset is set to `example`. The default value is `generic`. +* `paths`: The path to your log files. You can also use a pattern like `/var/log/your-logs.log*`. 
#### Restart the {{agent}} [logs-stream-restart-agent] @@ -286,41 +282,41 @@ After updating your configuration file, you need to restart the {{agent}}: First, stop the {{agent}} and its related executables using the command that works with your system: -:::::::{tab-set} +:::::{tab-set} -::::::{tab-item} macOS +::::{tab-item} macOS ```shell sudo launchctl unload /Library/LaunchDaemons/co.elastic.elastic-agent.plist ``` -::::{note} +:::{note} {{agent}} will restart automatically if the system is rebooted. +::: :::: -:::::: -::::::{tab-item} Linux +::::{tab-item} Linux ```shell sudo service elastic-agent stop ``` -::::{note} +:::{note} {{agent}} will restart automatically if the system is rebooted. +::: :::: -:::::: -::::::{tab-item} Windows +::::{tab-item} Windows ```shell Stop-Service Elastic Agent ``` If necessary, use Task Manager on Windows to stop {{agent}}. This will kill the `elastic-agent` process and any sub-processes it created (such as {{beats}}). -::::{note} +:::{note} {{agent}} will restart automatically if the system is rebooted. +::: :::: -:::::: -::::::{tab-item} DEB +::::{tab-item} DEB The DEB package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. Use `systemctl` to stop the agent: @@ -335,12 +331,12 @@ Otherwise, use: sudo service elastic-agent stop ``` -::::{note} +:::{note} {{agent}} will restart automatically if the system is rebooted. +::: :::: -:::::: -::::::{tab-item} RPM +::::{tab-item} RPM The RPM package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. Use `systemctl` to stop the agent: @@ -355,35 +351,35 @@ Otherwise, use: sudo service elastic-agent stop ``` -::::{note} +:::{note} {{agent}} will restart automatically if the system is rebooted. 
+::: :::: -:::::: +::::: -::::::: Next, restart the {{agent}} using the command that works with your system: -:::::::{tab-set} +::::{tab-set} -::::::{tab-item} macOS +:::{tab-item} macOS ```shell sudo launchctl load /Library/LaunchDaemons/co.elastic.elastic-agent.plist ``` -:::::: +::: -::::::{tab-item} Linux +:::{tab-item} Linux ```shell sudo service elastic-agent start ``` -:::::: +::: -::::::{tab-item} Windows +:::{tab-item} Windows ```shell Start-Service Elastic Agent ``` -:::::: +::: -::::::{tab-item} DEB +:::{tab-item} DEB The DEB package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. Use `systemctl` to start the agent: @@ -397,9 +393,9 @@ Otherwise, use: ```shell sudo service elastic-agent start ``` -:::::: +::: -::::::{tab-item} RPM +:::{tab-item} RPM The RPM package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. Use `systemctl` to start the agent: @@ -413,18 +409,18 @@ Otherwise, use: ```shell sudo service elastic-agent start ``` -:::::: +::: -::::::: +:::: ## Troubleshoot your {{agent}} configuration [logs-stream-troubleshooting] -If you’re not seeing your log files in the UI, verify the following in the `elastic-agent.yml` file: +If you're not seeing your log files in the UI, verify the following in the `elastic-agent.yml` file: * The path to your logs file under `paths` is correct. -* Your API key is in `:` format. If not, your API key may be in an unsupported format, and you’ll need to create an API key in **Beats** format. +* Your API key is in `:` format. If not, your API key may be in an unsupported format, and you'll need to create an API key in **Beats** format. 
-If you’re still running into issues, see [{{agent}} troubleshooting](/troubleshoot/ingest/fleet/common-problems.md) and [Configure standalone Elastic Agents](/reference/fleet/configure-standalone-elastic-agents.md). +If you're still running into issues, see [{{agent}} troubleshooting](/troubleshoot/ingest/fleet/common-problems.md) and [Configure standalone Elastic Agents](/reference/fleet/configure-standalone-elastic-agents.md). ## Next steps [logs-stream-next-steps] diff --git a/solutions/toc.yml b/solutions/toc.yml index 40006f2634..1a8145a8ad 100644 --- a/solutions/toc.yml +++ b/solutions/toc.yml @@ -398,6 +398,7 @@ toc: children: - file: observability/logs/get-started-with-system-logs.md - file: observability/logs/stream-any-log-file.md + - file: observability/logs/stream-any-log-file-using-edot-collector.md - file: observability/logs/stream-application-logs.md children: - file: observability/logs/plaintext-application-logs.md From e11ab3ba7af4f69452b06b500d23ed2e4a8b14b0 Mon Sep 17 00:00:00 2001 From: Fabrizio Ferri-Benedetti Date: Thu, 28 Aug 2025 08:25:22 +0200 Subject: [PATCH 4/7] Update solutions/observability/logs.md Co-authored-by: Mike Birnstiehl <114418652+mdbirnstiehl@users.noreply.github.com> --- solutions/observability/logs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/solutions/observability/logs.md b/solutions/observability/logs.md index 3358631284..5b80815da1 100644 --- a/solutions/observability/logs.md +++ b/solutions/observability/logs.md @@ -20,7 +20,7 @@ Elastic Observability allows you to deploy and manage logs at a petabyte scale, * [Run pattern analysis on log data](/solutions/observability/logs/run-pattern-analysis-on-log-data.md): Find patterns in unstructured log messages and make it easier to examine your data. * [Troubleshoot logs](/troubleshoot/observability/troubleshoot-logs.md): Find solutions for errors you might encounter while onboarding your logs. 
-## Send logs data to your project [observability-log-monitoring-send-logs-data-to-your-project] +## Send log data to your project [observability-log-monitoring-send-logs-data-to-your-project] You can send logs data to your project in different ways depending on your needs. When choosing between these options, consider the different features and functionalities between them. From e93ebb52cd118d573f5281a50ad8d40416e7837b Mon Sep 17 00:00:00 2001 From: Fabrizio Ferri-Benedetti Date: Thu, 28 Aug 2025 08:25:50 +0200 Subject: [PATCH 5/7] Update solutions/observability/logs.md Co-authored-by: Mike Birnstiehl <114418652+mdbirnstiehl@users.noreply.github.com> --- solutions/observability/logs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/solutions/observability/logs.md b/solutions/observability/logs.md index 5b80815da1..21fe25b89a 100644 --- a/solutions/observability/logs.md +++ b/solutions/observability/logs.md @@ -22,7 +22,7 @@ Elastic Observability allows you to deploy and manage logs at a petabyte scale, ## Send log data to your project [observability-log-monitoring-send-logs-data-to-your-project] -You can send logs data to your project in different ways depending on your needs. When choosing between these options, consider the different features and functionalities between them. +You can send log data to your project in different ways depending on your needs. When choosing between these options, consider the different features and functionalities between them. Refer to [Ingest tools overview](/manage-data/ingest/tools.md) for more information on which option best fits your situation. 
From 1ae3fe6e771a869cef45bc203b64c32f811a39ba Mon Sep 17 00:00:00 2001 From: Fabrizio Ferri-Benedetti Date: Thu, 28 Aug 2025 08:32:01 +0200 Subject: [PATCH 6/7] Update solutions/observability/logs/stream-any-log-file-using-edot-collector.md Co-authored-by: Mike Birnstiehl <114418652+mdbirnstiehl@users.noreply.github.com> --- .../logs/stream-any-log-file-using-edot-collector.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/solutions/observability/logs/stream-any-log-file-using-edot-collector.md b/solutions/observability/logs/stream-any-log-file-using-edot-collector.md index 83fec71534..ce03b40cec 100644 --- a/solutions/observability/logs/stream-any-log-file-using-edot-collector.md +++ b/solutions/observability/logs/stream-any-log-file-using-edot-collector.md @@ -315,7 +315,7 @@ Key configuration elements: * `receivers.filelog/platformlogs.include`: Specifies the path to your log files. You can use patterns like `/var/log/*.log`. * `processors.resourcedetection`: Automatically detects and adds host system information to your logs. -* `extensions.file_storage`: Provides persistent storage for the collector's state. +* `extensions.file_storage`: Provides persistent storage for the Collector's state. * `exporters`: Configures how data is sent to Elasticsearch (Elastic Stack) or OTLP endpoint (Serverless). 
::::: From aeb0169d5f34508aed0c82097b90dcedea370f20 Mon Sep 17 00:00:00 2001 From: Fabrizio Ferri Benedetti Date: Thu, 28 Aug 2025 08:38:24 +0200 Subject: [PATCH 7/7] Add link --- .../logs/stream-any-log-file-using-edot-collector.md | 1 + solutions/observability/logs/stream-any-log-file.md | 1 + 2 files changed, 2 insertions(+) diff --git a/solutions/observability/logs/stream-any-log-file-using-edot-collector.md b/solutions/observability/logs/stream-any-log-file-using-edot-collector.md index 83fec71534..c2b05ac162 100644 --- a/solutions/observability/logs/stream-any-log-file-using-edot-collector.md +++ b/solutions/observability/logs/stream-any-log-file-using-edot-collector.md @@ -362,6 +362,7 @@ If you're still running into issues, see [EDOT Collector troubleshooting](/troub After you have your EDOT Collector configured and are streaming log data to {{es}}: +* Refer to the [Explore log data](/solutions/observability/logs/discover-logs.md) documentation for information on exploring your log data in the UI, including searching and filtering your log data, getting information about the structure of log fields, and displaying your findings in a visualization. * Refer to the [Parse and organize logs](/solutions/observability/logs/parse-route-logs.md) documentation for information on extracting structured fields from your log data, rerouting your logs to different data streams, and filtering and aggregating your log data. * Refer to the [Filter and aggregate logs](/solutions/observability/logs/filter-aggregate-logs.md) documentation for information on filtering and aggregating your log data to find specific information, gain insight, and monitor your systems more efficiently. 
* To collect telemetry from applications and use the EDOT Collector as a gateway, instrument your target applications following the setup instructions: diff --git a/solutions/observability/logs/stream-any-log-file.md b/solutions/observability/logs/stream-any-log-file.md index 761712e8ce..fca1f02358 100644 --- a/solutions/observability/logs/stream-any-log-file.md +++ b/solutions/observability/logs/stream-any-log-file.md @@ -427,5 +427,6 @@ If you're still running into issues, see [{{agent}} troubleshooting](/troublesho After you have your agent configured and are streaming log data to {{es}}: +* Refer to the [Explore log data](/solutions/observability/logs/discover-logs.md) documentation for information on exploring your log data in the UI, including searching and filtering your log data, getting information about the structure of log fields, and displaying your findings in a visualization. * Refer to the [Parse and organize logs](/solutions/observability/logs/parse-route-logs.md) documentation for information on extracting structured fields from your log data, rerouting your logs to different data streams, and filtering and aggregating your log data. * Refer to the [Filter and aggregate logs](/solutions/observability/logs/filter-aggregate-logs.md) documentation for information on filtering and aggregating your log data to find specific information, gain insight, and monitor your systems more efficiently. \ No newline at end of file