diff --git a/content/en/observability_pipelines/sources/datadog_agent.md b/content/en/observability_pipelines/sources/datadog_agent.md
index d342b16628994..b1b49805d3264 100644
--- a/content/en/observability_pipelines/sources/datadog_agent.md
+++ b/content/en/observability_pipelines/sources/datadog_agent.md
@@ -5,6 +5,8 @@ disable_toc: false
 
 Use Observability Pipelines' Datadog Agent source to receive logs from the Datadog Agent. Select and set up this source when you [set up a pipeline][1].
 
+**Note**: If you are using the Datadog Distribution of OpenTelemetry (DDOT) Collector, you must [use the OpenTelemetry source to send logs to Observability Pipelines][4].
+
 ## Prerequisites
 
 {{% observability_pipelines/prerequisites/datadog_agent %}}
@@ -38,4 +40,5 @@ Use the Agent configuration file or the Agent Helm chart values file to connect
 
 [1]: /observability_pipelines/configuration/set_up_pipelines/
 [2]: /containers/docker/log/?tab=containerinstallation#linux
-[3]: /containers/guide/container-discovery-management/?tab=helm#setting-environment-variables
\ No newline at end of file
+[3]: /containers/guide/container-discovery-management/?tab=helm#setting-environment-variables
+[4]: /observability_pipelines/sources/opentelemetry/#send-logs-from-the-datadog-distribution-of-opentelemetry-collector-to-observability-pipelines
\ No newline at end of file
diff --git a/content/en/observability_pipelines/sources/opentelemetry.md b/content/en/observability_pipelines/sources/opentelemetry.md
index ea4a37ba542b3..d908fd8b1edf7 100644
--- a/content/en/observability_pipelines/sources/opentelemetry.md
+++ b/content/en/observability_pipelines/sources/opentelemetry.md
@@ -1,5 +1,5 @@
 ---
-title: OpenTelemetry
+title: OpenTelemetry Source
 disable_toc: false
 ---
 
@@ -7,6 +7,10 @@ disable_toc: false
 
 Use Observability Pipelines' OpenTelemetry (OTel) source to collect logs from your OTel Collector through HTTP or gRPC. Select and set up this source when you set up a pipeline. The information below is configured in the pipelines UI.
 
+**Notes**:
+- If you are using the Datadog Distribution of OpenTelemetry (DDOT) Collector, use the OpenTelemetry source to [send logs to Observability Pipelines](#send-logs-from-the-datadog-distribution-of-opentelemetry-collector-to-observability-pipelines).
+- If you are using the Splunk Distribution of the OpenTelemetry Collector, use the [Splunk HEC source][4] to send logs to Observability Pipelines.
+
 ### When to use this source
 
 Common scenarios when you might use this source:
@@ -70,9 +74,48 @@ The Worker exposes the gRPC endpoint on port 4318. This is an example of configu
 
 Based on these example configurations, these are values you enter for the following environment variables:
 
-- HTTP listener address: `worker:4317`
-- gRPC listener address: `worker:4318`
+- HTTP listener address: `worker:4318`
+- gRPC listener address: `worker:4317`
+
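+A minimal sketch of matching OpenTelemetry Collector exporter entries is shown below. It assumes the `worker` hostname from the example configurations above; replace it with the address of your Observability Pipelines Worker and keep only the protocol you plan to use.
+```yaml
+# Sketch only: exporters that match the listener addresses above.
+# Add the one you use to your existing `service.pipelines.logs` exporters list.
+exporters:
+  otlphttp:            # OTLP over HTTP, matching the HTTP listener address
+    endpoint: http://worker:4318
+  otlp:                # OTLP over gRPC, matching the gRPC listener address
+    endpoint: worker:4317
+    tls:
+      insecure: true
+```
+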
+## Send logs from the Datadog Distribution of OpenTelemetry Collector to Observability Pipelines
+
+To send logs from the Datadog Distribution of the OpenTelemetry (DDOT) Collector:
+1. Deploy the DDOT Collector using Helm. See [Install the DDOT Collector as a Kubernetes DaemonSet][5] for instructions.
+1. [Set up a pipeline][6] in Observability Pipelines using the [OpenTelemetry source](#set-up-the-source-in-the-pipeline-ui).
+    1. (Optional) Datadog recommends adding an [Edit Fields processor][7] to the pipeline that appends the field `op_otel_ddot:true`.
+    1. When you install the Worker, set the following OpenTelemetry source environment variables:
+        1. Set your HTTP listener address to `0.0.0.0:4318`.
+        1. Set your gRPC listener address to `0.0.0.0:4317`.
+1. After you install the Worker and deploy the pipeline, update the OpenTelemetry Collector's [`otel-config.yaml`][9] to include an exporter that sends logs to Observability Pipelines. For example:
+   ```yaml
+   exporters:
+     otlphttp:
+       endpoint: http://opw-observability-pipelines-worker.default.svc.cluster.local:4318
+   ...
+   service:
+     pipelines:
+       logs:
+         exporters: [otlphttp]
+   ```
+1. Redeploy the Datadog Agent with the updated [`otel-config.yaml`][9]. For example, if the Agent is installed in Kubernetes:
+   ```bash
+   helm upgrade --install datadog-agent datadog/datadog \
+     --values ./agent.yaml \
+     --set-file datadog.otelCollector.config=./otel-config.yaml
+   ```
+
+**Notes**:
+- Because DDOT sends logs directly to Observability Pipelines, and not through the Datadog Agent, the following settings do not apply when sending logs from DDOT to Observability Pipelines:
+  - `DD_OBSERVABILITY_PIPELINES_WORKER_LOGS_ENABLED`
+  - `DD_OBSERVABILITY_PIPELINES_WORKER_LOGS_URL`
+- Logs sent from DDOT might have nested objects that prevent Datadog from parsing the logs correctly. To resolve this, Datadog recommends using the [Custom Processor][8] to flatten the nested `resource` object, as shown in the sketch below.
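+
+The following is a minimal sketch of such a Custom Processor script. It assumes the processor accepts VRL-style remap expressions and that the nested attributes arrive under a top-level `resource` field; adjust the field name to match your events.
+```
+# Sketch only: collapse the nested `resource` object into dot-delimited keys
+# (for example, `resource.attributes.service.name`) so downstream parsing sees flat fields.
+if exists(.resource) {
+    .resource = flatten(object(.resource) ?? {})
+}
+```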
 
 [1]: https://opentelemetry.io/docs/collector/
 [2]: /observability_pipelines/sources/
-[3]: /observability_pipelines/configuration/install_the_worker/advanced_worker_configurations/#bootstrap-options
\ No newline at end of file
+[3]: /observability_pipelines/configuration/install_the_worker/advanced_worker_configurations/#bootstrap-options
+[4]: /observability_pipelines/sources/splunk_hec/#send-logs-from-the-splunk-distribution-of-the-opentelemetry-collector-to-observability-pipelines
+[5]: /opentelemetry/setup/ddot_collector/install/kubernetes_daemonset/?tab=datadogoperator
+[6]: /observability_pipelines/configuration/set_up_pipelines/
+[7]: /observability_pipelines/processors/edit_fields#add-field
+[8]: /observability_pipelines/processors/custom_processor
+[9]: https://docs.datadoghq.com/opentelemetry/setup/ddot_collector/install/kubernetes_daemonset/?tab=helm#configure-the-opentelemetry-collector
\ No newline at end of file
diff --git a/content/en/observability_pipelines/sources/splunk_hec.md b/content/en/observability_pipelines/sources/splunk_hec.md
index d14d93da17338..9ef32ba8b937b 100644
--- a/content/en/observability_pipelines/sources/splunk_hec.md
+++ b/content/en/observability_pipelines/sources/splunk_hec.md
@@ -5,6 +5,8 @@ disable_toc: false
 
 Use Observability Pipelines' Splunk HTTP Event Collector (HEC) source to receive logs from your Splunk HEC. Select and set up this source when you [set up a pipeline][1].
 
+**Note**: Use the Splunk HEC source if you want to [send logs from the Splunk Distribution of the OpenTelemetry Collector to Observability Pipelines](#send-logs-from-the-splunk-distribution-of-the-opentelemetry-collector-to-observability-pipelines).
+
 ## Prerequisites
 
 {{% observability_pipelines/prerequisites/splunk_hec %}}
@@ -21,4 +23,31 @@ Select and set up this source when you [set up a pipeline][1]. The information b
 
 {{% observability_pipelines/log_source_configuration/splunk_hec %}}
 
+## Send logs from the Splunk Distribution of the OpenTelemetry Collector to Observability Pipelines
+
+To send logs from the Splunk Distribution of the OpenTelemetry Collector:
+
+1. Install the Splunk OpenTelemetry Collector based on your environment:
+   - [Kubernetes][2]
+   - [Linux][3]
+1. [Set up a pipeline][4] using the [Splunk HEC source](#set-up-the-source-in-the-pipeline-ui).
+1. Configure the Splunk OpenTelemetry Collector to forward logs to the Worker. Copy the example configuration file, then set `SPLUNK_HEC_URL` to the Worker's address:
+   ```bash
+   cp /etc/otel/collector/splunk-otel-collector.conf.example /etc/otel/collector/splunk-otel-collector.conf
+   ```
+   ```bash
+   # Splunk HEC endpoint URL, if forwarding to Splunk Observability Cloud:
+   # SPLUNK_HEC_URL=https://ingest.us0.signalfx.com/v1/log
+   # To forward to the Observability Pipelines Worker's Splunk HEC source instead:
+   SPLUNK_HEC_URL=http://<OPW_HOST>:8088/services/collector
+   ```
+   - `<OPW_HOST>` is the IP address or URL of the host (or load balancer) associated with the Observability Pipelines Worker.
+   - For CloudFormation installs, the `LoadBalancerDNS` CloudFormation output has the correct URL to use.
+   - For Kubernetes installs, you can use the internal DNS record of the Observability Pipelines Worker service, for example: `opw-observability-pipelines-worker.default.svc.cluster.local`.
+
+**Note**: If you are using a firewall, make sure your firewall allows traffic from the Splunk OpenTelemetry Collector to the Worker.
+
 [1]: /observability_pipelines/configuration/set_up_pipelines/
+[2]: https://help.splunk.com/en/splunk-observability-cloud/manage-data/splunk-distribution-of-the-opentelemetry-collector/get-started-with-the-splunk-distribution-of-the-opentelemetry-collector/collector-for-kubernetes
+[3]: https://help.splunk.com/en/splunk-observability-cloud/manage-data/splunk-distribution-of-the-opentelemetry-collector/get-started-with-the-splunk-distribution-of-the-opentelemetry-collector/collector-for-linux
+[4]: /observability_pipelines/configuration/set_up_pipelines/
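+
+To confirm that the Collector host can reach the Worker before restarting the Collector, you can post a test event to the Worker's Splunk HEC source. This is a sketch only: `<OPW_HOST>` is the placeholder described above, `<HEC_TOKEN>` is a hypothetical token value, and whether an `Authorization` header is required depends on how the source is configured.
+```bash
+# Sketch: send a test event to the Worker's Splunk HEC source and print the HTTP status code.
+curl -s -o /dev/null -w "%{http_code}\n" \
+  -X POST "http://<OPW_HOST>:8088/services/collector/event" \
+  -H "Authorization: Splunk <HEC_TOKEN>" \
+  -d '{"event": "observability pipelines connectivity test"}'
+```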