@@ -59,7 +59,7 @@ Set up your pipeline and its sources, processors, and destinations in the Obser
If you want to add another group of processors for a destination:
1. Click the plus sign (**+**) at the bottom of the existing processor group.
1. Click the name of the processor group to update it.
1. Optionally, enter a group filter. See [Filter Syntax][17] for more information.
1. Optionally, enter a group filter. See [Search Syntax][17] for more information.
1. Click **Add** to add processors to the group.
1. If you want to copy all processors in a group and paste them into the same processor group or a different group:
1. Click the three dots on the processor group.
@@ -125,7 +125,7 @@ After you have set up your pipeline, see [Update Existing Pipelines][11] if you
[14]: /monitors/types/metric/
[15]: /observability_pipelines/guide/environment_variables/
[16]: /observability_pipelines/configuration/install_the_worker/advanced_worker_configurations/#bootstrap-options
[17]: /observability_pipelines/processors/#filter-query-syntax
[17]: /observability_pipelines/search_syntax/

{{% /tab %}}
{{% tab "API" %}}
62 changes: 62 additions & 0 deletions content/en/observability_pipelines/destinations/_index.md
@@ -13,6 +13,68 @@ Use the Observability Pipelines Worker to send your processed logs and metrics (

Select a destination in the left navigation menu to see more information about it.

## Destinations

These are the available destinations:

{{< tabs >}}
{{% tab "Logs" %}}

- [Amazon OpenSearch][1]
- [Amazon S3][2]
- [Amazon Security Lake][3]
- [Azure Storage][4]
- [Datadog CloudPrem][5]
- [CrowdStrike Next-Gen SIEM][6]
- [Datadog Logs][7]
- [Elasticsearch][8]
- [Google Chronicle][9]
- [Google Cloud Storage][10]
- [Google Pub/Sub][11]
- [HTTP Client][12]
- [Kafka][13]
- [Microsoft Sentinel][14]
- [New Relic][15]
- [OpenSearch][16]
- [SentinelOne][17]
- [Socket][18]
- [Splunk HTTP Event Collector (HEC)][19]
- [Sumo Logic Hosted Collector][20]
- [Syslog][21]

[1]: /observability_pipelines/destinations/amazon_opensearch/
[2]: /observability_pipelines/destinations/amazon_s3/
[3]: /observability_pipelines/destinations/amazon_security_lake/
[4]: /observability_pipelines/destinations/azure_storage/
[5]: /observability_pipelines/destinations/cloudprem/
[6]: /observability_pipelines/destinations/crowdstrike_ng_siem/
[7]: /observability_pipelines/destinations/datadog_logs/
[8]: /observability_pipelines/destinations/elasticsearch/
[9]: /observability_pipelines/destinations/google_chronicle/
[10]: /observability_pipelines/destinations/google_cloud_storage/
[11]: /observability_pipelines/destinations/google_pubsub/
[12]: /observability_pipelines/destinations/http_client/
[13]: /observability_pipelines/destinations/kafka/
[14]: /observability_pipelines/destinations/microsoft_sentinel/
[15]: /observability_pipelines/destinations/new_relic/
[16]: /observability_pipelines/destinations/opensearch/
[17]: /observability_pipelines/destinations/sentinelone/
[18]: /observability_pipelines/destinations/socket/
[19]: /observability_pipelines/destinations/splunk_hec/
[20]: /observability_pipelines/destinations/sumo_logic_hosted_collector/
[21]: /observability_pipelines/destinations/syslog/

{{% /tab %}}

{{% tab "Metrics" %}}

- [Datadog Metrics][1]

[1]: /observability_pipelines/destinations/datadog_metrics/

{{% /tab %}}
{{< /tabs >}}

## Template syntax

Logs are often stored in separate indexes based on log data, such as the service or environment the logs are coming from or another log attribute. In Observability Pipelines, you can use template syntax to route your logs to different indexes based on specific log fields.
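
For example, a destination's index field can reference a log attribute with double curly braces. In this sketch, `service` is a hypothetical log attribute:

```
logs-{{service}}
```

A log event with `service: checkout` is then routed to the `logs-checkout` index.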
@@ -0,0 +1,49 @@
---
title: Datadog Metrics
description: Learn how to set up the Datadog Metrics destination.
disable_toc: false
---

Use Observability Pipelines' Datadog Metrics destination to send metrics to Datadog. You can also use [AWS PrivateLink](#aws-privatelink) to send metrics from Observability Pipelines to Datadog.

## Setup

Set up the Datadog Metrics destination and its environment variables when you [set up a pipeline][1]. The information below is configured in the pipelines UI.

{{< img src="observability_pipelines/destinations/datadog_metrics_settings.png" alt="The Datadog Metrics destination settings" style="width:40%;" >}}

### Set up the destination

Optionally, toggle the switch to enable Buffering Options.
**Note**: Buffering Options is in Preview. Contact your account manager to request access.

- If left disabled, the maximum size for buffering is 500 events.
- If enabled:
- Select the buffer type you want to set (Memory or Disk).
- Enter the buffer size and select the unit.

### Set the environment variables

No environment variables are required.

## How the destination works

A batch of events is flushed when one of these parameters is met. For example, if the Worker receives fewer than 100,000 events within two seconds, the batch is still flushed when the two-second timeout elapses. See [event batching][2] for more information.

| Max Events | Max Bytes | Timeout (seconds) |
|----------------|-----------------|---------------------|
| 100,000 | None | 2 |

## AWS PrivateLink

To send metrics from Observability Pipelines to Datadog using AWS PrivateLink, see [Connect to Datadog over AWS PrivateLink][3] for setup instructions. The two endpoints you need to set up are:

- Metrics: {{< region-param key=metrics_endpoint_private_link code="true" >}}
- Remote Configuration: {{< region-param key=remote_config_endpoint_private_link code="true" >}}

**Note**: The `obpipeline-intake.datadoghq.com` endpoint is used for Live Capture and is not available as a PrivateLink endpoint.

[1]: https://app.datadoghq.com/observability-pipelines
[2]: https://docs.datadoghq.com/observability_pipelines/destinations/#event-batching
[3]: https://docs.datadoghq.com/agent/guide/private-link/?tab=crossregionprivatelinkendpoints
61 changes: 58 additions & 3 deletions content/en/observability_pipelines/processors/_index.md
@@ -32,10 +32,65 @@ Processor groups and the processors within each group are executed from top to b

**Note**: There is a limit of 10 processor groups for a pipeline canvas. For example, if you have a dual ship pipeline, where there are two destinations and each destination has its own set of processor groups, the combined number of processor groups from both sets is limited to 10.

{{% observability_pipelines/processors/filter_syntax %}}
## Processors

[1]: https://app.datadoghq.com/observability-pipelines
These are the available processors:

{{< tabs >}}
{{% tab "Logs" %}}

- [Add Environment Variables Processor][1]
- [Add Hostname Processor][2]
- [Custom Processor][3]
- [Deduplicate Processor][4]
- [Edit Fields Processor][5]
- [Enrichment Table Processor][6]
- [Filter Processor][7]
- [Generate Metrics Processor][8]
- [Grok Parser Processor][9]
- [Parse JSON Processor][10]
- [Parse XML Processor][11]
- [Quota Processor][12]
- [Reduce Processor][13]
- [Remap to OCSF Processor][14]
- [Sample Processor][15]
- [Sensitive Data Scanner Processor][16]
- [Split Array][17]
- [Tags][18]
- [Throttle][19]

[1]: /observability_pipelines/processors/add_environment_variables/
[2]: /observability_pipelines/processors/add_hostname/
[3]: /observability_pipelines/processors/custom_processor/
[4]: /observability_pipelines/processors/dedupe/
[5]: /observability_pipelines/processors/edit_fields/
[6]: /observability_pipelines/processors/enrichment_table/
[7]: /observability_pipelines/processors/filter/
[8]: /observability_pipelines/processors/generate_metrics/
[9]: /observability_pipelines/processors/grok_parser/
[10]: /observability_pipelines/processors/parse_json/
[11]: /observability_pipelines/processors/parse_xml/
[12]: /observability_pipelines/processors/quota/
[13]: /observability_pipelines/processors/reduce/
[14]: /observability_pipelines/processors/remap_ocsf/
[15]: /observability_pipelines/processors/sample/
[16]: /observability_pipelines/processors/sensitive_data_scanner/
[17]: /observability_pipelines/processors/split_array/
[18]: /observability_pipelines/processors/tags/
[19]: /observability_pipelines/processors/throttle/

{{% /tab %}}
{{% tab "Metrics" %}}

- [Filter][1]
- [Tag Control][2]

[1]: /observability_pipelines/processors/filter/
[2]: /observability_pipelines/processors/tag_control/

{{% /tab %}}
{{< /tabs >}}

## Further Reading

{{< partial name="whats-next/whats-next.html" >}}
{{< partial name="whats-next/whats-next.html" >}}
@@ -0,0 +1,5 @@
---
title: Tag Control
description: Learn how to use the Tag Control processor for metrics.
disable_toc: false
---
57 changes: 57 additions & 0 deletions content/en/observability_pipelines/sources/_index.md
@@ -19,6 +19,63 @@ Use Observability Pipelines' sources to receive logs or metrics ({{< tooltip glo

Select a source in the left navigation menu to see more information about it.

## Sources

These are the available sources:

{{< tabs >}}
{{% tab "Logs" %}}

- [Amazon Data Firehose][1]
- [Amazon S3][2]
- [Azure Event Hubs][3]
- [Datadog Agent][4]
- [Filebeat][5]
- [Fluentd and Fluent Bit][6]
- [Google Pub/Sub][7]
- [HTTP Client][8]
- [HTTP Server][9]
- [Kafka][10]
- [Lambda Extension][11]
- [Lambda Forwarder][12]
- [Logstash][13]
- [OpenTelemetry][14]
- [Socket][15]
- [Splunk HTTP Event Collector (HEC)][16]
- [Splunk Heavy or Universal Forwarders (TCP)][17]
- [Sumo Logic Hosted Collector][18]
- [Syslog][19]

[1]: /observability_pipelines/sources/amazon_data_firehose/
[2]: /observability_pipelines/sources/amazon_s3/
[3]: /observability_pipelines/sources/azure_event_hubs/
[4]: /observability_pipelines/sources/datadog_agent/
[5]: /observability_pipelines/sources/filebeat/
[6]: /observability_pipelines/sources/fluent/
[7]: /observability_pipelines/sources/google_pubsub/
[8]: /observability_pipelines/sources/http_client/
[9]: /observability_pipelines/sources/http_server/
[10]: /observability_pipelines/sources/kafka/
[11]: /observability_pipelines/sources/lambda_extension/
[12]: /observability_pipelines/sources/lambda_forwarder/
[13]: /observability_pipelines/sources/logstash/
[14]: /observability_pipelines/sources/opentelemetry/
[15]: /observability_pipelines/sources/socket/
[16]: /observability_pipelines/sources/splunk_hec/
[17]: /observability_pipelines/sources/splunk_tcp/
[18]: /observability_pipelines/sources/sumo_logic/
[19]: /observability_pipelines/sources/syslog/

{{% /tab %}}
{{% tab "Metrics" %}}

- [Datadog Agent][1]

[1]: /observability_pipelines/sources/datadog_agent/

{{% /tab %}}
{{< /tabs >}}

## Standard metadata fields

All sources add the following standard metadata fields to ingested events:
83 changes: 76 additions & 7 deletions content/en/observability_pipelines/sources/datadog_agent.md
@@ -27,20 +27,90 @@ Use Observability Pipelines' Datadog Agent source to receive logs from the Datad

## Connect the Datadog Agent to the Observability Pipelines Worker

{{< tabs >}}
{{% tab "Logs" %}}

Use the Agent configuration file or the Agent Helm chart values file to connect the Datadog Agent to the Observability Pipelines Worker.

**Note**: If your Agent is running in a Docker container, you must exclude Observability Pipelines logs using the `DD_CONTAINER_EXCLUDE_LOGS` environment variable. For Helm, use `datadog.containerExcludeLogs`. This prevents duplicate logs, as the Worker also sends its own logs directly to Datadog. See [Docker Log Collection][2] or [Setting environment variables for Helm][3] for more information.
**Note**: If your Agent is running in a Docker container, you must exclude Observability Pipelines logs using the `DD_CONTAINER_EXCLUDE_LOGS` environment variable. For Helm, use `datadog.containerExcludeLogs`. This prevents duplicate logs, as the Worker also sends its own logs directly to Datadog. See [Docker Log Collection][1] or [Setting environment variables for Helm][2] for more information.

{{< tabs >}}
{{% tab "Agent configuration file" %}}
{{% collapse-content title="Agent configuration file" level="h4" expanded=false id="id-for-anchoring" %}}

{{% observability_pipelines/log_source_configuration/datadog_agent %}}

{{% /tab %}}
{{% tab "Agent Helm values file" %}}
{{% /collapse-content %}}

{{% collapse-content title="Agent Helm value file" level="h4" expanded=false id="id-for-anchoring" %}}

{{% observability_pipelines/log_source_configuration/datadog_agent_kubernetes %}}

{{% /collapse-content %}}
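
The shortcodes above contain the canonical instructions. As a rough sketch, the resulting Agent configuration file entry looks like the following, assuming the Worker listens for logs on port 8282:

```
observability_pipelines_worker:
  logs:
    enabled: true
    url: "http://<OPW_HOST>:8282"
```

For Helm, the equivalent environment variables would be `DD_OBSERVABILITY_PIPELINES_WORKER_LOGS_ENABLED` and `DD_OBSERVABILITY_PIPELINES_WORKER_LOGS_URL`, assuming the Agent's standard mapping from configuration keys to environment variables.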

[1]: /containers/docker/log/?tab=containerinstallation#linux
[2]: /containers/guide/container-discovery-management/?tab=helm#setting-environment-variables

{{% /tab %}}

{{% tab "Metrics" %}}

Use the Agent configuration file or the Agent Helm chart values file to connect the Datadog Agent to the Observability Pipelines Worker.

**Note**: If your Agent is running in a Docker container, you must exclude Observability Pipelines metrics (such as the utilization and events in/out metrics) using the `DD_CONTAINER_EXCLUDE_METRICS` environment variable. For Helm, use `datadog.containerExcludeMetrics`. This prevents duplicate metrics, as the Worker also sends its own metrics directly to Datadog. See [Docker Log Collection][1] or [Setting environment variables for Helm][2] for more information.

{{% collapse-content title="Agent configuration file" level="h4" expanded=false id="id-for-anchoring" %}}

To send Datadog Agent metrics to the Observability Pipelines Worker, update your [Agent configuration file][1] with the following:

```
observability_pipelines_worker:
  metrics:
    enabled: true
    url: "http://<OPW_HOST>:8383"
```

`<OPW_HOST>` is the host IP address or the load balancer URL associated with the Observability Pipelines Worker.
- For CloudFormation installs, use the `LoadBalancerDNS` CloudFormation output for the URL.
- For Kubernetes installs, you can use the internal DNS record of the Observability Pipelines Worker service. For example: `http://opw-observability-pipelines-worker.default.svc.cluster.local:<PORT>`.

**Note**: If the Worker is listening for logs on port 8282, you must use another port for metrics, such as 8383.

After you restart the Agent, your metrics are sent to the Worker, processed by the pipeline, and delivered to Datadog.
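
For example, on a Linux host managed by systemd, you can restart the Agent with:

```
sudo systemctl restart datadog-agent
```

The exact restart command varies by platform and installation method.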

[1]: https://github.com/DataDog/datadog-agent/blob/main/pkg/config/config_template.yaml

{{% /collapse-content %}}

{{% collapse-content title="Agent Helm values file" level="h4" expanded=false id="id-for-anchoring" %}}

To send Datadog Agent metrics to the Observability Pipelines Worker, update your Datadog Helm chart [datadog-values.yaml][1] with the following environment variables. See [Agent Environment Variables][2] for more information.

```
datadog:
  env:
    - name: DD_OBSERVABILITY_PIPELINES_WORKER_METRICS_ENABLED
      value: "true"
    - name: DD_OBSERVABILITY_PIPELINES_WORKER_METRICS_URL
      value: "http://<OPW_HOST>:8383"
```

`<OPW_HOST>` is the host IP address or the load balancer URL associated with the Observability Pipelines Worker.

For Kubernetes installs, you can use the internal DNS record of the Observability Pipelines Worker service. For example: `http://opw-observability-pipelines-worker.default.svc.cluster.local:<PORT>`.

**Note**: If the Worker is listening for logs on port 8282, you must use another port for metrics, such as 8383.
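
After updating the values file, apply the change by upgrading your Helm release. For example, assuming a release named `datadog`:

```
helm upgrade -f datadog-values.yaml datadog datadog/datadog
```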

[1]: https://github.com/DataDog/helm-charts/blob/main/charts/datadog/values.yaml
[2]: https://docs.datadoghq.com/agent/guide/environment-variables/

{{% /collapse-content %}}

[1]: /containers/docker/log/?tab=containerinstallation#linux
[2]: /containers/guide/container-discovery-management/?tab=helm#setting-environment-variables

{{% /tab %}}
{{< /tabs >}}

@@ -49,6 +119,5 @@ Use the Agent configuration file or the Agent Helm chart values file to connect
{{< partial name="whats-next/whats-next.html" >}}

[1]: /observability_pipelines/configuration/set_up_pipelines/
[2]: /containers/docker/log/?tab=containerinstallation#linux
[3]: /containers/guide/container-discovery-management/?tab=helm#setting-environment-variables

[4]: /observability_pipelines/sources/opentelemetry/#send-logs-from-the-datadog-distribution-of-opentelemetry-collector-to-observability-pipelines