chore(otel): style updates
ally-sassman committed May 6, 2024
1 parent 4ba65cd commit d451c5c
Showing 5 changed files with 39 additions and 33 deletions.
@@ -53,9 +53,9 @@ Check out this documentation about how to configure different types of sampling:
<Collapser
className="freq-link"
id="infinite-tracing"
title="New Relic tail-based sampling with Infinite Tracing"
title="New Relic tail-based sampling with infinite tracing"
>
Infinite Tracing is New Relic's tail-based sampling option. You can use this in conjunction with your OpenTelemetry instrumented services. In setting up Infinite Tracing, you need to configure applications (or the collector) to export trace data to the New Relic trace observer using OTLP gRPC:
Infinite tracing is New Relic's tail-based sampling option. You can use it in conjunction with your OpenTelemetry-instrumented services. When setting up infinite tracing, you need to configure applications (or the collector) to export trace data to the New Relic trace observer using OTLP gRPC:

1. Follow the steps in [Set up the trace observer](/docs/distributed-tracing/infinite-tracing/set-trace-observer/) to get the value for `YOUR_TRACE_OBSERVER_URL`.
2. As you complete the steps in the [quick start guide](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-quick-start/#review-settings), use the value of `YOUR_TRACE_OBSERVER_URL` to configure your integration. `YOUR_TRACE_OBSERVER_URL` follows the form `https://{trace-observer}:443/trace/v1`. When setting the OTLP gRPC endpoint, strip off the `/trace/v1` suffix, resulting in a URL of the form `https://{trace-observer}:443`.
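
For a concrete starting point, here's a minimal sketch of the exporter settings, assuming your services are configured through the standard OTLP exporter environment variables; `<YOUR_NEW_RELIC_LICENSE_KEY>` is a placeholder for your New Relic license key:

```
# Send trace data to the trace observer over OTLP gRPC
OTEL_EXPORTER_OTLP_PROTOCOL=grpc
OTEL_EXPORTER_OTLP_ENDPOINT=https://{trace-observer}:443
OTEL_EXPORTER_OTLP_HEADERS=api-key=<YOUR_NEW_RELIC_LICENSE_KEY>
```
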
@@ -158,4 +158,4 @@ NRQL query representation

The most important thing to notice with resource attributes is the potential difference in the size of the payload being sent compared to what is stored in NRDB. All resource attribute values will be applied to every span in the OTLP payload. The example above only shows a single span being sent, but if the payload contained 100 spans, each of them would have `process.command_line` and `service.name` applied to it.

For some Java based applications, the default `process.command_line` attribute can be thousands of characters long which may result in a significant and unexpected increase in billable bytes. If these resource attributes do not provide value they can be disabled by following the [OpenTelemetry and attribute lengths: Best Practices](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/best-practices/opentelemetry-otlp/#attribute-limits)
For some Java-based applications, the default `process.command_line` attribute can be thousands of characters long, which may result in a significant and unexpected increase in billable bytes. If these resource attributes do not provide value, they can be disabled by following the best practices described in [Attribute limits](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/best-practices/opentelemetry-otlp/#attribute-limits).
@@ -14,13 +14,14 @@ New Relic has supported [native OTLP ingest](/docs/more-integrations/open-source

Working through a support case can be time-consuming and at times frustrating for customers (and for New Relic!). Therefore, we've put together this troubleshooting guide to help establish a shared understanding and provide tools to self-diagnose and fix issues when possible.

First, please review the New Relic [OTLP configuration requirements / recommendations](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/best-practices/opentelemetry-otlp/). It contains essential advice and context that anyone looking to use OTLP with New Relic should be aware of.
First, please review [New Relic OTLP configuration requirements and recommendations](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/best-practices/opentelemetry-otlp/). It contains essential advice and context that anyone looking to use OTLP with New Relic should be aware of.

The [General Triaging](#general-triaging) section describes basic troubleshooting steps you should follow when you encounter some issue with OTLP.
Next, see the sections below:

The [Issues Catalog](#issue-catalog) lists a variety of different errors we've seen customers experience, with mitigation steps which often reference items from [OTLP configuration requirements / recommendations](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/best-practices/opentelemetry-otlp/#configuration).
* [General troubleshooting](#general-troubleshooting): Troubleshooting for general OTLP issues
* [Issue catalog](#issue-catalog): Common customer issues and mitigation steps

## General Triaging
## General troubleshooting [#general-troubleshooting]

When you encounter an issue with the New Relic OTLP endpoint, first follow these basic troubleshooting steps. If you end up opening a support ticket, these are the first things we ask:

@@ -29,12 +30,12 @@ When you encounter an issue with the New Relic OTLP endpoint, first follow these
3. **Check for [`NrIntegrationErrors`](/docs/telemetry-data-platform/manage-data/nrintegrationerror/).** New Relic OTLP ingest performs minimal validation synchronously before returning a success status code. If you don't see indications of export errors in your application logs, but also don't see data in New Relic, try querying for `NrIntegrationErrors` (see the example query after this list). There may be issues with your data that were detected during asynchronous validation.
4. **Determine if the problem is localized.** Often errors are localized to a specific application or environment. In these cases, it's useful to evaluate the differences between the areas which are problematic and properly functioning.
5. **Look for signs of an invalid API key.** The New Relic OTLP endpoint [requires setting an `api-key` header](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/best-practices/opentelemetry-otlp/#api-key). Invalid or missing API keys are a common issue that presents as HTTP 403 or 401 status codes, or gRPC `Unauthenticated` or `PermissionDenied` status codes. If you see these, check that your API key is valid and is being properly set.
6. **Check if the export succeeds after retry.** We [recommend that retry is enabled](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/best-practices/opentelemetry-otlp/#retry) is enabled, and expect export attempts to occasionally initially fail with transient errors but succeed after retrying. However, we do have an [SLA](/docs/licenses/license-information/referenced-policies/service-level-availability-commitment/) - if you suspect that transient errors are frequent enough that they exceed our SLA, please open a support case.
6. **Check if the export succeeds after retry.** We [recommend that retry is enabled](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/best-practices/opentelemetry-otlp/#retry), and expect export attempts to occasionally initially fail with transient errors but succeed after retrying. However, we do have an [SLA](/docs/licenses/license-information/referenced-policies/service-level-availability-commitment/). If you suspect that transient errors are frequent enough that they exceed our SLA, please open a support case.
7. **Check for signs that transient errors are not being retried.** Despite our best efforts, there may be corner cases where the New Relic OTLP endpoint returns non-retriable status codes for transient errors. If you believe you've encountered this scenario, please open a support case.
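
For step 3, a simple NRQL query like the following (a sketch; adjust the time window and limit as needed) surfaces recent `NrIntegrationError` events:

```
SELECT * FROM NrIntegrationError SINCE 1 day ago LIMIT 100
```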

## Issue Catalog [#issue-catalog]
## Issue catalog [#issue-catalog]

The table below catalogs issues we've seen customers encounter with the New Relic OTLP endpoint. Many are straight forward to resolve with proper configuration. The "Fingerprint" column shows a typical log when an application encounters the particular class of issue. See the "Known Resolution" and "Notes" columns for mitigation steps.
The table below catalogs issues we've seen customers encounter with the New Relic OTLP endpoint. Many are straightforward to resolve with proper configuration. The **Fingerprint** column shows a typical log when an application encounters the particular class of issue. See the **Known resolution** and **Notes** columns for mitigation steps.

| OTLP Protocol Version | Type | Language / Ecosystem | Fingerprint | Known Resolution | Notes |
|---|---|---|---|---|---|
@@ -1,5 +1,5 @@
---
title: New Relic OTLP Endpoint
title: New Relic OTLP endpoint
tags:
- Integrations
- Open source telemetry integrations
@@ -19,12 +19,13 @@ redirects:

New Relic supports native OTLP ingest, and recommends it as the preferred method for sending OpenTelemetry data to the New Relic platform. This document delves into New Relic's OTLP support, including configuration requirements and recommendations.

## Before you begin [#prereqs]
## Before you begin [#before-you-begin]

* If you haven't already done so, sign up for a free [New Relic account](https://newrelic.com/signup).
* Get the [license key](https://one.newrelic.com/launcher/api-keys-ui.launcher) for the New Relic account to which you want to report data. This license key will be used to [configure the `api-key` header](#api-key).
* Review your OTLP version: New Relic uses [OTLP release v0.18.0](https://github.com/open-telemetry/opentelemetry-proto/releases/tag/v0.18.0). Later versions are supported, but new features are not yet implemented. Experimental features that are no longer available in 0.18.0 are not supported.

## Configure the endpoint, port and protocol [#endpoint-port-protocol]
## Config: OTLP endpoint, port, and protocol [#configure-endpoint-port-protocol]

Requirement level: **Required**

@@ -36,7 +37,7 @@ Additionally, you should configure your OTLP exporter to use the [OTLP/HTTP bina

The mechanism to configure the endpoint will vary, but OpenTelemetry language SDKs generally support setting the `OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf` environment variable (see [OpenTelemetry docs](https://opentelemetry.io/docs/specs/otel/protocol/exporter/) for more info).
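
For example, here's a minimal sketch using the standard environment variables, assuming the `otlp.nr-data.net` endpoint and the OTLP/HTTP port shown in the table below:

```
# Prefer OTLP/HTTP with binary protobuf encoding
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp.nr-data.net:4318
```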

If using the collector, prefer the [otlphttpexporter](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlphttpexporter).
If you're using a collector, we recommend using the [otlphttpexporter](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlphttpexporter).

<table>
<thead>
@@ -58,7 +59,7 @@ If using the collector, prefer the [otlphttpexporter](https://github.com/open-te
</th>

<th>
Supported Ports
Supported ports
</th>
</tr>
</thead>
@@ -135,7 +136,7 @@ If using the collector, prefer the [otlphttpexporter](https://github.com/open-te

<tr>
<td>
Infinite Tracing<br/>(See [best practices](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-concepts/#infinite-tracing) for endpoint details
Infinite tracing<br/>(See [best practices](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/best-practices/opentelemetry-best-practices-traces#infinite-tracing) for endpoint details)
</td>

<td>
@@ -170,33 +171,40 @@ If using the collector, prefer the [otlphttpexporter](https://github.com/open-te
id="note-endpoints"
title="Additional endpoint details"
>
Per the [OpenTelemetry spec](https://opentelemetry.io/docs/specs/otel/protocol/exporter/#endpoint-urls-for-otlphttp) on endpoint URLs for OTLP/HTTP, if you are sending OTLP/HTTP traffic and using the signal agnostic environment variable (`OTEL_EXPORTER_OTLP_ENDPOINT`), you can simply set `OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp.nr-data.net:{port}` and the exporter should append the appropriate path for the signal type (i.e., `v1/traces` or `v1/metrics`).

If you are using a signal-specific environment variable (i.e., `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` and/or `OTEL_EXPORTER_OTLP_METRICS_ENDPOINT`), you must include the appropriate path. For example, `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://otlp.nr-data.net:4318/v1/traces`. Not doing so will result in a 404. Note that this signal-specific environment variables take precedence over signal-agnostic environment variables.
Per the [OpenTelemetry doc](https://opentelemetry.io/docs/specs/otel/protocol/exporter/#endpoint-urls-for-otlphttp) on endpoint URLs for OTLP/HTTP, if you are sending OTLP/HTTP traffic and using the signal-agnostic environment variable (`OTEL_EXPORTER_OTLP_ENDPOINT`), you can simply set `OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp.nr-data.net:{port}` and the exporter should append the appropriate path for the signal type (such as `v1/traces` or `v1/metrics`).

If you are using a signal-specific environment variable (such as `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` and `OTEL_EXPORTER_OTLP_METRICS_ENDPOINT`), you must include the appropriate path. For example:

```
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://otlp.nr-data.net:4318/v1/traces
```

If the path doesn't follow the syntax above, you will receive a `404` error. Note that signal-specific environment variables take precedence over signal-agnostic environment variables.
</Collapser>
</CollapserGroup>

## Config: TLS [#tls]
## Config: TLS encryption [#tls]

Requirement level: **Required**

In order to send OTLP data to New Relic, you must configure your OTLP exporter to use TLS 1.2 (see [TLS encryption](docs/new-relic-solutions/get-started/networks/#tls) for more information). Generally, SDK and collector exporters meet this requirement by default.
In order to send OTLP data to New Relic, you must configure your OTLP exporter to use TLS 1.2 (see [TLS encryption](/docs/new-relic-solutions/get-started/networks/#tls) for more information). Generally, SDK and collector exporters meet this requirement by default.

While many OTLP exporters infer TLS settings from the `https` endpoint scheme, some gRPC exporters may require you to explicitly enable TLS. The mechanism to configure gRPC TLS will vary, but OpenTelemetry language SDKs generally support setting the `OTEL_EXPORTER_OTLP_INSECURE=false` environment variable (see [OpenTelemetry docs](https://opentelemetry.io/docs/specs/otel/protocol/exporter/) for more info).
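
For gRPC exporters that don't infer TLS from the `https` scheme, a minimal sketch of that setting:

```
# Explicitly disable insecure (plaintext) transport for gRPC exporters
OTEL_EXPORTER_OTLP_INSECURE=false
```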

### Config: Setting the API Key [#api-key]
## Config: Setting the API key [#api-key]

Requirement level: **Required**

In order to send OTLP data to New Relic, you must configure your OTLP exporter to include a header named `api-key` with the value set to your [license key](#before-you-begin). Failure to do so will result in authentication errors.

The mechanism to configure headers will vary, but OpenTelemetry language SDKs generally support setting the `OTEL_EXPORTER_OTLP_HEADERS=api-key=<INSERT_LICENSE_KEY>` environment variable (see [OpenTelemetry docs](https://opentelemetry.io/docs/specs/otel/protocol/exporter/) for more info).
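
For example, as an environment variable (the placeholder stands in for your license key):

```
OTEL_EXPORTER_OTLP_HEADERS=api-key=<INSERT_LICENSE_KEY>
```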

## Config: Attribute Limits [#attribute-limits]
## Config: Attribute limits [#attribute-limits]

Requirement level: **Required**

In order to send OTLP data to New Relic, you must configure your telemetry source to conform to New Relic attribute limits. Failure to do so may result in [`NrIntegrationErrors`](/docs/telemetry-data-platform/manage-data/nrintegrationerror/) during asynchronous data validation.
In order to send OTLP data to New Relic, you must configure your telemetry source to conform to New Relic attribute limits. Failure to do so may result in [`NrIntegrationError`](/docs/data-apis/manage-data/nrintegrationerror/) events during asynchronous data validation.

Attribute limits are as follows:

@@ -215,7 +223,7 @@ Notes:
- Resource attributes are subject to attribute limits, but there are no standard environment variables to limit them. If a resource attribute is over the allowed limit, consider truncating using the collector [transform processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/transformprocessor), or overwriting the resource attribute to another value.
- There is no standard mechanism to limit attribute names. However, instrumentation generally does not produce attribute names which exceed New Relic limits. If you encounter name length limits, remove the attribute using the collector.
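
For span, metric, and log attributes, the standard OpenTelemetry SDK environment variables below can enforce limits at the source. This is a sketch with placeholder values; substitute the limits listed above:

```
# Placeholder values -- replace with the New Relic limits listed above
OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT=<MAX_VALUE_LENGTH>
OTEL_ATTRIBUTE_COUNT_LIMIT=<MAX_ATTRIBUTE_COUNT>
```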

## Config: Payload Batching, Compression and Rate Limits [#payload]
## Config: Payload batching, compression, and rate limits [#payload-limits]

Requirement level: **Required**

@@ -249,7 +257,7 @@ The mechanism to configure retry will vary. Some OpenTelemetry SDKs may have lan

If using the collector, the `otlphttpexporter` and `otlpexporter` retry by default. See [exporterhelper](https://github.com/open-telemetry/opentelemetry-collector/blob/main/exporter/exporterhelper/README.md) for more details.

## Config: Metric Aggregation Temporality
## Config: Metric aggregation temporality [#metric-aggregation-temporality]

Requirement level: **Recommended**

@@ -262,22 +270,19 @@ The mechanism to configure the endpoint will vary, but OpenTelemetry language SD

Cumulative temporality is used for instruments which map to [New Relic gauge types](https://docs.newrelic.com/docs/data-apis/understand-data/metric-data/metric-data-type/), and which are generally analyzed using the cumulative value.
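
Assuming your SDK supports the standard temporality preference variable, a minimal sketch for selecting delta temporality looks like this:

```
# Prefer delta aggregation temporality when exporting metrics to New Relic
OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE=delta
```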

## Config: Metric Histogram Aggregation
## Config: Metric histogram aggregation [#metric-histogram-aggregation]

Requirement level: **Recommended**

In order to send OTLP metric data to New Relic, you should configure your OTLP metrics exporter to aggregate measurements from histogram instruments to [exponential histograms](https://opentelemetry.io/docs/specs/otel/metrics/data-model/#exponentialhistogram). In contrast to the static buckets used with the default explicit bucket histograms, exponential histograms automatically adjust their buckets to reflect the range of measurements recorded. Additionally, they use a highly compressed representation to send over the wire. Exponential histograms provide more useful distribution data in the New Relic platform.

The mechanism to configure the endpoint will vary, but OpenTelemetry language SDKs generally support setting the `OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION=base2_exponential_bucket_histogram` environment variable (see [OpenTelemetry docs](https://opentelemetry.io/docs/specs/otel/metrics/sdk_exporters/otlp/) for more info).
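
For example, as an environment variable:

```
OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION=base2_exponential_bucket_histogram
```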

## OTLP Protocol Version

New Relic uses [OTLP release v0.18.0](https://github.com/open-telemetry/opentelemetry-proto/releases/tag/v0.18.0). Later versions are supported but new features are not yet implemented. Experimental features which were are no longer available in 0.18.0 are not supported.

## OTLP Response Payloads
## OTLP response payloads [#payloads]

Please note the following details regarding New Relic OTLP endpoint response payloads:

* Successful responses from New Relic have no response body, instead of a [Protobuf-encoded response](https://github.com/open-telemetry/opentelemetry-specification/blob/v1.8.0/specification/protocol/otlp.md#success) based on the data type.
* New Relic responds after validation of authentication, payload size, and rate limiting. Validation of payload contents occurs asynchronously. Therefore, New Relic may return success status codes despite data ingestion ultimately failing and resulting in [`NrIntegrationErrors`](/docs/telemetry-data-platform/manage-data/nrintegrationerror/).
* New Relic responds after validation of authentication, payload size, and rate limiting. Validation of payload contents occurs asynchronously. Therefore, New Relic may return success status codes despite data ingestion ultimately failing and resulting in [`NrIntegrationError`](/docs/data-apis/manage-data/nrintegrationerror/) events.
* [Failure responses](https://github.com/open-telemetry/opentelemetry-specification/blob/v1.8.0/specification/protocol/otlp.md#failures) from New Relic do not include `Status.message` or `Status.details`.
