Merge pull request #6588 from newrelic/rhs-re-add-otel-jvm-info
Re-add JVM section to OTEL docs
rhetoric101 committed Mar 31, 2022
2 parents 5a7121f + aaff6f7 commit 391073c
Showing 6 changed files with 56 additions and 8 deletions.
@@ -8,6 +8,7 @@ metaDescription: Use our New Relic best-practices guide to optimize your OpenTel
redirects:
- /docs/integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-logs
- /docs/integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-concepts
- /docs/more-integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-concepts
---

Here are some best practices based on how OpenTelemetry works with New Relic:
@@ -13,6 +13,7 @@ redirects:
- /docs/integrations/open-source-telemetry-integrations/opentelemetry/introduction-opentelemetry
- /docs/integrations/open-source-telemetry-integrations/opencensus/opencensus-exporter
- /docs/integrations/open-source-telemetry-integrations/open-source-telemetry-integration-list/new-relics-opencensus-integration
- /docs/more-integrations/open-source-telemetry-integrations/opentelemetry/introduction-opentelemetry-new-relic
---

OpenTelemetry is a toolkit you can use to gather telemetry data from your applications and to export that data to New Relic. Once the data is in New Relic, you can use the New Relic platform to analyze the data and resolve application issues.
@@ -7,6 +7,7 @@ tags:
metaDescription: The New Relic UI offers a lot of options for filtering and viewing data from OpenTelemetry.
redirects:
- /docs/integrations/open-source-telemetry-integrations/opentelemetry/view-your-opentelemetry-data-new-relic
- /docs/more-integrations/open-source-telemetry-integrations/opentelemetry/view-your-opentelemetry-data-new-relic
---

After you import OpenTelemetry data into New Relic, you can use a variety of tools to analyze it. Take a look at these UI options:
@@ -180,7 +181,7 @@ If you have span events, links for these appear in the right pane:
3. When you're in span events and only want to view exceptions, slide the toggle **Only show exceptions**.

![Screenshot showing span events and how you can filter just for exceptions](./images/span-events-exceptions.png "Screenshot showing span events and how you can filter just for exceptions")

<Callout variant="tip">
OpenTelemetry exceptions handled by the app or service are displayed independently of span error status and aren't necessarily associated with one.
</Callout>
@@ -221,7 +222,7 @@ For your data to appear in this section, make sure it has the following:
* `span.kind = client` or `producer`
* `db.system`
* Facets by `db.system`

</td>
</tr>
<tr>
@@ -262,12 +263,57 @@ For more details, see [External services](/docs/apm/apm-ui-pages/monitoring/exte

### JVMs [#jvms]

When you drill into a specific JVM, the UI displays charts driven by JVM metric data. The Java-specific runtime metrics are not well documented. The [implementation](https://github.com/open-telemetry/opentelemetry-java-instrumentation/tree/main/instrumentation/runtime-metrics/library/src/main/java/io/opentelemetry/instrumentation/runtimemetrics) is effectively the documentation and may be subject to change.

For your data to appear in this section, make sure it has the following:

* A unique `service.instance.id` attribute for rendering the list of JVMs
* An OpenTelemetry resource attribute `service.instance.id`
The JVMs page for services instrumented with OpenTelemetry helps you identify which service instances have unusual or unhealthy performance patterns. You can choose several service instances to compare based on summaries of key metrics: response time, throughput, error rate, garbage collection time, and memory usage. Then you can compare those instances' JVM metrics, collected by OpenTelemetry instrumentation, on time series charts to spot problems.

Here's a typical workflow:

1. Click **JVMs**.
2. Find interesting JVMs using the table of summarized health metrics:
* Use the filter bar to narrow down your search.
* Sort to find outliers.
3. Select those interesting JVMs.
4. Click **Compare** to see a display of the health and runtime metrics faceted by JVM.

Review these additional topics about using the JVMs page:

<CollapserGroup>
<Collapser
className="freq-link"
id="metric-details"
title="Runtime metric details"
>
When you drill into a specific JVM, the UI displays charts driven by JVM metric data. The Java-specific runtime metrics are not well documented. The [implementation](https://github.com/open-telemetry/opentelemetry-java-instrumentation/tree/main/instrumentation/runtime-metrics/library/src/main/java/io/opentelemetry/instrumentation/runtimemetrics) is effectively the documentation and may be subject to change.
</Collapser>
<Collapser
className="freq-link"
id="config-steps"
title="How to ensure your data appears"
>
For your data to appear in this section, make sure it has the following (one way to set this is sketched below):
* A unique `service.instance.id` attribute, used to render the list of JVMs.
* The `service.instance.id` set as an OpenTelemetry resource attribute.
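
As an illustration only, here's a minimal sketch of setting `service.instance.id` as a resource attribute when you configure the SDK yourself; with the OpenTelemetry Java agent you can instead set it through the standard `OTEL_RESOURCE_ATTRIBUTES` environment variable (for example, `OTEL_RESOURCE_ATTRIBUTES=service.instance.id=<some-unique-id>`). The class name and service name below are hypothetical:

```java
import java.util.UUID;

import io.opentelemetry.sdk.resources.Resource;

public final class ResourceFactory {

  // Hypothetical helper: builds a resource with a unique service.instance.id,
  // which the JVMs page uses to render the list of instances.
  public static Resource create() {
    return Resource.getDefault()
        .merge(
            Resource.builder()
                .put("service.name", "my-service") // hypothetical service name
                .put("service.instance.id", UUID.randomUUID().toString())
                .build());
  }
}
```

Pass the resulting resource to your provider builders (typically via `setResource(...)`) so every exported signal carries the same instance id.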
</Collapser>
<Collapser
className="freq-link"
id="jvms-and-metric-types"
title="Gauges versus counters"
>
Starting in OpenTelemetry Java agent 1.10.0, JVM memory usage switched from being collected as an async gauge to an async up down counter. This has implications for the exported data, because gauges and counters export differently:

* Async gauges export as OTLP gauges.
* Async up down counters export as OTLP non-monotonic sums.


If you configure your SDK to export metrics using delta aggregation temporality (which is required for counter and histogram instruments to work with New Relic), async up down counters are exported as non-monotonic delta sums. New Relic can't perform any useful analysis of non-monotonic delta sum data.

The workaround for now (until a better approach is sorted out in the OpenTelemetry metric specification) is to use the View API to indicate that async up down counters should be aggregated using [last value aggregation](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/sdk.md#last-value-aggregation) instead of the default sum aggregation. This results in JVM memory usage being exported as gauge data, which is required for a useful experience in New Relic.
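
Here's a minimal sketch, assuming a recent OpenTelemetry Java SDK (package locations and builder method names have moved around across 1.x releases), of registering such a view when you build the `SdkMeterProvider`:

```java
import io.opentelemetry.sdk.metrics.Aggregation;
import io.opentelemetry.sdk.metrics.InstrumentSelector;
import io.opentelemetry.sdk.metrics.InstrumentType;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.metrics.View;

public final class MeterProviderFactory {

  // Hypothetical helper: aggregates async (observable) up down counters with
  // last value instead of the default sum, so JVM memory usage reaches
  // New Relic as gauge data even when delta temporality is configured.
  public static SdkMeterProvider create() {
    return SdkMeterProvider.builder()
        .registerView(
            InstrumentSelector.builder()
                .setType(InstrumentType.OBSERVABLE_UP_DOWN_COUNTER) // setInstrumentType(...) on older SDKs
                .build(),
            View.builder()
                .setAggregation(Aggregation.lastValue())
                .build())
        // Register your delta-temporality OTLP metric reader here as usual.
        .build();
  }
}
```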

The way you configure the View API varies based on whether you're using the OpenTelemetry Java agent:

* If you're not using the OpenTelemetry Java agent, review this simple [example](https://github.com/newrelic/newrelic-opentelemetry-examples/pull/89/files#diff-da355ef6d1092534a55829e95160ab8468884bdd521f9018feeaaa66aea6ac5bR82-R86) that shows how to register a view when configuring `SdkMeterProvider`.
* If you're using the OpenTelemetry Java agent, you need to configure the View API in an [extension](https://github.com/open-telemetry/opentelemetry-java-instrumentation/blob/main/docs/agent-config.md#extensions). Extensions allow you to hook into the SDK configuration (among other things) and apply programmatic configuration that isn't available via environment variables or system properties. This [example](https://github.com/newrelic/newrelic-opentelemetry-examples/tree/main/java/agent-nr-config) demonstrates how you can use an extension to [customize](https://github.com/newrelic/newrelic-opentelemetry-examples/blob/main/java/agent-nr-config/config-extension/src/main/java/com/newrelic/otel/extension/Customizer.java#L28-L37) the `SdkMeterProvider`'s views.
</Collapser>
</CollapserGroup>

### Logs [#logs]
