
OpenTelemetry span names have changed after changing tracing_provider #3622

Open · 3 of 5 tasks
juananinca opened this issue Aug 30, 2023 · 2 comments
Labels
bug Something is not working.

Comments

@juananinca

Preflight checklist

Ory Network Project

No response

Describe the bug

Since I changed the tracing_provider from "datadog" to "otel", the traces received in Datadog have changed.
To make the switch I followed this guide: https://docs.datadoghq.com/opentelemetry/otel_collector_datadog_exporter/?tab=onahost.

The env vars used to set the tracing configuration are:

TRACING_SERVICE_NAME="hydra"
TRACING_PROVIDER="otel"
TRACING_PROVIDERS_OTLP_INSECURE=true
TRACING_PROVIDERS_OTLP_SAMPLING_SAMPLING_RATIO=0.4
TRACING_PROVIDERS_OTLP_SERVER_URL="my.otel.collector:4318"
OTEL_RESOURCE_ATTRIBUTES="service.name=hydra,deployment.environment=pre,service.version=2.1.2"  
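For reference, a minimal Docker Compose sketch of how these variables could be passed to the Hydra container; the image tag and collector hostname are assumptions, not taken from this report:

# Hypothetical Compose service; image tag and collector hostname are assumptions.
services:
  hydra:
    image: oryd/hydra:v2.1.2
    environment:
      TRACING_SERVICE_NAME: "hydra"
      TRACING_PROVIDER: "otel"
      TRACING_PROVIDERS_OTLP_INSECURE: "true"
      TRACING_PROVIDERS_OTLP_SAMPLING_SAMPLING_RATIO: "0.4"
      TRACING_PROVIDERS_OTLP_SERVER_URL: "my.otel.collector:4318"
      OTEL_RESOURCE_ATTRIBUTES: "service.name=hydra,deployment.environment=pre,service.version=2.1.2"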

The OTel Collector image used is "otel/opentelemetry-collector-contrib:0.81.0" and the config used is:

receivers:
  otlp:
    protocols:
      http:
      grpc:
  # The hostmetrics receiver is required to get correct infrastructure metrics in Datadog.
  hostmetrics:
    collection_interval: 10s
    scrapers:
      paging:
        metrics:
          system.paging.utilization:
            enabled: true
      cpu:
        metrics:
          system.cpu.utilization:
            enabled: true
      disk:
      filesystem:
        metrics:
          system.filesystem.utilization:
            enabled: true
      load:
      memory:
      network:
      processes:
  # The prometheus receiver scrapes metrics needed for the OpenTelemetry Collector Dashboard.
  prometheus:
    config:
      scrape_configs:
      - job_name: 'otelcol'
        scrape_interval: 10s
        static_configs:
        - targets: ['0.0.0.0:8888']

processors:
  batch:
    send_batch_max_size: 100
    send_batch_size: 10
    timeout: 10s

exporters:
  logging:
    verbosity: detailed
    sampling_initial: 5
    sampling_thereafter: 200
  datadog:
    api:
      site: datadoghq.eu
      key: MY_DATADOG_KEY

service:
  pipelines:
    metrics:
      receivers: [hostmetrics, prometheus, otlp]
      processors: [batch]
      exporters: [otlp/elastic]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/elastic]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/elastic]
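Note that the pipelines above reference an otlp/elastic exporter whose definition is not included in the snippet. For the Datadog path specifically, a sketch of how the traces pipeline would be wired to the datadog exporter defined above; this is illustrative only, not the exact configuration used:

# Illustrative only: routes traces to the datadog and logging exporters defined above.
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [datadog, logging]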

One of the changes I have noticed is that the span names have changed.
Before: (screenshot of the old span names)
After: (screenshot of the new span names)

The other change I've noticed is that the resource name of the requests has changed as well.
Before: (screenshot of the old resource names)
After: (screenshot of the new resource names)

I can't tell if both changes are related.

The previous Hydra version used was 1.11.7 and the new version is 2.1.2.

Reproducing the bug

1 - Set these env vars:

TRACING_SERVICE_NAME="hydra"
TRACING_PROVIDER="otel"
TRACING_PROVIDERS_OTLP_INSECURE=true
TRACING_PROVIDERS_OTLP_SAMPLING_SAMPLING_RATIO=0.4
TRACING_PROVIDERS_OTLP_SERVER_URL="my.otel.collector:4318"
OTEL_RESOURCE_ATTRIBUTES="service.name=hydra,deployment.environment=pre,service.version=2.1.2"  

2 - Run an OTel Collector (otel/opentelemetry-collector-contrib:0.81.0) with the same configuration shown above (a Compose sketch for this step follows below).

3 - Inspect the traces received in Datadog: the span names and resource names differ from those produced by Hydra 1.11.7 with the "datadog" provider.
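For step 2, a minimal Compose sketch of running the collector with that configuration; the config filename, mount path, and port mappings are assumptions:

# Hypothetical Compose service for the collector; filename, paths, and ports are assumptions.
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.81.0
    volumes:
      - ./otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml
    ports:
      - "4317:4317"   # OTLP gRPC
      - "4318:4318"   # OTLP HTTP (matches TRACING_PROVIDERS_OTLP_SERVER_URL above)
      - "8888:8888"   # collector's own metrics, scraped by the prometheus receiver in the config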

Relevant log output

No response

Relevant configuration

No response

Version

2.1.2

On which operating system are you observing this issue?

Linux

In which environment are you deploying?

Docker

Additional Context

No response

@juananinca added the bug label on Aug 30, 2023
@svanellewee

Hey @juananinca, did you get any updates on this outside of GitHub?

@juananinca (Author)

juananinca commented Oct 9, 2023

Hey @juananinca, did you get any updates on this outside of GitHub?

Hello @svanellewee, I created a ticket with Datadog support but they couldn't provide a solution.
A funny thing I noticed is that when using the "otlp/elastic" exporter to send to an ELK cluster, Elastic receives the span names without any problem.
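For reference, a sketch of what such an otlp/elastic exporter section could look like in the collector config; the APM Server endpoint and token are placeholders, not taken from this report:

# Hypothetical exporter definition; endpoint and token are placeholders.
exporters:
  otlp/elastic:
    endpoint: "my-apm-server:8200"
    headers:
      Authorization: "Bearer MY_ELASTIC_APM_SECRET_TOKEN"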
