
Datadog traces end up in a Datadog account that doesn't match the api key used for the traces exporter #18233

Closed
hamidp555 opened this issue Feb 1, 2023 · 7 comments · Fixed by #18248
Labels
bug (Something isn't working) · data:traces (Trace related issues) · exporter/datadog (Datadog components) · priority:p1 (High)

Comments

hamidp555 commented Feb 1, 2023

Component(s)

exporter/datadog

What happened?

Description

I have a pipeline setup with three Datadog exporters: a logs exporter, a metrics exporter, and a traces exporter. The logs and metrics exporters use a different API key than the traces exporter. However, the traces sometimes end up in the Datadog account associated with the API key used for the logs and metrics exporters.

Steps to Reproduce

Run a collector with the example configuration provided in this bug report, produce some traces, and check where they end up. Restart the collector and check again. Repeat this a few times; sometimes the traces will end up in the Datadog account associated with the metrics/logs API key.

Expected Result

My expectation was that traces always end up in the Datadog account associated with the API key used for the traces exporter.

Actual Result

Traces sometimes end up in the Datadog account associated with the API key configured for the logs/metrics exporters (see Description).

Collector version

0.70.0

Environment information

Environment

OS: Ubuntu 20.04.5 LTS

OpenTelemetry Collector configuration

exporters:
  datadog/traces:
    api:
      key: <api_key_1>
      site: datadoghq.com
    sending_queue:
      enabled: false
  datadog/logs:
    api:
      key: <api_key_2>
      site: datadoghq.com
  datadog/metrics:
    api:
      key: <api_key_2>
      site: datadoghq.com

service:
  extensions:
  - health_check
  - memory_ballast
  pipelines:
    logs:
      exporters:
      - datadog/logs
      processors:
      - memory_limiter
      - batch
      receivers:
      - filelog
    metrics:
      exporters:
      - datadog/metrics
      processors:
      - memory_limiter
      - batch
      receivers:
      - prometheus
    traces:
      exporters:
      - datadog/traces
      processors:
      - memory_limiter
      receivers:
      - solace

Log output

No response

Additional context

No response

@hamidp555 hamidp555 added the bug (Something isn't working) and needs triage (New item requiring triage) labels Feb 1, 2023
@github-actions github-actions bot added the exporter/datadog (Datadog components) label Feb 1, 2023

github-actions bot commented Feb 1, 2023

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@hamidp555 hamidp555 changed the title from "Datadog" to "Datadog traces end up in a Datadog account that doesn't match the api key used for the traces exporter" Feb 1, 2023
@mx-psi mx-psi added the data:traces (Trace related issues) and priority:p1 (High) labels and removed the needs triage (New item requiring triage) label Feb 1, 2023

hamidp555 commented Feb 1, 2023

Just a note about the problem I reported: I am only seeing it in collector version 0.70.0; version 0.67.0 is fine. I am not sure whether this is a Datadog exporter issue or an issue with the collector itself failing to pick the right Datadog exporter in the pipeline.

boostchicken commented:

So this is probably related to fe8cc1a.

The config is a gigantic mess. When it creates the trace exporter, it does string(cfg.API.Key), where cfg.API.Key is a configopaque.String. Overall, the API key is mishandled and used in ways that are not clear at all.

These configs NEED to be vastly simplified. I see three different structs that could be a config, plus their child structs, and this also makes the imports insane.

While this is a stab in the dark, that PR is really the only recent change, and it's related to the key. Take a look at the code and try to navigate it or make sense of it. It's obviously time for a refactor!
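
To make the pattern concrete, here is a minimal sketch of an opaque API key being converted back to a plain string when an exporter is built. The struct and field names are assumed for illustration and are not the exporter's actual code.

package main

import (
	"fmt"

	"go.opentelemetry.io/collector/config/configopaque"
)

// APIConfig is a hypothetical stand-in for the exporter's API settings.
type APIConfig struct {
	Key  configopaque.String `mapstructure:"key"`
	Site string              `mapstructure:"site"`
}

func main() {
	cfg := APIConfig{Key: "api_key_1", Site: "datadoghq.com"}
	// configopaque.String only changes how the value is marshaled (it gets
	// redacted); string(...) recovers the raw key for use at runtime.
	fmt.Println(string(cfg.Key))
}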

boostchicken commented:

Check out https://github.com/open-telemetry/opentelemetry-collector/blob/v0.70.0/config/configopaque/opaque.go

It doesn't even store the string differently; it just hijacks a marshaling function.
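
Paraphrasing the linked file: the type is essentially a plain string whose text marshaling is overridden so the value gets redacted when the config is serialized. A rough sketch (not a verbatim copy of the source):

package configopaque

// String is a string whose value is masked when marshaled (e.g. to YAML or logs).
type String string

const maskedString = "[REDACTED]"

// MarshalText returns the masked placeholder instead of the real value.
func (s String) MarshalText() ([]byte, error) {
	return []byte(maskedString), nil
}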

boostchicken commented Feb 2, 2023

So, I also noticed the factory uses sync.Once on a bunch of functions; SharedComponents should be adopted instead. The implementation I have here caches each receiver by config and only lets the Start and Stop functions execute once, which is great behavior for receivers and exporters. I know it's another stab in the dark, but it's good practice anyway.

Here is my implementation in the datadogreceiver, with the cache keyed by config:

#18229

func createTracesReceiver(ctx context.Context, params receiver.CreateSettings, cfg component.Config, consumer consumer.Traces) (r receiver.Traces, err error) {
	rcfg := cfg.(*Config)
	// Reuse the receiver already built for this config, or build and cache a new one.
	r = receivers.GetOrAdd(cfg, func() component.Component {
		dd, _ := newDataDogReceiver(rcfg, consumer, params)
		return dd
	})
	return r, nil
}

// Package-level cache shared by all factory calls, keyed by config.
var receivers = sharedcomponent.NewSharedComponents()
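
For context, here is a simplified sketch of the caching idea behind sharedcomponent, paraphrased for illustration rather than copied from the actual internal/sharedcomponent package: components are stored in a map keyed by their config, so repeated factory calls with the same config return the same instance.

package main

import "fmt"

// component is a stand-in for the collector's component.Component interface.
type component interface{ Start() error }

// sharedComponents caches components by an arbitrary key (here, the config).
type sharedComponents struct {
	comps map[any]component
}

func newSharedComponents() *sharedComponents {
	return &sharedComponents{comps: map[any]component{}}
}

// getOrAdd returns the cached component for key, building it only once.
func (s *sharedComponents) getOrAdd(key any, create func() component) component {
	if c, ok := s.comps[key]; ok {
		return c
	}
	c := create()
	s.comps[key] = c
	return c
}

type dummyReceiver struct{ name string }

func (d *dummyReceiver) Start() error { return nil }

func main() {
	cache := newSharedComponents()
	cfg := "same-config"
	a := cache.getOrAdd(cfg, func() component { return &dummyReceiver{name: "first"} })
	b := cache.getOrAdd(cfg, func() component { return &dummyReceiver{name: "second"} })
	fmt.Println(a == b) // true: the second call reuses the cached instance
}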

gbbr commented Feb 3, 2023

Hi @hamidp555,

Thanks for reporting this. I did manage to reproduce the issue. The problem is that some of the processing code uses the API key and it is (unintentionally) shared between exporters, so the traces end up in an unexpected account. I have a PR up and will get it reviewed soon.

Hi @boostchicken!

Thank you for sharing your views on the state of the repository. I am more than happy to improve our code, and its quality is very important to me. I am considering your suggestion regarding shared components, and I'm also happy to improve our config package based on constructive feedback. However, I'd like to treat these two suggestions as separate issues. If you open an issue and propose your improvement, I will definitely consider it and apply it where it makes sense. The one thing I'd ask is that you provide more specific feedback, because "a gigantic mess" is not really actionable.
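
To illustrate the failure mode described above, here is a minimal, purely illustrative sketch (not the exporter's actual code) of how state unintentionally shared between exporter instances can send data with the wrong key:

package main

import "fmt"

// sharedAPIKey stands in for state that is unintentionally shared between
// all exporter instances in the same process.
var sharedAPIKey string

type exporter struct{ name string }

func newExporter(name, apiKey string) *exporter {
	sharedAPIKey = apiKey // each construction overwrites the shared key
	return &exporter{name: name}
}

func (e *exporter) send() {
	fmt.Printf("%s sends with key %s\n", e.name, sharedAPIKey)
}

func main() {
	traces := newExporter("datadog/traces", "api_key_1")
	_ = newExporter("datadog/logs", "api_key_2") // created later, clobbers the key
	traces.send() // prints api_key_2: traces would go to the wrong account
}

Since component construction order is not guaranteed, whichever exporter happens to be created last would win, which would match the intermittent behavior reported above.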

@mx-psi mx-psi linked a pull request Feb 3, 2023 that will close this issue

boostchicken commented:

Makes sense to me, and sorry about being a little generic with "gigantic mess". I will enumerate where things are confusing or getting out of hand and create tasks for them. I'll keep you posted!
