Datadog traces end up in a Datadog account that doesn't match the api key used for the traces exporter #18233
Comments
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
Just a note about the problem I reported: I am only facing it in collector version 0.70.0; version 0.67.0 is fine. I am not sure whether this is a Datadog exporter issue or an issue with the collector itself failing to pick up the right Datadog exporter in the pipeline.
So this is probably related to fe8cc1a. The config is a gigantic mess. When it creates the trace exporter, it does string(cfg.Api.Key), where cfg.Api.Key is a configopaque.String; overall the API key is mishandled and used in ways that are not clear at all. These configs NEED to be vastly simplified: I see three different structs that could be a config, plus their child structs, which also makes the imports insane. While this is a stab in the dark, that PR is really the only change in a long time, and it's related to the key. Take a look at the code and try to navigate it or make sense of it. It's obvious: time for a refactor!
Check out https://github.com/open-telemetry/opentelemetry-collector/blob/v0.70.0/config/configopaque/opaque.go It doesn't even store the string, it just hijacks a marshalling function.
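For readers following along, here is a minimal, self-contained sketch of the pattern being described (not the actual opaque.go source): a named string type whose text marshalling is overridden to mask the value, which is why call sites have to convert back with string(...) to get the real key.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Opaque mimics the configopaque pattern: underneath it is just a string,
// but marshalling it yields a masked placeholder instead of the value.
type Opaque string

// MarshalText overrides text marshalling so the secret never appears
// in serialized config output.
func (o Opaque) MarshalText() ([]byte, error) {
	return []byte("[REDACTED]"), nil
}

func main() {
	key := Opaque("my-secret-api-key")

	// Serializing masks the value...
	out, _ := json.Marshal(key)
	fmt.Println(string(out)) // "[REDACTED]"

	// ...so code that needs the real key must convert explicitly.
	fmt.Println(string(key)) // my-secret-api-key
}
```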
So, I also noticed the factory uses sync.Once on a bunch of functions; SharedComponents should be adopted instead. The implementation I have here caches each receiver by Config and only lets the Start and Stop functions execute once, which is great behavior for receivers and exporters. I know it's another stab in the dark, but it's good practice anyway. Here is my implementation on the datadogreceiver, with the receiver cached by config:

```go
func createTracesReceiver(ctx context.Context, params receiver.CreateSettings, cfg component.Config, consumer consumer.Traces) (receiver.Traces, error) {
	rcfg := cfg.(*Config)
	var err error
	// GetOrAdd returns the receiver cached for this config, creating it on
	// first use so Start/Shutdown effectively run only once per config.
	r := receivers.GetOrAdd(cfg, func() component.Component {
		var dd component.Component
		dd, err = newDataDogReceiver(rcfg, consumer, params)
		return dd
	})
	if err != nil {
		return nil, err
	}
	return r, nil
}

// receivers caches shared component instances keyed by their config.
var receivers = sharedcomponent.NewSharedComponents()
```
Hi @hamidp555, thanks for reporting this. I did manage to reproduce the issue. The problem is that some of the processing code uses the API key, and it is (unintentionally) shared between exporters, so the traces end up in an unexpected account. I have a PR up and will get it reviewed soon.

Hi @boostchicken! Thank you for sharing your views on the state of the repository. I am more than happy and open to improving our code, and its quality is very important to me. I am considering your suggestion regarding shared components, and I'm also happy to improve our config package based on constructive feedback. However, I'd like to treat these two suggestions as separate issues. If you'd like to open an issue and propose your improvement, I will definitely consider it and apply it where it makes sense. The one thing I'd ask is that you provide more specific feedback, because "a gigantic mess" is not really actionable.
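To illustrate the failure mode the maintainer describes, here is a hypothetical reduction (not the actual exporter code, and all names are made up): if one-time setup guarded by package-level state captures the API key, the first exporter to start wins and every later exporter silently reuses its key.

```go
package main

import (
	"fmt"
	"sync"
)

// Package-level state unintentionally shared by every exporter instance.
var (
	setupOnce sync.Once
	activeKey string // hypothetical stand-in for shared processing state
)

// startExporter mimics an exporter whose one-time setup captures the key.
func startExporter(name, apiKey string) {
	setupOnce.Do(func() {
		activeKey = apiKey // only the FIRST caller's key is ever stored
	})
	fmt.Printf("%s exporter sends with key %q\n", name, activeKey)
}

func main() {
	// Startup order is nondeterministic in a real collector, which is why
	// traces "sometimes" land in the wrong account after a restart.
	startExporter("metrics", "KEY-A")
	startExporter("traces", "KEY-B") // still uses KEY-A
}
```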
Makes sense to me, man, and sorry about being a little generic with "gigantic mess" there. I will enumerate where things are confusing or kind of out of hand and make tasks for them. I'll keep you posted!
Component(s)
exporter/datadog
What happened?
Description
I have a pipeline that consists of three Datadog exporters: a logs exporter, a metrics exporter, and a traces exporter. The logs and metrics exporters use a different API key than the traces exporter. However, the traces sometimes end up in the Datadog account associated with the API key used for the logs and metrics exporters.
Steps to Reproduce
Run a collector with the example configuration provided in this bug report, produce some traces, and check where they end up. Restart the collector and check again. Repeat this a few times; sometimes the traces will end up in the Datadog account associated with the metrics/logs API key.
Expected Result
Traces should always end up in the Datadog account associated with the API key used for the traces exporter.
Actual Result
Traces sometimes end up in the Datadog account associated with the logs/metrics API key instead (see Description).
Collector version
0.70.0
Environment information
Environment
OS: Ubuntu 20.04.5 LTS
OpenTelemetry Collector configuration
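The original configuration was not captured in this report. The following is a minimal, hypothetical sketch of a setup matching the description, with placeholder API keys and three Datadog exporters on separate pipelines:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  datadog/logs:
    api:
      key: ${LOGS_METRICS_API_KEY}   # placeholder, shared with metrics
  datadog/metrics:
    api:
      key: ${LOGS_METRICS_API_KEY}   # placeholder, shared with logs
  datadog/traces:
    api:
      key: ${TRACES_API_KEY}         # placeholder, distinct key

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [datadog/logs]
    metrics:
      receivers: [otlp]
      exporters: [datadog/metrics]
    traces:
      receivers: [otlp]
      exporters: [datadog/traces]
```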
Log output
No response
Additional context
No response