Unable to temporarily disable a specific plugin #1763
From the snippet above, it looks like the tracer may not be used as intended. Calls to `tracer.use()` need to happen right after `tracer.init()`, at the top of the process. For example, I would expect to see something like this instead:

```js
// datadog.js
import { tracer } from "dd-trace";

tracer
  .init()
  .use("aws-sdk", false)
  .use("http", {
    blocklist: [/.*dynamodb\..*\.amazonaws\.com.*/],
    middleware: false,
  });

export { tracer };
```

```js
// server.js
import { tracer as DataDogTracer } from "./datadog"; // different file to avoid hoisting

async function someFunction (jobAttributes) {
  await DataDogTracer.trace(
    "handle-job",
    { tags: { name: jobAttributes.name } },
    () => handleJob(jobAttributes)
  );
}
```

Most of the time, when a plugin configuration doesn't work it's because it was configured too late and not at the beginning of the process.
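A side note on the "different file to avoid hoisting" comment above: ES module imports are hoisted, so `tracer.init()` must live in its own module that is loaded before any instrumented library. A minimal sketch of a hypothetical entry point (the file names are assumptions, not from the original thread):

```js
// index.js -- hypothetical entry point; load the tracer module first so
// dd-trace can patch libraries (aws-sdk, http, dns, ...) before they are
// imported anywhere else in the app.
import "./datadog";
import "./server";
```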
Hmm... The tracer is initialised at the top of the process, but it is initialised with the plugin overrides. This is exactly the problem: it is impossible to disable some plugins later in the code. We just need to disable them temporarily for specific operations.
Oh ok, I misunderstood and thought it was for the entire process. This is not currently possible and is something we are working on, but it is unlikely to land for at least a few months. However, tracing shouldn't cause any memory leak regardless of the number of spans created, as long as the trace finishes in a timely manner. Can you provide more details about the specific problem these spans are causing?
As part of our business logic we have a service that performs long-running operations on multiple objects. It includes modules for downloading a source file, parsing it, and matching data. The last two modules perform operations on each individual data item: a PutItem and a GetItem against DynamoDB, a request to a third-party service, and an UpdateItem in DynamoDB. The amount of data can be up to 150K items and the operation can take around 6 hours or more. By the end we have a large amount of tracing data that has not been pushed to Datadog yet, so it persists in memory and at some point crashes the service with an out-of-memory error.
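Following the note above that tracing shouldn't leak memory as long as each trace finishes in a timely manner, one possible mitigation is to trace each item as its own short-lived root operation rather than wrapping the whole six-hour job in a single trace. A minimal sketch, assuming hypothetical `items` and `handleItem` helpers:

```js
import { tracer } from "./datadog";

// Each iteration is its own trace; once tracer.trace() resolves, the trace
// is finished and can be flushed, so spans don't accumulate for the whole
// multi-hour job.
async function processAll(items) {
  for (const item of items) {
    await tracer.trace("handle-item", { tags: { id: item.id } }, () =>
      handleItem(item)
    );
  }
}
```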
Ok, in this case it's definitely a bug. We're supposed to flush automatically once a maximum number of spans is reached, precisely to avoid this, but that doesn't seem to happen for your use case. Assuming this worked properly and traces were sent in many smaller chunks without significantly affecting memory usage, would it be fine for these spans to be sent?
👍 I think it's a good idea if these spans end up grouped under parent spans. Or, how would that look in the user interface?
It would still be a single trace in the UI even if sent in chunks, since there would still be a single root operation, just a very large one.
@dobeerman is this still an issue for you?
I'll close this for now but we can reopen if it turns out to still be an issue.
Describe the bug
The tracer is initialised at the beginning of the process, with a couple of plugin overrides.

For some long-running processes we need to disable some plugins, either completely or partially (the `aws-sdk` plugin entirely, or `dynamodb` only), because they collect a large amount of data, which leads to memory leaks. (We use aws-sdk@2.521.0.)

So, we do the following:
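The exact snippet was not preserved here; based on the discussion above, it was presumably something like the following hypothetical sketch, which tries to turn the plugin off after the process has already started (`startLongJob` is an assumed name):

```js
// Hypothetical sketch -- assumed shape of the reported code, not a quote.
import { tracer } from "./datadog";

export async function startLongJob(jobAttributes) {
  // Attempt to disable the aws-sdk plugin only for this long-running job.
  // Per the discussion above, this has no effect once the process has started.
  tracer.use("aws-sdk", false);
  await handleJob(jobAttributes);
}
```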
Unfortunately, we still receive a lot of related traces, which include tons of `aws.request PutItem`, `http.request POST`, `tcp.connect`, and `dns.lookup` spans to DynamoDB.

Q: What are we doing wrong? 🤔
Environment
- Node image: node:12-alpine3.11
- Tracer version: ^1.1.2