
Distributed Tracing CPU Load Optimization #505

Closed
iamemilio opened this issue May 31, 2022 · 1 comment · Fixed by #531 or newrelic/docs-website#8258
Labels: efficiency, PM Project Management Attention, Quality & UX Code Quality and User Experience

Comments

@iamemilio (Contributor)

Customers have been reporting roughly 30% higher CPU usage when using distributed tracing. GTSE has pinpointed the issue to when the GetMetadataFromContext() function calls CreateDistributedTracePayload(). Their profiling suggests the problematic line is:

hdrs.Set(DistributedTraceNewRelicHeader, p.NRHTTPSafe())

https://github.com/newrelic/go-agent/blob/master/v3/newrelic/distributed_tracing.go#L139-L153

Is there a way that we can reduce the performance impact of this code? Is this impacted by the memory leak?

@nr-swilloughby (Contributor)

I pushed a fix for this to the unreleased branch json_speed_issue_505. In my preliminary testing, this code runs at twice the speed of the previous code. If you want, you can run some test cases against the agent code in that branch and compare the performance to what you were seeing before.
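One way to try the branch (assuming a standard Go modules setup; the branch name is taken from the comment above, and Go will resolve it to a pseudo-version):

```shell
# Point your module at the unreleased branch, then re-run your benchmarks.
go get github.com/newrelic/go-agent/v3@json_speed_issue_505
go test -bench . -benchmem ./...
```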
