feat: first iteration on datadog tracing with spans #374
Conversation
Codecov Report

@@            Coverage Diff             @@
##             main     #374      +/-   ##
==========================================
- Coverage   28.83%   28.24%    -0.60%
==========================================
  Files          66       68        +2
  Lines        7001     7144      +143
==========================================
- Hits         2019     2018        -1
- Misses       4754     4899      +145
+ Partials      228      227        -1

Continue to review the full report at Codecov.
server/midddleware/middleware.go (outdated)
validatorStreamServerInterceptor(),
}

if config.DatadogTrace.Enabled {
Should we put this logic into the Interceptor and just pass in the required components (Logger etc.; Config is global, so no need to pass that down)? Am I missing the benefit of having these interceptors exposed at this level?
Initializing the interceptor has a very visible cost according to the profiler, so I think it's better to initialize them only when they are needed. What do you think?
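The lazy-initialization idea discussed here can be sketched with a stdlib-only example. The types and the string-based interceptor below are stand-ins, not the project's real `grpc.UnaryServerInterceptor` or config; the point is only that the interceptor constructor runs solely when its feature flag is set.

```go
package main

import "fmt"

// Hypothetical stand-ins for grpc.UnaryServerInterceptor and the project
// config; the real types come from google.golang.org/grpc and the config
// package in this repo.
type UnaryInterceptor func(req string) string

type TraceConfig struct{ Enabled bool }
type Config struct{ DatadogTrace TraceConfig }

// buildUnaryInterceptors appends each interceptor only when its flag is set,
// so a disabled interceptor is never constructed and its init cost is avoided.
func buildUnaryInterceptors(cfg Config) []UnaryInterceptor {
	var interceptors []UnaryInterceptor
	if cfg.DatadogTrace.Enabled {
		// The real code would call ddTraceUnary() here, only when enabled.
		interceptors = append(interceptors, func(req string) string {
			return "traced:" + req
		})
	}
	return interceptors
}

func main() {
	on := buildUnaryInterceptors(Config{DatadogTrace: TraceConfig{Enabled: true}})
	off := buildUnaryInterceptors(Config{})
	fmt.Println(len(on), len(off)) // 1 0
}
```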
}

if config.DatadogTrace.Enabled {
	streamInterceptors = append(streamInterceptors, ddTraceStream())
}

if config.Metrics.Grpc.Enabled && config.Metrics.Grpc.ResponseTime {
What's our goal with the "basic" GRPC interceptor? Do we still need it? Or is this in place to make the transition smoother?
}

if config.DatadogTrace.Enabled {
	unaryInterceptors = append(unaryInterceptors, ddTraceUnary())
}

if config.Metrics.Grpc.Enabled && config.Metrics.Grpc.ResponseTime {
I wonder if we should put these Enabled flags under a single config structure and just case switch on them. WDYT?
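Grouping the flags as suggested here could look roughly like the following stdlib-only sketch. The `Features` structure and field names are purely illustrative, not the project's actual config shape; it just shows the flags being walked in one place instead of via scattered if-blocks.

```go
package main

import "fmt"

// Hypothetical grouping of the Enabled flags into one structure, as floated
// in this review comment; names are illustrative only.
type Features struct {
	DatadogTrace bool
	GrpcMetrics  bool
}

// enabledFeatures returns the names of enabled features, centralizing the
// flag checks so middleware setup can iterate the result.
func enabledFeatures(f Features) []string {
	var out []string
	for name, on := range map[string]bool{
		"ddtrace": f.DatadogTrace,
		"metrics": f.GrpcMetrics,
	} {
		if on {
			out = append(out, name)
		}
	}
	return out
}

func main() {
	fmt.Println(len(enabledFeatures(Features{DatadogTrace: true}))) // 1
}
```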
@@ -118,6 +120,10 @@ func Get(config *config.Config, tenantMgr *metadata.TenantManager, txMgr *transa
}

unaryInterceptors = append(unaryInterceptors, []grpc.UnaryServerInterceptor{
	//grpctrace.UnaryServerInterceptor(grpctrace.WithServiceName(util.Service)),
	grpc_logging.UnaryServerInterceptor(grpc_zerolog.InterceptorLogger(sampledTaggedLogger)),
Shall we add a TODO to hook this up to the span IDs? Perhaps perform the logging in the code that manages the high level request?
This is not something I changed; I just moved it in the initialization order. This and attaching logs to spans are different things.
store/kv/kv.go (outdated)
@@ -102,6 +103,17 @@ func measureLow(ctx context.Context, name string, f func() error) {
	metrics.FdbRequests.Tagged(tags).Histogram("histogram", tally.DefaultBuckets)
	defer metrics.FdbRequests.Tagged(tags).Histogram("histogram", tally.DefaultBuckets).Start().Stop()
}
if config.DefaultConfig.Tracing.Enabled {
Would the following work?

closer, err := setTrace(ctx, name)
defer closer()

The above assumes no defer needs to be passed up, but if you need the span later after L117 we can bring it back for Finish() or other things. The config will be available as it's global, so no need to pass that.

func setTrace(ctx context.Context, name string) (func(), error) {
	if !config.DefaultConfig.Tracing.Enabled {
		return func() {}, nil
	}
	parentSpan, exists := tracer.SpanFromContext(ctx)
	if exists {
		...
	}
	return func() {}, nil
}

Btw I don't see tags being used, but if they are used elsewhere we could pass those down too.
store/search/ts.go (outdated)
metrics.SearchRequests.Tagged(tags).Histogram("histogram", tally.DefaultBuckets)
defer metrics.SearchRequests.Tagged(tags).Histogram("histogram", tally.DefaultBuckets).Start().Stop()
}
if config.DefaultConfig.Tracing.Enabled {
Perhaps we can do the same here as in kv:

span, err := setTrace(ctx, name)

Maybe tags will need to be added to the signature of setTrace(), or whatever the name of the helper function ends up being.
🎉 This PR is included in version 1.0.0-alpha.25 🎉 The release is available on GitHub release. Your semantic-release bot 📦🚀
🎉 This PR is included in version 1.0.0-beta.1 🎉 The release is available on GitHub release. Your semantic-release bot 📦🚀