RUM-699 fix: RUM context not being linked to started span #1615
Conversation
Datadog Report (branch report): ✅ 0 Failed, 2739 Passed, 0 Skipped, 12m 40.75s Wall Time. 🔻 Code Coverage Decreases vs Default Branch (9).
// Still on context thread: send `Writer` to EWC caller. The writer implements `AsyncWriter`, so
// the implementation of `writer.write(value:)` will run asynchronously without blocking the context thread.
do {
    try block(context, writer)
} catch {
    telemetry.error("Failed to execute feature scope", error: error)
}
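The `AsyncWriter` behaviour the comment relies on can be sketched as follows. This is a minimal illustration, not the SDK's actual types; `QueueBackedWriter` is a hypothetical name:

```swift
import Foundation

// Hypothetical protocol mirroring the `AsyncWriter` idea from the diff above:
// `write(value:)` must return immediately and do the actual work asynchronously.
protocol AsyncWriter {
    func write<T: Encodable>(value: T)
}

// Illustrative implementation: hops to a private serial queue, so the caller
// (e.g. the context thread) is never blocked by encoding or I/O.
final class QueueBackedWriter: AsyncWriter {
    private let queue = DispatchQueue(label: "com.example.writer-queue")
    private var written: [Data] = [] // guarded by `queue`

    func write<T: Encodable>(value: T) {
        queue.async { [weak self] in
            guard let self = self, let data = try? JSONEncoder().encode(value) else { return }
            self.written.append(data)
        }
    }

    // Synchronous read used here only to demonstrate ordering on the serial queue.
    func writtenCount() -> Int {
        queue.sync { written.count }
    }
}
```

Because `write(value:)` only enqueues work, the context thread can hand the writer to the block and return immediately.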
This change is intentional and makes `block()` non-throwing. Rationale: the core, being agnostic, has no way to handle or recover from an error thrown in a block passed to a Feature (like RUM or SR). We had zero occurrences of "Failed to execute feature scope" in telemetry, as no existing implementation of this block actually throws.
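The signature change being described can be sketched like this. The signatures are simplified and hypothetical; the SDK's real scope API passes a context and writer rather than a plain string:

```swift
import Foundation

// Before (illustrative): the block could throw, so the core had to catch an
// error it cannot meaningfully handle or recover from.
func eventWriteContextThrowing(_ block: (String) throws -> Void) {
    do {
        try block("context")
    } catch {
        // This telemetry path was never hit in practice.
        print("Failed to execute feature scope: \(error)")
    }
}

// After (illustrative): the block is non-throwing, so the error-handling
// path disappears from the core entirely.
func eventWriteContext(_ block: (String) -> Void) {
    block("context")
}
```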
Nice work! So glad to see the tracer queue removed 🎉
Good cleanup. Nice work.
What and why?
🐞 Fixes the bug of RUM information not being linked correctly on `tracer.startSpan()`.
How?
The problem was introduced in V2, where we started sending RUM information in `DatadogContext` by propagating its updates on the message bus. Because the message bus adds one more asynchronous call, the information was not available to `tracer` at the right time. Under the hood, the propagation model was this:
1. RUM updates the RUM information in `DatadogContext` (async on context queue)
2. Core propagates the `DatadogContext` update to all modules (async on message bus queue)
3. Trace receives the new `DatadogContext` and stores RUM info in its `MessageReceiver`
4. Span is started with RUM info read from the `MessageReceiver`
It is now simplified and changed to:
1. RUM updates the RUM information in `DatadogContext` (async on context queue)
2. Trace reads RUM information from `DatadogContext` (async on context queue)
3. Span is started with the `DatadogContext` captured on starting it (async on context queue)
Ultimately, RUM context is no longer passed to Trace through the message bus and `MessageReceiver`. Instead, it is queried by `DatadogTrace` with the new capability added to `FeatureScope`:
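A rough sketch of that capability, assuming a simplified `FeatureScope` with a context-reading method. All type and method names here are illustrative, not the SDK's actual API:

```swift
import Foundation

// Illustrative types only; the real SDK API differs.
struct RUMContext {
    let applicationID: String
    let sessionID: String
}

struct DatadogContext {
    var featuresAttributes: [String: Any] = [:]
}

protocol FeatureScope {
    /// The new capability: deliver the current context, asynchronously,
    /// on the core's context queue.
    func context(_ receive: @escaping (DatadogContext) -> Void)
}

final class CoreScope: FeatureScope {
    private let contextQueue = DispatchQueue(label: "com.example.context-queue")
    private var current = DatadogContext()

    // RUM updates its portion of the context (async on context queue).
    func set(rumContext: RUMContext) {
        contextQueue.async { self.current.featuresAttributes["rum"] = rumContext }
    }

    // Trace queries the context directly; no message-bus hop in between.
    func context(_ receive: @escaping (DatadogContext) -> Void) {
        contextQueue.async { receive(self.current) }
    }
}
```

Because both the RUM update and the trace query run on the same serial context queue, a span started after a RUM view update always observes that update.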
🎁 Part of the requirement to enable this work was removing `tracer.queue` and replacing it with `@ReadWriteLock` to synchronise tracer and span states. According to my benchmark measures, there is no performance change coming from this PR (both in `Tracer` and in `DatadogCore`).
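A minimal sketch of a `@ReadWriteLock` property wrapper in the spirit of that change, built on a POSIX read-write lock. The SDK's actual implementation may differ in detail:

```swift
import Foundation

@propertyWrapper
final class ReadWriteLock<Value> {
    private var value: Value
    private var rwlock = pthread_rwlock_t()

    init(wrappedValue: Value) {
        self.value = wrappedValue
        pthread_rwlock_init(&rwlock, nil)
    }

    deinit {
        pthread_rwlock_destroy(&rwlock)
    }

    var wrappedValue: Value {
        get {
            // Many readers may hold the lock concurrently.
            pthread_rwlock_rdlock(&rwlock)
            defer { pthread_rwlock_unlock(&rwlock) }
            return value
        }
        set {
            // A writer takes exclusive access.
            pthread_rwlock_wrlock(&rwlock)
            defer { pthread_rwlock_unlock(&rwlock) }
            value = newValue
        }
    }

    /// Atomic read-modify-write, so two threads cannot interleave updates.
    func mutate(_ transform: (inout Value) -> Void) {
        pthread_rwlock_wrlock(&rwlock)
        defer { pthread_rwlock_unlock(&rwlock) }
        transform(&value)
    }
}
```

Unlike a serial dispatch queue, reads never hop threads and uncontended access is a cheap lock acquisition, which is consistent with the benchmarks showing no performance change.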