feat(node): Simplify SentrySpanProcessor
#11273
```diff
-const beforeSendTransaction = jest.fn(() => null);
+const transactions: Event[] = [];
+const beforeSendTransaction = jest.fn((event: Event) => {
+  transactions.push(event);
```
l: Why do we have the `transactions` array here?
It makes it a bit easier to test what is sent than using `hasBeenCalledWith`, as you don't have to use `objectContains()` everywhere, which gives weird diffs in jest for large objects 😬
I still think we could drastically simplify things more but I really wanna see if this unblocks #11016 😄
EDIT: I just tried it out and it did not fix it 😢
```diff
 const client = getClient();
-client?.emit('spanEnd', span);
+client?.emit('spanEnd', span as Span);
```
m: Is this safe? If so, why?
This is the same as it was before - it is safe because the actual trace code under the hood passes exactly the same class instance to `onStart` and `onEnd`, which satisfies the span interface. It's unfortunate, though, of course 😬
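The argument can be illustrated with a toy sketch (the interface and class names below are made up, not the real OpenTelemetry/Sentry types): because the tracer constructs one concrete span class and hands that same instance to both hooks, the runtime object structurally satisfies the interface, so the cast holds.

```typescript
// Toy illustration of why `span as Span` is safe when the tracer always
// passes the same concrete instance to onStart and onEnd.
// These types are hypothetical stand-ins, not the real SDK types.
interface Span {
  end(): void;
}

// The concrete class a (hypothetical) tracer instantiates internally.
class TracerSpan implements Span {
  ended = false;
  end(): void {
    this.ended = true;
  }
}

// The processor receives the span under a wider type...
function onEnd(span: unknown): void {
  // ...but the runtime object structurally satisfies Span, so the cast holds.
  (span as Span).end();
}

const span = new TracerSpan();
onEnd(span);
```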
I do realize that it was like this before, but I now wonder whether we can depend on this. But nothing to duke out in this PR!
I checked the actual implementation in the tracer, and I think it's fine - not sure how we could be even safer here without checking the existence of each and every property we'd expect this to have 😬 which also feels overkill. But we can for sure revisit this!
Instead of extending `BatchSpanProcessor`, we handle this ourselves now.
```diff
@@ -24,7 +24,7 @@ connection.connect(function (err) {
   }
 });

-Sentry.startSpan(
+Sentry.startSpanManual(
```
LOL, this test only passed "by accident" before (but this shows we are being more "strict" now than we used to be).
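For context, a hedged sketch of the semantic difference between the two helpers (toy models, not the real Sentry implementations): `startSpan` ends the span automatically when the callback returns, while `startSpanManual` leaves ending the span to the caller, e.g. for callbacks that outlive the wrapping function.

```typescript
// Toy models of startSpan vs. startSpanManual; not the real Sentry code.
class ToySpan {
  endedAt: number | null = null;
  end(): void {
    if (this.endedAt === null) this.endedAt = Date.now();
  }
}

// startSpan: the span is ended automatically when the callback returns.
function startSpan<T>(cb: (span: ToySpan) => T): { result: T; span: ToySpan } {
  const span = new ToySpan();
  try {
    return { result: cb(span), span };
  } finally {
    span.end();
  }
}

// startSpanManual: the caller must end the span explicitly.
function startSpanManual<T>(cb: (span: ToySpan) => T): { result: T; span: ToySpan } {
  const span = new ToySpan();
  return { result: cb(span), span };
}

const auto = startSpan(() => 1);
const manual = startSpanManual(() => 2);
```

With the stricter same-tick flushing, a span that is only ended later inside a nested callback needs `startSpanManual`, which is presumably why this test had to change.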
Instead of extending `BatchSpanProcessor`, we handle this ourselves now. There may be additional improvements we can make here, but this should be OK for now!

I added two tests to cover what we really want to achieve/show here:

1. Ensure that if a span & child span are ended one after the other, but in the same tick, they are correctly sent.
2. Ensure that if a child span is ended later, it is skipped.

By skipping the batched span processor, we can simplify our flushing a bit and also get rid of some of the delays.

For now I kept the exporter as a separate class (it is not exported anyhow, so it's purely internal); we can eventually look into merging this or not, but that is just a moving-parts-around exercise then. This way, tests mostly continued to work, which is good I'd say.