Option for increasing BatchSpanProcessor throughput under slow network / ingestion #4245
Conversation
Codecov Report
@@ Coverage Diff @@
## main #4245 +/- ##
============================================
- Coverage 90.31% 90.30% -0.02%
Complexity 4741 4741
============================================
Files 553 553
Lines 14584 14598 +14
Branches 1402 1404 +2
============================================
+ Hits 13172 13183 +11
- Misses 953 957 +4
+ Partials 459 458 -1
LGTM. Having this option for the batch processor seems like it would be nice.
    concurrentExports.add(exportCurrentBatch());
  }
}
if (concurrentExports.isEmpty() && System.nanoTime() >= nextExportTime) {
What case is this trying to catch? It would only trigger if somehow the queue were empty but the time had expired... but the queue was just empty! Why would you expect there to be something to export at this point?
If I understand correctly, the intent is to catch the scenario where the items have been drained from the queue and added to the batch, but there were not enough items to trigger an export via batch.size() > maxExportBatchSize on line 243.
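The scenario described here (a partial batch flushed because the schedule timer expired, rather than because the batch filled up) can be sketched like this. The names mirror the PR diff, but this class is a simplified, hypothetical model, not the actual opentelemetry-java worker; time is passed in explicitly instead of reading System.nanoTime():

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Hypothetical sketch of the worker's flush logic; field names like
// maxExportBatchSize and nextExportTime are borrowed from the PR, but this is
// not the real BatchSpanProcessor implementation.
class BatchFlushSketch {
  final int maxExportBatchSize;
  final long scheduleDelayNanos;
  final List<String> batch = new ArrayList<>();
  final List<List<String>> exported = new ArrayList<>(); // completed exports, for inspection
  long nextExportTime;

  BatchFlushSketch(int maxExportBatchSize, long scheduleDelayNanos, long now) {
    this.maxExportBatchSize = maxExportBatchSize;
    this.scheduleDelayNanos = scheduleDelayNanos;
    this.nextExportTime = now + scheduleDelayNanos;
  }

  // One iteration of the worker loop.
  void tick(Queue<String> queue, long now) {
    // Drain the queue into the batch; export whenever a full batch accumulates.
    while (!queue.isEmpty()) {
      batch.add(queue.poll());
      if (batch.size() >= maxExportBatchSize) {
        exportCurrentBatch(now);
      }
    }
    // The condition under discussion: the queue is now empty, but a partial
    // batch may still be sitting here. Once the schedule delay has expired,
    // flush it rather than waiting indefinitely for more spans to arrive.
    if (!batch.isEmpty() && now >= nextExportTime) {
      exportCurrentBatch(now);
    }
  }

  void exportCurrentBatch(long now) {
    exported.add(new ArrayList<>(batch));
    batch.clear();
    nextExportTime = now + scheduleDelayNanos;
  }

  public static void main(String[] args) {
    BatchFlushSketch worker = new BatchFlushSketch(5, 100, 0);
    worker.tick(new ArrayDeque<>(List.of("a", "b", "c")), 0); // 3 < 5: no export yet
    worker.tick(new ArrayDeque<>(), 150); // timer expired: partial batch flushed
    System.out.println(worker.exported); // [[a, b, c]]
  }
}
```

With a batch of 3 and maxExportBatchSize of 5, the first tick exports nothing; the second tick, with an empty queue but an expired timer, flushes the partial batch.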
Yeah, we talked about this last week. I asked @trask to add a comment about it, since it wasn't obvious to me what this was for.
Seems reasonable. Would also be good for the batch log processor. I believe this is already possible for
Closing. @jkwatson had the keen recollection that the spec explicitly disallows calling the exporter concurrently:
@jkwatson also had the great workaround to have the exporter always return
Btw, not sure if it makes sense for the
I'm guessing you would want this option to go through the spec(?), but I wanted to get some initial feedback here, including on the alternate option (see #4246).

The problem: under high telemetry load and slower network / ingestion, the BatchSpanProcessor's single worker thread can spend a lot of time waiting for responses.

This PR: this option tries to send as many complete batches as possible (up to a new config option, maxConcurrentExports), and then waits exporterTimeoutNanos for all of those batches to complete.
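To make the proposed mechanism concrete, here is a minimal hypothetical sketch of "send up to maxConcurrentExports batches concurrently, then wait for all of them within the exporter timeout". ConcurrentExportSketch and its parameters are invented for illustration and do not appear in the PR; the "export" is simulated by returning the batch size, where a real exporter would send the spans:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hedged sketch of the proposed behavior, not the actual PR code.
class ConcurrentExportSketch {
  static List<Integer> exportConcurrently(
      List<List<String>> batches,
      int maxConcurrentExports,
      long timeoutMillis,
      ExecutorService exporterPool) throws Exception {
    List<CompletableFuture<Integer>> inFlight = new ArrayList<>();
    for (List<String> batch : batches) {
      if (inFlight.size() >= maxConcurrentExports) {
        break; // cap reached; remaining batches stay queued for a later round
      }
      // Simulated export: a real exporter would call export(batch) here.
      inFlight.add(CompletableFuture.supplyAsync(batch::size, exporterPool));
    }
    // Analogue of waiting exporterTimeoutNanos for the whole set of exports.
    CompletableFuture.allOf(inFlight.toArray(new CompletableFuture[0]))
        .get(timeoutMillis, TimeUnit.MILLISECONDS);
    List<Integer> exportedSizes = new ArrayList<>();
    for (CompletableFuture<Integer> f : inFlight) {
      exportedSizes.add(f.join());
    }
    return exportedSizes;
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    try {
      // Two batches, cap of 2: both are exported concurrently.
      System.out.println(exportConcurrently(
          List.of(List.of("a", "b"), List.of("c", "d", "e")), 2, 1000, pool));
    } finally {
      pool.shutdown();
    }
  }
}
```

The point of the cap plus the bounded wait is that slow responses overlap instead of serializing on a single in-flight request, which is what ties up the single worker thread today.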