
JAEGER_REPORTER_MAX_QUEUE_SIZE is not the max queue size #607

jpkrohling opened this issue Mar 21, 2019 · 0 comments


I'm writing a tuning guide for Jaeger and, while checking the code behind the JAEGER_REPORTER_MAX_QUEUE_SIZE setting, I realized that it does not represent the number of spans kept in memory before they are sent out, as the documentation led me to believe:

defines the max size of the in-memory buffer used to keep spans before they are sent out.

Rather, it's a command queue size: each span is wrapped in an AppendCommand and added to the queue. This command calls the Sender#append() method, which in most cases just adds the span to the Sender's own buffer. If the sender's max packet size is reached, the #append() call blocks until the packet is sent. In most cases, though, a periodic timer calls #flush(), which generates a FlushCommand and puts it in the same queue as the AppendCommands.
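To make the distinction concrete, here is a minimal model of the mechanism described above; this is not the actual jaeger-client-java code, just a sketch using the same names (Command, AppendCommand, FlushCommand, Sender). It shows that the bounded queue limits pending commands, not total spans in memory, because drained spans accumulate in the Sender's own buffer:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical model of the reporter: each span becomes an AppendCommand on a
// bounded command queue, while the Sender keeps its own separate buffer.
public class ReporterModel {
    interface Command { void execute(Sender sender); }

    static class Sender {
        final List<String> buffer = new ArrayList<>(); // the Sender's own span buffer

        void append(String span) { buffer.add(span); } // usually just buffers the span
        void flush() { buffer.clear(); }               // send the buffered spans out
    }

    static class AppendCommand implements Command {
        final String span;
        AppendCommand(String span) { this.span = span; }
        public void execute(Sender s) { s.append(span); }
    }

    static class FlushCommand implements Command {
        public void execute(Sender s) { s.flush(); }
    }

    public static void main(String[] args) {
        // JAEGER_REPORTER_MAX_QUEUE_SIZE bounds only this queue, not total spans:
        BlockingQueue<Command> queue = new ArrayBlockingQueue<>(100);
        Sender sender = new Sender();

        // Fill the queue, drain it into the Sender's buffer, then refill the queue.
        for (int i = 0; i < 100; i++) queue.offer(new AppendCommand("span-" + i));
        Command c;
        while ((c = queue.poll()) != null) c.execute(sender);
        for (int i = 100; i < 200; i++) queue.offer(new AppendCommand("span-" + i));

        // 100 spans sit in the Sender's buffer AND 100 more sit in the queue:
        System.out.println(sender.buffer.size() + queue.size());
    }
}
```

Even though the queue never held more than 100 commands, 200 spans are alive in memory at once.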

This means that there might be far more than 100 spans in memory, as there might be, say, 50 FlushCommands executing, each waiting for the server to respond (plausible with HttpSender, less so with UdpSender). Given that the internal HttpSender buffer defaults to 1 MB, this scenario amounts to 50 MB worth of spans.
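The worst-case figure above is simple arithmetic; a quick sketch (the in-flight flush count of 50 is hypothetical, the 1 MB buffer is the HttpSender default mentioned above):

```java
// Back-of-the-envelope worst case: several flushes in flight at once,
// each holding a full HttpSender buffer of spans.
public class WorstCaseMemory {
    public static void main(String[] args) {
        int inFlightFlushes = 50;         // hypothetical flushes awaiting the server
        long bufferBytes = 1024L * 1024L; // HttpSender internal buffer default: 1 MB
        long worstCase = inFlightFlushes * bufferBytes;
        System.out.println(worstCase / (1024 * 1024) + " MB");
    }
}
```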

I'm not sure there's an action item here: if we have long-term plans for the Java client, we might consider rewriting the whole reporter/sender layer when we add gRPC support. In any case, I thought it would be good to have this documented somewhere.
