Summary
The LangChain4j streaming SSE listener (WrappedServerSentEventListener in WrappedHttpClient.java) only accumulates delta.content text from streaming chunks. delta.tool_calls is completely ignored. When a streaming LangChain4j OpenAI call returns tool calls, the span's output_json contains an empty content string and no tool_calls field — the tool call data is silently lost.
This is distinct from #60 (which covers missing non-OpenAI model providers) and #73 (which covers missing embedding model support). This issue affects the supported OpenAI streaming path specifically: tool_calls data is dropped even for the OpenAI provider that is properly instrumented for non-streaming and text-only streaming.
For comparison, the direct OpenAI SDK instrumentation (TracingHttpClient in openai_2_8_0) uses ChatCompletionAccumulator, which captures all delta types, including tool_calls.
What is missing
1. accumulateChunk() ignores tool_calls (lines 183–208)
private void accumulateChunk(String data) {
    // ...
    JsonNode chunk = BraintrustJsonMapper.get().readTree(data);
    if (chunk.has("choices") && chunk.get("choices").size() > 0) {
        JsonNode choice = chunk.get("choices").get(0);
        if (choice.has("delta")) {
            JsonNode delta = choice.get("delta");
            if (delta.has("content")) {
                contentBuffer.append(delta.get("content").asText());
            }
            // ← delta.tool_calls is never checked or accumulated
        }
    }
    // ...
}
Only delta.content is captured. delta.tool_calls, delta.refusal, and any other delta fields are silently dropped.
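OpenAI's streaming format delivers each tool call in fragments tagged with an index field: the first fragment carries the id and function name, and later fragments carry slices of the arguments JSON. A minimal sketch of the missing accumulation logic, using plain collections in place of the Jackson nodes the listener already parses (the class and method names here are hypothetical, not part of the SDK):

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch only: the real listener would read these fields out of the
// Jackson JsonNode it already parses. Names here are hypothetical.
class ToolCallBuffer {
    // OpenAI tags each streamed tool-call fragment with an "index";
    // fragments sharing an index belong to the same tool call.
    private final Map<Integer, Entry> byIndex = new TreeMap<>();

    static final class Entry {
        String id = "";                  // sent once, in the first fragment
        String name = "";                // function name, sent once
        final StringBuilder arguments = new StringBuilder(); // streamed piecewise
    }

    // Merge one delta.tool_calls element; null means "absent in this chunk".
    void addDelta(int index, String id, String name, String argumentsFragment) {
        Entry e = byIndex.computeIfAbsent(index, i -> new Entry());
        if (id != null) e.id = id;
        if (name != null) e.name = name;
        if (argumentsFragment != null) e.arguments.append(argumentsFragment);
    }

    Map<Integer, Entry> entries() {
        return byIndex;
    }
}
```

The TreeMap keeps entries ordered by index, so a reassembled tool_calls array would match the order in which the model emitted the calls.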
2. finalizeSpan() cannot produce tool_calls (lines 210–234)
private void finalizeSpan() {
    // ...
    var message = BraintrustJsonMapper.get().createObjectNode();
    message.put("role", "assistant");
    message.put("content", contentBuffer.toString());
    choice.set("message", message);
    // ← no tool_calls field is ever set on the message
    // ...
}
The reassembled message always has only role and content. There is no data structure to hold accumulated tool_calls, and no code path to include them in the output.
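A fix would thread the accumulated calls into finalizeSpan() and attach a tool_calls array to the reassembled message. A sketch of the target message shape, using plain collections instead of Jackson's ObjectNode (all names are hypothetical):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class MessageFinalizer {
    // One fully accumulated tool call (arguments = concatenated JSON fragments).
    record AccumulatedCall(String id, String name, String arguments) {}

    // Rebuild the assistant message, adding tool_calls only when any were
    // accumulated. Mirrors the OpenAI non-streaming message shape:
    // {"role", "content", "tool_calls": [{"id", "type", "function": {"name", "arguments"}}]}
    static Map<String, Object> buildMessage(String content, List<AccumulatedCall> calls) {
        Map<String, Object> message = new LinkedHashMap<>();
        message.put("role", "assistant");
        message.put("content", content);
        if (!calls.isEmpty()) {
            List<Map<String, Object>> toolCalls = new ArrayList<>();
            for (AccumulatedCall c : calls) {
                Map<String, Object> function = new LinkedHashMap<>();
                function.put("name", c.name());
                function.put("arguments", c.arguments());
                Map<String, Object> call = new LinkedHashMap<>();
                call.put("id", c.id());
                call.put("type", "function");
                call.put("function", function);
                toolCalls.add(call);
            }
            message.put("tool_calls", toolCalls);
        }
        return message;
    }
}
```

Emitting tool_calls conditionally keeps text-only streaming output byte-identical to today's behavior.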
Impact
When a LangChain4j streaming call triggers tool use (common in agentic workflows using AiServices with @Tool methods):
- The span's output_json shows {"choices": [{"message": {"role": "assistant", "content": ""}}]} — appearing as if the model returned nothing
- The finish_reason is captured as "tool_calls", contradicting the empty output
- The actual tool call names, IDs, and arguments are permanently lost from the trace
- Non-streaming calls and the separate TracingToolExecutor tool execution spans are unaffected — but the LLM span that triggered the tool call has no record of what it requested
Additional gaps in the same code
- Only choices[0] is accumulated (line 191: chunk.get("choices").get(0)) — multi-choice streaming responses lose all but the first choice
- delta.refusal (safety refusal text) is also dropped
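Both gaps stem from the same design: all state lives in a single contentBuffer. One way to fix the choices[0] limitation is to key accumulated state by choices[i].index; a minimal sketch under that assumption (class and method names are hypothetical):

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical per-choice accumulation: instead of a single contentBuffer,
// keep one buffer per choices[i].index so multi-choice (n > 1) streams
// are reassembled completely.
class ChoiceBuffers {
    private final Map<Integer, StringBuilder> contentByChoice = new TreeMap<>();

    void appendContent(int choiceIndex, String fragment) {
        contentByChoice.computeIfAbsent(choiceIndex, i -> new StringBuilder())
                       .append(fragment);
    }

    String contentFor(int choiceIndex) {
        StringBuilder sb = contentByChoice.get(choiceIndex);
        return sb == null ? "" : sb.toString();
    }

    int choiceCount() {
        return contentByChoice.size();
    }
}
```

The same index-keyed structure would also hold per-choice tool_calls and refusal buffers, so one pass over each chunk's choices array captures everything.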
Braintrust docs status
- Braintrust docs at https://www.braintrust.dev/docs/instrument/trace-llm-calls state "Streaming responses are fully supported — Braintrust automatically collects streamed chunks and logs the complete response as a single span" (status: supported)
- LangChain4j is not mentioned on the Braintrust integrations page (status: not_found)
Upstream sources
- OpenAI streaming API: delta.tool_calls carries tool call data incrementally in streaming mode, using index for merge identification
- LangChain4j: AiServices and @Tool annotations
- LangChain4j: OpenAiStreamingChatModel supports tool_calls in streaming responses
Local files inspected
- braintrust-sdk/instrumentation/langchain_1_8_0/src/main/java/dev/braintrust/instrumentation/langchain/v1_8_0/WrappedHttpClient.java — lines 183–208 (accumulateChunk: only delta.content accumulated; delta.tool_calls ignored), lines 210–234 (finalizeSpan: message has only role and content; no tool_calls field)
- braintrust-sdk/instrumentation/langchain_1_8_0/src/main/java/dev/braintrust/instrumentation/langchain/v1_8_0/TracingToolExecutor.java — separate tool execution tracing (works correctly, but is downstream of the broken LLM span)
- braintrust-sdk/instrumentation/openai_2_8_0/src/main/java/dev/braintrust/instrumentation/openai/v2_8_0/TracingHttpClient.java — direct SDK path uses ChatCompletionAccumulator, which handles tool_calls correctly
- braintrust-sdk/instrumentation/langchain_1_8_0/src/test/java/dev/braintrust/instrumentation/langchain/v1_8_0/BraintrustLangchainTest.java — no streaming test exercises tool_calls responses