Summary
OpenAI chat completion spans can log omitted request parameters as raw OpenAI SDK sentinel objects, e.g.:
tools: <openai.Omit object at 0x1086797f0>
This happened on an OpenAI Chat Completion span from a LiveKit Agents integration call.
Observed behavior
The span metadata included:
{
"provider": "openai",
"model": "openai/gpt-5.2-chat-latest",
"stream": true,
"stream_options": {
"include_usage": true
},
"tools": "<openai.Omit object at 0x1086797f0>",
"timeout": "Timeout(timeout=10.0)"
}
Expected behavior
OpenAI SDK sentinel values that mean “omit this parameter” should not be logged into Braintrust span metadata. In this case, tools should likely be absent from metadata rather than stringified as an object repr.
Likely cause
braintrust.integrations.openai.tracing.ChatCompletionWrapper._parse_params() copies request kwargs into metadata via prettify_params(), which currently delegates to _prettify_response_params(..., drop_not_given=True).
The shared helper filters the OpenAI SDK's NotGiven sentinel by checking for the type name NotGiven, but it does not appear to filter the SDK's newer/alternate Omit sentinel. LiveKit appears to pass tools=openai.Omit() for omitted tools, and that object leaks into span metadata.
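Under that assumption, the leak can be sketched as follows (these are simplified stand-ins for illustration, not the actual Braintrust or OpenAI SDK code):

```python
class NotGiven:
    """Stand-in for the OpenAI SDK's NotGiven sentinel."""

class Omit:
    """Stand-in for the OpenAI SDK's newer Omit sentinel."""

def drop_not_given_params(params):
    # Mirrors the assumed current filter: only the "NotGiven" type
    # name is recognized, so Omit instances pass straight through.
    return {k: v for k, v in params.items()
            if type(v).__name__ != "NotGiven"}

cleaned = drop_not_given_params({"stream": True, "tools": Omit()})
# "tools" survives here as an Omit instance, and downstream
# stringification turns it into an object repr in span metadata.
```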
Relevant code paths:
- py/src/braintrust/integrations/openai/tracing.py
- py/src/braintrust/integrations/utils.py (_is_not_given, _prettify_response_params)
- Existing regression coverage around NOT_GIVEN: py/src/braintrust/integrations/openai/test_openai.py
Suggested fix
Treat OpenAI Omit sentinel values like NotGiven when drop_not_given=True, and add regression coverage for chat completions with tools=openai.Omit().
A conservative implementation could update the sentinel detection helper to recognize both type names:
type(value).__name__ in {"NotGiven", "Omit"}
or otherwise detect the OpenAI SDK sentinel without adding a hard dependency/import.
Impact
This pollutes Braintrust traces with implementation-detail object reprs and can confuse users into thinking tools were configured or sent when the provider SDK actually omitted the parameter.