fix: guard None text in text_message_output and add output guardrail count to RunErrorDetails #3375
Merged
seratch merged 1 commit on May 12, 2026
Conversation
…ardrail count to RunErrorDetails pretty-print

- `ItemHelpers.text_message_output`: apply the same `or ""` guard that `extract_text` already uses. Provider gateways (e.g. LiteLLM) and `model_construct` paths during streaming can surface `None` for `ResponseOutputText.text`; without the guard the concatenation raises `TypeError`.
- `pretty_print_run_error_details`: add the missing `output_guardrail_results` line so `RunErrorDetails.__str__` is consistent with `pretty_print_result` and `pretty_print_run_result_streaming`, which both report both guardrail counts.
- Add `tests/utils/test_pretty_print_and_items.py` covering both fixes.
- Update the existing inline snapshot in `tests/test_pretty_print.py`.
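The `or ""` guard described above can be sketched as follows. This is a minimal, self-contained stand-in, not the SDK's code: `OutputText` here is a hypothetical simplification of `ResponseOutputText`, and the real `text_message_output` lives on `ItemHelpers` in `src/agents/items.py`.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class OutputText:
    # Stand-in for ResponseOutputText: the schema types `text` as str, but
    # provider gateways (e.g. LiteLLM) can surface None during streaming.
    text: str | None
    type: str = "output_text"


def text_message_output(parts: list[OutputText]) -> str:
    # `part.text or ""` coalesces None before concatenation, so a None part
    # no longer raises TypeError when joined with its siblings.
    return "".join(part.text or "" for part in parts if part.type == "output_text")
```

With the guard in place, a `None` part simply contributes nothing to the joined output instead of crashing the whole concatenation.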
seratch approved these changes on May 12, 2026
ioleksiuk added a commit to ioleksiuk/openai-agents-python that referenced this pull request on May 14, 2026
…_text

Extends the fix in openai#3375 (`fix: guard None text in text_message_output ...`) to the two sibling helpers in `ItemHelpers` that had the same shape.

`ResponseOutputText.text` is typed as `str` per the Responses API schema, but `items.py:714-720` already documents that provider gateways (e.g. LiteLLM) and `model_construct` paths during streaming surface `None` values. PR openai#3375 fixed `text_message_output` for this case, but `extract_last_content` (declared `-> str`, line 678) still returned `None` when the underlying `text` was `None`, silently violating its type contract. `extract_last_text` (declared `-> str | None`) returned `None` only by virtue of the falsy passthrough; make that explicit so callers can rely on truthiness semantics.

Repro for `extract_last_content`:

    >>> text_part = ResponseOutputText.model_construct(
    ...     text=None, type="output_text", annotations=[])
    >>> msg = ResponseOutputMessage(id="m", role="assistant",
    ...     status="completed", type="message", content=[text_part])
    >>> ItemHelpers.extract_last_content(msg)
    None  # but the function is typed `-> str`
ioleksiuk added a commit to ioleksiuk/openai-agents-python that referenced this pull request on May 14, 2026
Extends the fix in openai#3375 (`fix: guard None text in text_message_output ...`) to one more sibling helper that had the same `-> str` type contract.

`ResponseOutputText.text` is typed as `str` per the Responses API schema, but `src/agents/items.py:714-720` already documents that provider gateways (e.g. LiteLLM) and `model_construct` paths during streaming surface `None` values. PR openai#3375 fixed `text_message_output` for this case. `extract_last_content` (declared `-> str`, items.py:678) silently violated its type contract by returning `None` when `last_content.text` was `None`.

Repro:

    >>> text_part = ResponseOutputText.model_construct(
    ...     text=None, type="output_text", annotations=[])
    >>> msg = ResponseOutputMessage.model_construct(
    ...     id="m", role="assistant", status="completed",
    ...     type="message", content=[text_part])
    >>> ItemHelpers.extract_last_content(msg)
    None  # but the function is typed `-> str`

Fix mirrors the openai#3375 pattern: `return last_content.text or ""`.

Note: `extract_last_text` (declared `-> str | None`) is intentionally left alone: `None` is already a valid return per its signature, and coalescing `""` → `None` would silently change semantics for tools that legitimately return empty text.
Summary
Two small, independent defects in the same area — both are cases where a sibling code path was already hardened but the fix was not applied consistently.
1. `ItemHelpers.text_message_output` crashes on `None` text

   `extract_text` (line 706) already applies `content_item.text or ""` and carries a comment explaining why: provider gateways (e.g. LiteLLM) and `model_construct` paths during streaming can surface `None` for `ResponseOutputText.text`. The adjacent `text_message_output` (line 762) was missing the same guard, so the same `TypeError: can only concatenate str (not "NoneType") to str` could still be triggered through `text_message_outputs` and the `agent.py` call site.

   Fix: apply `item.text or ""` in `text_message_output`, matching `extract_text`.

2. `pretty_print_run_error_details` omits `output_guardrail_results`

   `pretty_print_result` and `pretty_print_run_result_streaming` both report both guardrail counts. `pretty_print_run_error_details` only reported `input_guardrail_results`, so `RunErrorDetails.__str__()` silently dropped the output guardrail line.

   Fix: add the missing `output_guardrail_results` line.

Test plan

- Added `tests/utils/test_pretty_print_and_items.py` with 5 focused tests covering both fixes (including the `None`-text regression and the guardrail count assertion).
- Updated the existing inline snapshot in `tests/test_pretty_print.py` to include the new output guardrail line.
- `make format && make lint && make typecheck && make tests`: all pass.

Issue number
N/A (no prior issue filed for these two defects)
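The second fix in the summary above can be sketched as follows. This is a minimal stand-in with an assumed structure: the real `pretty_print_run_error_details` formats a full `RunErrorDetails` object with more fields, and the exact line format here is illustrative, not the SDK's.

```python
def pretty_print_run_error_details(input_results: list, output_results: list) -> str:
    # Report BOTH guardrail counts, matching pretty_print_result and
    # pretty_print_run_result_streaming; the output line was the one
    # missing from RunErrorDetails.__str__ before the fix.
    return "\n".join([
        "RunErrorDetails:",
        f"- {len(input_results)} input guardrail result(s)",
        f"- {len(output_results)} output guardrail result(s)",  # previously missing
    ])
```

The point of the fix is symmetry: all three pretty-printers now surface the same pair of guardrail counts, so a run that tripped an output guardrail is no longer invisible in the error string.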
Checks
`make lint` and `make format`