fix: reject non-object function tool input JSON #3166

Merged
seratch merged 1 commit into openai:main from ioleksiuk:fix/reject-non-object-function-tool-json on May 7, 2026
Conversation

@ioleksiuk
Contributor

Summary

Mirror the MCP fix in #3135 for the function-tool path. When the model emits a JSON array or scalar (e.g. `[1, 2, 3]`, `"foo"`, `null`) instead of an object as tool arguments, `**`-unpacking into the pydantic params model raises a raw `TypeError`. That `TypeError` is not matched by `_extract_tool_argument_json_error` (which only matches a `ModelBehaviorError` whose message starts with "Invalid JSON input for tool"), so the `default_tool_error_function` "please try again with valid JSON" retry path is bypassed and the run fails with a confusing error instead of asking the model to fix its arguments.
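The failure mode can be reproduced with plain Python: `**`-unpacking any non-mapping raises a raw `TypeError` before any tool-argument error handling can run. The constructor below is a hypothetical stand-in for the generated pydantic params model, just to illustrate the shape of the error:

```python
import json

def params_model(**kwargs):
    # Hypothetical stand-in for the generated pydantic params model constructor.
    return kwargs

parsed = json.loads("[1, 2, 3]")  # model emitted a JSON array instead of an object
try:
    params_model(**parsed)
    error = None
except TypeError as exc:
    # Raw TypeError ("argument after ** must be a mapping"), not a
    # ModelBehaviorError, so the retry path never sees it.
    error = exc
```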

This change validates the parsed JSON shape inside `_parse_function_tool_json_input` and raises a clean `ModelBehaviorError` for non-object input, matching the MCP precedent. It affects both the standard runtime tool path and the realtime tool path, since both reach the same parser.
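A minimal sketch of the added check (assumed shape — the real `_parse_function_tool_json_input` helper and the library's `ModelBehaviorError` may differ in signature and message details):

```python
import json

class ModelBehaviorError(Exception):
    """Simplified stand-in for the SDK's ModelBehaviorError."""

def parse_function_tool_json_input(tool_name: str, input_json: str) -> dict:
    # Empty input is treated as an empty argument object.
    try:
        parsed = json.loads(input_json) if input_json else {}
    except json.JSONDecodeError as exc:
        raise ModelBehaviorError(
            f"Invalid JSON input for tool {tool_name}: {input_json}"
        ) from exc
    # Arrays, strings, numbers, null, and bools would otherwise raise a raw
    # TypeError during **-unpacking; reject them with a matchable message.
    if not isinstance(parsed, dict):
        raise ModelBehaviorError(
            f"Invalid JSON input for tool {tool_name}: "
            f"expected a JSON object, got {type(parsed).__name__}"
        )
    return parsed
```

Because the message starts with "Invalid JSON input for tool", it is picked up by `_extract_tool_argument_json_error` and the model is asked to retry with valid JSON.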

Test plan

  • New parametrized test test_function_tool_rejects_non_object_json_input covering JSON arrays, strings, numbers, null, and bool — each must raise ModelBehaviorError with "expected a JSON object".
  • uv run pytest tests/test_function_tool.py tests/test_run_step_execution.py tests/test_function_tool_decorator.py tests/test_agent_as_tool.py — 191 passed.
  • uv run ruff check src/agents/tool.py tests/test_function_tool.py — All checks passed.

Issue number

N/A — found while reviewing function-tool error handling consistency with #3135.

Checks

  • I've added new tests (if relevant)
  • I've added/updated the relevant documentation (no doc changes needed)
  • I've run make lint and make format
  • I've made sure tests pass

Mirror the MCP fix in openai#3135 for the function-tool path: when the model
emits a JSON array or scalar instead of an object, `**unpack` into the
pydantic params model raises a TypeError that is not matched by
`_extract_tool_argument_json_error`, so the ModelBehaviorError-based
retry path is bypassed. Validate the parsed shape and raise a
ModelBehaviorError with a clear message instead.
@github-actions bot added the bug (Something isn't working) and feature:core labels on May 7, 2026
Member

@seratch seratch left a comment


LGTM; we can merge once Codex says 👍

@seratch seratch added this to the 0.16.x milestone May 7, 2026
@seratch
Member

seratch commented May 7, 2026

@codex review now

@seratch
Member

seratch commented May 7, 2026

@codex review

@chatgpt-codex-connector

Codex Review: Didn't find any major issues. Already looking forward to the next diff.


@seratch seratch merged commit 3a11cf5 into openai:main May 7, 2026
10 checks passed