

@ipince ipince commented Nov 13, 2025

Summary

Sometimes the model does return an empty reasoning block... Let's let it through instead of failing to decode.

Example:
https://platform.openai.com/logs/resp_02474882b463981300691523a7eb9481938b7823951cfe62ab

I'll add the raw response in a comment on this PR.
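
To illustrate the intent (this is a sketch, not the actual diff; the type and function names are made up for the example): the decoder should treat a reasoning item with an empty summary as a valid, empty reasoning block instead of returning a decode error.

```go
package main

import (
	"fmt"
	"strings"
)

// Illustrative types only; the project's real types differ.
type summaryPart struct{ Text string }

type outputItem struct {
	Type    string
	Summary []summaryPart
}

type reasoningBlock struct{ Text string }

// decodeReasoning turns a "reasoning" output item into a reasoning block.
// An empty summary is treated as a valid (empty) block rather than an error.
func decodeReasoning(item outputItem) (reasoningBlock, error) {
	if item.Type != "reasoning" {
		return reasoningBlock{}, fmt.Errorf("expected reasoning item, got %q", item.Type)
	}
	var parts []string
	for _, s := range item.Summary {
		if s.Text != "" {
			parts = append(parts, s.Text)
		}
	}
	return reasoningBlock{Text: strings.Join(parts, "\n")}, nil
}

func main() {
	// Mirrors the raw response in the comment below: a reasoning item with an empty summary.
	b, err := decodeReasoning(outputItem{Type: "reasoning", Summary: []summaryPart{}})
	fmt.Printf("%+v %v\n", b, err) // prints an empty block and a nil error
}
```

Running this against the empty reasoning item from the raw response below yields an empty block and a nil error, so the rest of the output (the computer_call item) still gets processed.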

How was it tested?

Ran testpilot locally.

Community Contribution License

All community contributions in this pull request are licensed to the project maintainers under the terms of the Apache 2 License.

By creating this pull request I represent that I have the right to license the contributions to the project maintainers under the Apache 2 License as stated in the Community Contribution License.

@ipince ipince requested a review from loreto November 13, 2025 00:31

ipince commented Nov 13, 2025

Raw response:

{
  "id": "resp_02474882b463981300691523a7eb9481938b7823951cfe62ab",
  "created_at": 1762993064,
  "error": {
    "code": "",
    "message": ""
  },
  "incomplete_details": {
    "reason": ""
  },
  "instructions": {
    "OfString": "",
    "OfInputItemList": null
  },
  "metadata": {},
  "model": "computer-use-preview-2025-03-11",
  "object": "response",
  "output": [
    {
      "id": "rs_02474882b463981300691523ab77188193a1bb609492ec22f8",
      "content": null,
      "role": "assistant",
      "status": "",
      "type": "reasoning",
      "queries": null,
      "results": null,
      "arguments": "",
      "call_id": "",
      "name": "",
      "action": {
        "query": "",
        "type": "",
        "url": "",
        "pattern": "",
        "button": "",
        "x": 0,
        "y": 0,
        "path": null,
        "keys": null,
        "scroll_x": 0,
        "scroll_y": 0,
        "text": "",
        "command": null,
        "env": null,
        "timeout_ms": 0,
        "user": "",
        "working_directory": ""
      },
      "pending_safety_checks": null,
      "summary": [],
      "encrypted_content": "",
      "result": "",
      "code": "",
      "container_id": "",
      "outputs": null,
      "server_label": "",
      "error": "",
      "output": "",
      "tools": null,
      "input": ""
    },
    {
      "id": "cu_02474882b463981300691523ad44908193832a08747e8a0e82",
      "content": null,
      "role": "assistant",
      "status": "completed",
      "type": "computer_call",
      "queries": null,
      "results": null,
      "arguments": "",
      "call_id": "call_C5m1lkP4bdOkEqEmXdTi2tJY",
      "name": "",
      "action": {
        "query": "",
        "type": "click",
        "url": "",
        "pattern": "",
        "button": "left",
        "x": 632,
        "y": 166,
        "path": null,
        "keys": null,
        "scroll_x": 0,
        "scroll_y": 0,
        "text": "",
        "command": null,
        "env": null,
        "timeout_ms": 0,
        "user": "",
        "working_directory": ""
      },
      "pending_safety_checks": [],
      "summary": null,
      "encrypted_content": "",
      "result": "",
      "code": "",
      "container_id": "",
      "outputs": null,
      "server_label": "",
      "error": "",
      "output": "",
      "tools": null,
      "input": ""
    }
  ],
  "reasoning": {
    "effort": "medium",
    "generate_summary": "",
    "summary": ""
  },
(rest omitted for brevity)


@loreto loreto left a comment


LGTM

Although I'm debating whether it would be better to skip the reasoning block entirely. I don't know whether it's better to be 100% accurate about what the underlying provider returns, or to clean things up and make the output easier to use for the upstream user. It would suck to try to render these blocks when they're all empty.

There's also a world in which we leave the providers as accurate as possible, but then the AI SDK framework itself cleans things up before handing back to the user.
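
If we went that route, a hypothetical framework-side cleanup could look something like this sketch (illustrative names only, not an existing API in this repo): drop empty reasoning blocks before handing them to the user, and surface a warning so nothing is silently lost, while the provider layer stays faithful to the raw response.

```go
// Illustrative sketch; block is a stand-in for the framework's content type.
type block struct {
	Kind string // e.g. "reasoning", "text", "tool_call"
	Text string
}

// dropEmptyReasoning removes reasoning blocks with no text and returns a
// warning for each one dropped, leaving the rest of the output untouched.
func dropEmptyReasoning(blocks []block) ([]block, []string) {
	var kept []block
	var warnings []string
	for _, b := range blocks {
		if b.Kind == "reasoning" && b.Text == "" {
			warnings = append(warnings, "dropped empty reasoning block")
			continue
		}
		kept = append(kept, b)
	}
	return kept, warnings
}
```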


ipince commented Nov 13, 2025

Yeah... I think removing them and adding a warning would be a legitimate approach too. But then I wonder whether the LLM would ever return a single block that's empty... that would never happen, right? right? lol


ipince commented Nov 13, 2025

I'll leave it as-is for now and revisit later if needed.

@ipince ipince merged commit b6fb7db into main Nov 13, 2025
13 checks passed
@ipince ipince deleted the rodrigo/decode-reasoning branch November 13, 2025 20:53