max_output_tokens not supported by responses api #103

@himalalps

Description

Thanks for the Responses API update.

I encountered an issue when using the Responses API with max_output_tokens. After checking the source code, it looks like there is no special handling for this parameter, and forwarding it causes an upstream error.

Does this mean max_output_tokens is currently unsupported on the OpenAI side for the Codex subscription?

If that's expected, it may be helpful to document this explicitly, or to ignore/filter the parameter before forwarding the request upstream.

$ curl http://localhost:8000/v1/responses \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.4-mini",
    "input": "你好,介绍一下你自己",
    "max_output_tokens": 512
  }'
{"detail":"Unsupported parameter: max_output_tokens"}
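The ignore/filter approach suggested above could look something like the following minimal sketch. All names here (`UNSUPPORTED_PARAMS`, `filter_payload`) are illustrative assumptions, not identifiers from this project; the idea is simply to drop parameters the upstream API rejects before forwarding the body.

```python
# Hypothetical sketch: strip parameters the upstream Responses API rejects
# before forwarding the request. Names are illustrative only.
UNSUPPORTED_PARAMS = {"max_output_tokens"}

def filter_payload(payload: dict) -> dict:
    """Return a copy of the request body without unsupported parameters."""
    return {k: v for k, v in payload.items() if k not in UNSUPPORTED_PARAMS}

body = {
    "model": "gpt-5.4-mini",
    "input": "你好,介绍一下你自己",
    "max_output_tokens": 512,
}
forwarded = filter_payload(body)
print(forwarded)  # body with max_output_tokens removed
```

Silently dropping the parameter versus returning an explicit error is a design choice; either way, documenting the behavior would avoid surprises for callers.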

Metadata

Assignees

No one assigned

    Labels

    enhancement (New feature or request)

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests