Please read this first
- Have you read the docs? Yes. The reusable prompt docs describe the
prompt parameter for the Responses API.
- Have you searched for related issues? Yes.
Describe the bug
OpenAIChatCompletionsModel.get_response() and stream_response() accept a Responses reusable prompt, pass it through to _fetch_response(), and then silently drop it when building chat.completions.create() kwargs.
This makes prompt-managed configuration look accepted even though the Chat Completions backend never sends it to the provider. OpenAIResponsesModel does include prompt in its request path, so this is specific to the Chat Completions backend.
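For context, this is roughly how the bug surfaces through the public Agent API (a hedged sketch: the agent name and prompt id are placeholders, and it assumes a configured AsyncOpenAI client with valid credentials):

import asyncio

from openai import AsyncOpenAI
from agents import Agent, OpenAIChatCompletionsModel, Runner

async def main() -> None:
    agent = Agent(
        name="example",
        model=OpenAIChatCompletionsModel(model="gpt-4", openai_client=AsyncOpenAI()),
        # Accepted without complaint, but never included in the
        # chat.completions.create() request this backend builds.
        prompt={"id": "pmpt_123"},
    )
    result = await Runner.run(agent, "hello")
    print(result.final_output)

asyncio.run(main())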
Related paths checked:
- prompt=None is the normal path and should stay unchanged.
- OpenAIResponsesModel already sends prompt explicitly.
Debug information
- Agents SDK version: main at 683b6e79 (latest release tag checked locally: v0.17.0)
- Python version: Python 3.12.1
Repro steps
Run this script against current main:
import asyncio
from typing import Any

import httpx
from openai.types.chat.chat_completion import ChatCompletion, Choice
from openai.types.chat.chat_completion_message import ChatCompletionMessage

from agents import ModelSettings, ModelTracing, OpenAIChatCompletionsModel


class DummyCompletions:
    """Stub for client.chat.completions that records the kwargs it receives."""

    def __init__(self) -> None:
        self.kwargs: dict[str, Any] = {}

    async def create(self, **kwargs: Any) -> ChatCompletion:
        self.kwargs = kwargs
        return ChatCompletion(
            id="resp-id",
            created=0,
            model="fake",
            object="chat.completion",
            choices=[
                Choice(
                    index=0,
                    finish_reason="stop",
                    message=ChatCompletionMessage(role="assistant", content="ok"),
                )
            ],
        )


class DummyClient:
    def __init__(self, completions: DummyCompletions) -> None:
        self.chat = type("_Chat", (), {"completions": completions})()
        self.base_url = httpx.URL("https://api.openai.com/v1/")


async def main() -> None:
    completions = DummyCompletions()
    model = OpenAIChatCompletionsModel(
        model="gpt-4",
        openai_client=DummyClient(completions),  # type: ignore[arg-type]
    )
    await model.get_response(
        system_instructions=None,
        input="hello",
        model_settings=ModelSettings(),
        tools=[],
        output_schema=None,
        handoffs=[],
        tracing=ModelTracing.DISABLED,
        previous_response_id=None,
        conversation_id=None,
        prompt={"id": "pmpt_123"},  # type: ignore[arg-type]
    )
    # Both checks should surface the reusable prompt; with the current code they don't.
    print("prompt" in completions.kwargs)
    print([key for key in completions.kwargs if "prompt" in key])


asyncio.run(main())
Actual output:
False
[]
Expected behavior
The Chat Completions backend should not silently accept and drop a reusable prompt. Since reusable prompts are a Responses API feature, Chat Completions should fail fast with a clear UserError, or, if partial support is intended, explicitly map and send the supported prompt fields.
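If the fail-fast route is chosen, here is a minimal sketch of the kind of guard that could sit at the top of _fetch_response (hypothetical code, not the actual implementation; _reject_reusable_prompt is an illustrative helper name):

from agents.exceptions import UserError

def _reject_reusable_prompt(prompt: object | None) -> None:
    # Hypothetical check: fail fast instead of silently dropping the prompt.
    if prompt is not None:
        raise UserError(
            "Reusable prompts are only supported by the Responses API; "
            "use OpenAIResponsesModel instead of OpenAIChatCompletionsModel."
        )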