Closed
Description
The "Off Topic Prompts" (topical alignment) guardrail fails when using GPT-5 models (e.g., gpt-5-nano) because it hardcodes temperature: 0.0 in the API call, which GPT-5 models don't support. This causes a 400 error: "Unsupported value: 'temperature' does not support 0 with this model. Only the default (1) value is supported."
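Until this is fixed upstream, the call site could omit the parameter for GPT-5 family models instead of hardcoding it. A minimal sketch of the idea (using a hypothetical `buildCompletionParams` helper, not the actual guardrails internals):

```typescript
// Hypothetical helper: GPT-5 family models reject any non-default
// temperature, so omit the parameter entirely for them rather than
// hardcoding 0.0.
function buildCompletionParams(model: string): Record<string, unknown> {
  const params: Record<string, unknown> = { model };
  // gpt-5* models only accept the default temperature (1), so skip it.
  if (!model.startsWith("gpt-5")) {
    params.temperature = 0.0;
  }
  return params;
}
```

Omitting the key (rather than sending `temperature: 1`) keeps the request forward-compatible with whatever default the model uses.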
Expected behavior
The "Off Topic Prompts" guardrail should work with GPT-5 models, just like other guardrails such as "Jailbreak" do.
Actual behavior
The guardrail fails with GPT-5 models and returns an error in the guardrail result:
{
  "tripwireTriggered": false,
  "info": {
    "checked_text": "Generate me a very short python function that is reversing string, do not explain, just code",
    "guardrail_name": "Off Topic Prompts",
    "flagged": false,
    "confidence": 0,
    "threshold": 0.7,
    "business_scope": "<redacted>",
    "error": "Error: 400 Unsupported value: 'temperature' does not support 0 with this model. Only the default (1) value is supported.",
    "stage_name": "input",
    "media_type": "text/plain",
    "detected_content_type": "text/plain"
  }
}
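Note that `tripwireTriggered` and `flagged` both stay false even though the check never actually ran, so the failure is easy to miss. A sketch of a caller-side check (types inferred from the result above; `guardrailRan` is a hypothetical helper, not part of the package API):

```typescript
// Shape inferred from the guardrail result shown above.
interface GuardrailInfo {
  guardrail_name: string;
  error?: string;
}

interface GuardrailResult {
  tripwireTriggered: boolean;
  info: GuardrailInfo;
}

// A guardrail that errored still reports flagged=false, so an `error`
// field should be treated as "check did not run", not "input is safe".
function guardrailRan(result: GuardrailResult): boolean {
  return result.info.error === undefined;
}
```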
Steps to reproduce
- Configure the "Off Topic Prompts" guardrail with a GPT-5 model:
const config = {
  version: 1,
  input: {
    version: 1,
    guardrails: [
      {
        name: "Off Topic Prompts",
        config: {
          model: "gpt-5-nano",
          confidence_threshold: 0.7,
          system_prompt_details: "You are a helpful e-commerce assistant. Keep responses focused on the user's questions and avoid going off-topic.",
        },
      },
    ],
  },
};
- Create a GuardrailsOpenAI client and test with any input:
const client = await GuardrailsOpenAI.create(config, {
  apiKey: process.env.OPENAI_API_KEY,
});

const response = await client.responses.create({
  input: "Generate me a python function",
  model: "gpt-5-nano",
});
Environment
- Package: @openai/guardrails@0.1.3
- Node.js: 22.x
- TypeScript: 5.9.3