Add a quick opt-in option to switch to gpt-5 and fix issues around gpt-5 #363
Conversation
🦋 Changeset detected
Latest commit: 04a1b30
The changes in this PR will be included in the next version bump. This PR includes changesets to release 5 packages.
comments for reviewers
'@openai/agents-openai': patch
---

Fix a bug where the Responses API does not accept both the outputType and verbosity parameters for gpt-5
As mentioned in this PR's description, this is part of #362, so that PR should be merged first.
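The outputType/verbosity fix above can be sketched as follows. This is an illustrative sketch, not the SDK's actual code: the Responses API takes both the structured-output format and verbosity under a single `text` object, so the request builder has to merge the two rather than let one overwrite the other. The `buildTextParam` name and types are hypothetical.

```typescript
type TextParam = {
  format?: { type: string };
  verbosity?: 'low' | 'medium' | 'high';
};

// Hypothetical helper: merge both fields under the single `text` object the
// API expects, instead of letting one field overwrite the other.
function buildTextParam(
  format?: { type: string },
  verbosity?: 'low' | 'medium' | 'high',
): TextParam {
  return {
    ...(format ? { format } : {}),
    ...(verbosity ? { verbosity } : {}),
  };
}
```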
name: 'MCP Assistant',
instructions: 'You must always use the MCP tools to answer questions.',
instructions:
  'You must always use the MCP tools to answer questions. The mcp server knows which repo to investigate, so you do not need to ask the user about it.',
gpt-5 models tend to be confused by this MCP server's tool rules (the repo name is embedded in the MCP server URL), so I added additional instructions. This works for both gpt-4 and gpt-5.
@@ -1,49 +0,0 @@
import { Agent, run, MCPServerStreamableHttp, withTrace } from '@openai/agents';
This example code no longer works, so I deleted it. We may add a working example again later, but since this one just demonstrates MCP hooks, it's not a priority.
name: 'File searcher',
instructions: 'You are a helpful agent.',
tools: [
  fileSearchTool(['vs_67bf88953f748191be42b462090e53e7'], {
Replaced a specific vector store id.
this.toolUseBehavior = config.toolUseBehavior ?? 'run_llm_again';
this.resetToolChoice = config.resetToolChoice ?? true;

if (
I imagine reviewers may think "Is this necessary?", but it's necessary for a smoother developer experience. See also: openai/openai-agents-python#1534
}
}

/**
Same as above; without this logic, using a non-gpt-5 model on the Runner side fails when OPENAI_DEFAULT_MODEL=gpt-5. The SDK can go further here instead of asking developers to be careful.
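The compatibility logic described here can be sketched roughly as below. This is a hedged illustration under assumed names (`ModelSettings`, `compatibleSettings` are hypothetical, not the SDK's real identifiers): when the resolved model is not a gpt-5 model, gpt-5-only settings such as reasoning effort and text verbosity are dropped so the request doesn't fail.

```typescript
type ModelSettings = {
  reasoning?: { effort: 'minimal' | 'low' | 'medium' | 'high' };
  text?: { verbosity: 'low' | 'medium' | 'high' };
  temperature?: number;
};

// Hypothetical helper: keep gpt-5-only settings only when the resolved
// model is actually a gpt-5 model; otherwise strip them rather than let
// the API reject the request.
function compatibleSettings(model: string, settings: ModelSettings): ModelSettings {
  if (model.startsWith('gpt-5')) return settings;
  const { reasoning, text, ...rest } = settings; // drop gpt-5-only fields
  return rest;
}
```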
// if there is no output we just run again
if (!potentialFinalOutput) {
if (typeof potentialFinalOutput === 'undefined') {
Changed this because GPT-5 models return empty message output text when returning image generation tool output. Checking whether the output is undefined is what's actually intended here.
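The falsy-vs-undefined distinction in this diff is easy to illustrate in isolation (the `shouldRunAgain` name here is made up for the sketch): an empty string is falsy but is still produced output, so only a truly missing output should trigger another model run.

```typescript
// Illustrative sketch of the check change: the old `!potentialFinalOutput`
// would also re-run on '' (the empty message text gpt-5 returns alongside
// image generation tool output), while this only re-runs on a missing output.
function shouldRunAgain(potentialFinalOutput: string | undefined): boolean {
  return typeof potentialFinalOutput === 'undefined';
}
```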
9fd5206 to 04a1b30 (compare)
This pull request is based on #362 and is still a work in progress.