fix(openai-agents): Patch models functions following library refactor #5449
Conversation
Semver Impact of This PR: 🟢 Patch (bug fixes)

📋 Changelog Preview — this is how your changes will appear in the changelog.

Bug Fixes 🐛
- Openai Agents

Other
Documentation 📚
Internal Changes 🔧

🤖 This preview updates automatically when you update the PR.
Codecov Results 📊

✅ 56 passed | Total: 56 | Pass Rate: 100% | Execution Time: 7.67s

📊 Comparison with Base Branch
All tests are passing successfully. ❌ Patch coverage is 1.33%. Project has 12820 uncovered lines. Files with missing lines: 181.

Coverage diff:

@@ Coverage Diff @@
##             main     #PR    +/-  ##
========================================
+ Coverage   25.75%  35.11%  +9.36%
========================================
  Files         189     189      —
  Lines       19765   19758     -7
  Branches     6408    6408      —
========================================
+ Hits         5090    6938  +1848
- Misses      14675   12820  -1855
- Partials      431     600   +169

Generated by Codecov Action
ericapisani left a comment
Small things, but otherwise LGTM 🚀
    @wraps(original_fetch_response)
    async def wrapped_fetch_response(*args: "Any", **kwargs: "Any") -> "Any":
        response = await original_fetch_response(*args, **kwargs)
        if hasattr(response, "model") and response.model:
Nitpick: an alternative approach that would make this conditional more concise (by removing the and part of the conditional) is the following:
    -        if hasattr(response, "model") and response.model:
    +        if getattr(response, "model", None):
This would also make things consistent with what you have on line 119 within the wrapped_get_response function.
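For illustration, a minimal sketch of the getattr-based variant — not the code in this PR; record_model_on_span is a hypothetical stand-in for the span-attribute logic in the real wrapper:

```python
from functools import wraps
from typing import Any


def record_model_on_span(model: str) -> None:
    # Hypothetical stand-in for the span-attribute logic in the actual patch.
    print(f"captured model: {model}")


def patch_fetch_response(original_fetch_response):
    @wraps(original_fetch_response)
    async def wrapped_fetch_response(*args: Any, **kwargs: Any) -> Any:
        response = await original_fetch_response(*args, **kwargs)
        # getattr with a default collapses the hasattr check and the
        # truthiness check into a single expression.
        model = getattr(response, "model", None)
        if model:
            record_model_on_span(model)
        return response

    return wrapped_fetch_response
```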
We need to rethink this response model business because we probably shouldn't be setting response models on agent spans either.
This was put in before I knew as much about agents, but there are libraries that let you vary the request model during agent execution as well (in particular Claude Code wrapped by the Claude Agent SDK).
Created an issue to track this: #5458
(and the plan would be to clean up as part of that issue).
    # Uses explicit try/finally instead of context manager to ensure cleanup
    # even if the consumer abandons the stream (GeneratorExit).
This comment looks like it needs to be updated or removed as there's no try/finally below and the code uses a context manager 😅
You're completely right, and it doesn't look like we catch GeneratorExit anywhere for that matter 😅.
I'll keep this commit atomic since we're moving the contents of wrapped_get_model() to _get_model() as-is for now, and follow up with a PR after this first train is merged 🚅
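For reference, a minimal sketch of the try/finally pattern the stale comment described — again not the code in this PR, which uses a context manager; span here is just a hypothetical object with a finish() method:

```python
from typing import Any, AsyncIterator


async def stream_with_cleanup(stream: AsyncIterator[Any], span: Any) -> AsyncIterator[Any]:
    try:
        async for item in stream:
            yield item
    finally:
        # Runs on normal exhaustion, on exceptions, and on GeneratorExit
        # (raised here when the consumer abandons the stream early),
        # so the span is always closed.
        span.finish()
```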
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.
Description
The AgentRunner._get_model() class method has been moved to the function agents.run_internal.turn_preparation.get_model(). Patch the new function by re-using the existing wrapper logic.
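A rough sketch of what patching the relocated function could look like, assuming a synchronous signature; the import path follows the PR description, and the wrapper body is a placeholder for the existing wrapper logic rather than the actual implementation:

```python
from functools import wraps
from typing import Any

from agents.run_internal import turn_preparation


def patch_get_model() -> None:
    original_get_model = turn_preparation.get_model

    @wraps(original_get_model)
    def wrapped_get_model(*args: Any, **kwargs: Any) -> Any:
        model = original_get_model(*args, **kwargs)
        # Placeholder: the real patch re-uses the existing wrapper logic here
        # (e.g. recording the resolved model on the current span).
        return model

    turn_preparation.get_model = wrapped_get_model
```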
Issues
Reminders
- Run tox -e linters before pushing.
- Prefix the PR title with one of: feat:, fix:, ref:, meta: