Improve instructions/tool handling on assistant calls #877
Conversation
When we started passing instructions/tools on every call, the code interpreter wasn't cast to an API type correctly, resulting in an error. This should fix it.
```python
run_kwargs["instructions"] = instructions

if tools := self._get_tools():
    run_kwargs["tools"] = [t.model_dump(mode="json") for t in tools]
```
This is the bugfix: we don't let the openai library do the JSON casting, we do it ourselves.
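A minimal sketch of the idea behind the fix, using a dataclass as a hypothetical stand-in for the pydantic tool model (the real code calls `model_dump(mode="json")` on pydantic objects): cast each tool to a plain JSON-compatible dict before handing it to the client library.

```python
import dataclasses
import json

@dataclasses.dataclass
class CodeInterpreterTool:
    # Hypothetical stand-in for the pydantic tool model.
    type: str = "code_interpreter"

def tools_as_json(tools):
    # Do the JSON casting ourselves instead of passing rich
    # objects for the client library to serialize.
    return [dataclasses.asdict(t) for t in tools]

run_kwargs = {}
if tools := [CodeInterpreterTool()]:
    run_kwargs["tools"] = tools_as_json(tools)

print(json.dumps(run_kwargs))  # {"tools": [{"type": "code_interpreter"}]}
```

Passing plain dicts sidesteps any mismatch between the library's expected API types and the richer objects used internally.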
```python
run_kwargs["model"] = self._get_model()

def _get_run_kwargs(self, thread: Thread = None, **run_kwargs) -> dict:
    if instructions := self._get_instructions(thread=thread):
        run_kwargs["instructions"] = instructions
```
If there are instructions, we always provide them.
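The shape of the kwargs-building logic can be sketched as a standalone function (hypothetical names; the real method takes `self` and a `thread`): instructions are included whenever they exist, and the model is always set.

```python
def get_run_kwargs(get_instructions, get_tools, get_model, **run_kwargs):
    # Include instructions whenever they exist, on every call.
    if instructions := get_instructions():
        run_kwargs["instructions"] = instructions
    # Only add the "tools" key when there are tools to send.
    if tools := get_tools():
        run_kwargs["tools"] = tools
    # The model is always set.
    run_kwargs["model"] = get_model()
    return run_kwargs

kwargs = get_run_kwargs(
    get_instructions=lambda: "You are a helpful assistant.",
    get_tools=lambda: [],  # falsy, so the "tools" key is omitted
    get_model=lambda: "gpt-4",
)
print(kwargs)  # {'instructions': 'You are a helpful assistant.', 'model': 'gpt-4'}
```

The walrus-operator guards mean absent values never produce empty keys in the request payload.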
```python
)
assert run.steps[0].step_details.tool_calls[0].type == "code_interpreter"
output = float(
    run.steps[-2].step_details.tool_calls[0].code_interpreter.outputs[0].logs
)
```
cool!
This fixes two small bugs that were unlikely to be encountered but present:
- Applications, where the instructions contain the latest state object (so they need to be constantly refreshed). In practice, this would only affect users doing manual lifecycle management; with lazy lifecycles, the assistant (and its instructions) are created on every call, so the bug could not surface.
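The lifecycle distinction can be illustrated with a minimal sketch (hypothetical `Assistant` class, not the library's API): instructions that embed the latest state must be regenerated on every call, which is exactly what a lazy lifecycle does for free.

```python
class Assistant:
    # Hypothetical sketch: instructions embed mutable state.
    def __init__(self, state):
        self.state = state

    def get_instructions(self):
        # Regenerated on every call, so the latest state is
        # always reflected; caching at creation time would go stale.
        return f"Current state: {self.state!r}"

assistant = Assistant(state={"step": 1})
first = assistant.get_instructions()

assistant.state["step"] = 2  # state changes between calls
second = assistant.get_instructions()

print(first)   # Current state: {'step': 1}
print(second)  # Current state: {'step': 2}
```

A manually managed assistant created once with `first` baked in would keep serving stale instructions, which is the bug this PR removes by passing fresh instructions on every run.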