ToolCall refactor #26
Conversation
Looks good to me, just a few clarifications
docs/function_calling.rst
Outdated
@@ -137,6 +137,10 @@ Next Actor
After a function call returns, kani will hand control back to the LM to generate a response by default. If instead
control should be given to the human (i.e. return from the chat round), set ``after=ChatRole.USER``.

.. note::
    If the model calls multiple tools in parallel, the model will be allowed to generate a response if *any* function
    allows it.
This is interesting behavior (and not one that I particularly expected). I kind of expected the behavior to be waiting on all functions to finish instead of returning if any finish.
Does OpenAI allow us to specify that we want to wait on all functions? Am I misunderstanding this?
I think you might be misunderstanding - this note is specifically for when multiple different `@ai_function`s with different `after=ChatRole.USER`/`ChatRole.ASSISTANT` settings are used in a single parallel tool call. It does wait for all of them to finish before returning control to the user/model.
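For what it's worth, the next-actor rule in that note can be sketched like this - a toy model of the described behavior, not kani's actual code (`next_actor` and its argument are made up for illustration):

```python
from enum import Enum


class ChatRole(Enum):
    USER = "user"
    ASSISTANT = "assistant"


def next_actor(afters: list[ChatRole]) -> ChatRole:
    """All parallel tool calls have already finished; decide who acts next.

    The model is allowed to generate a response if *any* function's
    ``after`` setting allows it; otherwise control returns to the user.
    """
    if any(after is ChatRole.ASSISTANT for after in afters):
        return ChatRole.ASSISTANT
    return ChatRole.USER
```

So a round mixing `after=ChatRole.USER` and `after=ChatRole.ASSISTANT` functions would still let the model respond, because one of the functions allows it.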
ah got it, ty
.. note::
    If the model calls multiple tools in parallel, the model will be allowed a retry if *any* exception handler
    allows it.
Yeah, this is also interesting - how do parallel functions play into retry logic? I'm assuming we increment retries if any of the functions fail? What happens when multiple fail - is that +1 or +n to the retry count? If one of n functions fails, does the model retry all of the calls, or just the failed one? Maybe it's worth some clarification here...
Since retries are controlled by the model anyway, the retry param is kind of fuzzy - it means more like "allow the model to retry this call if it feels like it". If one of n functions fails, the return to the model will include all n results, with the failed one noting an exception - the model can then choose to retry the one that failed if it feels like it. If multiple fail, that adds 1 to the retry count (think of it as the number of retries allowed per round). Will clarify.
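A toy sketch of that per-round accounting (not kani's actual implementation; `run_round` is a made-up name for illustration):

```python
def run_round(results: list, retries_so_far: int):
    """Toy model of the per-round retry accounting described above.

    ``results`` holds one entry per parallel call; exception instances
    represent failed calls. All n results (including failures) go back
    to the model, and any number of failures in one round adds exactly
    1 to the retry count, not 1 per failure.
    """
    # Every result, including exceptions, is returned to the model as text.
    to_model = [
        f"error: {r}" if isinstance(r, Exception) else str(r)
        for r in results
    ]
    # +1 retry per round with at least one failure.
    failed = any(isinstance(r, Exception) for r in results)
    return to_model, retries_so_far + (1 if failed else 0)
```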
from kani.engines.openai import OpenAIEngine
from kani.models import ToolCall
`ToolCall` is part of `kani.models` and `FunctionCall` is part of `kani`. Is there a good reason for this?

EDIT: Seems like later down, `FunctionCall` is also imported from `.models`, so maybe there's just an inconsistency...
Oops, this is just a result of PyCharm's auto-import not resolving to the one from the root.
raise ToolCallError(
    "This message contains multiple tool calls; iterate over `.tool_calls` instead of using"
    " `.function_call`."
)
Should this be a warning? Might be a bit harsh to error here...
As of Nov 6, 2023, OpenAI added the ability for a single assistant message to request calling multiple functions in parallel, and wrapped all function calls in a `ToolCall` wrapper. In order to add support for this in kani while maintaining backwards compatibility with OSS function calling models, a `ChatMessage` now actually maintains the following internal representation:

`ChatMessage.function_call` is actually an alias for `ChatMessage.tool_calls[0].function`. If there is more than one tool call in the message, kani will raise an exception when trying to access this property.
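As a rough sketch, the alias could look like this (stand-in dataclasses for illustration, not kani's actual model definitions):

```python
from dataclasses import dataclass, field


class ToolCallError(Exception):
    pass


@dataclass
class FunctionCall:
    name: str
    arguments: str


@dataclass
class ToolCall:
    id: str
    function: FunctionCall


@dataclass
class ChatMessage:
    tool_calls: list = field(default_factory=list)

    @property
    def function_call(self):
        """Backwards-compatible alias for ``tool_calls[0].function``."""
        if not self.tool_calls:
            return None
        if len(self.tool_calls) > 1:
            # More than one tool call: the alias is ambiguous, so raise.
            raise ToolCallError(
                "This message contains multiple tool calls; iterate over"
                " `.tool_calls` instead of using `.function_call`."
            )
        return self.tool_calls[0].function
```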
To translate kani's FUNCTION message types to OpenAI's TOOL message types, the OpenAIEngine now performs a translation based on binding free tool call IDs to following FUNCTION messages deterministically.
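One way such a deterministic binding could work, as a toy sketch over plain dicts (not the real OpenAIEngine code):

```python
from collections import deque


def translate_messages(messages):
    """Toy sketch of the FUNCTION -> TOOL translation described above.

    Each ASSISTANT message's tool call IDs are treated as a queue of free
    IDs; each FUNCTION message that follows deterministically binds the
    next free ID and becomes a TOOL message.
    """
    out = []
    free_ids = deque()
    for msg in messages:
        if msg["role"] == "assistant":
            # Refill the queue of unbound tool call IDs.
            free_ids = deque(tc["id"] for tc in msg.get("tool_calls", []))
            out.append(msg)
        elif msg["role"] == "function":
            # Bind this result to the oldest free tool call ID.
            out.append({
                "role": "tool",
                "tool_call_id": free_ids.popleft(),
                "content": msg["content"],
            })
        else:
            out.append(msg)
    return out
```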
Breaking Changes
To the kani end user, there should be no change to how functions are defined and called. One breaking change was necessary:
`Kani.do_function_call` and `Kani.handle_function_call_exception` now take an additional `tool_call_id` parameter, which may break overriding functions. The documentation has been updated to encourage overriders to handle `*args, **kwargs` to prevent this happening again.

New Features
kani can now handle making multiple function calls in parallel if the model requests it. Rather than returning an ASSISTANT message with a single `function_call`, an engine can now return a list of `tool_calls`. kani will resolve these tool calls in parallel using asyncio, and add their results to the chat history in the order of the list provided. Returning a single `function_call` will continue to work for backwards compatibility.
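The ordering guarantee follows from `asyncio.gather`, which returns results in the order its awaitables are passed, regardless of which coroutine finishes first. A simplified sketch (not kani's actual code; the function and tool names are made up for illustration):

```python
import asyncio


async def resolve_tool_calls(tool_calls, functions):
    """Run all requested tool calls concurrently; results come back in
    the order the calls were requested."""
    async def run_one(name, kwargs):
        return await functions[name](**kwargs)

    return await asyncio.gather(
        *(run_one(name, kwargs) for name, kwargs in tool_calls)
    )


async def main():
    async def get_weather(city: str):
        await asyncio.sleep(0.02)  # slower call finishes second...
        return f"sunny in {city}"

    async def get_time(tz: str):
        await asyncio.sleep(0.01)
        return f"12:00 {tz}"

    # ...but results still arrive in request order.
    return await resolve_tool_calls(
        [("get_weather", {"city": "Philly"}), ("get_time", {"tz": "UTC"})],
        {"get_weather": get_weather, "get_time": get_time},
    )


results = asyncio.run(main())
```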