Agents Refactor 2/2 (client side): Use capabilities for LLMs, agent impl. #583
Conversation
# TODO Block parse for input. text is the default argument that we currently pass in for Tools. As
# part of a refactor to allow for other parameters, this would need to change.
future_action = Action(
    tool=tool.name, input=[Block(text=invocation.args.get("text"))], output=None
)
This is a tad awkward at the moment without doing an args / tools refactor as well.
raise SteamshipError(
    f"LLM attempted to invoke tool {invocation.tool_name}, but {self.__class__.__name__} does not have a tool with that name."
)
# TODO (PR): Could overload Action for this case to piggy back Invocations on it, but ehhhhhhhhh...
Callout is mainly for consistency with Action, here.
return context.metadata.get(_LLM_KEY, default)
def build_chat_history(
Pulled generally applicable parts out of functions based agent.
# TODO (PR): we're asserting capabilities support in next_action so the "name" tag is no longer needed for
# backcompat as we won't be able to run against older versions anyway.
Callout for removal
    default_system_message: str, message_selector: MessageSelector, context: AgentContext
) -> List[Block]:
    # system message should have already been created in context, but we double-check for safety
    if context.chat_history.last_system_message:
I'm also noticing while doing plugin implementations that we tend not to dedupe between this and "default_system_prompt" in configs, which seems like it could confuse some models.
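To make the dedupe concern concrete, here is a minimal sketch (the `Message`/`ChatHistory` shapes are invented stand-ins, not the real Steamship types) of only falling back to the agent's default system prompt when the context doesn't already carry one, so the model never sees two competing system messages:

```python
# Hypothetical sketch: reuse an existing system message from the chat history
# and only append the agent default when none exists. The Message/ChatHistory
# classes here are illustrative stand-ins for the real client types.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Message:
    role: str
    text: str


@dataclass
class ChatHistory:
    messages: List[Message] = field(default_factory=list)

    @property
    def last_system_message(self) -> Optional[Message]:
        # Walk backwards so the most recent system message wins.
        for msg in reversed(self.messages):
            if msg.role == "system":
                return msg
        return None


def resolve_system_message(history: ChatHistory, default_system_message: str) -> Message:
    """Prefer the system message already in the history (e.g. one set from a
    plugin's default_system_prompt config); fall back to the agent default."""
    existing = history.last_system_message
    if existing is not None:
        return existing
    msg = Message(role="system", text=default_system_message)
    history.messages.append(msg)
    return msg
```

Under this sketch, a second call with a different default is a no-op, which is the deduping behavior the comment is asking about.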
eob
left a comment
woohoooo!!!!!!!
This looks super clean to me. I'm not the authority @douglas-reid is on the function calling semantics though.
If there are tests that use the (to be deprecated) functional/functions_calling agent, maybe we could copy-paste them and replace that agent with this one?
And if not, we can just follow up with a unit testing PR where a few of us pile in and add some to feel out the new semantics.
🚀 🚀
messages = build_chat_history(self.PROMPT, self.message_selector, context)

# call chat() with a hard-coded absence of tools
output_blocks = self.llm.generate(messages=messages, tools=[])
Co-authored-by: Ted Benson <edward.benson@gmail.com>
# TODO Block parse for input. text is the default argument that we currently pass in for Tools. As
# part of a refactor to allow for other parameters, this would need to change.
future_action = Action(
    tool=tool.name, input=[Block(text=invocation.args.get("text"))], output=None
Is this assuming the argument is "text"? How does this deal with multi-media blocks that have args of "uuid"?
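The question above is essentially what the TODO in the snippet alludes to: today the code assumes a single "text" argument. A hedged sketch of the kind of branch a multi-media tool would need (the `Block` shape and the "uuid" key are assumptions for illustration, not the settled design):

```python
# Hypothetical sketch of the args->Block mapping: fall back to wrapping the
# "text" argument (current behavior), but let a "uuid" argument reference an
# existing multi-media block by id instead. The Block class here is a stand-in.
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class Block:
    text: Optional[str] = None
    id: Optional[str] = None


def blocks_from_invocation_args(args: Dict[str, str]) -> List[Block]:
    if "uuid" in args:
        # Reference an existing (e.g. image or audio) block by id rather than
        # inlining text content.
        return [Block(id=args["uuid"])]
    # Current default path: wrap the "text" argument in a fresh Block.
    return [Block(text=args.get("text"))]
```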
messages = build_chat_history(self.PROMPT, self.message_selector, context)

# call chat() with a hard-coded absence of tools
output_blocks = self.llm.generate(messages=messages, tools=[])
@GitOnUp question while you are also looking at gpt-4 plugin: when there are no functions in the call, can we update our call to OpenAI to turn off function-selection? I think by default we use auto as the setting, but with the tags / your capability work, we should be able to decide up-front whether or not we even want functions as a possibility in the answer, right?
Yes, the way I'd default to doing that is to turn them on only if you get the capability defining functions; more precisely, you'd disable it in the capability if you didn't want them (though the distinction in that specific capability may be moot, since if you don't provide functions to call, it doesn't make sense to try calling any).
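A small sketch of the "decide up front" idea being discussed: only include function definitions in the OpenAI request when the capability actually supplies tools, and omit the function-calling parameters entirely otherwise. The helper name and plumbing are assumptions; the parameter names follow the legacy OpenAI chat-completions API ("functions" / "function_call"):

```python
# Hypothetical request-builder: when no functions are supplied, leave the
# function-calling parameters out of the request entirely so the model cannot
# attempt a function call; when they are supplied, let the model pick ("auto").
from typing import Any, Dict, List


def chat_request_kwargs(
    messages: List[Dict[str, str]], functions: List[Dict[str, Any]]
) -> Dict[str, Any]:
    kwargs: Dict[str, Any] = {"messages": messages}
    if functions:
        kwargs["functions"] = functions
        kwargs["function_call"] = "auto"  # model may choose a function
    # else: omit both keys; no function selection is possible in the answer.
    return kwargs
```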
douglas-reid
left a comment
this all seems fine. thanks for adding the prefixes. will be cool when we can transition. thanks for your continued work here.
Implement SteamshipLLM and Plugin Capabilities via Agents.
Highlights: