Agents Refactor 2/2 (client side): Use capabilities for LLMs, agent impl.#583

Merged
GitOnUp merged 10 commits into main from george/agents-refactor
Oct 24, 2023
Conversation

@GitOnUp
Contributor

@GitOnUp GitOnUp commented Oct 12, 2023

Implement SteamshipLLM and Plugin Capabilities via Agents.

Highlights:

  • FunctionsBasedAgent in steamship.agents.functions_based, mostly copied from the existing one, but using plugin capabilities.
  • Example Tool for CustomLLMPrompt takes a SteamshipLLM, which assumes you're providing a PluginInstance. That said, using the PluginInstance itself might be more direct.
  • Factored chat history building out into shared code.
  • This change needs plugins to adapt to new behavior, coming in another PR.

@GitOnUp GitOnUp changed the title from "Agents" to "Agents Refactor 2/2 (client side): Use capabilities for LLMs, agent impl." Oct 12, 2023
Comment thread src/steamship/agents/functions_based.py Outdated
Comment on lines +61 to +65
# TODO Block parse for input. text is the default argument that we currently pass in for Tools. As
# part of a refactor to allow for other parameters, this would need to change.
future_action = Action(
    tool=tool.name, input=[Block(text=invocation.args.get("text"))], output=None
)
Contributor Author
This is a tad awkward at the moment without doing an args / tools refactor as well.
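A hypothetical sketch of what a more general args-to-Blocks mapping could look like after such an args/tools refactor. `Block` here is a simplified stand-in for the Steamship type, and `blocks_from_args` is an invented helper name, not part of the SDK:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class Block:
    """Simplified stand-in for steamship.Block."""
    text: Optional[str] = None
    metadata: Dict[str, str] = field(default_factory=dict)


def blocks_from_args(args: Dict[str, str]) -> List[Block]:
    """Turn each invocation argument into a Block, tagging which arg it came from.

    This avoids hard-coding the single "text" argument the TODO above mentions.
    """
    blocks = []
    for name, value in args.items():
        blocks.append(Block(text=value, metadata={"arg": name}))
    return blocks


# Example: a tool invoked with both a text arg and a media reference.
blocks = blocks_from_args({"text": "hello", "uuid": "abc-123"})
```

The metadata tag preserves which parameter each Block came from, so a Tool could route "uuid"-style args differently from plain text.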

Comment thread src/steamship/agents/functions_based.py Outdated
raise SteamshipError(
    f"LLM attempted to invoke tool {invocation.tool_name}, but {self.__class__.__name__} does not have a tool with that name."
)
# TODO (PR): Could overload Action for this case to piggy back Invocations on it, but ehhhhhhhhh...
Contributor Author
Callout is mainly for consistency with Action, here.

return context.metadata.get(_LLM_KEY, default)


def build_chat_history(
Contributor Author
Pulled generally applicable parts out of functions based agent.

Comment on lines +117 to +118
# TODO (PR): we're asserting capabilities support in next_action so the "name" tag is no longer needed for
# backcompat as we won't be able to run against older versions anyway.
Contributor Author
Callout for removal

@GitOnUp GitOnUp marked this pull request as ready for review October 12, 2023 07:30
@GitOnUp GitOnUp requested review from douglas-reid and eob October 12, 2023 07:30
    default_system_message: str, message_selector: MessageSelector, context: AgentContext
) -> List[Block]:
    # system message should have already been created in context, but we double-check for safety
    if context.chat_history.last_system_message:
Contributor Author

I'm also noticing while doing plugin implementations that we tend to not dedupe between this and "default_system_prompt" in configs, which seems like it could confuse some models.
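The "double-check for safety" pattern from the snippet above can be sketched roughly as follows. The `ChatHistory` class here is a minimal hypothetical model, not the real Steamship implementation:

```python
from typing import Dict, List, Optional


class ChatHistory:
    """Minimal sketch of a chat history that tracks a system message."""

    def __init__(self):
        self.messages: List[Dict[str, str]] = []

    @property
    def last_system_message(self) -> Optional[Dict[str, str]]:
        # Return the most recent system-role message, if any.
        for message in reversed(self.messages):
            if message["role"] == "system":
                return message
        return None

    def append_system_message(self, text: str) -> None:
        self.messages.append({"role": "system", "content": text})


def ensure_system_message(history: ChatHistory, default_text: str) -> Dict[str, str]:
    """Use the existing system message if present; otherwise fall back to the default."""
    if history.last_system_message is None:
        history.append_system_message(default_text)
    return history.last_system_message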

eob
eob previously approved these changes Oct 13, 2023
Contributor

@eob eob left a comment

woohoooo!!!!!!!

This looks super clean to me. I'm not the authority @douglas-reid is on the function calling semantics though.

If there are tests that use the (to be deprecated) functional/functions_calling agent, maybe we could copy-paste them and replace that agent with this one?

And if not, we can just follow up with a unit testing PR where a few of us pile in and add some to feel out the new semantics.

🚀 🚀

Comment thread src/steamship/agents/basic_chat.py Outdated
messages = build_chat_history(self.PROMPT, self.message_selector, context)

# call chat() with a hard-coded absence of tools
output_blocks = self.llm.generate(messages=messages, tools=[])
Contributor
This is so clean!!

Comment thread src/steamship/agents/basic_chat.py Outdated
Co-authored-by: Ted Benson <edward.benson@gmail.com>
Comment thread src/steamship/agents/basic_chat.py Outdated
Comment thread src/steamship/agents/functional/functions_based.py Outdated
Comment thread src/steamship/agents/functions_based.py Outdated
Comment thread src/steamship/agents/functions_based.py
Comment thread src/steamship/agents/functions_based.py Outdated
# TODO Block parse for input. text is the default argument that we currently pass in for Tools. As
# part of a refactor to allow for other parameters, this would need to change.
future_action = Action(
    tool=tool.name, input=[Block(text=invocation.args.get("text"))], output=None
)
Contributor

is this assuming the argument is "text"? how does this deal with multi-media blocks that have args of "uuid"?

Comment thread src/steamship/agents/functions_based.py Outdated
Comment thread src/steamship/agents/functions_based.py Outdated
Comment thread src/steamship/agents/basic_chat.py Outdated
messages = build_chat_history(self.PROMPT, self.message_selector, context)

# call chat() with a hard-coded absence of tools
output_blocks = self.llm.generate(messages=messages, tools=[])
Contributor

@GitOnUp question while you are also looking at gpt-4 plugin: when there are no functions in the call, can we update our call to OpenAI to turn off function-selection? I think by default we use auto as the setting, but with the tags / your capability work, we should be able to decide up-front whether or not we even want functions as a possibility in the answer, right?

Contributor Author

The way I'd default to doing that is to only turn them on if you get the capability defining functions, yes; moreover, you'd disable it in the capability if you didn't want them (though the distinction in that specific capability may be moot, since if you don't provide functions to call, it doesn't make sense to try calling any).
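The idea discussed above, only enabling function selection when functions are actually supplied, could be sketched like this. The request shape follows the OpenAI chat-completions `functions` API; `build_request` is an invented helper, not real client code:

```python
from typing import Dict, List


def build_request(messages: List[Dict[str, str]], functions: List[Dict]) -> Dict:
    """Build a chat-completion request, enabling function calling only when
    there are functions to call."""
    request: Dict = {"model": "gpt-4", "messages": messages}
    if functions:
        request["functions"] = functions
        request["function_call"] = "auto"
    # With no functions, omit both keys entirely so the model cannot
    # attempt a function call at all.
    return request
```

This puts the decision up front, as suggested: the capability either supplies functions (and function calling is on) or it doesn't (and the keys never reach the API).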

@GitOnUp GitOnUp requested a review from douglas-reid October 20, 2023 18:19
Contributor

@douglas-reid douglas-reid left a comment

this all seems fine. thanks for adding the prefixes. will be cool when we can transition. thanks for your continued work here.

@GitOnUp GitOnUp merged commit 5425a3d into main Oct 24, 2023
3 participants