Your AI has too many tools? Hitting token limits? Don't want to deal with LangGraph and want a prebuilt solution that handles the issue?
Note: You can also pass your own tool_selector function to Agentami, and it will use it instead of the internal one.
Refer to the main.py file for a complete usage sample.
```shell
pip install agentami
```
```python
from langchain.chat_models import ChatOpenAI
from langgraph.checkpoint.memory import InMemorySaver

from agentami.agents.ami import AgentAmi

# Replace ... (ellipsis) with the commented instructions
tools = [...]  # List of LangChain-compatible tools

agent = AgentAmi(
    model=ChatOpenAI(model="gpt-4o"),
    tools=tools,                   # List of LangChain-compatible tools
    checkpointer=InMemorySaver(),  # Optional. No persistence if omitted.
    # Optional parameters:
    tool_selector=...,   # Custom function to select relevant tools. Defaults to the internal tool_selector.
    top_k=...,           # Number of top tools to use. Defaults to 3.
    context_size=...,    # Number of past user prompts to retain. Defaults to 7.
    disable_pruner=...,  # If True, disables pruning and increases token usage. Defaults to False.
    prompt_template=...  # Custom prompt template. Defaults to a generic bot template.
)
```
```python
agent_ami = agent.graph  # Your regular LangGraph graph.
```

- Running for the first time will take a while, as it installs the dependencies (models used by the internal tool_selector).
- Your first `agent_ami.invoke()` or `agent_ami.astream()` may take time if you have hundreds of tools, because each `AgentAmi()` object initialises a vector store and embeds the tool descriptions at runtime. Response times for subsequent prompts will be fine.
- Check out the ROADMAP.md file for future features.
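To give an intuition for what the internal tool_selector does at startup, here is a minimal, dependency-free sketch of embedding-based tool selection. It is not Agentami's actual implementation: bag-of-words vectors and cosine similarity stand in for real embeddings and the vector store, and all names and descriptions below are hypothetical.

```python
import math
from collections import Counter
from typing import Dict, List


def _bow(text: str) -> Counter:
    # Crude "embedding": a bag-of-words vector of lowercase token counts.
    return Counter(text.lower().split())


def _cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def select_tools(query: str, descriptions: Dict[str, str], top_k: int = 3) -> List[str]:
    # Rank tool names by similarity of their descriptions to the query,
    # then keep the top_k. Embedding all descriptions is the one-time cost
    # paid on the first call, as noted above.
    q = _bow(query)
    ranked = sorted(descriptions, key=lambda name: _cosine(q, _bow(descriptions[name])), reverse=True)
    return ranked[:top_k]
```

With hundreds of tools, the up-front embedding dominates the first call; per-query ranking afterwards is cheap, which is why later prompts respond quickly.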
Just write a function that accepts `(query: str, top_k: int)` as parameters and returns `List[str]` (a list of tool names).
```python
from typing import List

# Function template:
def my_own_tool_selector(query: str, top_k: int) -> List[str]:
    # Your logic to select tools based on the query
    return ["tool1", "tool2", "tool3"]  # Return top_k selected tool names
```
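As a concrete (hypothetical) example of that contract, here is a selector that scores tools by simple keyword overlap between the query and each tool's description. The `TOOL_DESCRIPTIONS` dict and tool names are placeholders you would replace with your own; only the `(query: str, top_k: int) -> List[str]` signature is what Agentami requires.

```python
from typing import Dict, List

# Hypothetical tool descriptions the selector scores against.
TOOL_DESCRIPTIONS: Dict[str, str] = {
    "get_weather": "Fetch the current weather forecast for a city",
    "send_email": "Send an email message to a recipient",
    "search_web": "Search the web for general information",
}


def my_own_tool_selector(query: str, top_k: int) -> List[str]:
    # Score each tool by how many query words appear in its description.
    words = set(query.lower().split())
    scores = {
        name: len(words & set(desc.lower().split()))
        for name, desc in TOOL_DESCRIPTIONS.items()
    }
    # Highest-overlap tools first; return the top_k names.
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

Pass the function itself (not a call) as `tool_selector=my_own_tool_selector` when constructing `AgentAmi`.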