Description
Problem Statement
I would like Strands to implement/expose its tools as functions callable from Python execution code, as described in https://www.anthropic.com/engineering/code-execution-with-mcp. Research from multiple labs shows that accuracy can improve (by returning the final answer programmatically) and token usage can drop by as much as 98.7% when code filters tool responses instead of the model reading the raw JSON.
Proposed Solution
I am unsure whether this can be implemented without a code execution environment; perhaps it could be built on top of the AWS Bedrock AgentCore Code Interpreter. The idea would be to expose a final_answer tool, along with all the other implemented tools, to the Python code.
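To make the proposal concrete, here is a rough sketch of how tools might be rendered as Python function signatures for the model to write code against. Everything below is hypothetical, not an existing Strands API; check_status and query_database are placeholder tools with stubbed bodies.

# Hypothetical sketch: render a tool registry as Python function stubs so the
# model can write code against them instead of emitting tool-call JSON.
import inspect
from typing import Callable, Dict

def check_status(system: str) -> dict:
    """Report the health of a named system (placeholder implementation)."""
    return {"healthy": True}

def query_database(sql: str) -> list:
    """Run a read-only SQL query (placeholder implementation)."""
    return []

TOOLS: Dict[str, Callable] = {
    "check_status": check_status,
    "query_database": query_database,
}

def render_tool_stubs(tools: Dict[str, Callable]) -> str:
    """Build the function signatures shown to the model in its prompt."""
    stubs = []
    for name, fn in tools.items():
        sig = inspect.signature(fn)
        doc = (fn.__doc__ or "").strip()
        stubs.append(f"def {name}{sig}:\n    \"\"\"{doc}\"\"\"")
    return "\n\n".join(stubs)

print(render_tool_stubs(TOOLS))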
Use Case
Here is an example snippet:
result_a = check_status(system="database")
if result_a["healthy"]:
    # Only call this if healthy
    data = query_database(sql="SELECT * FROM experiments")
else:
    # Otherwise use fallback
    data = load_from_cache(key="experiments_backup")
final_answer(data)
A few benefits here: we don't have to pull the full output of the check_status call into the agent's context, we never need to load the full data into the LLM's context window, and we can guarantee the accuracy of the response since the data is emitted directly as the final output rather than being regenerated by the LLM.
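For illustration, a minimal sketch of what the host side could look like, assuming a simple in-process exec() sandbox rather than AgentCore; the tool bodies are placeholders and the snippet string stands in for model-generated code.

# Hypothetical sketch: run model-generated code in a namespace that exposes the
# tools and a final_answer() hook, so only the captured answer leaves the sandbox.
def check_status(system: str) -> dict:
    return {"healthy": False}                  # placeholder tool body

def query_database(sql: str) -> list:
    return [{"id": 1}]                         # placeholder tool body

def load_from_cache(key: str) -> list:
    return [{"id": 0, "source": "cache"}]      # placeholder fallback

def run_generated_code(code: str):
    captured = {}
    namespace = {
        "check_status": check_status,
        "query_database": query_database,
        "load_from_cache": load_from_cache,
        # final_answer records the result instead of routing it back through the LLM.
        "final_answer": lambda value: captured.setdefault("answer", value),
    }
    exec(code, namespace)                      # a real deployment would use an isolated interpreter
    return captured.get("answer")

generated = """
result_a = check_status(system="database")
if result_a["healthy"]:
    data = query_database(sql="SELECT * FROM experiments")
else:
    data = load_from_cache(key="experiments_backup")
final_answer(data)
"""

print(run_generated_code(generated))           # -> [{'id': 0, 'source': 'cache'}]

In this sketch only the value passed to final_answer() ever leaves the execution environment, which is where the context and accuracy gains come from.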
Alternative Solutions
No response
Additional Context
See more examples here: https://huggingface.co/docs/smolagents/en/tutorials/tools