Conversation
Pull request overview
This pull request implements shell command execution capabilities for the Agent Framework across OpenAI and Anthropic providers. The implementation adds a new ShellTool class that extends FunctionTool, along with shell-specific content types for structured representation of command execution and results. The PR enables both hosted shell execution (in OpenAI's managed containers) and local shell execution (on the developer's machine) with provider-specific API mappings.
Changes:
- Introduced the `ShellTool` class and `@shell_tool` decorator for creating shell command execution tools, with provider-specific configuration support via `additional_properties`
- Added three shell content types (`shell_tool_call`, `shell_tool_result`, `shell_command_output`) with factory methods and serialization support
- Implemented OpenAI Responses API integration with `get_hosted_shell_tool()` for managed container execution and parsing logic for `shell_call`/`shell_call_output`
- Integrated Anthropic bash tool support by detecting `ShellTool` instances and mapping them to the `bash_20250124` API format, with updated result parsing to use shell content types
- Provided three sample implementations demonstrating OpenAI hosted shell, OpenAI local shell with user confirmation, and Anthropic local shell execution
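To make the decorator pattern concrete, here is a minimal, self-contained sketch of how a `@shell_tool`-style decorator can wrap an executor function. The `ShellTool` class and `shell_tool` decorator below are stand-ins written for illustration; the real `agent_framework` signatures may differ:

```python
import subprocess
from dataclasses import dataclass, field
from typing import Any, Callable


@dataclass
class ShellTool:
    """Illustrative stand-in for the framework's ShellTool: wraps an executor."""
    name: str
    executor: Callable[[str], str]
    additional_properties: dict[str, Any] = field(default_factory=dict)

    def run(self, command: str) -> str:
        return self.executor(command)


def shell_tool(name: str = "shell", **additional_properties: Any):
    """Decorator turning a command-executor function into a ShellTool."""
    def wrap(fn: Callable[[str], str]) -> ShellTool:
        return ShellTool(name=name, executor=fn, additional_properties=additional_properties)
    return wrap


@shell_tool(name="local_shell", timeout_seconds=30)
def run_locally(command: str) -> str:
    # Execute on the developer's machine and capture stdout.
    return subprocess.run(command, shell=True, capture_output=True, text=True).stdout


print(run_locally.run("echo hello").strip())  # hello
```

Provider-specific knobs (like the hypothetical `timeout_seconds` above) travel in `additional_properties`, which is where the PR routes per-provider configuration.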
Reviewed changes
Copilot reviewed 14 out of 14 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| `python/packages/core/agent_framework/_tools.py` | Adds `ShellTool` class extending `FunctionTool` and `@shell_tool` decorator with overloads for convenient shell tool creation |
| `python/packages/core/agent_framework/_types.py` | Adds shell content type definitions (`shell_tool_call`, `shell_tool_result`, `shell_command_output`) with factory methods and serialization support |
| `python/packages/core/agent_framework/__init__.py` | Exports `ShellTool` and `shell_tool` to the public API |
| `python/packages/core/agent_framework/_agents.py` | Reformats a ternary expression for improved readability (formatting only) |
| `python/packages/core/agent_framework/openai/_responses_client.py` | Implements the `get_hosted_shell_tool()` method and parsing logic for `shell_call` and `shell_call_output` in both sync and streaming modes |
| `python/packages/anthropic/agent_framework_anthropic/_chat_client.py` | Adds `ShellTool` detection in `_prepare_tools_for_anthropic()` to map to the bash API format; updates bash result parsing to use shell content types instead of generic `function_result` |
| `python/packages/core/tests/core/test_types.py` | Adds comprehensive tests for shell content type creation, properties, and serialization roundtrip |
| `python/packages/core/tests/openai/test_openai_responses_client.py` | Adds tests for `get_hosted_shell_tool()` and `shell_call`/`shell_call_output` parsing, including timeout scenarios |
| `python/packages/anthropic/tests/test_anthropic_client.py` | Adds tests for `ShellTool`-to-Anthropic bash format conversion and updates bash result parsing tests to verify shell content types, including error handling |
| `python/packages/core/tests/workflow/test_workflow_kwargs.py` | Formatting change to fit a long line within reasonable length (formatting only) |
| `python/packages/core/tests/workflow/test_agent_executor.py` | Formatting change to fit a function signature on one line (formatting only) |
| `python/samples/02-agents/providers/openai/openai_responses_client_with_shell.py` | Sample demonstrating the OpenAI hosted shell tool with managed container execution |
| `python/samples/02-agents/providers/openai/openai_responses_client_with_local_shell.py` | Sample demonstrating local shell execution with OpenAI, including user confirmation prompts |
| `python/samples/02-agents/providers/anthropic/anthropic_with_shell.py` | Sample demonstrating local shell execution with the Anthropic bash tool, including user confirmation prompts |
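The structured content types described for `_types.py` can be illustrated with a minimal serialization roundtrip. The field names and JSON shape below are assumptions for illustration, not the framework's actual schema:

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class ShellCommandOutput:
    """Illustrative shape for a single command's captured output."""
    stdout: str
    stderr: str
    exit_code: int


@dataclass
class ShellToolResult:
    """Illustrative shape pairing a tool-call id with its output."""
    call_id: str
    output: ShellCommandOutput


def to_json(result: ShellToolResult) -> str:
    # Tag the payload with its content type, as discriminated content unions do.
    return json.dumps({"type": "shell_tool_result", **asdict(result)})


def from_json(raw: str) -> ShellToolResult:
    data = json.loads(raw)
    assert data.pop("type") == "shell_tool_result"
    return ShellToolResult(call_id=data["call_id"],
                           output=ShellCommandOutput(**data["output"]))


result = ShellToolResult("call_1", ShellCommandOutput("ok\n", "", 0))
assert from_json(to_json(result)) == result  # roundtrip preserves the value
```

A discriminated `type` tag like this is what lets a parser route `shell_call_output` payloads to the right content class, which is the role the PR's factory methods play.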
…ent-framework into python-shell-tool
- Add `shell_tool_call`, `shell_tool_result`, and `shell_command_output` content types
- Add `ShellTool` class and `shell_tool` decorator to core
- Add `get_hosted_shell_tool()` to OpenAI Responses client
- Handle `shell_call` and `shell_call_output` parsing in OpenAI (sync and streaming)
- Map `ShellTool` to Anthropic bash tool API format
- Parse `bash_code_execution_tool_result` as `shell_tool_result` in Anthropic
- Add unit tests for all new functionality
- Add sample scripts for hosted and local shell execution

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
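The Anthropic side of this change amounts to emitting Anthropic's built-in bash tool descriptor when a shell tool is present. A sketch of that detection logic follows; the helper name and the `is_shell_tool` check are hypothetical, though `{"type": "bash_20250124", "name": "bash"}` is the pair Anthropic's API expects for its bash tool:

```python
from typing import Any


def prepare_tools_for_anthropic(tools: list[Any]) -> list[dict[str, Any]]:
    """Map shell-tool-like objects to Anthropic's bash tool format;
    pass other tools through as ordinary function tool definitions."""
    prepared: list[dict[str, Any]] = []
    for tool in tools:
        if getattr(tool, "is_shell_tool", False):
            # Anthropic's server-defined bash tool requires this type/name pair.
            prepared.append({"type": "bash_20250124", "name": "bash"})
        else:
            prepared.append({"name": tool.name, "input_schema": tool.schema})
    return prepared


class FakeShellTool:
    """Minimal stub standing in for a ShellTool instance."""
    is_shell_tool = True


print(prepare_tools_for_anthropic([FakeShellTool()]))
```

Because the bash tool is server-defined, no JSON schema is sent for it; only non-shell tools carry an `input_schema`.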
eavanvalkenburg
left a comment
I'm not convinced of this approach. I think it would be clearer to add a callable that executes to `get_hosted_shell_tool` (maybe we should just call it `get_shell_tool`), wrap that runnable, set the schema to the one the service expects, and then have it act like a regular tool — that gives the user full discoverability of what other settings they can set.
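The suggestion above — wrap a user-supplied executor, pin the input schema to the service's expected shape, and let the result behave like any regular function tool — could look roughly like this. The `get_shell_tool` name, dict shape, and schema are assumptions for illustration, not the framework's API:

```python
import subprocess
from typing import Any, Callable

# Illustrative schema assumed to match what the service sends for shell calls.
SHELL_SCHEMA = {
    "type": "object",
    "properties": {"command": {"type": "string"}},
    "required": ["command"],
}


def get_shell_tool(executor: Callable[[str], str]) -> dict[str, Any]:
    """Wrap an executor so it looks like a regular function tool: the schema
    is fixed to the service's expected shape, and the tool stays discoverable
    and configurable like any other."""
    return {
        "name": "shell",
        "input_schema": SHELL_SCHEMA,
        "invoke": lambda args: executor(args["command"]),
    }


tool = get_shell_tool(lambda cmd: subprocess.run(
    cmd, shell=True, capture_output=True, text=True).stdout)
print(tool["invoke"]({"command": "echo hi"}).strip())  # hi
```

The appeal of this shape is that the shell tool reuses the ordinary tool-invocation path, so confirmation prompts, middleware, and settings apply uniformly.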
eavanvalkenburg
left a comment
A couple of small comments left, but overall this looks much better — thanks for going through!