A powerful, feature-rich command-line interface for interacting with Model Context Protocol (MCP) servers. This client enables seamless communication with LLMs through the CHUK-MCP protocol library, a Pyodide-compatible, pure-Python implementation of the MCP protocol, and supports tool usage, conversation management, and multiple operational modes.
The core protocol implementation has been moved to a separate package at: https://github.com/chrishayuk/chuk-mcp
This CLI is built on top of the protocol library, focusing on providing a rich user experience while the protocol library handles the communication layer.
- **Multiple Operational Modes**:
  - Chat Mode: Conversational interface with direct LLM interaction and automated tool usage
  - Interactive Mode: Command-driven interface for direct server operations
  - Command Mode: Unix-friendly mode for scriptable automation and pipelines
  - Direct Commands: Run individual commands without entering interactive mode
- **Multi-Provider Support**:
  - OpenAI integration (`gpt-4o-mini`, `gpt-4o`, `gpt-4-turbo`, etc.)
  - Ollama integration (`llama3.2`, `qwen2.5-coder`, etc.)
  - Extensible architecture for additional providers
- **Robust Tool System**:
  - Automatic discovery of server-provided tools
  - Server-aware tool execution
  - Tool call history tracking and analysis
  - Support for complex, multi-step tool chains
- **Advanced Conversation Management**:
  - Complete conversation history tracking
  - Filtering and viewing specific message ranges
  - JSON export capabilities for debugging or analysis
  - Conversation compaction for reduced token usage
- **Rich User Experience**:
  - Command completion with context-aware suggestions
  - Colorful, formatted console output
  - Progress indicators for long-running operations
  - Detailed help and documentation
- **Resilient Resource Management**:
  - Proper cleanup of asyncio resources
  - Graceful error handling
  - Clean terminal restoration
  - Support for multiple simultaneous server connections
- Python 3.11 or higher
- For OpenAI: valid API key in the `OPENAI_API_KEY` environment variable
- For Ollama: local Ollama installation
- Server configuration file (default: `server_config.json`)
- CHUK-MCP protocol library
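
For example, on macOS or Linux you can export the OpenAI key in your shell before launching the CLI (the key value below is a placeholder):

```bash
# Set the OpenAI API key for the current shell session
export OPENAI_API_KEY="sk-..."

# Verify it is visible to child processes such as mcp-cli
echo "$OPENAI_API_KEY"
```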
1. Clone the repository:

```bash
git clone https://github.com/chrishayuk/mcp-cli
cd mcp-cli
```

2. Install the package with development dependencies:

```bash
pip install -e ".[cli,dev]"
```

3. Run the CLI:

```bash
mcp-cli --help
```
If you prefer using UV for dependency management:

```bash
# Install UV if not already installed
pip install uv

# Install dependencies
uv sync --reinstall

# Run using UV
uv run mcp-cli --help
```
Global options available for all commands:

- `--server`: Specify the server(s) to connect to (comma-separated for multiple)
- `--config-file`: Path to the server configuration file (default: `server_config.json`)
- `--provider`: LLM provider to use (`openai` or `ollama`, default: `openai`)
- `--model`: Specific model to use (provider-dependent defaults)
- `--disable-filesystem`: Disable filesystem access (default: true)
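
These options compose freely. As a sketch, the invocation below connects to two servers at once, assuming both `sqlite` and `another-server` are defined in the configuration file (see the example configuration further below):

```bash
# Connect to two configured servers with an explicit config file, provider, and model
mcp-cli chat --server sqlite,another-server \
  --config-file ./server_config.json \
  --provider ollama --model llama3.2
```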
Chat mode provides a conversational interface with the LLM, automatically using available tools when needed:

```bash
mcp-cli chat --server sqlite
```

With a specific provider and model:

```bash
mcp-cli chat --server sqlite --provider openai --model gpt-4o
mcp-cli chat --server sqlite --provider ollama --model llama3.2
```
In chat mode, use these slash commands:

**General:**
- `/help`: Show available commands
- `/help <command>`: Show detailed help for a specific command
- `/quickhelp` or `/qh`: Display a quick reference of common commands
- `exit` or `quit`: Exit chat mode

**Tools:**
- `/tools`: Display all available tools with their server information
- `/tools --all`: Show detailed tool information including parameters
- `/tools --raw`: Show raw tool definitions
- `/toolhistory` or `/th`: Show history of tool calls in the current session
- `/th <N>`: Show details for a specific tool call
- `/th -n 5`: Show only the last 5 tool calls
- `/th --json`: Show tool calls in JSON format

**Conversation:**
- `/conversation` or `/ch`: Show the conversation history
- `/ch <N>`: Show a specific message from history
- `/ch -n 5`: Show only the last 5 messages
- `/ch <N> --json`: Show a specific message in JSON format
- `/ch --json`: View the entire conversation history in raw JSON format
- `/save <filename>`: Save conversation history to a JSON file
- `/compact`: Condense conversation history into a summary

**Display and session control:**
- `/cls`: Clear the screen while keeping conversation history
- `/clear`: Clear both the screen and conversation history
- `/verbose` or `/v`: Toggle between verbose and compact tool display modes
- `/interrupt`, `/stop`, or `/cancel`: Interrupt running tool execution
- `/provider <name>`: Change the current LLM provider
- `/model <name>`: Change the current LLM model
- `/servers`: List connected servers and their status
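
Once a transcript has been written out with `/save`, it can be post-processed outside the CLI. A small sketch, assuming the saved file is a JSON array of `{role, content, ...}` message objects (the shape suggested by the `/ch --json` output) and that `jq` is installed:

```bash
# Print just the role and content of each saved message
jq '.[] | {role, content}' conversation.json
```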
Interactive mode provides a command-line interface with slash commands for direct server interaction:

```bash
mcp-cli interactive --server sqlite
```

In interactive mode, use these commands:

- `/ping`: Check if the server is responsive
- `/prompts`: List available prompts
- `/tools`: List available tools
- `/tools-all`: Show detailed tool information with parameters
- `/tools-raw`: Show raw tool definitions in JSON
- `/resources`: List available resources
- `/chat`: Enter chat mode
- `/cls`: Clear the screen
- `/clear`: Clear the screen and show the welcome message
- `/help`: Show help message
- `/exit` or `/quit`: Exit the program
Command mode provides a Unix-friendly interface for automation and pipeline integration:

```bash
mcp-cli cmd --server sqlite [options]
```

This mode is designed for scripting, batch processing, and direct integration with other Unix tools.

- `--input`: Input file path (use `-` for stdin)
- `--output`: Output file path (use `-` for stdout, the default)
- `--prompt`: Prompt template (use `{{input}}` as a placeholder for the input)
- `--raw`: Output raw text without formatting
- `--tool`: Directly call a specific tool
- `--tool-args`: JSON arguments for the tool call
- `--system-prompt`: Custom system prompt
Process content with the LLM:

```bash
# Summarize a document
mcp-cli cmd --server sqlite --input document.md --prompt "Summarize this: {{input}}" --output summary.md

# Process stdin and output to stdout
cat document.md | mcp-cli cmd --server sqlite --input - --prompt "Extract key points: {{input}}"
```

Call tools directly:

```bash
# List database tables
mcp-cli cmd --server sqlite --tool list_tables --raw

# Run a SQL query
mcp-cli cmd --server sqlite --tool read_query --tool-args '{"query": "SELECT COUNT(*) FROM users"}'
```

Batch processing:

```bash
# Process multiple files with GNU Parallel
ls *.md | parallel mcp-cli cmd --server sqlite --input {} --output {}.summary.md --prompt "Summarize: {{input}}"
```
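
The remaining flags compose in the same way. A sketch combining `--system-prompt` with stdin input (the prompt wording here is illustrative, not from the project docs):

```bash
# Review a query from stdin under a custom system prompt, emitting plain text
echo "SELECT name FROM sqlite_master;" | mcp-cli cmd --server sqlite --input - \
  --system-prompt "You are a careful SQL reviewer. Comment on the query." \
  --prompt "Review this SQL: {{input}}" --raw
```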
Run individual commands without entering interactive mode:

```bash
# List available tools
mcp-cli tools list --server sqlite

# Call a specific tool
mcp-cli tools call --server sqlite

# List available prompts
mcp-cli prompts list --server sqlite

# Check server connectivity
mcp-cli ping --server sqlite

# List available resources
mcp-cli resources list --server sqlite
```
Create a `server_config.json` file with your server configurations:

```json
{
  "mcpServers": {
    "sqlite": {
      "command": "python",
      "args": ["-m", "mcp_server.sqlite_server"],
      "env": {
        "DATABASE_PATH": "your_database.db"
      }
    },
    "another-server": {
      "command": "python",
      "args": ["-m", "another_server_module"],
      "env": {}
    }
  }
}
```
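
Since a malformed configuration file is a common source of startup errors, it can help to validate the JSON before connecting. A minimal check using only the Python standard library:

```bash
# Fails with a parse error and a non-zero exit status if the JSON is invalid
python -m json.tool server_config.json > /dev/null && echo "config OK"
```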
```
src/
├── mcp_cli/
│   ├── chat/                          # Chat mode implementation
│   │   ├── commands/                  # Chat slash commands
│   │   │   ├── __init__.py            # Command registration system
│   │   │   ├── conversation.py        # Conversation management
│   │   │   ├── conversation_history.py
│   │   │   ├── exit.py
│   │   │   ├── help.py
│   │   │   ├── help_text.py
│   │   │   ├── models.py
│   │   │   ├── servers.py
│   │   │   ├── tool_history.py
│   │   │   └── tools.py
│   │   ├── chat_context.py            # Chat session state management
│   │   ├── chat_handler.py            # Main chat loop handler
│   │   ├── command_completer.py       # Command completion
│   │   ├── conversation.py            # Conversation processor
│   │   ├── system_prompt.py           # System prompt generator
│   │   ├── tool_processor.py          # Tool handling
│   │   └── ui_manager.py              # User interface
│   ├── commands/                      # CLI commands
│   │   ├── __init__.py
│   │   ├── chat.py                    # Chat command
│   │   ├── cmd.py                     # Command mode
│   │   ├── interactive.py             # Interactive mode
│   │   ├── ping.py                    # Ping command
│   │   ├── prompts.py                 # Prompts commands
│   │   ├── register_commands.py       # Command registration
│   │   ├── resources.py               # Resources commands
│   │   └── tools.py                   # Tools commands
│   ├── llm/                           # LLM client implementations
│   │   ├── providers/                 # Provider-specific clients
│   │   │   ├── __init__.py
│   │   │   ├── base.py                # Base LLM client
│   │   │   └── openai_client.py       # OpenAI implementation
│   │   ├── llm_client.py              # Client factory
│   │   ├── system_prompt_generator.py # Prompt generator
│   │   └── tools_handler.py           # Tools handling
│   ├── ui/                            # User interface components
│   │   ├── colors.py                  # Color definitions
│   │   └── ui_helpers.py              # UI utilities
│   ├── cli_options.py                 # CLI options processing
│   ├── config.py                      # Configuration loader
│   ├── main.py                        # Main entry point
│   └── run_command.py                 # Command execution
```
The MCP CLI can automatically execute tools provided by the server. In chat mode, simply request information that requires tool usage, and the LLM will automatically select and call the appropriate tools.
Example conversation:
```
You: What tables are available in the database?

Assistant: Let me check for you.

[Tool Call: list_tables]

I found the following tables in the database:
- users
- products
- orders
- categories

You: How many users do we have?

Assistant: I'll query the database for that information.

[Tool Call: read_query]

There are 873 users in the database.
```
Command mode enables powerful automation through shell scripts:

```bash
#!/bin/bash
# Example script to analyze multiple documents

# Process all markdown files in the current directory
for file in *.md; do
  echo "Processing $file..."

  # Generate summary
  mcp-cli cmd --server sqlite --input "$file" \
    --prompt "Summarize this document: {{input}}" \
    --output "${file%.md}.summary.md"

  # Extract entities
  mcp-cli cmd --server sqlite --input "$file" \
    --prompt "Extract all company names, people, and locations from this text: {{input}}" \
    --output "${file%.md}.entities.txt" --raw
done

# Create a combined report
echo "Creating final report..."
cat *.entities.txt | mcp-cli cmd --server sqlite --input - \
  --prompt "Analyze these entities and identify the most frequently mentioned: {{input}}" \
  --output report.md
```
Track and manage your conversation history:

```
> /conversation

Conversation History (12 messages)
# | Role      | Content
1 | system    | You are an intelligent assistant capable of using t...
2 | user      | What tables are available in the database?
3 | assistant | Let me check for you.
4 | assistant | [Tool call: list_tables]
...

> /conversation 4

Message #4 (Role: assistant)
[Tool call: list_tables]

Tool Calls:
  1. ID: call_list_tables_12345678, Type: function, Name: list_tables
     Arguments: {}

> /save conversation.json

Conversation saved to conversation.json

> /compact

Conversation history compacted with summary.

Summary:
The user asked about database tables, and I listed the available tables (users, products, orders, categories). The user then asked about the number of users, and I queried the database to find there are 873 users.
```
The CLI is organized with optional dependency groups:

- `cli`: Rich terminal UI, command completion, and provider integrations
- `dev`: Development tools and testing utilities
- `wasm`: (Reserved for future WebAssembly support)
- `chuk-mcp`: Protocol implementation library (core dependency)

Install with specific extras using:

```bash
pip install "mcp-cli[cli]"      # Basic CLI features
pip install "mcp-cli[cli,dev]"  # CLI with development tools
```
Contributions are welcome! Please follow these steps:

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Anthropic Claude for assistance with code development
- Rich for beautiful terminal formatting
- Typer for CLI argument parsing
- Prompt Toolkit for interactive input
- CHUK-MCP for the core protocol implementation