Think MCP Host (AI·Zen·Love) is an intelligent agent application built on the Model Context Protocol (MCP). It supports multiple kinds of large language models, including standard conversational models (LLM), vision language models (VLM), and reasoning models.
### Complete MCP (Model Context Protocol) Implementation
- Full MCP architecture support (Host/Client/Server)
- Comprehensive support for MCP resource types:
  - Resources: dynamic integration of external content
  - Prompts: template-based system prompts
  - Tools: AI-powered function calls
- Dynamic MCP command insertion anywhere in a conversation
- Seamless integration of resources into the context
- On-demand use of prompt templates
- Direct tool execution within chat
- Standalone MCP tool execution support
### Extensive Model Support
- LLM (Language Models)
  - Text conversations and content generation
  - Programming and code assistance
  - Document writing and analysis
- VLM (Vision Language Models)
  - Image understanding and analysis
  - Visual content processing
- Reasoning Models
  - Complex logical analysis
  - Professional domain reasoning
- Multiple provider support (DeepSeek, OpenAI, OpenRouter, etc.)
### Advanced Conversation Management
- Automatic saving of conversation history
- Manual save option with a countdown timer
- Loading of historical conversations
- Multiple export formats
### System Features
- Rich terminal interface
  - Beautiful Markdown rendering in the terminal
  - Syntax highlighting for code blocks
  - Unicode and emoji support
  - Interactive command suggestions
- Cross-platform support
  - Full functionality on Windows, macOS, and Ubuntu
  - Native installation support for each platform
  - Consistent user experience across systems
- Command-line interface
- Debug mode support
- Flexible exit options with save/discard choices
The program supports two main running modes:

### Chat Mode (Default)
- Used for natural-language dialogue
- Supports multiple LLM models
- Can use MCP enhancement features

### Tool Mode
- Used for running specific AI tools
- Calls functions provided by the MCP server directly
### Select a Running Mode
- After the program starts, you will be prompted to select a running mode
- Enter `1` to select Chat mode
- Enter `2` to select Tool mode
### Chat Mode Setup Process
1. Select a model
   - The system displays the list of available models
   - Enter the corresponding number to select a model
   - Supported providers include DeepSeek, Silicon Flow, Volcano Engine, etc.
2. Choose how to start
   - Option 1: Set a system prompt, then start a new conversation
   - Option 2: Start a new conversation directly (default)
   - Option 3: Load a historical conversation
3. System prompt setting (if Option 1 was selected)
   - Enter a custom system prompt
   - Supports the `->mcp` command to insert MCP resources
4. Load a historical conversation (if Option 3 was selected)
   - The system displays the list of saved conversations
   - Select the conversation record to load
### Tool Mode Setup Process
1. Select an MCP client
   - The system displays the list of available MCP clients
   - Select the client to use
2. Select a tool
   - The system displays the tools provided by the selected client
   - Select the specific tool to use
3. Run the tool
   - Provide the required parameters according to the tool's requirements
   - View the tool's execution results
4. Continue or exit
   - Choose whether to continue with other tools
   - You can switch back to Chat mode at any time
### Start a Conversation
- Enter text directly to converse
- Use `Ctrl+C` to exit the program

During a conversation, you can use the `->mcp` command to access MCP's enhancement features. The steps are as follows:
1. Activate the MCP command
   - Enter `->mcp` on its own line and press Enter
   - The system will guide you through the subsequent selections
2. Select an MCP client
   - The system displays the list of available MCP clients
   - Select the client to use
3. Select an MCP feature type (the system prompts you to choose one of three):
   - Enter `1` for Resources: select and reference external resources (such as images or documents)
     - Returned format: `->mcp_resources[client_name]:resourceURI`
   - Enter `2` for Prompts: select predefined prompt templates
     - Returned format: `->mcp_prompts[client_name]:prompt_name{parameters}`
   - Enter `3` for Tools: select and use specific AI tools
     - Returned format: `->mcp_tools[client_name]:tool_name{parameters}`
4. Finish the command
   - After the selection is complete, the system inserts the corresponding MCP command into the conversation
   - You can continue editing the message, or send it directly
Before installing from package repositories, you can install the project directly from source for development:

```bash
# Clone the repository
git clone https://github.com/thinkthinking/think-mcp-host.git
cd think-mcp-host

# Create and activate a virtual environment
python -m venv .venv
source .venv/bin/activate      # Linux/macOS
# .venv\Scripts\Activate.ps1   # Windows (PowerShell)

# Install in development mode with pip
pip install -e .
# or with uv (recommended)
uv pip install -e .
```
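To confirm the editable install is importable, a quick smoke test can help. This is a minimal sketch, and the module name `think_mcp_host` is an assumption; check `pyproject.toml` for the actual package name:

```python
# Minimal smoke test for the editable install. The module name
# "think_mcp_host" is an assumption; check pyproject.toml for the
# actual package name.
import importlib.util

spec = importlib.util.find_spec("think_mcp_host")
print("installed" if spec is not None else "not found")
```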
### Windows
- Installation methods
  - Download and double-click `AI-Zen-Love.exe`
  - Or install and run via the command line:

```powershell
# Install uv using pip
python -m pip install uv

# Clone the project and install
git clone https://github.com/thinkthinking/think-mcp-host.git
cd think-mcp-host
python -m venv .venv
.venv\Scripts\Activate.ps1
uv pip install -e .
```

- Configuration file locations
  - LLM configuration: `C:\Users\your-username\.think-llm-client\config\servers_config.json`
  - MCP configuration: `C:\Users\your-username\.think-mcp-client\config\mcp_config.json`
  - History records: `C:\Users\your-username\.think-mcp-host\command_history\`
### macOS
- Installation methods
  - Download and double-click `AI-Zen-Love.app`
  - Or install and run via the terminal:

```bash
# Install uv
python3 -m pip install uv

# Clone the project and install
git clone https://github.com/thinkthinking/think-mcp-host.git
cd think-mcp-host
python3 -m venv .venv
source .venv/bin/activate
uv pip install -e .
```

- Configuration file locations
  - LLM configuration: `/Users/your-username/.think-llm-client/config/servers_config.json`
  - MCP configuration: `/Users/your-username/.think-mcp-client/config/mcp_config.json`
  - History records: `/Users/your-username/.think-mcp-host/command_history/`
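On every platform the configuration lives in dotted directories under the user's home folder, so a script can locate the files portably. A minimal sketch (illustrative only, not part of the application):

```python
# Illustrative only: resolve the per-user configuration files listed
# above. Path.home() expands to C:\Users\<name> on Windows and
# /Users/<name> on macOS, so the same code works on both platforms.
import json
from pathlib import Path

llm_config = Path.home() / ".think-llm-client" / "config" / "servers_config.json"
mcp_config = Path.home() / ".think-mcp-client" / "config" / "mcp_config.json"
history_dir = Path.home() / ".think-mcp-host" / "command_history"

for path in (llm_config, mcp_config, history_dir):
    print(path, "exists" if path.exists() else "missing")

if llm_config.exists():
    servers = json.loads(llm_config.read_text(encoding="utf-8"))
    print("configured model types:", list(servers))
```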
The project supports three types of models:

1. LLM (Language Models)
   - Used for: text conversations, code writing, document generation
   - Examples: DeepSeek Chat, GPT-4
2. VLM (Vision Language Models)
   - Used for: image understanding and analysis
   - Examples: GPT-4-Vision, Qwen-VL-Plus
3. Reasoning Models
   - Used for: complex reasoning and professional analysis
   - Examples: DeepSeek Reasoner, DeepSeek-R1
The configuration file uses JSON format and must be filled in according to the model type:

```json
{
  "llm": {
    "providers": {
      "deepseek": {
        "api_key": "<DEEPSEEK_API_KEY>",
        "api_url": "https://api.deepseek.com",
        "model": {
          "deepseek-chat": {
            "max_completion_tokens": 8192
          }
        }
      }
    }
  },
  "vlm": {
    "providers": {
      "openai": {
        "api_key": "<OPENAI_API_KEY>",
        "api_url": "https://api.openai.com/v1",
        "model": {
          "gpt-4-vision": {
            "max_completion_tokens": 4096
          }
        }
      }
    }
  },
  "reasoning": {
    "providers": {
      "deepseek": {
        "api_key": "<DEEPSEEK_API_KEY>",
        "api_url": "https://api.deepseek.com",
        "model": {
          "deepseek-reasoner": {
            "max_completion_tokens": 8192,
            "temperature": 0.6
          }
        }
      }
    }
  }
}
```
Configuration explanation:
- Choose the configuration section according to the model type (llm/vlm/reasoning)
- Multiple providers can be configured under each type

Configuration instructions for the different providers:
- DeepSeek documentation: https://api-docs.deepseek.com/en/
- Silicon Flow documentation: https://docs.siliconflow.cn/en/userguide/quickstart#4-siliconcloud-api-genai
- Volcano Engine documentation: https://www.volcengine.com/docs/82379/1399008

Each provider needs to configure:
- `api_key`: API key
- `api_url`: API server address
- `model`: specific model configuration
  - `max_completion_tokens`: maximum output length
  - `temperature`: temperature parameter (optional)
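Given that structure, a short script can sanity-check a configuration file before the host starts. A minimal sketch assuming the layout shown above (this is not the application's own validator):

```python
# Rough structural check of servers_config.json against the layout
# described above; a sketch, not the application's own validation.
import json
from pathlib import Path

config = json.loads(Path("servers_config.json").read_text(encoding="utf-8"))

for model_type in ("llm", "vlm", "reasoning"):
    providers = config.get(model_type, {}).get("providers", {})
    for name, provider in providers.items():
        # Every provider needs an API key, an API URL, and at least one model.
        assert provider.get("api_key"), f"{model_type}/{name}: missing api_key"
        assert provider.get("api_url"), f"{model_type}/{name}: missing api_url"
        assert provider.get("model"), f"{model_type}/{name}: no models configured"
        for model_name, params in provider["model"].items():
            # max_completion_tokens is required; temperature is optional.
            assert "max_completion_tokens" in params, (
                f"{model_type}/{name}/{model_name}: missing max_completion_tokens"
            )

print("configuration looks structurally valid")
```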
MCP (Model Context Protocol) server configuration example:

```json
{
  "mcpServers": {
    "think-mcp": {
      "command": "/opt/homebrew/bin/uv",
      "args": [
        "--directory",
        "/Users/thinkthinking/src_code/nas/think-mcp",
        "run",
        "think-mcp"
      ]
    }
  }
}
```
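Each entry under `mcpServers` names an executable and its arguments; the host launches that process and, as is standard for stdio-based MCP servers, exchanges JSON-RPC messages with it over stdin/stdout. A minimal sketch of what such an entry resolves to (illustrative only, not the host's actual startup code):

```python
# What an mcpServers entry amounts to: the host launches the configured
# command with its args and talks MCP to the process over stdio.
# Illustrative only; the real host performs the protocol handshake too.
import json
import subprocess
from pathlib import Path

config = json.loads(Path("mcp_config.json").read_text(encoding="utf-8"))
server = config["mcpServers"]["think-mcp"]

proc = subprocess.Popen(
    [server["command"], *server["args"]],
    stdin=subprocess.PIPE,   # JSON-RPC requests go in via stdin
    stdout=subprocess.PIPE,  # responses come back on stdout
)
print("MCP server started, pid:", proc.pid)
proc.terminate()
```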
### MCP Commands
The following MCP command formats can be used in conversations:

1. Interactive selection

```bash
->mcp
```

This starts an interactive selection interface that guides you through choosing:
- MCP client
- Operation type (Resources/Prompts/Tools)
- Specific resource/prompt/tool
- Related parameters (if needed)

2. Direct usage

```bash
# Use resources
->mcp_resources[client_name]:resource_uri

# Use prompts
->mcp_prompts[client_name]:prompt_name{param1:value1,param2:value2}

# Use tools
->mcp_tools[client_name]:tool_name{param1:value1,param2:value2}
```

Examples:

```bash
# Use a prompt
->mcp_prompts[think-mcp]:agent-introduction{agent_name:AI Assistant,agent_description:A friendly AI assistant}

# Use a tool
->mcp_tools[think-mcp]:analyze_content{text:This is a test text}
```

Notes:
- Multiple MCP commands are supported in the same input
- Commands can be edited and modified at any time
- Parameters use a flexible key-value format
- Friendly error messages
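To make the direct-usage syntax concrete, here is a rough Python sketch of how such a command string decomposes (illustrative only; this is not the application's actual parser and it ignores edge cases such as commas inside values):

```python
# Illustrative parser for the direct-usage command syntax shown above;
# a sketch, not the application's actual implementation.
import re

COMMAND_RE = re.compile(
    r"->mcp_(?P<kind>resources|prompts|tools)"  # feature type
    r"\[(?P<client>[^\]]+)\]"                   # client name in brackets
    r":(?P<target>[^{\s]+)"                     # resource URI / prompt / tool name
    r"(?:\{(?P<params>[^}]*)\})?"               # optional {key:value,...} block
)

def parse_mcp_command(text: str) -> dict:
    match = COMMAND_RE.search(text)
    if match is None:
        raise ValueError("no MCP command found")
    params = {}
    if match.group("params"):
        for pair in match.group("params").split(","):
            key, _, value = pair.partition(":")
            params[key.strip()] = value.strip()
    return {
        "kind": match.group("kind"),
        "client": match.group("client"),
        "target": match.group("target"),
        "params": params,
    }

print(parse_mcp_command(
    "->mcp_tools[think-mcp]:analyze_content{text:This is a test text}"
))
# {'kind': 'tools', 'client': 'think-mcp', 'target': 'analyze_content',
#  'params': {'text': 'This is a test text'}}
```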
To release a new version, follow these steps:

1. Update the version number:
   - Update the `version` field in `pyproject.toml`
   - Follow Semantic Versioning

2. Commit the changes:

```bash
git add pyproject.toml
git commit -m "chore: bump version to x.x.x"
```

3. Create and push a version tag:

```bash
git tag vx.x.x
git push origin vx.x.x
```
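Before pushing the tag, it can be worth reading the version back to confirm the bump. A small sketch that assumes `pyproject.toml` uses a standard PEP 621 `[project]` table (adjust the key path if the project uses a different build backend):

```python
# Read the bumped version back before tagging; tomllib ships with
# Python 3.11+. Assumes a PEP 621 [project] table in pyproject.toml.
import tomllib

with open("pyproject.toml", "rb") as f:
    version = tomllib.load(f)["project"]["version"]
print("about to tag version:", version)
```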