A web-based Python platform for connecting to multiple MCP servers, enabling natural language interactions with databases, file systems, and web services. Connect with pre-built or custom MCP implementations in a unified interface. Future integrations include Google Workspace, Microsoft 365, Slack, Salesforce, and GitHub. Free to use under the MIT license.
- MCP Tool Orchestration: Build and connect powerful LLM tools using standardized messaging protocols
- Flask Web Interface: Interact with AI agents through an intuitive, user-friendly dashboard
- LangChain & LangGraph Integration: Create sophisticated AI workflows with industry-standard frameworks
- Multi-Server Support: Connect to multiple tool servers simultaneously from a single interface
- Dynamic Server Management: Add, configure, and update tool servers at runtime without restarts
- Flask Web Application: Modern web interface serving as the command center for your AI tools
- MultiServerMCPClient: Advanced client that orchestrates connections to multiple tool servers
- LangChain React Agent: Intelligent decision-making system that chooses the right tools for each task
- MCP Servers: Specialized microservices that expose domain-specific tools through a standardized protocol
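As a rough sketch, the pieces above could be wired together as follows. This assumes the `langchain-mcp-adapters` and `langgraph` packages, whose APIs may differ across versions; the server commands, URL, and model name are hypothetical placeholders:

```python
# Connection config for two hypothetical MCP servers: one launched over
# stdio, one reached over HTTP. Commands and URLs are illustrative only.
SERVERS = {
    "database": {
        "command": "python",
        "args": ["servers/db_server.py"],
        "transport": "stdio",
    },
    "filesystem": {
        "url": "http://localhost:8001/mcp",
        "transport": "streamable_http",
    },
}

async def build_agent():
    # Imports kept local so the config above is usable without the deps.
    from langchain_mcp_adapters.client import MultiServerMCPClient
    from langgraph.prebuilt import create_react_agent

    client = MultiServerMCPClient(SERVERS)
    tools = await client.get_tools()  # aggregate tools from every server
    return create_react_agent("openai:gpt-4o", tools)
```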
The simplest way to install Python MCP Client is via pip:

```bash
pip install python-mcp-client
```
You can find the package on PyPI at: https://pypi.org/project/python-mcp-client/
If you want the latest development version or plan to contribute:

1. Clone the repository:

   ```bash
   git clone https://github.com/kernelmax/python-mcp-client.git
   cd python-mcp-client
   ```

2. Set up a virtual environment:

   ```bash
   python -m venv .venv
   source .venv/bin/activate  # On Windows, use: .venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Set environment variables:

   ```bash
   export OPENAI_API_KEY=your-api-key-here
   ```

5. Start the Flask application:

   ```bash
   python flask_app.py
   ```

6. Open your browser and navigate to http://localhost:5008
You can also run Python MCP Client using Docker:

1. Pull the pre-built image from Docker Hub:

   ```bash
   docker pull kernelmax/python-mcp-client
   ```

2. Run the container:

   ```bash
   docker run -p 5008:5008 -e OPENAI_API_KEY=your-api-key-here kernelmax/python-mcp-client
   ```

3. Open your browser and navigate to http://localhost:5008
For a more convenient setup, you can use Docker Compose:

1. Create a `docker-compose.yml` file or use the one provided in the repository:

   ```yaml
   version: '3'
   services:
     mcp-app:
       image: kernelmax/python-mcp-client
       # or build from source:
       # build: .
       ports:
         - "5008:5008"
       environment:
         - OPENAI_API_KEY=${OPENAI_API_KEY}
       volumes:
         - ./templates:/app/templates
       restart: unless-stopped
   ```

2. Set your OpenAI API key in your environment:

   ```bash
   export OPENAI_API_KEY=your-api-key-here
   ```

3. Start the service:

   ```bash
   docker-compose up
   ```

4. Open your browser and navigate to http://localhost:5008
```
# Example natural language query
"Show me all users in the database that registered in the last month"

# How the AI agent processes your request
# 1. The LLM agent interprets the natural language query
# 2. It selects the appropriate database tool
# 3. It generates and executes optimized SQL:
#    SELECT * FROM users WHERE registration_date >= DATE_SUB(NOW(), INTERVAL 1 MONTH)
# 4. Results are returned in a human-readable format
```
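The generated query can be tried standalone. This sketch uses SQLite's date functions instead of MySQL's `DATE_SUB`, and the table and column names are just the hypothetical ones from the example:

```python
import sqlite3

# In-memory database with two illustrative users: one recent, one old.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, registration_date TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, date('now', ?))",
    [("ada", "-5 days"), ("bob", "-45 days")],
)

# SQLite equivalent of the MySQL query in the example above.
recent = conn.execute(
    "SELECT name FROM users "
    "WHERE registration_date >= date('now', '-1 month')"
).fetchall()
# `recent` now holds only users registered within the past month
```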
```
# Example natural language command
"Create a new database called customer_analytics"

# How the AI agent executes your request
# 1. The LLM agent processes your instructions
# 2. It selects the database creation tool
# 3. It executes the appropriate command with error handling
# 4. Confirmation is provided with next steps
```
```
# Example natural language request
"List all Python files in the current directory"

# How the AI assistant helps you
# 1. The LLM agent processes your request
# 2. It selects the file system tools
# 3. It intelligently filters results for Python files
# 4. Results are displayed in an organized format
```
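The filtering step in this walkthrough boils down to a simple glob, which a file-system tool might implement roughly like this (a minimal sketch, not the project's actual tool code):

```python
from pathlib import Path

def python_files(directory: str = ".") -> list[str]:
    """Return the names of .py files directly inside `directory`, sorted."""
    return sorted(p.name for p in Path(directory).glob("*.py"))
```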
A powerful AI database interface providing tools for:
- SQL query execution with natural language translation
- Automated table creation, insertion, and data manipulation
- Database management with intelligent suggestions
- Schema visualization and exploration
An intelligent file system assistant with tools for:
- Context-aware file reading and analysis
- Smart file writing with formatting suggestions
- Automatic file creation with templates
- Directory organization and file discovery
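A custom server exposing tools like these can stay small. The sketch below assumes the official `mcp` Python SDK's `FastMCP` API (names may differ across SDK versions); the helper itself is plain Python:

```python
from pathlib import Path

def list_files(directory: str, suffix: str = "") -> list[str]:
    """Pure helper the MCP tool wraps: files in `directory` matching `suffix`."""
    return sorted(
        p.name
        for p in Path(directory).iterdir()
        if p.is_file() and p.name.endswith(suffix)
    )

def serve():
    # Local import: the SDK is an optional dependency for this sketch.
    from mcp.server.fastmcp import FastMCP

    app = FastMCP("filesystem")
    app.tool()(list_files)       # expose the helper as an MCP tool
    app.run(transport="stdio")   # serve over stdio for local clients
```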
- User Authentication: Secure access control with role-based permissions
- Database Engine Expansion: Support for PostgreSQL, MongoDB, and other databases
- Real-time Communication: WebSocket integration for live updates and responses
- Containerized Deployment: Docker compose setup for one-click deployment
- Comprehensive Testing: Extensive test suite for reliability and stability
- Session Persistence: Save and resume conversations with your AI tools
We plan to expand our MCP server ecosystem with integrations for popular platforms:
- Google Workspace: Connect to Gmail, Google Docs, Google Drive, and Google Calendar
- Microsoft 365: Integrate with Outlook, OneDrive, and Microsoft Teams
- Slack: Send messages, manage channels, and automate workflows
- Salesforce: Query customer data, manage leads, and update records
- Jira: Create and manage issues, track projects, and generate reports
- GitHub: Manage repositories, issues, and pull requests
- Zoho: Connect to Zoho CRM, Zoho Books, and other Zoho applications
- Zendesk: Handle support tickets and customer inquiries
- HubSpot: Manage marketing campaigns and customer relationships
- Notion: Create and update pages, databases, and workspaces
We enthusiastically welcome contributors of all experience levels! Whether you're fixing a typo, improving documentation, or adding a major feature, your help makes this project better.
- Code contributions: Add new features or fix bugs
- Documentation: Improve explanations, add examples, or fix typos
- Bug reports: Help us identify issues
- Feature requests: Suggest new capabilities
- User experience: Provide feedback on usability
- Testing: Help ensure everything works properly
If you're new to open source or this project, look for issues tagged with `good-first-issue` or `beginner-friendly`. These are carefully selected to be accessible entry points.
Need help? Join our community chat or ask questions in the issue you're working on.
1. Fork the repository
2. Clone your fork:

   ```bash
   git clone https://github.com/YOUR-USERNAME/python-mcp-client.git
   cd python-mcp-client
   ```

3. Create a feature branch:

   ```bash
   git checkout -b feature/your-feature-name
   ```

4. Make your changes
5. Test your changes:

   ```bash
   # Run tests to ensure nothing breaks
   pytest
   ```

6. Commit your changes with a clear message:

   ```bash
   git commit -m "Add: clear description of your changes"
   ```

7. Push to your branch:

   ```bash
   git push origin feature/your-feature-name
   ```

8. Create a pull request with a description of your changes
All submissions require review before merging:
- A maintainer will review your PR
- They may request changes or clarification
- Once approved, your contribution will be merged
Thank you for contributing to make Python MCP Client better for everyone!
This project is tagged with the following GitHub topics to improve discoverability:
- `mcp` - Model Context Protocol implementation
- `ai-agent` - Artificial intelligence agent architecture
- `langchain` - LangChain framework integration
- `langgraph` - LangGraph agent workflows
- `python-ai` - Python-based artificial intelligence
- `llm-tools` - Large Language Model tooling
- `llm-orchestration` - LLM tool orchestration
- `ai-assistant` - AI assistant capabilities
- `language-model-tools` - Tools for language models
- `agent-framework` - Framework for building AI agents
- `multi-tool-agent` - Agent with multiple tool capabilities
- `python-llm` - Python LLM integration
- `openai-integration` - OpenAI model integration
- `natural-language-processing` - NLP capabilities
If you're forking or referencing this project, consider using these tags for consistency and to help others find related work.
When working with a fork of this repository, you can add these tags to improve its discoverability:
- Go to your fork on GitHub
- Click on the gear icon next to "About" on the right sidebar
- Enter relevant topics in the "Topics" field
- Click "Save changes"
Using consistent tagging helps build a connected ecosystem of related projects!
This project is fully open source and available under the MIT License. This means you are free to:
- Use the code commercially
- Modify the code
- Distribute your modifications
- Use privately
- Sublicense
We believe in the power of open source to drive innovation and make AI tools accessible to everyone. By making this project open source, we encourage collaboration, transparency, and community-driven development.
For questions, feature requests, or support, please open an issue on GitHub or contact the maintainers directly.