Merged

Changes from all commits (19 commits)
4 changes: 3 additions & 1 deletion .gitignore
@@ -48,4 +48,6 @@ __pycache__/
.cursor/

# cache dir
.mcphub_cache/
.mcphub_cache/

.coverage
91 changes: 87 additions & 4 deletions README.md
@@ -2,6 +2,13 @@

MCPHub is an embeddable Model Context Protocol (MCP) solution for AI services. It enables seamless integration of MCP servers into any AI framework, allowing developers to easily configure, set up, and manage MCP servers within their applications. Whether you're using OpenAI Agents, LangChain, or Autogen, MCPHub provides a unified way to connect your AI services with MCP tools and resources.

## Documentation

- [CLI Documentation](src/mcphub/cli/README.md) - Command-line interface for managing MCP servers
- [API Documentation](docs/api.md) - Python API reference
- [Configuration Guide](docs/configuration.md) - Server configuration details
- [Examples](docs/examples.md) - Usage examples and tutorials

## Quick Start

### Prerequisites
@@ -49,6 +56,33 @@ Create a `.mcphub.json` file in your project root:
}
```

### Adding New MCP Servers

You can add new MCP servers in two ways:

1. **Manual Configuration**: Add the server configuration directly to your `.mcphub.json` file.

2. **Automatic Configuration from GitHub**: Use the `add_server_from_repo` method to automatically configure a server from its GitHub repository:

```python
from mcphub import MCPHub

# Initialize MCPHub
hub = MCPHub()

# Add a new server from GitHub
hub.servers_params.add_server_from_repo(
server_name="my-server",
repo_url="https://github.com/username/repo"
)
```

The automatic configuration:
- Fetches the README from the GitHub repository
- Uses OpenAI to analyze the README and extract the server configuration
- Adds the configuration to your `.mcphub.json` file
- Requires an OpenAI API key (set via `OPENAI_API_KEY` environment variable)
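For comparison, a manually added entry (option 1 above) can be written directly into `.mcphub.json`. The server name, package, and arguments below are illustrative placeholders, following the configuration format shown later in this README:

```json
{
  "mcpServers": {
    "my-server": {
      "package_name": "username/my-mcp-package",
      "command": "npx",
      "args": ["-y", "my-mcp-package"]
    }
  }
}
```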

### Usage with OpenAI Agents

```python
@@ -175,20 +209,66 @@ Configure your MCP servers in `.mcphub.json`:
### Transport Support

- **stdio Transport**: Run MCP servers as local subprocesses
- **SSE Transport**: Run MCP servers with Server-Sent Events (SSE) support using supergateway
- **Automatic Path Management**: Manages server paths and working directories
- **Environment Variable Handling**: Configurable environment variables per server
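For example, per-server environment variables go in the server entry's `env` map (a sketch; the server name, package, and variable name are placeholders):

```json
{
  "mcpServers": {
    "my-server": {
      "command": "npx",
      "args": ["-y", "my-mcp-package"],
      "env": {
        "API_TOKEN": "<your-token>"
      }
    }
  }
}
```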

#### Running Servers with SSE Support

You can run MCP servers with SSE support using the `mcphub run` command:

```bash
# Basic usage with default settings
mcphub run your-server-name --sse

# Advanced usage with custom settings
mcphub run your-server-name --sse \
--port 8000 \
--base-url http://localhost:8000 \
--sse-path /sse \
--message-path /message
```

SSE support is useful when you need to:
- Connect to MCP servers from web applications
- Use real-time communication with MCP servers
- Integrate with clients that support SSE

The SSE server provides two endpoints:
- `/sse`: SSE endpoint for real-time updates
- `/message`: HTTP endpoint for sending messages
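The full endpoint URLs follow from the `--base-url`, `--sse-path`, and `--message-path` flags. A minimal sketch of how they combine, assuming the default values shown above:

```python
from urllib.parse import urljoin

# Defaults matching the CLI flags above
base_url = "http://localhost:8000"
sse_path = "/sse"
message_path = "/message"

# Endpoint for real-time updates (Server-Sent Events)
sse_endpoint = urljoin(base_url, sse_path)
# Endpoint for sending messages over plain HTTP
message_endpoint = urljoin(base_url, message_path)

print(sse_endpoint)      # http://localhost:8000/sse
print(message_endpoint)  # http://localhost:8000/message
```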

Example configuration in `.mcphub.json`:
```json
{
"mcpServers": {
"sequential-thinking-mcp": {
"package_name": "smithery-ai/server-sequential-thinking",
"command": "npx",
"args": [
"-y",
"@smithery/cli@latest",
"run",
"@smithery-ai/server-sequential-thinking",
"--key",
"your-api-key"
]
}
}
}
```

### Framework Integration

Provides adapters for popular AI frameworks:
- OpenAI Agents
- LangChain
- Autogen
- OpenAI Agents ([example](examples/with_openai.py))
- LangChain ([example](examples/with_langchain.py))
- Autogen ([example](examples/with_autogen.py))

```python
from mcphub import MCPHub

async def framework_examples():
async def framework_quick_examples():
hub = MCPHub()

# 1. OpenAI Agents Integration
@@ -230,6 +310,9 @@ from mcphub import MCPHub
async def tool_management():
hub = MCPHub()

# List all servers
servers = hub.list_servers()

# List all tools from a specific MCP server
tools = await hub.list_tools(mcp_name="sequential-thinking-mcp")

45 changes: 45 additions & 0 deletions examples/with_autogen.py
@@ -0,0 +1,45 @@
"""
Example of using MCPHub with Autogen Agents.
1. Initialize MCPHub to manage MCP servers
2. Fetch MCP tools and adapters for Autogen
3. Create and run an agent with MCP tools
"""

import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient
from mcphub import MCPHub


async def main():
# Initialize MCPHub - automatically loads .mcphub.json and sets up servers
hub = MCPHub()

# Fetch MCP tools adapted for Autogen
tool_adapters = await hub.fetch_autogen_mcp_adapters("azure-storage-mcp")
model_client = OpenAIChatCompletionClient(model="gpt-4")

# Create and run agent with MCP tools
complex_task = """Please help me analyze the following complex problem:
We need to design a new feature for our product that balances user privacy
with data collection for improving the service. Consider the ethical implications,
technical feasibility, and business impact. Break down your thinking process
step by step, and provide a detailed recommendation with clear justification
for each decision point."""
agent = AssistantAgent(
name="assistant",
model_client=model_client,
tools=tool_adapters,
system_message="You are a helpful assistant.",
)

await Console(
agent.run_stream(task=complex_task, cancellation_token=CancellationToken())
)

if __name__ == "__main__":
# Run the async main function
asyncio.run(main())
45 changes: 45 additions & 0 deletions examples/with_langchain.py
@@ -0,0 +1,45 @@
"""
Example of using MCPHub with LangChain Agents.
1. Initialize MCPHub to manage MCP servers
2. Fetch MCP tools for LangChain
3. Create and run an agent with MCP tools
"""

import asyncio
import json

from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from mcphub import MCPHub

model = ChatOpenAI(model="gpt-4o")

async def main():
# Initialize MCPHub - automatically loads .mcphub.json and sets up servers
hub = MCPHub()

# Fetch MCP tools for LangChain
tools = await hub.fetch_langchain_mcp_tools("azure-storage-mcp")
tools_dict = [
{"name": tool.name, "description": tool.description, "args_schema": tool.args_schema} for tool in tools
]
print("Available MCP Tools:")
print(json.dumps(tools_dict, indent=2))

# Create and run agent with MCP tools
complex_task = """Please help me analyze the following complex problem:
We need to design a new feature for our product that balances user privacy
with data collection for improving the service. Consider the ethical implications,
technical feasibility, and business impact. Break down your thinking process
step by step, and provide a detailed recommendation with clear justification
for each decision point."""
agent = create_react_agent(model, tools)
agent_response = await agent.ainvoke({"messages": complex_task})
print("\nAgent Response:")
print(agent_response["messages"][-1].content)

if __name__ == "__main__":
asyncio.run(main())
44 changes: 14 additions & 30 deletions examples/test.py → examples/with_openai.py
@@ -1,67 +1,51 @@
"""
Example of using MCPHub with OpenAI Agents.
1. Initialize MCPHub to manage MCP servers
2. Fetch an MCP server with async context manager
3. List available tools from the server
4. Create and run an agent with MCP tools
"""

import asyncio
import json
from agents import Agent, Runner
from mcphub import MCPHub

async def main():
"""
Example of using MCPHub to integrate MCP servers with OpenAI Agents.

This example demonstrates:
1. Initializing MCPHub
2. Fetching and using an MCP server
3. Listing available tools
4. Creating and running an agent with MCP tools
"""

# Step 1: Initialize MCPHub
# MCPHub will automatically:
# - Find .mcphub.json in your project
# - Load server configurations
# - Set up servers (clone repos, run setup scripts if needed)
# Initialize MCPHub - automatically loads .mcphub.json and sets up servers
hub = MCPHub()

# Step 2: Create an MCP server instance using async context manager
# Parameters:
# - mcp_name: The name of the server from your .mcphub.json
# - cache_tools_list: Cache the tools list for better performance
# Fetch MCP server - handles server setup and tool caching
async with hub.fetch_openai_mcp_server(
mcp_name="sequential-thinking-mcp",
cache_tools_list=True
) as server:
# Step 3: List available tools from the MCP server
# This shows what capabilities are available to your agent
# Get available tools from the server
tools = await server.list_tools()

# Pretty print the tools for better readability
tools_dict = [
dict(tool) if hasattr(tool, "__dict__") else tool for tool in tools
]
print("Available MCP Tools:")
print(json.dumps(tools_dict, indent=2))

# Step 4: Create an OpenAI Agent with MCP server
# The agent can now use all tools provided by the MCP server
# Create agent with MCP server integration
agent = Agent(
name="Assistant",
instructions="Use the available tools to accomplish the given task",
mcp_servers=[server] # Provide the MCP server to the agent
mcp_servers=[server]
)

# Step 5: Run your agent with a complex task
# The agent will automatically have access to all MCP tools
# Run agent with a task
complex_task = """Please help me analyze the following complex problem:
We need to design a new feature for our product that balances user privacy
with data collection for improving the service. Consider the ethical implications,
technical feasibility, and business impact. Break down your thinking process
step by step, and provide a detailed recommendation with clear justification
for each decision point."""

# Execute the task and get the result
result = await Runner.run(agent, complex_task)
print("\nAgent Response:")
print(result)

if __name__ == "__main__":
# Run the async main function
asyncio.run(main())
7 changes: 4 additions & 3 deletions pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "hatchling.build"

[project]
name = "mcphub"
version = "0.1.8"
version = "0.1.9"
description = "A Python package for managing and integrating Model Context Protocol (MCP) servers with AI frameworks like OpenAI Agents, LangChain, and Autogen"
readme = "README.md"
authors = [
@@ -21,7 +21,7 @@ classifiers = [
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
]
dependencies = []
dependencies = ["pydantic (>=2.11.4,<3.0.0)", "rich (>=14.0.0,<15.0.0)", "openai (>=1.78.0,<2.0.0)", "psutil (>=7.0.0,<8.0.0)"]
requires-python = "<4.0,>=3.10"

[project.optional-dependencies]
@@ -54,4 +54,5 @@ mcphub = "mcphub.cli.commands:main"

[tool.poetry.group.dev.dependencies]
pytest = "^8.3.5"
pytest-asyncio = "^0.26.0"
pytest-asyncio = "^0.26.0"
pytest-cov = "^6.1.1"
12 changes: 10 additions & 2 deletions src/mcphub/adapters/autogen.py
@@ -1,17 +1,25 @@
try:
from typing import List

from autogen_ext.tools.mcp import StdioMcpToolAdapter
from autogen_ext.tools.mcp import StdioMcpToolAdapter, StdioServerParams

from .base import MCPBaseAdapter

class MCPAutogenAdapter(MCPBaseAdapter):
async def create_adapters(self, mcp_name: str) -> List[StdioMcpToolAdapter]:
server_params = self.get_server_params(mcp_name)

autogen_mcp_server_params = StdioServerParams(
command=server_params.command,
args=server_params.args,
env=server_params.env,
cwd=server_params.cwd
)

async with self.create_session(mcp_name) as session:
tools = await session.list_tools()
return [
await StdioMcpToolAdapter.from_server_params(server_params, tool.name)
await StdioMcpToolAdapter.from_server_params(autogen_mcp_server_params, tool.name)
for tool in tools.tools
]
