A production-ready MCP server exposing LangChain agent capabilities through the Model Context Protocol, deployed on Google Cloud Run.
This is a standalone backend service that wraps a LangChain agent as a single, high-level MCP Tool. The server is built with FastAPI and deployed on Google Cloud Run, providing a scalable, production-ready solution for exposing AI agent capabilities to any MCP-compliant client.
Live Service: https://langchain-agent-mcp-server-554655392699.us-central1.run.app
- ✅ MCP Compliance - Full Model Context Protocol support
- ✅ LangChain Agent - Multi-step reasoning with ReAct pattern
- ✅ Google Cloud Run - Scalable, serverless deployment
- ✅ Tool Support - Extensible framework for custom tools
- ✅ Production Ready - Error handling, logging, and monitoring
- ✅ Docker Support - Containerized for easy deployment
| Component | Technology | Purpose |
|---|---|---|
| Backend Framework | FastAPI | High-performance, asynchronous web server |
| Agent Framework | LangChain | Multi-step reasoning and tool execution |
| Deployment | Google Cloud Run | Serverless, auto-scaling hosting |
| Containerization | Docker | Consistent deployment environment |
| Protocol | Model Context Protocol (MCP) | Standardized tool and context sharing |
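How these pieces fit together in code: below is a minimal sketch of a FastAPI app wrapping a ReAct-style LangChain agent behind the /mcp/invoke endpoint. It is illustrative only; the real implementation lives in src/main.py and src/agent.py, and the word_count tool, the hub prompt, and the response shape are assumptions.

```python
# Sketch only - not the actual src/main.py. Assumes langchain, langchain-openai,
# and a ReAct prompt pulled from the LangChain hub.
import os

from fastapi import FastAPI
from pydantic import BaseModel
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())


# Build the ReAct agent once at startup; custom tools are registered here.
llm = ChatOpenAI(model=os.getenv("OPENAI_MODEL", "gpt-4o-mini"))
tools = [word_count]
agent = create_react_agent(llm, tools, hub.pull("hwchase17/react"))
executor = AgentExecutor(agent=agent, tools=tools, max_iterations=10)

app = FastAPI()


class InvokeRequest(BaseModel):
    tool: str
    arguments: dict


@app.post("/mcp/invoke")
async def invoke(req: InvokeRequest):
    # The single high-level MCP tool: run the full agent loop on the query.
    result = await executor.ainvoke({"input": req.arguments["query"]})
    return {"content": [{"type": "text", "text": result["output"]}], "isError": False}
```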
- Python 3.11+
- OpenAI API key
- Google Cloud account (for Cloud Run deployment)
- Docker (optional, for local testing)
- Clone the repository:
  git clone https://github.com/mcpmessenger/LangchainMCP.git
  cd LangchainMCP
- Install dependencies:
  # Windows
  py -m pip install -r requirements.txt
  # Linux/Mac
  pip install -r requirements.txt
- Set up environment variables: Create a .env file:
  OPENAI_API_KEY=your-openai-api-key-here
  OPENAI_MODEL=gpt-4o-mini
  PORT=8000
- Run the server:
  # Windows
  py run_server.py
  # Linux/Mac
  python run_server.py
- Test the endpoints:
  - Health: http://localhost:8000/health
  - Manifest: http://localhost:8000/mcp/manifest
  - API Docs: http://localhost:8000/docs
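If you prefer checking from a script rather than a browser, here is a quick smoke test of the local endpoints (assumes the requests package is installed; the exact health payload shape may differ):

```python
import requests

BASE = "http://localhost:8000"

print(requests.get(f"{BASE}/health").json())        # e.g. {"status": "ok"}
print(requests.get(f"{BASE}/mcp/manifest").json())  # the MCP manifest shown in the API section below
```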
The server is designed for deployment on Google Cloud Run. See our comprehensive deployment guides:
- DEPLOY_CLOUD_RUN_WINDOWS.md - Windows deployment guide
- DEPLOY_CLOUD_RUN.md - General deployment guide
- QUICK_DEPLOY.md - Quick reference
# Windows PowerShell
.\deploy-cloud-run.ps1 -ProjectId "your-project-id" -Region "us-central1"
# Linux/Mac
./deploy-cloud-run.sh your-project-id us-central1

- Service URL: https://langchain-agent-mcp-server-554655392699.us-central1.run.app
- Project: slashmcp
- Region: us-central1
- Status: ✅ Live and operational
GET /mcp/manifest

Returns the MCP manifest declaring available tools.
Response:
{
"name": "langchain-agent-mcp-server",
"version": "1.0.0",
"tools": [
{
"name": "agent_executor",
"description": "Execute a complex, multi-step reasoning task...",
"inputSchema": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The user's query or task"
}
},
"required": ["query"]
}
}
]
}

POST /mcp/invoke
Content-Type: application/json
{
"tool": "agent_executor",
"arguments": {
"query": "What is the capital of France?"
}
}

Response:
{
"content": [
{
"type": "text",
"text": "The capital of France is Paris."
}
],
"isError": false
}

Other endpoints:
- GET / - Server information
- GET /health - Health check
- GET /docs - Interactive API documentation (Swagger UI)
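The invoke flow shown above, as a small Python client (a sketch assuming the requests package; swap in http://localhost:8000 when testing a local server):

```python
import requests

URL = "https://langchain-agent-mcp-server-554655392699.us-central1.run.app/mcp/invoke"

payload = {
    "tool": "agent_executor",
    "arguments": {"query": "What is the capital of France?"},
}
resp = requests.post(URL, json=payload, timeout=60)
resp.raise_for_status()

result = resp.json()
if not result.get("isError"):
    # Concatenate the text parts of the MCP content array into one answer.
    print("".join(c["text"] for c in result["content"] if c["type"] == "text"))
```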
| Variable | Description | Default | Required |
|---|---|---|---|
| `OPENAI_API_KEY` | OpenAI API key | - | ✅ Yes |
| `OPENAI_MODEL` | OpenAI model to use | `gpt-4o-mini` | No |
| `PORT` | Server port | `8000` | No |
| `API_KEY` | Optional API key for authentication | - | No |
| `MAX_ITERATIONS` | Maximum agent iterations | `10` | No |
| `VERBOSE` | Enable verbose logging | `false` | No |
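These settings are read from the environment (or a local .env file) at startup. A minimal sketch of how they might be loaded, assuming python-dotenv; the server's actual configuration code may differ:

```python
import os

from dotenv import load_dotenv  # python-dotenv

load_dotenv()  # merge values from a local .env file into the environment

OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]             # required
OPENAI_MODEL = os.getenv("OPENAI_MODEL", "gpt-4o-mini")   # optional, with default
PORT = int(os.getenv("PORT", "8000"))
API_KEY = os.getenv("API_KEY")                            # None leaves auth disabled
MAX_ITERATIONS = int(os.getenv("MAX_ITERATIONS", "10"))
VERBOSE = os.getenv("VERBOSE", "false").lower() == "true"
```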
Full Documentation Site - Complete documentation with examples (GitHub Pages)
Quick Links:
- Getting Started - Set up and run locally
- Examples - Code examples including "Build a RAG agent in 10 lines"
- Deployment Guide - Deploy to Google Cloud Run
- API Reference - Complete API documentation
- Troubleshooting - Common issues and solutions
Build Docs Locally:
# Windows
.\build-docs.ps1 serve
# Linux/Mac
./build-docs.sh serve

Additional Guides:
- README_BACKEND.md - Complete technical documentation
- DEPLOY_CLOUD_RUN_WINDOWS.md - Windows deployment guide
- INSTALL_PREREQUISITES.md - Prerequisites installation
- SLASHMCP_INTEGRATION.md - SlashMCP integration guide
# Test health endpoint
Invoke-WebRequest -Uri "https://langchain-agent-mcp-server-554655392699.us-central1.run.app/health"
# Test agent invocation
$body = @{
tool = "agent_executor"
arguments = @{
query = "What is 2+2?"
}
} | ConvertTo-Json
Invoke-WebRequest -Uri "https://langchain-agent-mcp-server-554655392699.us-central1.run.app/mcp/invoke" `
-Method POST `
-ContentType "application/json" `
-Body $body

.
├── src/
│   ├── main.py                  # FastAPI application with MCP endpoints
│   ├── agent.py                 # LangChain agent definition and tools
│   ├── mcp_manifest.json        # MCP manifest configuration
│   └── start.sh                 # Cloud Run startup script
├── tests/
│   └── test_mcp_endpoints.py    # Test suite
├── Dockerfile                   # Container configuration
├── requirements.txt             # Python dependencies
├── deploy-cloud-run.ps1         # Windows deployment script
├── deploy-cloud-run.sh          # Linux/Mac deployment script
└── cloudbuild.yaml              # Cloud Build configuration
- Scalable - Auto-scales based on traffic
- Serverless - Pay only for what you use
- Managed - No infrastructure to manage
- Fast - Low latency over Google's global network
See DEPLOY_CLOUD_RUN_WINDOWS.md for detailed instructions.
docker build -t langchain-agent-mcp-server .
docker run -p 8000:8000 -e OPENAI_API_KEY=your-key langchain-agent-mcp-server

- P95 Latency: < 5 seconds for standard 3-step ReAct chains
- Scalability: Horizontal scaling on Cloud Run
- Uptime: 99.9% target (Cloud Run SLA)
- Throughput: Handles concurrent requests efficiently
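To get a rough feel for the latency figure on your own connection, here is a quick, non-rigorous timing loop against the live service (assumes the requests package; a cold start will skew the first sample):

```python
import time

import requests

URL = "https://langchain-agent-mcp-server-554655392699.us-central1.run.app/mcp/invoke"
payload = {"tool": "agent_executor", "arguments": {"query": "What is 2+2?"}}

samples = []
for _ in range(5):
    start = time.perf_counter()
    requests.post(URL, json=payload, timeout=60).raise_for_status()
    samples.append(time.perf_counter() - start)

print(f"min {min(samples):.2f}s  max {max(samples):.2f}s  mean {sum(samples)/len(samples):.2f}s")
```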
- API key authentication (optional)
- Environment variable management
- Secret Manager integration (Cloud Run)
- HTTPS by default (Cloud Run)
- CORS configuration
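When the optional API_KEY variable is set, requests without a matching key should be rejected. Below is a minimal sketch of how such a gate can be wired into FastAPI, assuming an X-API-Key header; the header name and wiring are assumptions, so check the server code for the actual mechanism.

```python
import os

from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()


async def require_api_key(x_api_key: str | None = Header(default=None)):
    expected = os.getenv("API_KEY")
    if expected and x_api_key != expected:
        # Only enforced when API_KEY is configured; otherwise requests pass through.
        raise HTTPException(status_code=401, detail="Invalid or missing API key")


@app.post("/mcp/invoke", dependencies=[Depends(require_api_key)])
async def invoke(payload: dict):
    ...  # hand the query to the agent as usual
```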
We welcome contributions! Please see our contributing guidelines.
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
This project is licensed under the MIT License.
- GitHub Repository: https://github.com/mcpmessenger/LangchainMCP
- Live Service: https://langchain-agent-mcp-server-554655392699.us-central1.run.app
- API Documentation: https://langchain-agent-mcp-server-554655392699.us-central1.run.app/docs
- Model Context Protocol: https://modelcontextprotocol.io/
- Built with LangChain
- Deployed on Google Cloud Run
- Uses FastAPI for the web framework
Status: ✅ Production-ready and deployed on Google Cloud Run