A comprehensive learning repository for Model Context Protocol (MCP) concepts, demonstrating practical implementations from simple tools to complex agentic workflows.
This repository helps you understand MCP by building progressively complex examples:
- Basic MCP Concepts - Servers, Tools, Resources, Prompts
- Tool Development - Input validation, error handling, structured logging
- Multi-Tool Integration - Combining tools in a single server
- LLM Integration - Local models via Ollama (no API keys required)
- Agentic Workflows - Orchestrating tools for complex tasks
- Python 3.11+
- Ollama installed and running
- At least one model available (e.g., `qwen3:8b`, `llama3.2:latest`)
# Clone and setup
git clone <repository-url>
cd mcp-learn
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
pip install -r requirements.txt
# Configure environment (optional)
cp .env.example .env
# Edit .env to customize Ollama settings
# Check Ollama is running
ollama list
# Test basic server
python src/examples/simple-server/main.py
# Clone and navigate to Java implementation
git clone <repository-url>
cd mcp-learn/java
# Configure environment (optional)
cp env.example .env
# Edit .env to customize settings
# Build the project
./gradlew build
# Run simple server example
./gradlew :examples:simple-server:bootRun
# Or run specific examples
./gradlew :examples:multi-tool:bootRun
./gradlew :examples:multi-llm:bootRun
./gradlew :examples:agentic:bootRun
Goal: Understand basic MCP server and tool concepts
# Simple echo server - demonstrates basic MCP patterns
python src/examples/simple-server/main.py
Key Concepts: Server initialization, tool registration, input validation, structured logging
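The registration pattern can be sketched with a plain-Python tool registry (the helper names here are hypothetical; the repository's real servers use Pydantic schemas and the classes under `src/mcp/servers/`):

```python
# Minimal sketch of tool registration and input validation.
# The registry and decorator names are illustrative, not the repo's API.
from typing import Callable, Dict

TOOLS: Dict[str, Callable[..., dict]] = {}

def register_tool(name: str):
    """Decorator that adds a function to the tool registry."""
    def wrap(fn: Callable[..., dict]) -> Callable[..., dict]:
        TOOLS[name] = fn
        return fn
    return wrap

@register_tool("echo")
def echo(text: str) -> dict:
    # Validate input before doing any work.
    if not isinstance(text, str) or not text:
        raise ValueError("text must be a non-empty string")
    return {"echoed": text}
```

Calling `TOOLS["echo"]("hello")` returns `{"echoed": "hello"}`, while an empty string is rejected before the tool body runs.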
Goal: Learn to combine multiple tools in one server
# Multi-tool server - combines echo + math tools
python src/examples/multi-tool/main.py
Key Concepts: Tool composition, validation patterns, error handling
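A common way to get uniform error handling across several tools is a wrapper that normalizes success and failure into one result shape. This is a sketch under that assumption, not the repository's exact implementation:

```python
# Sketch of a uniform error-handling wrapper for tool calls
# (illustrative; the repo's actual pattern may differ).
def call_tool(fn, **inputs) -> dict:
    """Run a tool and normalize success/failure into one result shape."""
    try:
        return {"success": True, "result": fn(**inputs)}
    except (ValueError, TypeError) as exc:
        # Validation failures become structured errors, not crashes.
        return {"success": False, "error": str(exc)}

def sum_ints(a: int, b: int) -> int:
    if not isinstance(a, int) or not isinstance(b, int):
        raise TypeError("a and b must be integers")
    return a + b
```

With this shape, a workflow can inspect `result["success"]` instead of wrapping every call in its own `try`/`except`.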
Goal: Integrate local LLM models for text generation
# Multi-LLM server - adds Ollama integration
python src/examples/multi-llm/main.py
Key Concepts: LLM clients, model selection, timeout handling, prompt engineering
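Ollama exposes an HTTP endpoint at `/api/generate`; a client call with timeout protection can be sketched with the stdlib as follows (the defaults mirror the `.env` settings below; the repository's `ollama_client.py` may be structured differently):

```python
# Sketch of an Ollama completion request with a timeout
# (assumes Ollama's /api/generate endpoint; stdlib only).
import json
import urllib.request

def build_request(prompt: str, model: str = "qwen3:8b",
                  host: str = "http://127.0.0.1:11434") -> urllib.request.Request:
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

def complete(prompt: str, timeout: float = 60.0) -> str:
    # The timeout guards against a hung model; callers should catch the error.
    with urllib.request.urlopen(build_request(prompt), timeout=timeout) as resp:
        return json.loads(resp.read())["response"]
```

Separating request construction from the network call also makes the client easy to test without a running Ollama instance.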
Goal: Build multi-step workflows that chain tools together
# Agentic workflow server - orchestrates complex tasks
python src/examples/agentic/main.py
Key Concepts: Workflow orchestration, prompt templates, tool chaining, state management
mcp-learn/
├── src/mcp/                 # Core MCP implementation
│   ├── lib/                 # Shared utilities
│   │   ├── config.py        # Environment configuration
│   │   ├── logging.py       # Structured logging setup
│   │   └── validation.py    # Input validation helpers
│   ├── schemas/             # Pydantic models
│   │   └── __init__.py      # Tool input/output schemas
│   ├── servers/             # MCP server implementations
│   │   ├── base.py          # Base server class
│   │   └── agents/          # Agentic workflow components
│   │       └── orchestrator.py
│   ├── tools/               # Tool implementations
│   │   ├── echo.py          # Simple echo tool
│   │   ├── math.py          # Integer math operations
│   │   └── llm_complete.py  # LLM completion tool
│   ├── services/            # External service integrations
│   │   └── llms/            # LLM service clients
│   │       ├── ollama_client.py
│   │       └── factory.py
│   └── prompts/             # Prompt templates
│       └── templates.py     # Reusable prompt patterns
├── src/examples/            # Runnable examples
│   ├── simple-server/       # Basic MCP server
│   ├── multi-tool/          # Multi-tool integration
│   ├── multi-llm/           # LLM integration
│   └── agentic/             # Agentic workflows
├── specs/001-learn-mcp/     # Specification documents
│   ├── contracts/           # API contracts and schemas
│   ├── plan.md              # Implementation plan
│   └── tasks.md             # Detailed task breakdown
└── requirements.txt         # Python dependencies
| Tool | Purpose | Input | Output | Example |
|---|---|---|---|---|
| `echo` | Text echoing with validation | `text: str` | Echoed text | Simple deterministic behavior |
| `sum_ints` | Integer addition with range checks | `a: int, b: int` | Sum result | Math operations with validation |
| `llm_complete` | LLM text generation | `prompt: str, model?: str` | Generated text | AI-powered responses |
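The range checks mentioned for `sum_ints` can be sketched like this (the bound used here is illustrative; the repository defines its own limits):

```python
# Sketch of sum_ints with type and range validation.
# MAX_INT is a hypothetical bound, not the repo's configured limit.
MAX_INT = 10**9

def sum_ints(a: int, b: int) -> dict:
    for name, value in (("a", a), ("b", b)):
        if not isinstance(value, int):
            raise TypeError(f"{name} must be an int")
        if abs(value) > MAX_INT:
            raise ValueError(f"{name} out of range")
    return {"success": True, "sum": a + b}
```

Checking each operand by name keeps the error message actionable for the caller.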
# Install Ollama (if not already installed)
curl -fsSL https://ollama.ai/install.sh | sh
# Start Ollama service
ollama serve
# Pull required models
ollama pull qwen3:8b
ollama pull llama3.2:latest
Environment variables (`.env` file):
# Ollama settings
OLLAMA_HOST=http://127.0.0.1:11434
OLLAMA_MODEL=qwen3:8b
# Logging and timeouts
LOG_LEVEL=INFO
REQUEST_TIMEOUT=30
LLM_TIMEOUT=60
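Reading these settings with sensible defaults can be sketched with the stdlib alone (the repository's loader in `src/mcp/lib/config.py` may differ in shape):

```python
# Sketch of loading the .env-style settings with defaults (stdlib only).
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    ollama_host: str
    ollama_model: str
    log_level: str
    request_timeout: int
    llm_timeout: int

def load_settings(env=os.environ) -> Settings:
    # Every value has a default, so a missing .env still yields a working config.
    return Settings(
        ollama_host=env.get("OLLAMA_HOST", "http://127.0.0.1:11434"),
        ollama_model=env.get("OLLAMA_MODEL", "qwen3:8b"),
        log_level=env.get("LOG_LEVEL", "INFO"),
        request_timeout=int(env.get("REQUEST_TIMEOUT", "30")),
        llm_timeout=int(env.get("LLM_TIMEOUT", "60")),
    )
```

Passing the environment as a parameter makes the loader trivially testable with a plain dict.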
workflow = [
WorkflowStep("plan", "llm_complete",
{"prompt": "Create a plan for: {task}"},
"Generate task plan"),
WorkflowStep("execute", "sum_ints",
{"a": 42, "b": 8},
"Perform calculation"),
WorkflowStep("summarize", "llm_complete",
{"prompt": "Summarize results for: {task}"},
"Create summary")
]
workflow = [
WorkflowStep("start", "echo", {"text": "Starting workflow"}, "Begin"),
WorkflowStep("calculate", "sum_ints", {"a": 123, "b": 456}, "Math"),
WorkflowStep("explain", "llm_complete", {"prompt": "Explain 123+456"}, "AI Analysis"),
WorkflowStep("finish", "echo", {"text": "Workflow complete"}, "End")
]
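Executing such a step list reduces to running each tool in order and collecting results. This is a minimal sketch of that loop (the repository's orchestrator under `servers/agents/` is more elaborate):

```python
# Sketch of a workflow runner for WorkflowStep lists
# (hypothetical orchestrator; error handling and templating omitted).
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class WorkflowStep:
    name: str
    tool: str
    inputs: Dict[str, Any]
    description: str

def run_workflow(steps: List[WorkflowStep],
                 tools: Dict[str, Callable[..., Any]]) -> Dict[str, Any]:
    """Execute each step in order, collecting results keyed by step name."""
    results: Dict[str, Any] = {}
    for step in steps:
        results[step.name] = tools[step.tool](**step.inputs)
    return results
```

For example, running a two-step workflow with an `echo` and a `sum_ints` tool yields one result per step, which later steps (or a summary) can consume.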
All components use structured JSON logging:
{
"timestamp": "2025-01-01T12:00:00Z",
"level": "INFO",
"logger": "mcp-server",
"message": "Tool call successful",
"tool": "sum_ints",
"inputs": {"a": 5, "b": 3},
"outputs": {"success": true, "sum": 8}
}
Key Log Events:
- `server_init` - Server startup
- `tool_registered` - Tool registration
- `workflow_start` - Workflow execution begins
- `workflow_step_success` - Individual step completion
- `llm_complete_success` - LLM generation complete
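A JSON formatter producing records like the one above can be sketched with stdlib `logging` (the repository's setup in `src/mcp/lib/logging.py` may differ; `extra_fields` is a hypothetical attribute name):

```python
# Sketch of a JSON log formatter for structured records (stdlib logging).
import json
import logging
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # Structured fields (e.g. tool, inputs, outputs) pass through unchanged.
        entry.update(getattr(record, "extra_fields", {}))
        return json.dumps(entry)
```

Attach it to a handler with `handler.setFormatter(JsonFormatter())` and every log line becomes machine-parseable.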
Each example includes comprehensive demonstrations:
# Test individual components
python src/examples/simple-server/main.py # Basic functionality
python src/examples/multi-tool/main.py # Tool integration
python src/examples/multi-llm/main.py # LLM integration
python src/examples/agentic/main.py # Workflow orchestration
- Input validation via Pydantic schemas
- Range checking for numeric inputs
- Timeout protection for LLM calls
- Error resilience in workflows
- Type safety throughout the codebase
- API Contracts - Detailed tool and workflow specifications
- Implementation Plan - Technical architecture and decisions
- Task Breakdown - Development task details
After working through this repository, you'll understand:
- MCP Architecture - How servers, tools, resources, and prompts work together
- Tool Development - Creating robust, validated, observable tools
- LLM Integration - Connecting local models without external dependencies
- Workflow Orchestration - Building complex multi-step processes
- Production Patterns - Logging, error handling, timeout management
- Code Quality - Validation, type safety, structured design
- Extend Tools - Add file operations, API calls, database interactions
- Advanced Workflows - Implement conditional branching, parallel execution
- Resource Management - Add MCP resources for data access
- Production Deployment - Add monitoring, metrics, health checks
- Integration - Connect to external systems and APIs
This is a learning repository. Feel free to:
- Experiment with the examples
- Add new tools and workflows
- Improve documentation
- Share learning insights
MIT License - See LICENSE file for details.
Happy Learning! Start with the simple server example and work your way up to complex agentic workflows.