# MCP Learning Repository

A comprehensive learning repository for Model Context Protocol (MCP) concepts, demonstrating practical implementations from simple tools to complex agentic workflows.

## 🎯 Learning Objectives

This repository helps you understand MCP by building progressively complex examples:

1. **Basic MCP Concepts** - Servers, Tools, Resources, Prompts
2. **Tool Development** - Input validation, error handling, structured logging
3. **Multi-Tool Integration** - Combining tools in a single server
4. **LLM Integration** - Local models via Ollama (no API keys required)
5. **Agentic Workflows** - Orchestrating tools for complex tasks

## 🚀 Quick Start

### Python Implementation

#### Prerequisites

- Python 3.11+
- Ollama installed and running
- At least one model available (e.g., `qwen3:8b`, `llama3.2:latest`)

#### Setup

```bash
# Clone and set up
git clone <repository-url>
cd mcp-learn
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
pip install -r requirements.txt

# Configure environment (optional)
cp .env.example .env
# Edit .env to customize Ollama settings
```

#### Verify Setup

```bash
# Check that Ollama is running
ollama list

# Test the basic server
python src/examples/simple-server/main.py
```

### Java Implementation

#### Prerequisites

- Java 21 (LTS)
- Gradle 8.0+ or build tool of choice
- Ollama installed and running (for LLM examples)

#### Setup

```bash
# Clone and navigate to the Java implementation
git clone <repository-url>
cd mcp-learn/java

# Configure environment (optional)
cp env.example .env
# Edit .env to customize settings

# Build the project
./gradlew build
```

#### Verify Setup

```bash
# Run the simple server example
./gradlew :examples:simple-server:bootRun

# Or run specific examples
./gradlew :examples:multi-tool:bootRun
./gradlew :examples:multi-llm:bootRun
./gradlew :examples:agentic:bootRun
```

## 📚 Learning Path

### Phase 1: Foundations

**Goal:** Understand basic MCP server and tool concepts

```bash
# Simple echo server - demonstrates basic MCP patterns
python src/examples/simple-server/main.py
```

**Key Concepts:** Server initialization, tool registration, input validation, structured logging
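
The registration pattern at the heart of this phase can be sketched without any SDK as a plain decorator-based registry. The names below (`tool`, `call_tool`, `TOOLS`) are illustrative, not the repository's actual API:

```python
# Minimal, dependency-free sketch of the tool-registration pattern.
# Names here are illustrative, not the repository's actual API.
from typing import Callable, Dict

TOOLS: Dict[str, Callable] = {}

def tool(name: str):
    """Decorator that registers a callable under a tool name."""
    def decorator(fn: Callable) -> Callable:
        TOOLS[name] = fn
        return fn
    return decorator

@tool("echo")
def echo(text: str) -> str:
    # Validate input before doing any work.
    if not isinstance(text, str):
        raise TypeError("text must be a string")
    return text

def call_tool(name: str, **kwargs):
    """Dispatch a tool call by name, as a server's request handler would."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](**kwargs)
```

A real MCP server adds a transport and JSON schemas on top of this dispatch loop, but the registry-plus-dispatch shape is the same.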

### Phase 2: Multi-Tool Integration

**Goal:** Learn to combine multiple tools in one server

```bash
# Multi-tool server - combines echo + math tools
python src/examples/multi-tool/main.py
```

**Key Concepts:** Tool composition, validation patterns, error handling
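
One validation pattern worth internalizing here: have every tool return a structured result rather than raising, so a multi-tool server handles errors uniformly. A sketch (the `MAX_INT` limit and result shape are assumptions for illustration, not the repository's actual code):

```python
# Sketch of a validated tool with range checks and a structured
# success/error result. MAX_INT and the dict shape are illustrative.
MAX_INT = 10**9  # assumed range limit

def sum_ints(a, b) -> dict:
    # Reject non-integers (bool is a subclass of int, so exclude it)
    # and values outside the allowed range.
    for name, value in (("a", a), ("b", b)):
        if not isinstance(value, int) or isinstance(value, bool):
            return {"success": False, "error": f"{name} must be an integer"}
        if abs(value) > MAX_INT:
            return {"success": False, "error": f"{name} out of range"}
    return {"success": True, "sum": a + b}
```

With this shape, a server can log and forward `{"success": False, ...}` results without special-casing exceptions per tool.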

### Phase 3: LLM Integration

**Goal:** Integrate local LLM models for text generation

```bash
# Multi-LLM server - adds Ollama integration
python src/examples/multi-llm/main.py
```

**Key Concepts:** LLM clients, model selection, timeout handling, prompt engineering
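
A minimal Ollama client needs nothing beyond the standard library: `POST /api/generate` is Ollama's real completion endpoint, while the wrapper names below are illustrative rather than the repository's actual client:

```python
# Hedged sketch of a minimal Ollama client using only the standard
# library. /api/generate is Ollama's endpoint; function names are
# illustrative.
import json
import urllib.request

def build_payload(prompt: str, model: str = "qwen3:8b") -> dict:
    # stream=False asks Ollama for a single JSON response instead of
    # a stream of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def llm_complete(prompt: str, model: str = "qwen3:8b",
                 host: str = "http://127.0.0.1:11434",
                 timeout: float = 60.0) -> str:
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    # The timeout maps onto the LLM_TIMEOUT setting; a stalled model
    # raises instead of hanging the server.
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())["response"]
```

Separating payload construction from transport keeps the request shape testable without a running Ollama instance.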

### Phase 4: Agentic Workflows

**Goal:** Build multi-step workflows that chain tools together

```bash
# Agentic workflow server - orchestrates complex tasks
python src/examples/agentic/main.py
```

**Key Concepts:** Workflow orchestration, prompt templates, tool chaining, state management

πŸ› οΈ Project Structure

```
mcp-learn/
├── src/mcp/                    # Core MCP implementation
│   ├── lib/                    # Shared utilities
│   │   ├── config.py           # Environment configuration
│   │   ├── logging.py          # Structured logging setup
│   │   └── validation.py       # Input validation helpers
│   ├── schemas/                # Pydantic models
│   │   └── __init__.py         # Tool input/output schemas
│   ├── servers/                # MCP server implementations
│   │   ├── base.py             # Base server class
│   │   └── agents/             # Agentic workflow components
│   │       └── orchestrator.py
│   ├── tools/                  # Tool implementations
│   │   ├── echo.py             # Simple echo tool
│   │   ├── math.py             # Integer math operations
│   │   └── llm_complete.py     # LLM completion tool
│   ├── services/               # External service integrations
│   │   └── llms/               # LLM service clients
│   │       ├── ollama_client.py
│   │       └── factory.py
│   └── prompts/                # Prompt templates
│       └── templates.py        # Reusable prompt patterns
├── src/examples/               # Runnable examples
│   ├── simple-server/          # Basic MCP server
│   ├── multi-tool/             # Multi-tool integration
│   ├── multi-llm/              # LLM integration
│   └── agentic/                # Agentic workflows
├── specs/001-learn-mcp/        # Specification documents
│   ├── contracts/              # API contracts and schemas
│   ├── plan.md                 # Implementation plan
│   └── tasks.md                # Detailed task breakdown
└── requirements.txt            # Python dependencies
```

## 🔧 Available Tools

| Tool | Purpose | Input | Output | Example |
|------|---------|-------|--------|---------|
| `echo` | Text echoing with validation | `text: str` | Echoed text | Simple deterministic behavior |
| `sum_ints` | Integer addition with range checks | `a: int`, `b: int` | Sum result | Math operations with validation |
| `llm_complete` | LLM text generation | `prompt: str`, `model?: str` | Generated text | AI-powered responses |

## 🤖 LLM Integration

### Ollama Setup

```bash
# Install Ollama (if not already installed)
curl -fsSL https://ollama.ai/install.sh | sh

# Start the Ollama service
ollama serve

# Pull the required models
ollama pull qwen3:8b
ollama pull llama3.2:latest
```

### Configuration

Environment variables (.env file):

```bash
# Ollama settings
OLLAMA_HOST=http://127.0.0.1:11434
OLLAMA_MODEL=qwen3:8b

# Logging and timeouts
LOG_LEVEL=INFO
REQUEST_TIMEOUT=30
LLM_TIMEOUT=60
```
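
These settings can be read with the standard library alone; a sketch follows (the variable names match the README, while the `Settings` class and its defaults are illustrative):

```python
# Illustrative settings loader for the .env variables above.
# Defaults mirror the sample values; the Settings class is a sketch,
# not the repository's config.py.
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    ollama_host: str
    ollama_model: str
    log_level: str
    request_timeout: int
    llm_timeout: int

def load_settings(env=os.environ) -> Settings:
    # Read each variable with a sensible fallback, converting the
    # timeout values to integers.
    return Settings(
        ollama_host=env.get("OLLAMA_HOST", "http://127.0.0.1:11434"),
        ollama_model=env.get("OLLAMA_MODEL", "qwen3:8b"),
        log_level=env.get("LOG_LEVEL", "INFO"),
        request_timeout=int(env.get("REQUEST_TIMEOUT", "30")),
        llm_timeout=int(env.get("LLM_TIMEOUT", "60")),
    )
```

Passing the environment as a parameter (instead of reading `os.environ` directly) keeps the loader easy to test.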

## 🔄 Workflow Patterns

### Simple Pattern: Plan → Execute → Summarize

```python
workflow = [
    WorkflowStep("plan", "llm_complete",
                 {"prompt": "Create a plan for: {task}"},
                 "Generate task plan"),
    WorkflowStep("execute", "sum_ints",
                 {"a": 42, "b": 8},
                 "Perform calculation"),
    WorkflowStep("summarize", "llm_complete",
                 {"prompt": "Summarize results for: {task}"},
                 "Create summary")
]
```
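
To make the pattern concrete, here is a runnable sketch of a `WorkflowStep` and a toy executor with stub tools. The real orchestrator lives in `src/mcp/servers/agents/orchestrator.py`; this stands alone for illustration:

```python
# Self-contained sketch of the WorkflowStep pattern with a toy
# executor and stub tools (illustrative, not the repository's
# orchestrator).
from dataclasses import dataclass

@dataclass
class WorkflowStep:
    name: str
    tool: str
    inputs: dict
    description: str

def run_workflow(steps, tools, context):
    """Run each step in order, filling {task}-style placeholders
    from the shared context and recording every result by step name."""
    results = {}
    for step in steps:
        inputs = {k: v.format(**context) if isinstance(v, str) else v
                  for k, v in step.inputs.items()}
        results[step.name] = tools[step.tool](**inputs)
    return results

# Stub tool implementations keyed by tool name.
tools = {
    "echo": lambda text: text,
    "sum_ints": lambda a, b: a + b,
}
steps = [
    WorkflowStep("start", "echo", {"text": "Working on {task}"}, "Begin"),
    WorkflowStep("calculate", "sum_ints", {"a": 42, "b": 8}, "Math"),
]
results = run_workflow(steps, tools, {"task": "demo"})
```

The `results` dict is the workflow's state: later steps (or a final summary) can read earlier outputs from it.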

### Multi-Tool Chain Pattern

```python
workflow = [
    WorkflowStep("start", "echo", {"text": "Starting workflow"}, "Begin"),
    WorkflowStep("calculate", "sum_ints", {"a": 123, "b": 456}, "Math"),
    WorkflowStep("explain", "llm_complete", {"prompt": "Explain 123+456"}, "AI Analysis"),
    WorkflowStep("finish", "echo", {"text": "Workflow complete"}, "End")
]
```

## 📊 Logging and Observability

All components use structured JSON logging:

```json
{
  "timestamp": "2025-01-01T12:00:00Z",
  "level": "INFO",
  "logger": "mcp-server",
  "message": "Tool call successful",
  "tool": "sum_ints",
  "inputs": {"a": 5, "b": 3},
  "outputs": {"success": true, "sum": 8}
}
```
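
Records like the one above can be produced with a small JSON formatter on top of the standard `logging` module. The field names follow the sample; the `JsonFormatter` class itself is a sketch, not the repository's `logging.py`:

```python
# Sketch of a JSON log formatter producing records shaped like the
# sample above (illustrative, not the repository's implementation).
import json
import logging

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        entry = {
            "timestamp": self.formatTime(record, "%Y-%m-%dT%H:%M:%SZ"),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # Merge structured fields passed via logging's `extra` argument.
        entry.update(getattr(record, "fields", {}))
        return json.dumps(entry)

logger = logging.getLogger("mcp-server")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("Tool call successful",
            extra={"fields": {"tool": "sum_ints",
                              "inputs": {"a": 5, "b": 3}}})
```

Because every record is one JSON object per line, the output is directly consumable by log aggregators.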

**Key Log Events:**

- `server_init` - Server startup
- `tool_registered` - Tool registration
- `workflow_start` - Workflow execution begins
- `workflow_step_success` - Individual step completion
- `llm_complete_success` - LLM generation complete

## 🧪 Testing and Validation

### Manual Testing

Each example includes comprehensive demonstrations:

```bash
# Test individual components
python src/examples/simple-server/main.py    # Basic functionality
python src/examples/multi-tool/main.py       # Tool integration
python src/examples/multi-llm/main.py        # LLM integration
python src/examples/agentic/main.py          # Workflow orchestration
```

### Validation Features

- Input validation via Pydantic schemas
- Range checking for numeric inputs
- Timeout protection for LLM calls
- Error resilience in workflows
- Type safety throughout the codebase
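
The timeout-protection idea can be sketched with a worker thread, so a slow LLM call returns a structured error instead of blocking the server indefinitely (illustrative, not the repository's implementation):

```python
# Sketch of timeout protection for slow calls: run the function in a
# worker thread and give up after `timeout` seconds (illustrative).
import concurrent.futures
import time

def call_with_timeout(fn, timeout: float, *args, **kwargs):
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fn, *args, **kwargs)
        try:
            return {"success": True, "result": future.result(timeout=timeout)}
        except concurrent.futures.TimeoutError:
            # Same structured-error shape as the tools use, so the
            # caller handles timeouts like any other failure.
            return {"success": False, "error": f"timed out after {timeout}s"}
```

Note that the executor's shutdown still waits for the abandoned call to finish; a production server would cancel the underlying request (e.g., by closing the HTTP connection) as well.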

## 🎓 Key Learning Outcomes

After working through this repository, you'll understand:

1. **MCP Architecture** - How servers, tools, resources, and prompts work together
2. **Tool Development** - Creating robust, validated, observable tools
3. **LLM Integration** - Connecting local models without external dependencies
4. **Workflow Orchestration** - Building complex multi-step processes
5. **Production Patterns** - Logging, error handling, timeout management
6. **Code Quality** - Validation, type safety, structured design

## 🔗 Next Steps

- **Extend Tools** - Add file operations, API calls, database interactions
- **Advanced Workflows** - Implement conditional branching, parallel execution
- **Resource Management** - Add MCP resources for data access
- **Production Deployment** - Add monitoring, metrics, health checks
- **Integration** - Connect to external systems and APIs

## 🤝 Contributing

This is a learning repository. Feel free to:

- Experiment with the examples
- Add new tools and workflows
- Improve documentation
- Share learning insights

## 📄 License

MIT License - See LICENSE file for details.


**Happy Learning!** 🚀 Start with the simple server example and work your way up to complex agentic workflows.
