A production-ready MCP (Model Context Protocol) server built with TypeScript. It combines basic tools with advanced LLM-powered capabilities, type-safe provider and model selection, and dual-mode operation (STDIO + Streamable HTTP with OAuth).
Create a production-ready MCP server in under 2 minutes:
npm create @mcp-typescript-simple@latest my-mcp-server
cd my-mcp-server
cp .env.example .env
npm run dev:stdio

What you get:
- ✅ Full-featured MCP server (OAuth, LLM, Docker)
- ✅ Graceful degradation (works without API keys)
- ✅ Production-ready testing (unit + system tests)
- ✅ Docker deployment (nginx + Redis + multi-replica)
- ✅ Validation pipeline (vibe-validate)
Adding API keys:
Edit .env to add your provider keys (all optional):
- `ANTHROPIC_API_KEY` - Claude LLM tools
- `OPENAI_API_KEY` - GPT LLM tools
- `GOOGLE_API_KEY` - Gemini LLM tools
- `GOOGLE_CLIENT_ID`/`SECRET` - Google OAuth
- `GITHUB_CLIENT_ID`/`SECRET` - GitHub OAuth
- `MICROSOFT_CLIENT_ID`/`SECRET` - Microsoft OAuth
See Getting Started Guide for full documentation.
- Provider Selection: Choose between Claude, OpenAI, and Gemini with compile-time validation
- Model Selection: Select specific models per provider with type safety
- Intelligent Defaults: Each tool optimized for specific provider/model combinations
- Runtime Flexibility: Override provider/model per request
- Backward Compatibility: Existing code continues to work unchanged
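As a rough illustration, this kind of provider/model pairing can be enforced at compile time with a mapped type. This is a sketch only; the type names below are illustrative, not the project's actual exports:

```typescript
// Illustrative only: type names are hypothetical; models match the lists later in this README.
type LLMProvider = "claude" | "openai" | "gemini";

// Map each provider to the literal union of models it accepts.
interface ProviderModels {
  claude: "claude-3-haiku-20240307" | "claude-3-sonnet-20240229" | "claude-3-opus-20240229";
  openai: "gpt-3.5-turbo" | "gpt-4" | "gpt-4-turbo" | "gpt-4o" | "gpt-4o-mini";
  gemini: "gemini-1.5-flash" | "gemini-1.5-pro" | "gemini-1.0-pro";
}

// A request is valid only when the model belongs to the chosen provider.
interface LLMRequest<P extends LLMProvider = LLMProvider> {
  provider: P;
  model: ProviderModels[P];
  message: string;
}

const ok: LLMRequest<"openai"> = { provider: "openai", model: "gpt-4o", message: "hi" };
// const bad: LLMRequest<"openai"> = { provider: "openai", model: "gemini-1.5-pro", message: "hi" };
// ^ rejected by the compiler: "gemini-1.5-pro" is not an OpenAI model
```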
- STDIO Mode: Traditional stdin/stdout for development and Claude Desktop
- Streamable HTTP Mode: HTTP endpoints with streaming support for web applications
- OAuth Authentication: Secure Google/GitHub/Microsoft OAuth integration for production
- Dynamic Client Registration: RFC 7591 compliant OAuth DCR for automatic client registration
- Development Bypass: Easy auth bypass for local development
- Claude Code Ready: Full compatibility with Claude Code integration
- Structured Logging: Pino-based high-performance logging with environment-aware configuration
- OpenTelemetry Integration: Distributed tracing and metrics collection
- Session Correlation: Secure UUID-based session tracking for distributed systems
- Local Development: Grafana OTEL-LGTM stack for observability validation
- Cross-Platform: Compatible with Express.js, Kubernetes, and Vercel deployments
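A minimal sketch of what session-correlated Pino logging can look like (the configuration shown is an assumption, not the project's exact setup):

```typescript
import { randomUUID } from "node:crypto";
import pino from "pino";

// Base logger; the level is driven by the environment.
const logger = pino({ level: process.env.LOG_LEVEL ?? "info" });

// A child logger stamps every entry with the session's UUID, so log lines
// from one session can be correlated across distributed components.
const sessionLogger = logger.child({ sessionId: randomUUID() });
sessionLogger.info({ tool: "chat" }, "tool invocation started");
```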
This project provides a containerized MCP server with comprehensive CI/CD testing and multi-LLM support:
- hello: Greets a person by name
- echo: Echoes back a provided message
- current-time: Returns the current timestamp
- chat: Interactive AI assistant using Claude Haiku (fast responses)
- analyze: Deep text analysis using GPT-4 (sentiment, themes, structure)
- summarize: Text summarization using Gemini Flash (cost-effective)
- explain: Educational explanations using Claude (clear, adaptive to level)
Note: LLM tools require API keys. The server gracefully runs with basic tools only if no API keys are configured.
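A small sketch of how such detection can work, assuming the environment variable names documented in this README:

```typescript
// Illustrative runtime detection of configured LLM providers.
type Provider = "claude" | "openai" | "gemini";

const keyEnvVars: Record<Provider, string> = {
  claude: "ANTHROPIC_API_KEY",
  openai: "OPENAI_API_KEY",
  gemini: "GOOGLE_API_KEY",
};

// Providers whose API keys are present in the environment.
const available = (Object.keys(keyEnvVars) as Provider[]).filter(
  (p) => Boolean(process.env[keyEnvVars[p]]),
);

console.log(
  available.length > 0
    ? `LLM tools enabled for: ${available.join(", ")}`
    : "No API keys found; running with basic tools only",
);
```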
- Node.js 22+ (Current LTS)
- Docker (via Colima on macOS)
- Optional: API keys for LLM providers (Anthropic, OpenAI, Google)
Copy the example environment file and add your API keys:
cp .env.example .env
# Edit .env with your API keys

API Key Sources:
- Anthropic Claude: https://console.anthropic.com/
- OpenAI: https://platform.openai.com/api-keys
- Google Gemini: https://ai.google.dev/
Tip: You can use any combination of providers. The server automatically detects which API keys are present and enables the corresponding tools.
# Install dependencies
npm install
# STDIO Mode (traditional MCP - recommended for development)
npm run dev:stdio
# Streamable HTTP Mode (for web development - no auth)
npm run dev:http
# Streamable HTTP Mode (with OAuth - requires Google credentials)
npm run dev:oauth
# Vercel Serverless Development (test as serverless functions)
npm run dev:vercel
# Build the project
npm run build
# Production STDIO mode
npm start
# Type checking
npm run typecheck
# Linting
npm run lint
# Unit tests with coverage
npm run test:unit
# Integration / CI suite
npm run test:integration
# Test dual-mode functionality
npm run test:dual-mode
# Observability features
npm run otel:start # Start Grafana OTEL-LGTM stack (requires Docker)
npm run otel:stop # Stop observability stack
npm run otel:logs # View observability stack logs
# API Documentation
npm run docs:validate # Validate OpenAPI specification
npm run docs:preview # Preview docs locally with Redocly
npm run docs:build # Build static Redoc HTML
npm run docs:bundle   # Bundle OpenAPI spec to JSON

This project includes a comprehensive OpenAPI 3.1 specification and interactive documentation. When running the server (development or production), access:
- 📖 `/docs` - Beautiful read-focused documentation (Redoc)
- 🧪 `/api-docs` - Interactive API testing with OAuth support (Swagger UI)
- 📄 `/openapi.yaml` - OpenAPI specification (YAML format)
- 📄 `/openapi.json` - OpenAPI specification (JSON format)
Example: Start the server and visit http://localhost:3000/docs
npm run dev:http
# Open http://localhost:3000/docs in your browser

The documentation includes:
- All HTTP endpoints with request/response examples
- OAuth 2.0 authentication flows with interactive testing
- JSON-RPC 2.0 MCP protocol endpoints
- Dynamic Client Registration (RFC 7591/7592)
- OAuth Discovery metadata (RFC 8414/9728)
- Admin and monitoring endpoints
- 🚀 Traditional Development: Use the commands above for STDIO/Streamable HTTP modes
- 🛠️ Vercel Local Development - Complete guide for developing with Vercel locally
- 🏗️ System Architecture - Detailed architecture overview with diagrams
- 🔄 Dual-Mode Operation Guide - Understanding STDIO and HTTP transport modes
- 🔐 OAuth Setup Guide - Configure OAuth authentication
- 🔑 OAuth Dynamic Client Registration - Automatic client registration (RFC 7591)
- 📊 Observability Setup - Structured logging and OpenTelemetry integration
# Build production Docker image
npm run run:docker:build
# Run Docker container with multi-provider OAuth
npm run run:docker # Uses .env.oauth (supports Google, GitHub, Microsoft)
# Manual Docker commands
docker build -t mcp-typescript-simple:latest .
docker run --rm -p 3000:3000 --env-file .env.oauth mcp-typescript-simple:latest

For production-grade horizontal scaling with Redis session persistence:
# Start 3 MCP servers + Redis + Nginx load balancer
docker-compose --profile loadbalanced up -d
# Test the load-balanced deployment
curl http://localhost:8080/health

Features:
- 3 MCP server instances with round-robin load balancing
- Redis-backed session persistence and recovery
- Session handoff across instances
- OpenTelemetry observability integration
📖 Multi-Node Deployment Guide - Complete guide for horizontally scaled deployment with testing instructions
Test with increasing production-like fidelity:
Level 1: Development Mode (TypeScript via tsx)
npm run dev:oauth

Level 2: Docker Container
npm run run:docker:build
npm run run:docker:google

Level 3: Vercel Serverless
Automatic deployment via GitHub Actions on PR merge to main.
src/
├── index.ts              # Main MCP server implementation
├── auth/                 # OAuth authentication system
├── config/               # Environment and configuration management
├── llm/                  # Multi-LLM provider integration
├── observability/        # Structured logging and OpenTelemetry integration
├── server/               # HTTP and MCP server implementations
├── session/              # Session management
├── tools/                # MCP tool implementations
└── transport/            # Transport layer abstractions
api/                      # Vercel serverless functions
├── mcp.ts                # Main MCP protocol endpoint
├── health.ts             # Health check and status
├── auth.ts               # OAuth authentication endpoints
└── admin.ts              # Administration and metrics
test/
├── unit/                 # Unit tests (mirrors src/ structure)
├── integration/          # Integration tests
└── system/               # End-to-end system tests
tools/                    # Manual development and testing utilities
├── interactive-client.ts # Interactive MCP client
├── remote-http-client.ts # Remote HTTP MCP client
├── test-oauth.ts         # OAuth flow testing
└── manual/               # Manual testing scripts
docs/                     # Deployment and architecture documentation
examples/                 # Usage examples and demonstrations
.github/workflows/        # GitHub Actions CI/CD pipeline
build/                    # Compiled TypeScript output
Dockerfile                # Container configuration
vercel.json               # Vercel deployment configuration
This project uses a comprehensive testing approach with multiple layers:
- Unit Tests: Individual component testing (`test/unit/`)
- Integration Tests: Component interaction testing (`test/integration/`)
- System Tests: End-to-end deployment validation (`test/system/`)
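For illustration, a unit test in this layout might look like the following sketch (it uses Node's built-in test runner; the project's actual framework and the inlined `echo` handler are assumptions):

```typescript
// test/unit/tools/echo.test.ts — illustrative only
import { test } from "node:test";
import assert from "node:assert/strict";

// Stand-in for the echo tool's core logic.
function echo(message: string): string {
  return message;
}

test("echo returns its input unchanged", () => {
  assert.equal(echo("hello"), "hello");
});
```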
End-to-end system tests validate the complete deployed application. See test/system/README.md for detailed documentation.
# Run against local development server
npm run test:system:local
# Run against Docker container
npm run test:system:docker
# Run against Vercel preview deployment
npm run test:system:preview
# Run against production deployment
npm run test:system:production

System tests cover:
- Health & Configuration: Deployment validation and environment detection
- Authentication: OAuth provider configuration and security
- MCP Protocol: JSON-RPC compliance and tool discovery
- Tool Functionality: Basic tools and LLM integration testing
Primary command for GitHub Actions and automated testing:
# Complete regression test suite - USE THIS FOR CI/CD
npm run test:ci
# Alternative: Full validation including build
npm run validate

The `test:ci` command runs:
- TypeScript compilation
- Type checking
- Code linting
- MCP server startup validation
- MCP protocol compliance
- All tool functionality tests
- Error handling verification
- Docker build test (if Docker available)
Unit tests live under `test/unit/` (mirroring `src/**` paths, e.g. `test/unit/config/environment.test.ts`) and feed `npm run test:unit`; integration suites live under `test/integration/` and are exercised by `npm run test:integration`.
# Individual test commands
npm run test:mcp # MCP-specific functionality tests
npm run test:interactive # Interactive client testing
npm run typecheck # TypeScript type validation
npm run lint # Code quality checks
npm run build           # Compilation test

# Run MCP protocol and tool tests
npx tsx tools/manual/test-mcp.ts

Launch an interactive client to test tools locally:
# Start interactive MCP client (STDIO mode)
npx tsx tools/interactive-client.ts

Connect to remote MCP servers using Bearer token authentication:
# Basic usage
npx tsx tools/remote-http-client.ts --url http://localhost:3000/mcp --token your-bearer-token
# With verbose logging
npx tsx tools/remote-http-client.ts --url http://localhost:3000/mcp --token your-token --verbose
# Debug mode with full request/response logging
npx tsx tools/remote-http-client.ts --url http://localhost:3000/mcp --token your-token --debug
# Non-interactive mode (for scripting)
npx tsx tools/remote-http-client.ts --url http://localhost:3000/mcp --token your-token --no-interactive

Remote HTTP Client Features:
- 🔐 Bearer token authentication (bypasses OAuth flows)
- 📝 Comprehensive logging with multiple debug levels
- 🔍 Error analysis with debugging hints and categorization
- 🛠️ Interactive tool discovery and execution
- ⏱️ Request/response correlation and timing
- 🔒 Secure token display (automatic masking)
- 📡 Full MCP protocol compliance
- 🌐 Works with any remote HTTP MCP server
- `help` - Show available commands and discovered tools
- `list` - List all available tools dynamically
- `describe <tool>` - Show detailed tool information with parameters
- `<tool-name> <args>` - Call any discovered tool directly
- `call <tool> <json-args>` - Call tools with JSON arguments
- `raw <json>` - Send raw JSON-RPC requests
- `debug [on|off]` - Toggle debug logging (HTTP client only)
- `quit` - Exit the client
The interactive client dynamically discovers all available MCP tools and provides context-aware help and parameter guidance.
For advanced testing with a graphical interface:
# Install MCP Inspector
npm install -g @modelcontextprotocol/inspector
# Launch with web interface
mcp-inspector npx tsx src/index.ts

For manual testing and development workflows, several utility scripts are available in the `tools/` directory:
# Test OAuth flow interactively
node tools/test-oauth.js --flow
# Test server health
node tools/test-oauth.js
# Test with existing token
node tools/test-oauth.js --token <your_token>

# Start official Vercel development server
npm run dev:vercel
# Test MCP protocol compliance
npm run test:mcp

These tools help with:
- OAuth Flow Validation: Test authentication flows with real providers
- Local Vercel Testing: Mock Vercel environment for development
- API Function Testing: Direct testing of serverless functions
- MCP Protocol Debugging: Low-level MCP endpoint testing
The project includes a complete CI/CD pipeline in .github/workflows/ci.yml:
- Node.js 22 Testing: Standardized on current LTS
- Regression Testing: Runs `npm run test:ci` on every push/PR
- Docker Validation: Builds and tests Docker image
- Deployment Ready: Provides deployment checkpoint
Pipeline triggers:
- Push to `main` or `develop` branches
- Pull requests to `main`
Greets a person by name.
- Input: `name` (string, required)
- Output: Greeting message
Echoes back the provided message.
- Input: `message` (string, required)
- Output: Echo of the input message
Returns the current timestamp.
- Input: None
- Output: ISO timestamp string
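For reference, a basic tool like `echo` is typically wired up with the MCP TypeScript SDK along these lines (a sketch, not the project's exact code in `src/tools/`):

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "mcp-typescript-simple", version: "1.0.0" });

// Register a tool: name, zod input schema, and an async handler.
server.tool(
  "echo",
  { message: z.string().describe("Message to echo back") },
  async ({ message }) => ({
    content: [{ type: "text", text: message }],
  }),
);

// STDIO transport: the server speaks JSON-RPC over stdin/stdout.
await server.connect(new StdioServerTransport());
```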
Type-Safe Provider & Model Selection: All LLM tools support optional `provider` and `model` parameters for fine-grained control over which AI model to use.
Interactive AI assistant with flexible provider and model selection.
- Input:
  - `message` (string, required): Your message to the AI
  - `system_prompt` (string, optional): Custom system instructions
  - `temperature` (number, optional): Creativity level 0-2 (default: 0.7)
  - `provider` (enum, optional): 'claude' | 'openai' | 'gemini' (default: 'claude')
  - `model` (string, optional): Specific model to use (must be valid for provider)
- Output: AI response
- Default: Claude Haiku (optimized for speed)
- Examples:
  - Claude: `claude-3-haiku-20240307`, `claude-3-sonnet-20240229`
  - OpenAI: `gpt-4`, `gpt-4o`, `gpt-4o-mini`
  - Gemini: `gemini-1.5-flash`, `gemini-1.5-pro`
Deep text analysis with configurable AI models.
- Input:
  - `text` (string, required): Text to analyze
  - `analysis_type` (enum, optional): 'sentiment', 'themes', 'structure', 'comprehensive', 'summary'
  - `focus` (string, optional): Specific aspect to focus on
  - `provider` (enum, optional): 'claude' | 'openai' | 'gemini' (default: 'openai')
  - `model` (string, optional): Specific model to use
- Output: Detailed analysis based on type
- Default: OpenAI GPT-4 (optimized for reasoning)
Text summarization with cost-effective model options.
- Input:
  - `text` (string, required): Text to summarize
  - `length` (enum, optional): 'brief', 'medium', 'detailed'
  - `format` (enum, optional): 'paragraph', 'bullets', 'outline'
  - `focus` (string, optional): Specific aspect to focus on
  - `provider` (enum, optional): 'claude' | 'openai' | 'gemini' (default: 'gemini')
  - `model` (string, optional): Specific model to use
- Output: Formatted summary
- Default: Gemini Flash (optimized for cost and speed)
Educational explanations with adaptive AI models.
- Input:
  - `topic` (string, required): Topic, concept, or code to explain
  - `level` (enum, optional): 'beginner', 'intermediate', 'advanced'
  - `context` (string, optional): Additional context or domain
  - `include_examples` (boolean, optional): Include examples (default: true)
  - `provider` (enum, optional): 'claude' | 'openai' | 'gemini' (default: 'claude')
  - `model` (string, optional): Specific model to use
- Output: Clear explanation adapted to level
- Default: Claude Sonnet (optimized for clarity and detail)
- Flexible Provider Selection: Choose between Claude, OpenAI, and Gemini at runtime
- Model-Specific Optimization: Each provider supports multiple models with different capabilities
- Intelligent Defaults: Each tool has an optimized default provider/model combination
- Automatic Fallback: Graceful degradation when a provider is unavailable (see the sketch after this list)
- Type Safety: Compile-time validation prevents invalid provider/model combinations
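A sketch of what such a fallback chain can look like (the `withFallback` helper is hypothetical, not the project's API):

```typescript
type Provider = "claude" | "openai" | "gemini";

// Try the preferred provider first, then any other available provider.
async function withFallback(
  preferred: Provider,
  available: Provider[],
  call: (p: Provider) => Promise<string>,
): Promise<string> {
  const order = [preferred, ...available.filter((p) => p !== preferred)];
  let lastError: unknown;
  for (const provider of order) {
    try {
      return await call(provider);
    } catch (err) {
      lastError = err; // provider failed or is unconfigured; try the next one
    }
  }
  throw lastError ?? new Error("No LLM providers available");
}
```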
- claude-3-haiku-20240307: Fast, cost-effective responses
- claude-3-sonnet-20240229: Balanced performance and capability
- claude-3-opus-20240229: Highest capability for complex tasks
- gpt-3.5-turbo: Fast, cost-effective general purpose
- gpt-4: High capability reasoning and analysis
- gpt-4-turbo: Enhanced performance with larger context
- gpt-4o: Optimized multimodal capabilities
- gpt-4o-mini: Efficient version of GPT-4o
- gemini-1.5-flash: Fast, cost-effective processing
- gemini-1.5-pro: High capability with large context
- gemini-1.0-pro: Standard performance model
{
"name": "chat",
"arguments": {
"message": "Hello, how are you?"
}
}

{
"name": "analyze",
"arguments": {
"text": "Sample text to analyze",
"provider": "claude"
}
}

{
"name": "chat",
"arguments": {
"message": "Complex question requiring deep reasoning",
"provider": "openai",
"model": "gpt-4"
}
}

- Tiered Approach: Environment variables → File-based (.env) → Fallback
- Runtime Detection: Automatically detects available providers
- Secure Defaults: No hardcoded secrets, graceful failure modes
- Multi-Provider Support: Works with any combination of available API keys
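A minimal sketch of that tiered lookup, assuming `dotenv` for the file-based layer (the `getConfig` helper is illustrative):

```typescript
import { config as loadDotenv } from "dotenv";

// Merge .env into process.env; dotenv never overwrites variables that are
// already set, so real environment variables keep precedence over the file.
loadDotenv();

function getConfig(name: string, fallback?: string): string | undefined {
  return process.env[name] ?? fallback;
}

const anthropicKey = getConfig("ANTHROPIC_API_KEY"); // undefined => Claude tools disabled
```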
Deploy the MCP server as Vercel serverless functions with full streaming support.
- Serverless Functions: Auto-scaling serverless endpoints
- Streamable HTTP: Full MCP streaming protocol support
- Multi-Provider OAuth: Google, GitHub, Microsoft authentication
- Built-in Monitoring: Health checks, metrics, and request logging
- Global CDN: Vercel's edge network for optimal performance
- `/api/mcp` - MCP protocol endpoint
- `/api/health` - Health and status checks
- `/api/auth` - OAuth authentication flows
- `/api/admin` - Metrics and administration
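For orientation, a Vercel function such as `api/health.ts` follows this general shape (a sketch; the response payload shown is an assumption):

```typescript
import type { VercelRequest, VercelResponse } from "@vercel/node";

// Vercel invokes the default export once per request.
export default function handler(_req: VercelRequest, res: VercelResponse) {
  res.status(200).json({
    status: "ok",
    timestamp: new Date().toISOString(),
  });
}
```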
- 🚀 Quick Start - 5-minute deployment
- 📖 Complete Deployment Guide - Detailed deployment instructions
- 🛠️ Local Development - Develop and test locally with Vercel
For traditional server deployment, use the standard Node.js build:
npm run build
npm start