MCP Eval is the first comprehensive testing platform for Model Context Protocol (MCP) servers. Whether you're building an MCP server or integrating with one, MCP Eval gives you instant insights into server capabilities, performance, and compatibility.
MCP Eval automatically verifies your server's compatibility with:
- ✅ ChatGPT - Ensure your MCP server works with OpenAI's ChatGPT
- ✅ Claude - Full compatibility testing for Anthropic's Claude
- ✅ Cursor - Validate integration with Cursor IDE
| Feature | MCP Eval | MCP Inspector | Manual Testing |
| --- | --- | --- | --- |
| OAuth Support | ✅ Full OAuth 2.0 | ❌ Not supported | ❌ Manual tokens |
| Intelligent Testing | ✅ Auto-generates test args | ❌ Empty args only | ❌ Manual input |
| Real-time Streaming | ✅ Live progress updates | ❌ Wait for results | |
| Tool Discovery | ✅ Tests all tools | ✅ Shows tools | ❌ One at a time |
| Performance Metrics | ✅ Response times | ❌ No metrics | ❌ Not measured |
| One-Click Testing | ✅ Just paste URL | ❌ Complex setup | |
Visit mcpevals.ai → Paste MCP URL → Click "Evaluate"
```bash
git clone https://github.com/scorecard-ai/mcp-eval.git
cd mcp-eval/mcp-eval-site
npm install
npm run dev
```
Set environment variables in `.env.local`:

```bash
NEXT_PUBLIC_APP_URL=http://localhost:3000
OPENAI_API_KEY=your-key-here
```
```mermaid
flowchart LR
    A[Enter MCP URL] --> B{OAuth Required?}
    B -->|Yes| C[OAuth Discovery]
    B -->|No| D[Direct Connection]
    C --> E[Dynamic Registration]
    E --> F[User Authorization]
    F --> G[Token Exchange]
    G --> H[Test Tools & Resources]
    D --> H
    H --> I[Generate Test Args]
    I --> J[Execute Tests]
    J --> K[Real-time Results]
    K --> L[Compatibility Report]
    style A fill:#e1f5fe
    style L fill:#c8e6c9
    style H fill:#fff3e0
```
- Enter MCP Server URL - Supports both public and OAuth-protected servers
- Automatic OAuth Flow - Handles discovery, registration, and authorization
- Intelligent Testing - Generates appropriate test data for each tool
- Real-time Results - See tests run live with detailed feedback
- Compatibility Report - Get instant confirmation of ChatGPT, Claude, and Cursor support
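The OAuth flow above starts with a metadata lookup against the server's well-known endpoint (RFC 8414). As a minimal sketch of that first discovery step (the `discoveryUrl` helper is illustrative, not part of MCP Eval's API):

```typescript
// Sketch: construct the OAuth 2.0 authorization-server metadata URL for an
// MCP server, the first step of the "OAuth Discovery" box in the flow above.
// Assumes origin-level metadata; real servers may vary.
function discoveryUrl(mcpServerUrl: string): string {
  const { origin } = new URL(mcpServerUrl);
  return `${origin}/.well-known/oauth-authorization-server`;
}
```

Fetching that URL yields the authorization and token endpoints used for dynamic registration and the later token exchange.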
```jsonc
// Tool: search_users
// Generated arguments:
{ "query": "test search", "limit": 10 }
```

```jsonc
// Tool: create_task
// Generated arguments:
{ "title": "Test Task", "priority": "medium" }
```
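A generator like this can be sketched by walking each tool's JSON Schema `inputSchema` and picking a plausible value per property type. The helper below is an illustration of the idea, with assumed defaults, not MCP Eval's actual implementation:

```typescript
// Illustrative sketch: derive test arguments from a tool's JSON Schema.
// The property handling and default values here are assumptions.
type JsonSchema = {
  type?: string;
  properties?: Record<string, JsonSchema>;
  enum?: unknown[];
};

function generateArgs(schema: JsonSchema): Record<string, unknown> {
  const args: Record<string, unknown> = {};
  for (const [name, prop] of Object.entries(schema.properties ?? {})) {
    if (prop.enum?.length) args[name] = prop.enum[0]; // first allowed value
    else if (prop.type === "string") args[name] = `test ${name}`;
    else if (prop.type === "number" || prop.type === "integer") args[name] = 10;
    else if (prop.type === "boolean") args[name] = true;
  }
  return args;
}
```

For a `search_users` tool with a string `query` and integer `limit`, this yields arguments in the same shape as the examples above.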
- Next.js 15 with App Router
- TypeScript
- MCP SDK
- Server-Sent Events for streaming
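The Server-Sent Events transport streams each test result to the browser as it completes. A generic sketch of parsing one SSE message block into an event name and payload (this is standard SSE handling, not MCP Eval's internal code):

```typescript
// Minimal SSE message parser: splits a raw event block into its
// `event` name (defaulting to "message") and joined `data` payload.
function parseSseMessage(raw: string): { event: string; data: string } {
  let event = "message"; // SSE default event type
  const data: string[] = [];
  for (const line of raw.split("\n")) {
    if (line.startsWith("event:")) event = line.slice(6).trim();
    else if (line.startsWith("data:")) data.push(line.slice(5).trimStart());
  }
  return { event, data: data.join("\n") };
}
```

In the browser, the built-in `EventSource` API does this parsing for you; the sketch just shows what travels over the wire per result.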
Report issues at github.com/scorecard-ai/mcp-eval/issues
PRs welcome! See CONTRIBUTING.md for guidelines.
Built by Scorecard AI, the leading platform for AI evaluation and testing.
MIT © Scorecard AI