The terminal dashboard for MCP servers. Now with AI Chat.
Manage, monitor, and debug your Model Context Protocol servers from a single pane of glass -- without leaving the terminal. Talk to AI about your servers and let it call tools on your behalf.
Features • Performance • Install • Quick Start • Tabs • AI Chat • Configuration • Keybindings
Most MCP tooling is browser-based, Python-heavy, or requires Docker. mcp-dashboard is a single Rust binary that installs in seconds and runs anywhere -- your laptop, a server over SSH, a CI runner.
- Zero dependencies -- no Node, no Python, no Docker
- Auto-discovers servers from Claude Desktop, Claude Code, and Cursor
- Persistent connections -- no more spawning/killing processes every health check
- Token cost estimation -- see how much context window each server eats (unique to this tool)
- Tool execution -- call any MCP tool directly from the terminal
- AI Chat -- talk to Claude, GPT, Gemini, or Claude Code about your servers, with full agentic tool execution
| Feature | Description |
|---|---|
| Dashboard | Real-time server health, tool counts, response time sparklines, token estimates |
| Inspector | Browse tools, enter JSON params, execute and see results inline |
| AI Chat | Converse with AI about your MCP servers -- 5 providers, streaming, agentic tool execution |
| Protocol Log | Every MCP method call with direction, timing, and error highlighting |
| Server Logs | Live stderr capture from server processes for debugging |
| Auto-Discovery | Finds servers from ~/.claude/.mcp.json, ~/.cursor/mcp.json, Claude Desktop config |
| HTTP Transport | Connect to remote MCP servers via Streamable HTTP, not just local stdio |
| Multi-Provider AI | Anthropic Claude, OpenAI/GPT, Google Gemini, Claude Code CLI, Cursor CLI |
| Agentic Tools | AI calls MCP tools directly during chat, results inline + logged to Protocol tab |
| Search/Filter | Filter servers by name with / -- handles large server collections |
| Token Estimation | Color-coded context window cost per server (green/yellow/orange/red) |
| Sparklines | Mini response time graphs showing performance trends |
| Help Overlay | Press ? for a complete keybinding reference |
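To make the token-estimation idea concrete, here is an illustrative sketch. The ~4-characters-per-token ratio is a common rough heuristic for English text, and the color thresholds below are hypothetical; the dashboard's actual estimator and bands may differ.

```rust
// Rough context-window cost estimate for a server's tool/resource/prompt
// schemas. ~4 chars/token is a common heuristic, not the dashboard's exact math.
fn estimate_tokens(schema_json: &str) -> usize {
    schema_json.len() / 4
}

// Hypothetical bucketing into green/yellow/orange/red bands.
fn cost_color(tokens: usize) -> &'static str {
    match tokens {
        0..=999 => "green",
        1000..=4999 => "yellow",
        5000..=14999 => "orange",
        _ => "red",
    }
}
```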
The Hidden Cost of MCP Cold Starts
When an MCP client needs to call a tool, there are two approaches:
- Cold start: Spawn the server process, do the MCP handshake, make the call, shut down. This is what happens with one-off scripts, CI pipelines, and any workflow that doesn't keep servers alive between calls.
- Persistent connection: Connect once, keep the process alive, reuse the connection for every subsequent call. This is what mcp-dashboard does, and what IDE extensions (Claude Code, Cursor) do during active sessions.
The difference matters because the actual tool call is fast -- it's the startup that's slow. We benchmarked real production servers to see exactly how much.
Tested with the included mcp-latency benchmark tool against real MCP servers. Each cold-start round spawns a fresh process, initializes MCP, calls list_tools, and shuts down. Each warm-call round reuses an existing persistent connection.
Rust server (db-tunnel):

```text
Cold start breakdown (20 rounds):
  Spawn + MCP Init:       128.6ms  (99% of total)
  list_tools call:          0.4ms  ( 1% of total)
  ──────────────────────────────
  Total per cold start:   129.9ms

Warm call (persistent connection, 50 rounds):
  list_tools call:          0.1ms
```

Node.js server (mobile-api):

```text
Cold start breakdown (20 rounds):
  Spawn + MCP Init:       115.4ms  (94% of total)
  list_tools call:          2.8ms  ( 6% of total)
  ──────────────────────────────
  Total per cold start:   123.2ms

Warm call (persistent connection, 50 rounds):
  list_tools call:          0.5ms
```
The pattern is clear: 94-99% of a cold-start interaction is process spawn and MCP initialization, not the tool call itself.
The advantage compounds with every additional call because you pay the startup cost only once:
| Scenario | Cold Start (spawn each time) | Persistent (connect once) | Time Saved |
|---|---|---|---|
| 1 tool call | 130ms | 130ms | 0ms (same) |
| 10 tool calls | 1,300ms | 131ms | 1.2s |
| 50 tool calls | 6,500ms | 135ms | 6.4s |
| 100 tool calls | 13,000ms | 140ms | 12.9s |
Note: The first call on a persistent connection still pays the full startup cost (~130ms). The savings come from calls 2 through N, which skip spawn and init entirely.
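The arithmetic behind the table is simple enough to sketch. The helper functions below are illustrative only, built from the measured averages above (~130ms spawn + init, ~0.1ms per warm call), not taken from the dashboard's code; the table rounds the persistent totals to 131/135/140ms.

```rust
// Cumulative latency for N tool calls under each strategy.
fn cold_total_ms(calls: u32) -> f64 {
    130.0 * calls as f64 // every call pays spawn + MCP init
}

fn persistent_total_ms(calls: u32) -> f64 {
    // only the first call pays startup; calls 2..N are warm
    130.0 + 0.1 * calls.saturating_sub(1) as f64
}
```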
For agentic AI sessions where the model chains 10-20 tool calls, this turns seconds of overhead into milliseconds.
On a persistent connection, calls can fire as fast as the server handles them:
| Server | 10 rapid calls | Throughput |
|---|---|---|
| Rust (db-tunnel) | 0.8ms total | 12,629 calls/sec |
| Node.js (mobile-api) | 6.0ms total | 1,673 calls/sec |
- Scripts and CI that spawn a server per invocation -- persistent connections save ~130ms per call
- AI agentic loops with many sequential tool calls -- 10+ calls go from seconds to milliseconds
- Health monitoring -- checking server status every 10s costs 0.1ms instead of 130ms
- The dashboard's Inspector tab -- rapid tool iteration without restart overhead
IDE extensions like Claude Code already maintain persistent connections during active sessions, so for single interactive calls in an IDE, the latency difference is less dramatic. The dashboard's advantage for IDE users is more about multi-server visibility, health monitoring, and the AI Chat tab than raw per-call speed.
The benchmark tool is included:
```bash
cargo run --release --bin mcp-latency -- /path/to/your/mcp-server [args...]

# Examples:
cargo run --release --bin mcp-latency -- node dist/index.js
cargo run --release --bin mcp-latency -- /path/to/rust-mcp-server --flag
```

It runs 20 cold-start rounds and 50 warm-call rounds, then shows a full breakdown with percentiles and throughput.
From crates.io:

```bash
cargo install mcp-dashboard
```

With Homebrew:

```bash
brew tap nickslamecode/mcp-dashboard
brew install mcp-dashboard
```

Or download a prebuilt binary from GitHub Releases.

From source:

```bash
git clone https://github.com/NicksLameCode/mcp-dashboard.git
cd mcp-dashboard
cargo install --path .
```

Quick Start

```bash
# Just run it -- servers are auto-discovered from Claude/Cursor configs
mcp-dashboard
```

That's it. If you have MCP servers configured in Claude Desktop, Claude Code, or Cursor, they'll appear automatically.
To add servers manually, edit ~/.config/mcp-dashboard/servers.json (created on first run):
```json
[
  {
    "name": "my-server",
    "command": "node",
    "args": ["dist/index.js"],
    "cwd": "/path/to/server"
  }
]
```

Tabs

Dashboard

The main view. Server list with status indicators, tool/resource/prompt counts, token cost estimates, and source badges. The detail panel shows:
- Server name, uptime, and response time
- Response time sparkline (last 60 checks)
- Token cost breakdown (tools/resources/prompts)
- Browsable tool, resource, and prompt lists (cycle with Tab)
Inspector

Interactive tool execution. Select a tool from the left panel, view its input schema, type JSON arguments, and press Enter to execute. Results appear inline with error highlighting.
Protocol Log

A chronological log of every MCP protocol operation -- initialize, tools/list, tools/call, etc. Shows direction (→ sent, ← received), server name, method, result summary, and round-trip time.
Server Logs

Per-server stderr output captured from child processes. Select a server to view its debug output. Useful for diagnosing startup failures, malformed responses, or server-side errors.
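Capturing a child's stderr is ordinary `std::process` plumbing. The sketch below collects all lines at exit for simplicity; the dashboard itself streams them in real time on a background thread.

```rust
use std::io::{BufRead, BufReader};
use std::process::{Command, Stdio};

// Spawn a command with stderr piped and collect its stderr lines.
// (Illustrative sketch, not the dashboard's actual capture code.)
fn capture_stderr(cmd: &mut Command) -> std::io::Result<Vec<String>> {
    let mut child = cmd.stderr(Stdio::piped()).stdout(Stdio::null()).spawn()?;
    let reader = BufReader::new(child.stderr.take().expect("stderr was piped"));
    let lines = reader.lines().collect::<Result<Vec<_>, _>>()?;
    child.wait()?; // reap the child to avoid a zombie
    Ok(lines)
}
```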
AI Chat

A full AI conversation interface built into the dashboard. Talk to AI about your MCP servers and let it call tools on your behalf.
5 AI Providers:
| Provider | How it connects | Tool execution |
|---|---|---|
| Anthropic Claude | Streaming API with `x-api-key` | Native `tool_use` blocks |
| OpenAI / GPT | Streaming API (also works with Ollama, LM Studio, Azure) | Function calling |
| Google Gemini | Streaming API with `key` param | Coming soon |
| Claude Code | Subprocess via `claude --print` | Via prompt context |
| Cursor | Subprocess via CLI | Via prompt context |
Multi-Server Context: Select which servers the AI knows about. Press Tab to cycle, Space to toggle. The AI sees full tool schemas, resource URIs, prompt definitions, and connection status for selected servers.
Agentic Tool Execution: The AI can call MCP tools directly during conversation. When it does:
- Tool call + arguments appear inline in the chat
- The tool executes against the real MCP server (30s timeout)
- Results appear inline and are logged to the Protocol tab
- The AI continues the conversation with the tool results
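The loop above can be sketched as follows. The `Turn` enum, both closures, and the arrow formatting are illustrative stand-ins, not the dashboard's real types or any provider's wire format.

```rust
// A model turn is either final text or a request to call a tool.
enum Turn {
    Text(String),
    ToolCall { name: String, args: String },
}

// Run one agentic exchange: keep calling the model, executing any tool it
// requests and feeding the result back, until it answers with plain text.
fn run_agentic_turn<M, T>(mut model: M, mut call_tool: T) -> Vec<String>
where
    M: FnMut(Option<&str>) -> Turn, // receives the previous tool result, if any
    T: FnMut(&str, &str) -> String, // executes a tool against the MCP server
{
    let mut transcript = Vec::new();
    let mut tool_result: Option<String> = None;
    loop {
        match model(tool_result.as_deref()) {
            Turn::Text(text) => {
                transcript.push(text);
                break; // plain text ends the turn
            }
            Turn::ToolCall { name, args } => {
                transcript.push(format!("-> {}({})", name, args)); // shown inline
                let result = call_tool(&name, &args);
                transcript.push(format!("<- {}", result)); // result shown inline
                tool_result = Some(result); // fed back on the next model call
            }
        }
    }
    transcript
}
```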
Streaming: Responses stream in real-time with a block cursor. Press Esc to cancel mid-stream.
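Under the hood, the API providers deliver those streams as server-sent events. Here is a minimal sketch of extracting `data:` payload lines from a chunk; real streams also carry event types and JSON deltas that a full client must parse.

```rust
// Pull the payload out of SSE `data:` lines, skipping the common `[DONE]`
// sentinel. (Illustrative sketch, not the dashboard's actual parser.)
fn sse_data_lines(chunk: &str) -> Vec<&str> {
    chunk
        .lines()
        .filter_map(|line| line.strip_prefix("data:").map(str::trim_start))
        .filter(|data| *data != "[DONE]")
        .collect()
}
```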
Quick start:
```bash
# Set your API key
export ANTHROPIC_API_KEY="sk-ant-..."

# Launch the dashboard, press 5 for Chat
mcp-dashboard
```

Or configure all providers in ~/.config/mcp-dashboard/ai.json (auto-created on first run).
Configuration

Stdio server (all fields), in ~/.config/mcp-dashboard/servers.json:

```json
[
  {
    "name": "my-server",
    "command": "/path/to/binary",
    "args": ["--flag", "value"],
    "cwd": "/working/directory",
    "env": { "API_KEY": "sk-..." },
    "config_path": "/path/to/.mcp.json"
  }
]
```

HTTP server:

```json
[
  {
    "name": "remote-server",
    "transport": "http",
    "url": "http://localhost:8080/mcp"
  }
]
```

| Field | Required | Description |
|---|---|---|
| `name` | Yes | Display name |
| `command` | Stdio only | Binary or command to run |
| `args` | No | Command-line arguments |
| `cwd` | No | Working directory |
| `env` | No | Environment variables |
| `transport` | No | `stdio` (default) or `http` |
| `url` | HTTP only | Server endpoint URL |
| `config_path` | No | Path to config file (opened with `e`) |
The Chat tab reads from ~/.config/mcp-dashboard/ai.json (auto-created with defaults on first run):
```json
{
  "default_provider": "anthropic",
  "anthropic": {
    "api_key": "",
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 4096
  },
  "openai": {
    "api_key": "",
    "base_url": "https://api.openai.com/v1",
    "model": "gpt-4o",
    "max_tokens": 4096
  },
  "gemini": {
    "api_key": "",
    "model": "gemini-2.0-flash",
    "max_tokens": 4096
  },
  "claude_code": {
    "command": "claude",
    "args": ["--print", "--output-format", "json"]
  },
  "cursor": {
    "command": "cursor",
    "args": ["--chat"]
  }
}
```

Environment variable fallbacks -- if api_key is empty in the config, these env vars are checked:
| Provider | Environment Variable |
|---|---|
| Anthropic | ANTHROPIC_API_KEY |
| OpenAI | OPENAI_API_KEY |
| Gemini | GEMINI_API_KEY |
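The fallback order described above is easy to sketch: a non-empty config value wins, otherwise the provider's environment variable is consulted. This is an illustrative helper, not the dashboard's actual code.

```rust
use std::env;

// Resolve an API key: explicit config value first, env var second.
fn resolve_api_key(config_key: &str, env_var: &str) -> Option<String> {
    if !config_key.is_empty() {
        return Some(config_key.to_string());
    }
    env::var(env_var).ok().filter(|value| !value.is_empty())
}
```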
The openai.base_url field supports any OpenAI-compatible endpoint -- point it at Ollama (http://localhost:11434/v1), LM Studio, Azure OpenAI, or any other compatible API.
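For example, a hypothetical ai.json fragment pointing the OpenAI provider at a local Ollama instance might look like this (the model name is illustrative, and Ollama ignores the key, though some OpenAI-compatible clients require it to be non-empty):

```json
{
  "openai": {
    "api_key": "ollama",
    "base_url": "http://localhost:11434/v1",
    "model": "llama3.1",
    "max_tokens": 4096
  }
}
```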
Servers are automatically discovered from these locations:
| Source | Config Path |
|---|---|
| Claude Code | ~/.claude/.mcp.json, ~/.claude/mcp.json |
| Cursor | ~/.cursor/mcp.json |
| Claude Desktop | ~/.config/claude/claude_desktop_config.json |
Discovered servers appear alongside manual ones with a source badge in the dashboard. Duplicates (same command + args) are automatically deduplicated.
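Dedup by (command, args) can be sketched like this. The README doesn't say which copy wins when a server appears in several configs; this illustrative version keeps the first occurrence.

```rust
use std::collections::HashSet;

// Drop servers whose (command, args) pair was already seen, keeping order.
// (Illustrative sketch of the dedup rule, not the dashboard's actual code.)
fn dedup_servers(servers: Vec<(String, Vec<String>)>) -> Vec<(String, Vec<String>)> {
    let mut seen = HashSet::new();
    servers
        .into_iter()
        .filter(|(command, args)| seen.insert((command.clone(), args.clone())))
        .collect()
}
```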
Keybindings

Global:

| Key | Action |
|---|---|
| `1` `2` `3` `4` `5` | Switch tabs |
| `j` / `k` | Navigate list |
| `J` / `K` | Scroll detail panel |
| `r` | Refresh all / reconnect |
| `c` | Toggle connection |
| `e` | Edit config in `$EDITOR` |
| `/` | Search / filter servers |
| `?` | Help overlay |
| `q` | Quit |
Inspector tab:

| Key | Action |
|---|---|
| `i` | Edit JSON parameters |
| `Enter` | Execute tool |
| `Esc` | Exit input mode |
Chat tab:

| Key | Action |
|---|---|
| `i` | Enter input mode |
| `Enter` | Send message |
| `Esc` | Exit input / cancel streaming |
| `p` | Cycle AI provider |
| `n` | New conversation |
| `Tab` | Cycle server context |
| `Space` | Toggle server in/out of context |
| `J` / `K` | Scroll messages |
Search mode:

| Key | Action |
|---|---|
| type | Filter by name |
| `Enter` | Keep filter |
| `Esc` | Clear filter |
```text
┌──────────────┐
│  Claude API  │
│  OpenAI API  │◄──── streaming SSE ────┐
│  Gemini API  │                        │
└──────────────┘                        │
┌───────────────┐                       │
│ claude --print│◄── subprocess ──┐     │
│ cursor --chat │                 │     │
└───────────────┘                 │     │
                   ┌──────────────┴─────┴────┐
                   │      mcp-dashboard      │
                   │ (persistent connections │
                   │  + AI chat + agentic)   │
                   └─────┬───────┬───────┬───┘
                         │       │       │
                   stdio │ stdio │  HTTP │
                         │       │       │
                   ┌─────▼─┐ ┌───▼───┐ ┌─▼───────────┐
                   │Server │ │Server │ │Remote Server│
                   │  (A)  │ │  (B)  │ │     (C)     │
                   └───────┘ └───────┘ └─────────────┘
```
- Connect -- spawns stdio servers or opens HTTP connections
- Initialize -- MCP handshake with 15s timeout
- Discover -- queries tools, resources, and prompts in parallel
- Monitor -- health checks every 10s on existing connections
- Capture -- streams stderr from child processes in real-time
- Detect -- polls for server death every 500ms, marks as error
- Chat -- stream AI responses, inject MCP context, execute tools on behalf of the AI
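The death-detection step boils down to polling `try_wait` (the non-blocking check on `std::process::Child`) on an interval. A minimal sketch, with the interval as a parameter since the dashboard's 500ms would slow a quick demo:

```rust
use std::process::{Child, ExitStatus};
use std::thread::sleep;
use std::time::Duration;

// Poll a child process until it exits, returning its status.
// (Illustrative sketch; the dashboard marks the server as errored on exit.)
fn poll_until_dead(child: &mut Child, interval: Duration) -> std::io::Result<ExitStatus> {
    loop {
        if let Some(status) = child.try_wait()? {
            return Ok(status); // process is gone
        }
        sleep(interval); // still alive; check again later
    }
}
```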
Built with rmcp (MCP protocol) and ratatui (terminal UI).
Contributions are welcome. Please open an issue first to discuss what you'd like to change.
```bash
git clone https://github.com/NicksLameCode/mcp-dashboard.git
cd mcp-dashboard
cargo run
```

Licensed under either of Apache License 2.0 or MIT License at your option.
