Query and analyze LLM traces with AI assistance. Ask Claude to find expensive API calls, debug errors, compare model performance, or track token usage—all from your IDE.
An MCP (Model Context Protocol) server that connects AI assistants to OpenTelemetry trace backends (Jaeger, Tempo, Traceloop), with specialized support for LLM observability through OpenLLMetry semantic conventions.
- Quick Start
- Installation
- Features
- Configuration
- Tools Reference
- Example Queries
- Common Workflows
- Troubleshooting
- Development
- Support
No installation required! Configure your client to run the server directly from PyPI:
// Add to claude_desktop_config.json:
{
"mcpServers": {
"opentelemetry-mcp": {
"command": "pipx",
"args": ["run", "opentelemetry-mcp"],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
}
Or use uvx (alternative):
{
"mcpServers": {
"opentelemetry-mcp": {
"command": "uvx",
"args": ["opentelemetry-mcp"],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
}
That's it! Ask Claude: "Show me traces with errors from the last hour"
# Run without installing (recommended)
pipx run opentelemetry-mcp --backend jaeger --url http://localhost:16686
# Or with uvx
uvx opentelemetry-mcp --backend jaeger --url http://localhost:16686
This approach:
- ✅ Always uses the latest version
- ✅ No global installation needed
- ✅ Isolated environment automatically
- ✅ Works on all platforms
Claude Desktop
Configure the MCP server in your Claude Desktop config file:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
Using pipx (recommended):
{
"mcpServers": {
"opentelemetry-mcp": {
"command": "pipx",
"args": ["run", "opentelemetry-mcp"],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
}
Using uvx (alternative):
{
"mcpServers": {
"opentelemetry-mcp": {
"command": "uvx",
"args": ["opentelemetry-mcp"],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
}
For Traceloop backend:
{
"mcpServers": {
"opentelemetry-mcp": {
"command": "pipx",
"args": ["run", "opentelemetry-mcp"],
"env": {
"BACKEND_TYPE": "traceloop",
"BACKEND_URL": "https://api.traceloop.com",
"BACKEND_API_KEY": "your_traceloop_api_key_here"
}
}
}
}
Using the repository instead of pipx?
If you're developing locally with the cloned repository, use one of these configurations:
Option 1: Wrapper script (easy backend switching)
{
"mcpServers": {
"opentelemetry-mcp": {
"command": "/absolute/path/to/opentelemetry-mcp-server/start_locally.sh"
}
}
}
Option 2: UV directly (for multiple backends)
{
"mcpServers": {
"opentelemetry-mcp-jaeger": {
"command": "uv",
"args": [
"--directory",
"/absolute/path/to/opentelemetry-mcp-server",
"run",
"opentelemetry-mcp"
],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
}
Claude Code
Claude Code works with MCP servers configured in your Claude Desktop config. Once configured as shown above, you can use the server from the Claude Code CLI:
# Verify the server is available
claude-code mcp list
# Use Claude Code with access to your OpenTelemetry traces
claude-code "Show me traces with errors from the last hour"Codeium (Windsurf)
- Open Windsurf
- Navigate to Settings → MCP Servers
- Click Add New MCP Server
- Add this configuration:
Using pipx (recommended):
{
"opentelemetry-mcp": {
"command": "pipx",
"args": ["run", "opentelemetry-mcp"],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
Using uvx (alternative):
{
"opentelemetry-mcp": {
"command": "uvx",
"args": ["opentelemetry-mcp"],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
Using the repository instead?
{
"opentelemetry-mcp": {
"command": "uv",
"args": [
"--directory",
"/absolute/path/to/opentelemetry-mcp-server",
"run",
"opentelemetry-mcp"
],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
Cursor
- Open Cursor
- Navigate to Settings → MCP
- Click Add new MCP Server
- Add this configuration:
Using pipx (recommended):
{
"opentelemetry-mcp": {
"command": "pipx",
"args": ["run", "opentelemetry-mcp"],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
Using uvx (alternative):
{
"opentelemetry-mcp": {
"command": "uvx",
"args": ["opentelemetry-mcp"],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
Using the repository instead of pipx?
{
"opentelemetry-mcp": {
"command": "uv",
"args": [
"--directory",
"/absolute/path/to/opentelemetry-mcp-server",
"run",
"opentelemetry-mcp"
],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
Gemini CLI
Configure the MCP server in your Gemini CLI config file (~/.gemini/config.json):
Using pipx (recommended):
{
"mcpServers": {
"opentelemetry-mcp": {
"command": "pipx",
"args": ["run", "opentelemetry-mcp"],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
}
Using uvx (alternative):
{
"mcpServers": {
"opentelemetry-mcp": {
"command": "uvx",
"args": ["opentelemetry-mcp"],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
}
Then use Gemini CLI with your traces:
gemini "Analyze token usage for gpt-4 requests today"
Using the repository instead of pipx?
{
"mcpServers": {
"opentelemetry-mcp": {
"command": "uv",
"args": [
"--directory",
"/absolute/path/to/opentelemetry-mcp-server",
"run",
"opentelemetry-mcp"
],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
}
Prerequisites:
Optional: Install globally
If you prefer to install the command globally:
# Install with pipx
pipx install opentelemetry-mcp
# Verify
opentelemetry-mcp --help
# Upgrade
pipx upgrade opentelemetry-mcp
Or with pip:
pip install opentelemetry-mcp
- 🔌 Multiple Backend Support - Connect to Jaeger, Grafana Tempo, or Traceloop
- 🤖 LLM-First Design - Specialized tools for analyzing AI application traces
- 🔍 Advanced Filtering - Generic filter system with powerful operators
- 📊 Token Analytics - Track and aggregate LLM token usage across models and services
- ⚡ Fast & Type-Safe - Built with async Python and Pydantic validation
| Tool | Description | Use Case |
|---|---|---|
| search_traces | Search traces with advanced filters | Find specific requests or patterns |
| search_spans | Search individual spans | Analyze specific operations |
| get_trace | Get complete trace details | Deep-dive into a single trace |
| get_llm_usage | Aggregate token usage metrics | Track costs and usage trends |
| list_services | List available services | Discover what's instrumented |
| find_errors | Find traces with errors | Debug failures quickly |
| list_llm_models | Discover models in use | Track model adoption |
| get_llm_model_stats | Get model performance stats | Compare model efficiency |
| get_llm_expensive_traces | Find highest token usage | Optimize costs |
| get_llm_slow_traces | Find slowest operations | Improve performance |
| Feature | Jaeger | Tempo | Traceloop |
|---|---|---|---|
| Search traces | ✓ | ✓ | ✓ |
| Advanced filters | ✓ | ✓ | ✓ |
| Span search | ✓* | ✓ | ✓ |
| Token tracking | ✓ | ✓ | ✓ |
| Error traces | ✓ | ✓ | ✓ |
| LLM tools | ✓ | ✓ | ✓ |
* Jaeger requires service_name parameter for span search
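In practice, that means a search_spans call against Jaeger should always include the service. The sketch below is illustrative only; the operation name "openai.chat" is a hypothetical OpenLLMetry span name, not something this server requires:
# search_spans arguments when the backend is Jaeger: service_name is mandatory
params = {
    "service_name": "my-app",          # required for Jaeger span search
    "operation_name": "openai.chat",   # hypothetical OpenLLMetry span name
    "limit": 20,
}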
If you're contributing to the project or want to make local modifications:
# Clone the repository
git clone https://github.com/traceloop/opentelemetry-mcp-server.git
cd opentelemetry-mcp-server
# Install dependencies with UV
uv sync
# Or install in development mode with editable install
uv pip install -e ".[dev]"
| Backend | Type | URL Example | Notes |
|---|---|---|---|
| Jaeger | Local | http://localhost:16686 | Popular open-source option |
| Tempo | Local/Cloud | http://localhost:3200 | Grafana's trace backend |
| Traceloop | Cloud | https://api.traceloop.com | Requires API key |
Option 1: Environment Variables (Create .env file - see .env.example)
BACKEND_TYPE=jaeger
BACKEND_URL=http://localhost:16686
Option 2: CLI Arguments (Override environment)
opentelemetry-mcp --backend jaeger --url http://localhost:16686
opentelemetry-mcp --backend traceloop --url https://api.traceloop.com --api-key YOUR_KEY
Configuration Precedence: CLI arguments > Environment variables > Defaults
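As an illustration only (not the server's actual code), the precedence behaves like a simple fallback chain:
import os
from argparse import Namespace

def resolve_backend_url(cli_args: Namespace, default: str | None = None) -> str | None:
    """Sketch of the precedence rule: CLI flag, then environment, then default."""
    if getattr(cli_args, "url", None):      # 1. CLI argument (--url)
        return cli_args.url
    if os.environ.get("BACKEND_URL"):       # 2. Environment variable
        return os.environ["BACKEND_URL"]
    return default                          # 3. Built-in default

# With BACKEND_URL set and no --url flag, the environment value wins.
print(resolve_backend_url(Namespace(url=None)))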
All Configuration Options
| Variable | Type | Default | Description |
|---|---|---|---|
| BACKEND_TYPE | string | jaeger | Backend type: jaeger, tempo, or traceloop |
| BACKEND_URL | URL | - | Backend API endpoint (required) |
| BACKEND_API_KEY | string | - | API key (required for Traceloop) |
| BACKEND_TIMEOUT | integer | 30 | Request timeout in seconds |
| LOG_LEVEL | string | INFO | Logging level: DEBUG, INFO, WARNING, ERROR |
| MAX_TRACES_PER_QUERY | integer | 100 | Maximum traces to return per query (1-1000) |
Complete .env example:
# Backend configuration
BACKEND_TYPE=jaeger
BACKEND_URL=http://localhost:16686
# Optional: API key (mainly for Traceloop)
BACKEND_API_KEY=
# Optional: Request timeout (default: 30s)
BACKEND_TIMEOUT=30
# Optional: Logging level
LOG_LEVEL=INFO
# Optional: Max traces per query (default: 100)
MAX_TRACES_PER_QUERY=100
Backend-Specific Setup
BACKEND_TYPE=jaeger
BACKEND_URL=http://localhost:16686
BACKEND_TYPE=tempo
BACKEND_URL=http://localhost:3200
BACKEND_TYPE=traceloop
BACKEND_URL=https://api.traceloop.com
BACKEND_API_KEY=your_api_key_here
Note: The API key contains project information. The backend uses a project slug of "default" and Traceloop resolves the actual project/environment from the API key.
The easiest way to run the server:
./start_locally.sh
This script handles all configuration and starts the server in stdio mode (perfect for Claude Desktop integration). To switch backends, simply edit the script and uncomment your preferred backend.
For advanced use cases or custom configurations, you can run the server manually.
Start the MCP server with stdio transport for local/Claude Desktop integration:
# If installed with pipx/pip
opentelemetry-mcp
# If running from cloned repository with UV
uv run opentelemetry-mcp
# With backend override (pipx/pip)
opentelemetry-mcp --backend jaeger --url http://localhost:16686
# With backend override (UV)
uv run opentelemetry-mcp --backend jaeger --url http://localhost:16686
Start the MCP server with HTTP/SSE transport for remote access:
# If installed with pipx/pip
opentelemetry-mcp --transport http
# If running from cloned repository with UV
uv run opentelemetry-mcp --transport http
# Specify custom host and port (pipx/pip)
opentelemetry-mcp --transport http --host 127.0.0.1 --port 9000
# With UV
uv run opentelemetry-mcp --transport http --host 127.0.0.1 --port 9000
The HTTP server will be accessible at http://localhost:8000/sse by default.
Transport Use Cases:
- stdio transport: Local use, Claude Desktop integration, single process
- HTTP transport: Remote access, multiple clients, network deployment, sample applications (see the client sketch below)
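Once the HTTP transport is running, any MCP-capable client can connect. The sketch below uses the official MCP Python SDK (pip install mcp); the endpoint URL is the default shown above, and the search_traces arguments mirror the examples later in this README, so treat it as a starting point rather than definitive client code:
import asyncio
from mcp import ClientSession
from mcp.client.sse import sse_client

async def main() -> None:
    # Connect to a server started with: opentelemetry-mcp --transport http
    async with sse_client("http://localhost:8000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])
            # Call one of the trace tools; arguments mirror the search_traces examples below
            result = await session.call_tool(
                "search_traces",
                {"service_name": "my-app", "has_error": True, "limit": 10},
            )
            print(result)

asyncio.run(main())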
Search for traces with flexible filtering:
{
"service_name": "my-app",
"start_time": "2024-01-01T00:00:00Z",
"end_time": "2024-01-01T23:59:59Z",
"gen_ai_system": "openai",
"gen_ai_model": "gpt-4",
"min_duration_ms": 1000,
"has_error": false,
"limit": 50
}
Parameters:
- service_name - Filter by service
- operation_name - Filter by operation
- start_time / end_time - ISO 8601 timestamps
- min_duration_ms / max_duration_ms - Duration filters
- gen_ai_system - LLM provider (openai, anthropic, etc.)
- gen_ai_model - Model name (gpt-4, claude-3-opus, etc.)
- has_error - Filter by error status
- tags - Custom tag filters
- limit - Max results (1-1000, default: 100)
Returns: List of trace summaries with token counts
Get complete trace details including all spans and OpenLLMetry attributes:
{
"trace_id": "abc123def456"
}
Returns: Full trace tree with:
- All spans with attributes
- Parsed OpenLLMetry data for LLM spans
- Token usage per span
- Error information
Get aggregated token usage metrics:
{
"start_time": "2024-01-01T00:00:00Z",
"end_time": "2024-01-01T23:59:59Z",
"service_name": "my-app",
"gen_ai_system": "openai",
"limit": 1000
}
Returns: Aggregated metrics with:
- Total prompt/completion/total tokens
- Breakdown by model
- Breakdown by service
- Request counts
List all available services:
{}
Returns: List of service names
Find traces with errors:
{
"start_time": "2024-01-01T00:00:00Z",
"service_name": "my-app",
"limit": 50
}
Returns: Error traces with:
- Error messages and types
- Stack traces (truncated)
- LLM-specific error info
- Error span details
Natural Language: "Show me OpenAI traces from the last hour that took longer than 5 seconds"
Tool Call: search_traces
{
"service_name": "my-app",
"gen_ai_system": "openai",
"min_duration_ms": 5000,
"start_time": "2024-01-15T10:00:00Z",
"limit": 20
}
Response:
{
"traces": [
{
"trace_id": "abc123...",
"service_name": "my-app",
"duration_ms": 8250,
"total_tokens": 4523,
"gen_ai_system": "openai",
"gen_ai_model": "gpt-4"
}
],
"count": 1
}Natural Language: "How many tokens did we use for each model today?"
Tool Call: get_llm_usage
{
"start_time": "2024-01-15T00:00:00Z",
"end_time": "2024-01-15T23:59:59Z",
"service_name": "my-app"
}
Response:
{
"summary": {
"total_tokens": 125430,
"prompt_tokens": 82140,
"completion_tokens": 43290,
"request_count": 487
},
"by_model": {
"gpt-4": {
"total_tokens": 85200,
"request_count": 156
},
"gpt-3.5-turbo": {
"total_tokens": 40230,
"request_count": 331
}
}
}Natural Language: "Show me all errors from the last hour"
Tool Call: find_errors
{
"start_time": "2024-01-15T14:00:00Z",
"service_name": "my-app",
"limit": 10
}
Response:
{
"errors": [
{
"trace_id": "def456...",
"service_name": "my-app",
"error_message": "RateLimitError: Too many requests",
"error_type": "openai.error.RateLimitError",
"timestamp": "2024-01-15T14:23:15Z"
}
],
"count": 1
}Natural Language: "What's the performance difference between GPT-4 and Claude?"
Tool Call 1: get_llm_model_stats for gpt-4
{
"model_name": "gpt-4",
"start_time": "2024-01-15T00:00:00Z"
}
Tool Call 2: get_llm_model_stats for claude-3-opus
{
"model_name": "claude-3-opus-20240229",
"start_time": "2024-01-15T00:00:00Z"
}Natural Language: "Which requests used the most tokens today?"
Tool Call: get_llm_expensive_traces
{
"limit": 10,
"start_time": "2024-01-15T00:00:00Z",
"min_tokens": 5000
}
- Identify expensive operations: Use get_llm_expensive_traces to find high-token requests
- Analyze by model: Use get_llm_usage to see which models are costing the most (see the cost-estimation sketch after this list)
- Investigate specific traces: Use get_trace with the trace_id to see exact prompts/responses
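If you want rough dollar figures, you can post-process the by_model breakdown returned by get_llm_usage on the client side. The sketch below uses placeholder per-1K-token prices (built-in pricing tables are still on the roadmap), so substitute your provider's real rates:
# Hypothetical per-1K-token prices; replace with your provider's actual rates.
PRICES_PER_1K = {"gpt-4": 0.03, "gpt-3.5-turbo": 0.001}

def estimate_cost(by_model: dict) -> dict:
    """Rough cost estimate from a get_llm_usage 'by_model' breakdown."""
    return {
        model: round(stats["total_tokens"] / 1000 * PRICES_PER_1K.get(model, 0.0), 4)
        for model, stats in by_model.items()
    }

# Shape matches the get_llm_usage response shown earlier
usage = {
    "gpt-4": {"total_tokens": 85200, "request_count": 156},
    "gpt-3.5-turbo": {"total_tokens": 40230, "request_count": 331},
}
print(estimate_cost(usage))  # {'gpt-4': 2.556, 'gpt-3.5-turbo': 0.0402}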
- Find slow operations: Use get_llm_slow_traces to identify latency issues
- Check for errors: Use find_errors to see failure patterns
- Analyze finish reasons: Use get_llm_model_stats to see if responses are being truncated
- Discover models in use: Use list_llm_models to see all models being called
- Compare model statistics: Use get_llm_model_stats for each model to compare performance
- Identify shadow AI: Look for unexpected models or services in list_llm_models results
# With UV
uv run pytest
# With coverage
uv run pytest --cov=openllmetry_mcp --cov-report=html
# With pip
pytest
# Format code
uv run ruff format .
# Lint
uv run ruff check .
# Type checking
uv run mypy src/
# Test backend connectivity
curl http://localhost:16686/api/services # Jaeger
curl http://localhost:3200/api/search/tags # Tempo
Make sure your API key is set correctly:
export BACKEND_API_KEY=your_key_here
# Or use --api-key CLI flag
opentelemetry-mcp --api-key your_key_here
- Check time range (use recent timestamps)
- Verify service names with list_services
- Check backend has traces: curl http://localhost:16686/api/services
- Try searching without filters first
- Ensure your traces have OpenLLMetry instrumentation (see the sketch after this list)
- Check that gen_ai.usage.* attributes exist in spans
- Verify with get_trace to see raw span attributes
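If the gen_ai.* attributes are missing, the application emitting the traces is probably not instrumented. A minimal sketch using the OpenLLMetry (Traceloop) SDK is shown below; the app name is a placeholder, and where the spans end up depends on your exporter/OTLP configuration:
# pip install traceloop-sdk openai
from traceloop.sdk import Traceloop
from openai import OpenAI

# Initialize OpenLLMetry before making LLM calls; app_name is a placeholder
# and becomes the service name you will see in list_services / search_traces.
Traceloop.init(app_name="my-app")

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)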
- Cost calculation with built-in pricing tables
- Model performance comparison tools
- Prompt pattern analysis
- MCP resources for common queries
- Caching layer for frequent queries
- Support for additional backends (SigNoz, ClickHouse)
Contributions are welcome! Please ensure:
- All tests pass: pytest
- Code is formatted: ruff format .
- No linting errors: ruff check .
- Type checking passes: mypy src/
Apache 2.0 License - see LICENSE file for details
- OpenLLMetry - OpenTelemetry instrumentation for LLMs
- Model Context Protocol - MCP specification
- Claude Desktop - AI assistant with MCP support
For issues and questions:
- GitHub Issues: https://github.com/traceloop/opentelemetry-mcp-server/issues
- PyPI Package: https://pypi.org/project/opentelemetry-mcp/
- Traceloop Community: https://traceloop.com/slack