Observe, debug, and curate your AI agents. From traces to training data.
Powered by NVIDIA Nemotron for intelligent trace analysis, auto-labeling, and synthetic data generation.
vizpath is an open-source observability and intelligence platform for AI agents. It provides real-time execution tracing, visual debugging, Nemotron-powered trace analysis, and intelligent curation for building training datasets.
- Lightweight SDK: Minimal overhead tracing with async batching
- Real-time Visualization: Watch agent execution as it happens via WebSocket
- Interactive DAG: Explore execution graphs with D3.js zoom, pan, and drag
- Cost Attribution: Track token usage and costs per operation
- Framework Support: Native adapters for LangGraph, LangChain, AutoGen
- Nemotron Intelligence: Auto-analyze traces, detect issues, suggest improvements
- Self-Analysis: Deep agent evaluation for effectiveness, reasoning quality, and tool usage
- Synthetic Data: Generate training data variations and corrections from real traces
- Trace Clustering: K-means clustering with NIM embeddings for pattern discovery
- Training Data Curation: Label, score, and export curated traces for fine-tuning
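Cost attribution, for example, is essentially arithmetic over the token counts recorded on each span. A minimal sketch of the idea (the per-token prices and the `SpanUsage` shape here are illustrative assumptions, not vizpath's actual API):

```python
from dataclasses import dataclass

# Illustrative per-1M-token prices (input, output); real prices depend on the provider.
PRICES = {"nvidia/llama-3.3-nemotron-super-49b-v1.5": (0.60, 1.80)}

@dataclass
class SpanUsage:
    model: str
    prompt_tokens: int
    completion_tokens: int

def span_cost_usd(usage: SpanUsage) -> float:
    """Compute the cost of one span from its token counts and per-1M-token prices."""
    in_price, out_price = PRICES[usage.model]
    return (usage.prompt_tokens * in_price + usage.completion_tokens * out_price) / 1_000_000

usage = SpanUsage("nvidia/llama-3.3-nemotron-super-49b-v1.5", 1000, 500)
print(span_cost_usd(usage))  # → 0.0015
```

Summing `span_cost_usd` over all spans in a trace gives the per-operation cost breakdown the dashboard displays.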
```bash
git clone <your-vizpath-repo-url>
cd vizpath
export NVIDIA_API_KEY="nvapi-..."  # Set this before demo.sh for intelligence features
./demo.sh                          # Auto-installs deps and starts everything
# Dashboard: http://localhost:3000
# API:       http://localhost:8000
```

For a manual setup instead:

```bash
git clone <your-vizpath-repo-url>
cd vizpath
cp .env.example .env
make bootstrap
make check-env
```

Then start services locally:

```bash
docker-compose up -d postgres redis
make dev-server     # terminal 1
make dev-dashboard  # terminal 2
```

This path keeps dependencies explicit for contributors who do not use the demo script.
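The gate that `make check-env` provides can be sketched in a few lines of Python. This is an illustrative stand-in, not the actual Makefile target; the variable names mirror the ones used in this README:

```python
import os

REQUIRED = ["DATABASE_URL"]
OPTIONAL = ["NVIDIA_API_KEY"]  # intelligence features need this; the rest works without it

def check_env(env: dict) -> list:
    """Return human-readable problems; an empty list means the env looks usable."""
    problems = [f"missing required variable: {name}" for name in REQUIRED if not env.get(name)]
    problems += [f"warning: {name} unset, intelligence endpoints will return 503"
                 for name in OPTIONAL if not env.get(name)]
    return problems

print(check_env({"DATABASE_URL": "sqlite:///vizpath.db"}))
# → ['warning: NVIDIA_API_KEY unset, intelligence endpoints will return 503']
```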
```bash
# In a new terminal
python -m examples.code_agent.run "How does the intelligence module work?"
# Watch traces appear in real-time on the dashboard
```

To install the components individually:

```bash
# SDK
pip install -e ./sdk

# Server
pip install -e "./server[dev]"

# Dashboard
cd dashboard && npm install
```

Trace a whole function with the `@tracer.trace` decorator:

```python
from vizpath import tracer

@tracer.trace(name="research-task")
def research(topic):
    result = call_llm(topic)
    return result

# Traces are automatically sent to the vizpath server
research("quantum computing advances")
```

Record token usage on an individual span:

```python
from vizpath import tracer

@tracer.span(name="llm_call", span_type="llm")
def call_llm(prompt):
    response = client.chat.completions.create(
        model="nvidia/llama-3.3-nemotron-super-49b-v1.5",
        messages=[{"role": "user", "content": prompt}],
    )
    tracer.set_span_tokens(
        prompt_tokens=response.usage.prompt_tokens,
        completion_tokens=response.usage.completion_tokens,
    )
    return response.choices[0].message.content
```

Wrap an existing LangGraph app with the adapter:

```python
from vizpath.adapters import LangGraphAdapter

adapter = LangGraphAdapter()
app = adapter.wrap(your_langgraph_app)
result = app.invoke({"input": "research quantum computing"})
```

Architecture:

```
┌─────────────┐     ┌─────────────────────────────┐     ┌─────────────┐
│    SDK      │────▶│      Server (FastAPI)       │────▶│  Dashboard  │
│  (Python)   │     │  ├── Traces API             │     │   (React)   │
└─────────────┘     │  ├── Curation API           │     │  Dark Theme │
                    │  ├── Intelligence API       │     │  D3.js DAG  │
                    │  │   ├── Nemotron Analysis  │     └─────────────┘
                    │  │   ├── Self-Analysis      │
                    │  │   ├── Clustering         │
                    │  │   └── Synthetic Data     │
                    │  └── WebSocket (live traces)│
                    └─────────────────────────────┘
                                   │
                            ┌──────┴──────┐
                            │ PostgreSQL  │
                            │   + Redis   │
                            └─────────────┘
```
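The SDK's "minimal overhead" comes from the async batching noted above: spans are buffered in memory and shipped to the server in batches rather than one HTTP call per span. A simplified, synchronous sketch of that batching logic (the real SDK's internals may differ; `send` stands in for the network call):

```python
class SpanBatcher:
    """Buffer spans and flush them in batches to amortize network overhead."""

    def __init__(self, send, batch_size: int = 10):
        self.send = send          # callable taking a list of spans (e.g. an HTTP POST)
        self.batch_size = batch_size
        self.buffer = []

    def add(self, span: dict) -> None:
        self.buffer.append(span)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        if self.buffer:
            self.send(self.buffer)
            self.buffer = []

sent = []
batcher = SpanBatcher(sent.append, batch_size=3)
for i in range(7):
    batcher.add({"span_id": i})
batcher.flush()  # drain the remainder on shutdown
print([len(batch) for batch in sent])  # → [3, 3, 1]
```

In the real SDK this flush would run on a background thread or event loop so the traced function never blocks on I/O.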
vizpath includes a built-in intelligence layer powered by NVIDIA NIM:

```bash
export NVIDIA_API_KEY="your_nvidia_api_key_here"
```

Trace Analysis — quality scoring, auto-labeling, and improvement suggestions:

```bash
curl -X POST http://localhost:8000/api/v1/intelligence/analyze \
  -H "Content-Type: application/json" \
  -d '{"trace_id": "your-trace-id"}'
```

Self-Analysis — deep evaluation of agent effectiveness:

```bash
python examples/self_analyze.py --trace-id <uuid>
```

Synthetic Data — generate training data from real traces:

```bash
curl -X POST http://localhost:8000/api/v1/intelligence/generate-synthetic \
  -H "Content-Type: application/json" \
  -d '{"trace_id": "your-trace-id", "mode": "variations", "n": 5}'
```

Project layout:

```
vizpath/
├── sdk/                      # Python tracing SDK
├── server/                   # FastAPI backend
│   ├── app/
│   │   ├── intelligence/     # Nemotron-powered analysis
│   │   │   ├── llm.py        # LLM labeler (analyze, self-analyze)
│   │   │   ├── embeddings.py # NIM embedding API
│   │   │   ├── clustering.py # K-means trace clustering
│   │   │   └── synthetic.py  # Training data generation
│   │   ├── routes/           # API endpoints
│   │   └── models.py         # SQLAlchemy ORM
│   └── tests/                # pytest test suite
├── dashboard/                # React + Tailwind dark theme UI
├── examples/                 # Example agents and demos
│   ├── code_agent/           # Code analysis agent (main demo)
│   ├── research_agent/       # Research agent with mock tools
│   └── self_analyze.py       # Trace self-analysis CLI demo
└── docs/                     # Documentation
```
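Trace clustering (`server/app/intelligence/clustering.py`) groups trace embeddings with k-means, whose core is the standard Lloyd's loop: assign each point to its nearest centroid, then recompute centroids. A dependency-free sketch on 2-D points standing in for NIM embedding vectors (the real module's interface is not shown here):

```python
def kmeans(points, centroids, iters=10):
    """Plain Lloyd's algorithm: assign points to nearest centroid, then recompute."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        centroids = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Two well-separated blobs standing in for trace embeddings
points = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centroids, clusters = kmeans(points, centroids=[points[0], points[3]])
print([len(c) for c in clusters])  # → [3, 3]
```

Real embedding vectors are high-dimensional, and production code would use scikit-learn or similar, but the pattern-discovery idea is the same: traces that land in the same cluster behave similarly.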
- Python 3.10+
- Node.js 20+
- NVIDIA API key (for intelligence features)
For local development, copy and edit the dashboard environment file:
```bash
cd dashboard
cp .env.example .env
```

Supported variables:

- `VITE_API_BASE_URL`: full HTTP origin and API base path (default `/api/v1`)
- `VITE_WS_BASE_URL`: WebSocket base URL for live traces (default: current origin, auto-resolved to `ws://`)
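The `ws://` auto-resolution is just a scheme swap on the current origin: `https` pages get `wss`, plain `http` gets `ws`. The dashboard does this in the browser; this Python sketch is only to illustrate the mapping:

```python
def resolve_ws_base(http_origin: str) -> str:
    """Map an HTTP(S) origin to the matching WebSocket scheme."""
    if http_origin.startswith("https://"):
        return "wss://" + http_origin[len("https://"):]
    if http_origin.startswith("http://"):
        return "ws://" + http_origin[len("http://"):]
    raise ValueError(f"unexpected origin: {http_origin}")

print(resolve_ws_base("http://localhost:8000"))    # → ws://localhost:8000
print(resolve_ws_base("https://vizpath.example"))  # → wss://vizpath.example
```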
```bash
cp .env.example .env  # copy defaults

# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate

# Install SDK
pip install -e ./sdk

# Install server with dev dependencies
pip install -e "./server[dev]"

# Install dashboard
cd dashboard && npm install

# Set environment variables
export NVIDIA_API_KEY="your_nvidia_api_key_here"
export DATABASE_URL="sqlite:///vizpath.db"
```

Use `.env.example` as your base config for local development:

```bash
cp .env.example .env
```

Key local defaults:

- `SECURITY_STRICT_MODE=false`: minimal secure headers by default; a stricter CSP when set to `true`
- `CORS_ALLOWED_ORIGINS=http://localhost:3000,http://localhost:5173`
- `RATE_LIMIT_ENABLED=true`
- `RATE_LIMIT_IP_RPM=240`
- `RATE_LIMIT_USER_RPM=120`
- `RATE_LIMIT_BURST_MULTIPLIER=1.0`

NVIDIA key handling:

- Set `NVIDIA_API_KEY` on the server only
- Do not expose NVIDIA keys in browser code or dashboard env files
- Intelligence endpoints return `503` if the server-side key is not configured
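The RPM settings translate naturally into a per-client counter. One possible policy, a fixed 60-second window scaled by the burst multiplier, is sketched below; the server's actual limiter algorithm may differ:

```python
class FixedWindowLimiter:
    """Allow at most rpm * burst_multiplier requests per client per 60-second window."""

    def __init__(self, rpm: int, burst_multiplier: float = 1.0):
        self.limit = int(rpm * burst_multiplier)
        self.windows = {}  # client key (IP or user) -> (window start, request count)

    def allow(self, client: str, now: float) -> bool:
        start, count = self.windows.get(client, (now, 0))
        if now - start >= 60:  # window expired: start a fresh one
            start, count = now, 0
        if count >= self.limit:
            return False
        self.windows[client] = (start, count + 1)
        return True

limiter = FixedWindowLimiter(rpm=120)  # RATE_LIMIT_USER_RPM=120
allowed = sum(limiter.allow("user-1", now=0.0) for _ in range(130))
print(allowed)  # → 120
```

With `RATE_LIMIT_BURST_MULTIPLIER=1.0` the effective limit equals the configured RPM; raising the multiplier permits short bursts above the steady-state rate.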
Create a project and receive an API key once:

```bash
curl -X POST http://localhost:8000/api/v1/projects/ \
  -H "Content-Type: application/json" \
  -d '{"name":"my-project"}'
```

For endpoint-level details and integration examples, see `docs/api.md` (API reference).

Rotate a key with a grace period (both the old and new keys stay valid during the grace window):

```bash
curl -X POST http://localhost:8000/api/v1/projects/me/api-key/rotate \
  -H "Content-Type: application/json" \
  -H "X-API-Key: <current-key>" \
  -d '{"grace_period_minutes":60}'
```

Revoke the previous key immediately:

```bash
curl -X POST http://localhost:8000/api/v1/projects/me/api-key/revoke \
  -H "Content-Type: application/json" \
  -H "X-API-Key: <new-key>" \
  -d '{"key_type":"previous"}'
```

To run the dev servers:

```bash
# Start the API server
cd server && uvicorn app.main:app --reload

# Start the dashboard (separate terminal)
cd dashboard && npm run dev
```

The API is at http://localhost:8000, the dashboard at http://localhost:3000.

Run the test suite:

```bash
cd server && DATABASE_URL=sqlite:///test.db pytest tests/ -v
```

Contributions are welcome. Please read our Contributing Guide before submitting a PR.
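As a closing note on key handling: the rotate/revoke semantics described earlier reduce to a simple rule, where the previous key stays valid until `rotated_at + grace_period` and revocation zeroes that window early. A sketch with hypothetical field names (not the server's actual schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProjectKeys:
    current: str
    previous: Optional[str] = None
    previous_expires_at: Optional[float] = None  # unix timestamp

def rotate(keys: ProjectKeys, new_key: str, now: float, grace_minutes: int) -> ProjectKeys:
    """Make new_key current; keep the old key valid for grace_minutes."""
    return ProjectKeys(current=new_key, previous=keys.current,
                       previous_expires_at=now + grace_minutes * 60)

def is_valid(keys: ProjectKeys, candidate: str, now: float) -> bool:
    if candidate == keys.current:
        return True
    return (candidate == keys.previous
            and keys.previous_expires_at is not None
            and now < keys.previous_expires_at)

keys = rotate(ProjectKeys(current="old"), new_key="new", now=0.0, grace_minutes=60)
print(is_valid(keys, "old", now=1800.0))  # → True  (inside the 60-minute grace window)
print(is_valid(keys, "old", now=7200.0))  # → False (window elapsed)
```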
Apache 2.0 - See LICENSE for details