Real data. Beautiful charts.
A lightweight, canvas-powered React component for live LLM and agent monitoring.
Visually stunning, buttery-smooth performance with Catmull-Rom splines, pulsing live dots, automatic health scoring, and a calm, production-ready aesthetic.
```bash
npm install agentstat
```

A live-animating chart in four lines, with the built-in simulation and a ready-made roster of demo agents:
```tsx
'use client';
import { AgentStat, demoAgents } from 'agentstat';

export default function Demo() {
  return <AgentStat agents={demoAgents} simulateData height={400} />;
}
```

That's it. No agent objects to construct, no ref, no wiring. Use this to verify the install and see what the component looks like.
When you're ready for your own agents, `createAgent(id, name, color?)` fills in the structural defaults so you only name what matters:
```tsx
import { AgentStat, createAgent } from 'agentstat';

const agents = [
  createAgent('chat-agent', 'Chat Assistant', '#1d4ed8'),
  createAgent('planner', 'Planner', '#B91C1C'),
];

export default function MyMonitor() {
  return <AgentStat agents={agents} simulateData height={400} />;
}
```
⚠️ Memoize your `agents` array. Either wrap it in `useMemo` or declare it at module scope. AgentStat treats `agents` as the roster (which agents exist, and in what order) and reads runtime values (`tokensRate`, `progress`, `status`, `visible`) from its own internal store, which is updated by `ref.current.updateAgent(...)`. Passing a fresh array literal on every render is fine as long as the id list doesn't change; if it does, any per-agent state for ids that were added or removed is resynced. Use `updateAgent` for runtime values: changes to `color`, `config`, etc. on existing agents via the `agents` prop are not applied.
In production, AgentStat visualises your real telemetry; it does not simulate data. `simulateData` defaults to `false`. Push live metrics imperatively via the ref:
```tsx
'use client';
import { useRef } from 'react';
import { AgentStat, type Agent, type AgentStatRef } from 'agentstat';

const agent: Agent = {
  id: 'chat-agent',
  name: 'Chat Assistant',
  color: '#1d4ed8',
  data: [],
  current: { tokensRate: 0, progress: 0, status: 'active' },
  visible: true,
};

export default function MonitoredChat() {
  const ref = useRef<AgentStatRef>(null);

  // Wire this up to your telemetry source (Vercel AI SDK, LangChain, WS/SSE, MCP, …).
  // ref.current?.updateAgent('chat-agent', tokensPerSecond, progressPercent, 'active');

  return (
    <AgentStat
      ref={ref}
      agents={[agent]}
      simulateData={false}
      height={560}
    />
  );
}
```

See the full integration guide for ready-made patterns:
→ Real Data Integration — Vercel AI SDK (useCompletion), LangChain / LangGraph, WebSocket / SSE, Model Context Protocol (MCP), VS Code extensions.
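As a minimal sketch of the WebSocket/SSE pattern, the helper below parses one telemetry frame and forwards it to `updateAgent`. The message shape (`TelemetryMsg`) and its field names are assumptions; adapt them to whatever your feed actually sends.

```typescript
// Hypothetical frame shape; adjust the fields to match your telemetry source.
interface TelemetryMsg {
  agentId: string;
  tokensPerSecond: number;
  progressPercent: number; // 0–100
  status: string;          // e.g. 'active'
}

// Structural stand-in for the slice of AgentStatRef this helper needs.
interface UpdateTarget {
  updateAgent(id: string, tokensRate: number, progress: number, status: string): void;
}

// Parse one raw frame and push it into the chart.
export function forwardTelemetry(target: UpdateTarget, raw: string): TelemetryMsg {
  const msg: TelemetryMsg = JSON.parse(raw);
  target.updateAgent(msg.agentId, msg.tokensPerSecond, msg.progressPercent, msg.status);
  return msg;
}
```

In a component, call this from `ws.onmessage` (or your SSE handler) with `ref.current` as the target, guarding against `null` before the chart has mounted.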
- Buttery smooth curves — Catmull-Rom splines with zero jitter
- Live pulsing dot with soft glow and area fill
- Automatic health scoring — token efficiency, stability, hallucination risk, latency trend
- Multi-agent support with individual visibility toggles
- Hover tooltips & click callbacks
- Fully imperative ref API — works perfectly with Vercel AI SDK, LangChain, WebSocket, MCP, etc.
- Retina-ready & performant — built for long-running production monitoring
History window (v0.1): the chart shows the most recent ~420 samples per agent. The on-screen time span therefore depends on how frequently you call `updateAgent(...)` (e.g. ~20 s at 20 Hz, ~80 s at 5 Hz). A configurable time window is planned for v0.2.
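The relationship above is just samples divided by update rate. A tiny helper (not part of the agentstat API, purely illustrative) makes the arithmetic explicit:

```typescript
// v0.1 keeps roughly this many samples per agent (see the note above).
const HISTORY_SAMPLES = 420;

// Approximate seconds of history visible on screen at a given update rate.
export function visibleWindowSeconds(updateHz: number): number {
  return HISTORY_SAMPLES / updateHz;
}

// visibleWindowSeconds(20) === 21   (the "~20 s" figure above, rounded)
// visibleWindowSeconds(5)  === 84   (the "~80 s" figure above, rounded)
```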
AgentStat uses Canvas2D and modern CSS color syntax (`rgb(r g b / alpha)`). In practice that means Chromium 111+, Firefox 113+, Safari 16.4+ (all shipped in 2023). These features can't be added by transpilation, so if you need to support older browsers, gate the component behind a feature check and render a fallback.
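If you want a runtime guard, one option (an assumption on our part, not an agentstat API) is to use support for the modern color syntax as a proxy for a new-enough browser:

```typescript
// Returns true only in browsers that understand `rgb(r g b / alpha)`,
// which roughly coincides with the versions listed above. In non-browser
// environments (SSR, Node) there is no CSS global, so this returns false.
export function supportsAgentStat(): boolean {
  return typeof CSS !== 'undefined' && CSS.supports('color', 'rgb(0 0 0 / 0.5)');
}
```

Render `<AgentStat …/>` only when this returns `true`, and a static fallback otherwise.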
