An open source observability platform for AI agents and task queues. AgentQ helps you trace agent runs, inspect LLM calls, monitor workers, and debug failures across your AI infrastructure.
- `server/` -- A Next.js web application that provides the observability dashboard. Displays run timelines, span trees, token usage, queue depths, worker status, and AI-powered search.
- `sdk/` -- A Python SDK that instruments your agents. Drop in the `@agent` decorator and auto-instrumentation patches for OpenAI, Anthropic, and Google Gemini to start sending traces.
The fastest way to run the server locally is with Docker Compose:
```bash
cp server/.env.example server/.env
# Edit server/.env with your database credentials
docker compose up
```

This starts PostgreSQL, Redis, and the AgentQ server at http://localhost:3000.
Alternatively, run the server directly:
```bash
cd server
npm install
npm run dev
```

See server/README.md for full setup instructions.
agentq-server releases publish a container image to GHCR:
```bash
docker pull ghcr.io/ryandao/agentq-server:<version>
```

Stable releases tagged as `server-v<version>` also update the `latest` tag.
```bash
pip install agentq
```

```python
import agentq

agentq.init(endpoint="http://localhost:3000")
agentq.instrument()  # auto-patches OpenAI, Anthropic, Gemini

@agentq.agent(name="my-agent")
def my_agent(task):
    # Your agent logic here -- all LLM calls are traced automatically
    return result
```

See sdk/README.md for the full SDK documentation.
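For example, once `agentq.instrument()` has run, an LLM call made inside a decorated agent is captured with no extra wiring. Here is a minimal sketch, assuming the standard `openai` Python client and an `OPENAI_API_KEY` in your environment (the agent name, model, and prompt are illustrative):

```python
import agentq
from openai import OpenAI

agentq.init(endpoint="http://localhost:3000")
agentq.instrument()  # patches the OpenAI client used below

client = OpenAI()  # reads OPENAI_API_KEY from the environment

@agentq.agent(name="summarizer")
def summarize(text: str) -> str:
    # Auto-instrumented: the request, response, and token usage
    # are recorded as a span under this agent's run.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Summarize: {text}"}],
    )
    return response.choices[0].message.content

print(summarize("AgentQ traces agent runs and LLM calls."))
```

Running this should show a `summarizer` run in the dashboard, with the chat completion as a child span.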
```
Your Python Agents (SDK) --OTLP--> AgentQ Server --SQL--> PostgreSQL
                                         |
                                       Redis (queue inspection)
```
The SDK sends OpenTelemetry-compatible traces to the server's /v1/traces endpoint. The server stores runs and spans in PostgreSQL and optionally inspects Celery/Redis queues for live worker status.
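Since the endpoint is OpenTelemetry-compatible, a plain OTel exporter pointed at it should also work without the AgentQ SDK. A minimal sketch using the OpenTelemetry Python SDK, assuming the server accepts OTLP over HTTP at `/v1/traces` (the service name and span name are illustrative):

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Point a standard OTLP/HTTP exporter at the AgentQ server's ingest endpoint.
provider = TracerProvider(resource=Resource.create({"service.name": "my-agent"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:3000/v1/traces"))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("example")
with tracer.start_as_current_span("hand-rolled-span"):
    pass  # work done here shows up as a span in the dashboard

provider.shutdown()  # flush pending spans before exiting
```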