Observability and tracing platform for AI agents.
TraceAgent gives you complete visibility into what your autonomous agents are actually doing - every tool call, file operation, terminal command, and decision path, recorded and visualized in real time.
Compatible with modern agent frameworks such as LangChain, TraceAgent integrates naturally into existing agent pipelines, letting you trace chains, tool executions, callbacks, memory interactions, and multi-step reasoning flows with minimal instrumentation.
- Why TraceAgent?
- Features
- Instrumenting Your Agent
- Docker Images
- Examples
- Installation
- Getting Help
- License
Traditional observability tools were built for microservices - they don't understand what it means for an agent to "call a tool", "write a file", or "retry a failed command".
TraceAgent fills that gap with an agent-first tracing specification and a purpose-built audit UI, giving you full data ownership through a self-hosted setup. With first-class support for orchestration ecosystems like LangChain, TraceAgent captures and reconstructs complete execution across chains, agents, tools, prompts, and callbacks - enabling reproducibility, debugging, compliance auditing, and performance analysis for production-grade AI systems.
| Feature | Description |
|---|---|
| Real-time tracing | Capture tool calls, file operations, shell commands, and artifacts as they happen. |
| Execution timeline | Visualize agent behavior through interactive timelines and execution graphs. |
| LangChain integration | Native callback handler for LangChain agents and chains, with zero boilerplate. |
| Self-hosted | Full data ownership with a FastAPI backend and SQLite/PostgreSQL support. |
TraceAgent is organized into four packages: `trace-agent-sdk`, `trace-agent-langchain`, `trace-agent-server`, and `trace-agent-ui`. Each package has its own README with full installation and usage details.
Wrap your agent's actions with the SDK to start recording. See the full SDK README for installation steps, basic usage, and the complete client reference.
```python
from trace_agent_sdk import TraceAgentClient

client = TraceAgentClient("http://localhost:8000")

# Start a tracked run
run = client.start_run("my-agent", "Perform a system task")
print("RUN:")
print(run)

# Record side-effects
file_result = run.record_file_write(
    "app.py",
    before_content="",
    after_content="print('Hello World')"
)
print("\nFILE RESULT:")
print(file_result)

command_result = run.record_command(
    ["python", "app.py"],
    exit_code=0
)
print("\nCOMMAND RESULT:")
print(command_result)

# Close the session
finish_result = run.finish({"status": "success"})
print("\nFINISH RESULT:")
print(finish_result)
```
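In a real agent loop you will usually want the run to be closed even when something goes wrong, so that the partial trace still shows up in the UI. Below is a minimal sketch of that pattern using only the client calls shown above; whether `finish` accepts a failure payload of this exact shape is an assumption, not documented behavior:

```python
from trace_agent_sdk import TraceAgentClient

client = TraceAgentClient("http://localhost:8000")
run = client.start_run("my-agent", "Perform a system task")

try:
    # ... perform agent work, recording side-effects as you go ...
    run.record_command(["python", "app.py"], exit_code=0)
    run.finish({"status": "success"})
except Exception as exc:
    # Close the run with a failure marker so the partial trace is preserved.
    # The {"status": ..., "error": ...} payload shape is an assumption.
    run.finish({"status": "error", "error": str(exc)})
    raise
```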
Add the callback handler to any LangChain agent - all chain steps, LLM calls, and tool uses are captured automatically. See the full LangChain README for the list of captured events and configuration options.

```python
from trace_agent_langchain import TraceAgentLangChainCallback

agent.invoke({"input": "..."}, callbacks=[TraceAgentLangChainCallback(run)])
```

TraceAgent's behavior can be customized via environment variables.
| Variable | Default | Description |
|---|---|---|
| `TRACE_AGENT_DATABASE_URL` | `sqlite:///./data/trace_agent.db` | Database connection URL (SQLite or PostgreSQL) used by the backend to persist trace data. |
| `TRACE_AGENT_SERVER_URL` | `http://localhost:8000` | Backend URL used by the UI and SDK to communicate. |
| `TRACE_AGENT_AUDIT_METRICS_ENABLED` | `true` | Enables or disables audit metrics collection on the server. |
| `TRACE_AGENT_UI_PORT` | `8080` | Port where the frontend UI is exposed. |
| `OPENAI_BASE_URL` | (none) | Optional. Used in the examples and the LangChain integration to point at local LLMs such as LM Studio. |
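Since most of the example scripts read their configuration from a local `.env` file (see Examples below), a typical self-hosted setup can pin these values in one place. The `OPENAI_BASE_URL` value below is illustrative - `http://localhost:1234/v1` is LM Studio's usual default, but yours may differ:

```
TRACE_AGENT_DATABASE_URL=sqlite:///./data/trace_agent.db
TRACE_AGENT_SERVER_URL=http://localhost:8000
TRACE_AGENT_AUDIT_METRICS_ENABLED=true
TRACE_AGENT_UI_PORT=8080
# Optional: point the examples at a local OpenAI-compatible server
OPENAI_BASE_URL=http://localhost:1234/v1
```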
Pre-built images are available for the backend and frontend:
| Component | Image |
|---|---|
| TraceAgent Server | `ghcr.io/lixussoftware/trace-agent-server:latest` |
| TraceAgent UI | `ghcr.io/lixussoftware/trace-agent-ui:latest` |
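To try the images locally, a minimal sketch with plain `docker run` might look like this. It assumes the server listens on port 8000 inside the container and the UI on 8080, matching the defaults above; check each image's README for the actual entrypoints, mount paths, and required environment variables:

```bash
# Backend on port 8000; trace data stored in ./data on the host
# (the /app/data mount path inside the container is an assumption)
docker run -d --name trace-agent-server \
  -p 8000:8000 \
  -v "$(pwd)/data:/app/data" \
  ghcr.io/lixussoftware/trace-agent-server:latest

# UI on port 8080, pointed at the backend. From inside a container,
# "localhost" is the container itself - on Docker Desktop use
# host.docker.internal, or put both containers on a shared network instead.
docker run -d --name trace-agent-ui \
  -p 8080:8080 \
  -e TRACE_AGENT_SERVER_URL=http://host.docker.internal:8000 \
  ghcr.io/lixussoftware/trace-agent-ui:latest
```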
Make sure the server is running at http://127.0.0.1:8000 before running any example. Most scripts load configuration from a local .env file, so that is the easiest place to set the backend URL and model settings.
If you want to run the LangChain example, install the extra first:
```bash
uv sync --extra dev --extra providers --extra langchain
```

Minimal SDK smoke test. It starts a run, registers a mock `get_weather` tool, performs a single model turn, and closes the run with the assistant response.
```bash
uv run examples/sample_agent.py
```

Use this when you want to confirm that the SDK can create runs, record tool calls, and finish cleanly.
LangChain integration example. It attaches `TraceAgentLangChainCallback` to a LangChain agent so chain steps, LLM calls, tool usage, and retriever activity are traced automatically.
```bash
uv run examples/langchain_agent.py
```

This example expects an OpenAI-compatible endpoint such as LM Studio and uses the following environment variables: `TRACE_AGENT_BASE_URL`, `OPENAI_BASE_URL`, and `OPENAI_API_KEY`.
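For orientation, end-to-end wiring of the callback typically looks something like the sketch below. It assumes `langchain-openai` is installed and that `ChatOpenAI` is the model class in use; the actual `examples/langchain_agent.py` may be structured differently:

```python
import os

from langchain_openai import ChatOpenAI
from trace_agent_langchain import TraceAgentLangChainCallback
from trace_agent_sdk import TraceAgentClient

# Start a TraceAgent run to attach LangChain events to
client = TraceAgentClient(os.environ.get("TRACE_AGENT_BASE_URL", "http://localhost:8000"))
run = client.start_run("langchain-demo", "Answer a question with tracing")

# Point the model at LM Studio (or any OpenAI-compatible endpoint);
# depending on your endpoint you may also need to pass model=...
llm = ChatOpenAI(
    base_url=os.environ.get("OPENAI_BASE_URL"),
    api_key=os.environ.get("OPENAI_API_KEY", "not-needed"),
)

# Every LLM call made through this invoke is captured by the callback
result = llm.invoke(
    "What is tracing?",
    config={"callbacks": [TraceAgentLangChainCallback(run)]},
)
run.finish({"status": "success"})
print(result.content)
```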
Local-model SDK example. It uses `TraceAgentClient` with LM Studio or any other OpenAI-compatible local model, registers a tool, and records a single traced turn.
```bash
uv run examples/lm_studio_agent.py
```

This is a good fit when you want to validate local model connectivity with `TRACE_AGENT_PROXY_URL`, `TRACE_AGENT_TEST_MODEL`, and `TRACE_AGENT_PROXY_TIMEOUT`.
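The values these variables take are not documented here, so the snippet below is purely illustrative - the URL assumes LM Studio's usual default port, the model name is a placeholder for whatever your local server actually serves, and the timeout is a guess at seconds:

```
# Assumed LM Studio default endpoint; adjust to your server
TRACE_AGENT_PROXY_URL=http://localhost:1234/v1
# Placeholder - use whatever model your local server actually serves
TRACE_AGENT_TEST_MODEL=your-local-model-name
# Assumed to be in seconds
TRACE_AGENT_PROXY_TIMEOUT=120
```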
Richer planning demo. It exposes multiple tools for weather, budget policy, venue selection, travel time, and venue cost estimation, then lets the model choose the minimum set of calls needed. It also supports several built-in scenarios through `--scenario` and an optional `--prompt` override; a combined usage sketch follows the scenario table below.
```bash
uv run examples/lm_studio_planner_agent.py --scenario meetup_plan
```

Available scenarios:
| Scenario | Description |
|---|---|
| `meetup_plan` | Plans a realistic team meetup in Madrid with weather, budget, venue, and travel constraints. |
| `weather_gate` | Checks whether an outdoor meetup is viable without doing extra venue or budget work. |
| `budget_only` | Verifies a selected venue against the budget and avoids unrelated tool calls. |
| `barcelona_shortlist` | Builds a short list of Barcelona venues while prioritizing logistics and reasonable travel from Sants. |
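As noted above, combining a scenario with a custom prompt is a matter of passing both flags. The prompt text here is just an illustration:

```bash
uv run examples/lm_studio_planner_agent.py \
  --scenario weather_gate \
  --prompt "Is an outdoor meetup in Madrid viable this Friday afternoon?"
```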
Synthetic debugger and UI demo. It records command executions, file reads and writes, patches, and failure states without needing a live model, which makes it useful for testing divergence detection and trace visualization. A loop for generating every scenario follows the table below.
```bash
uv run examples/coding_agent_debugger_demo.py --scenario correct_edit
```

Available scenarios:
| Scenario | Description |
|---|---|
| `correct_edit` | A successful agent run with the expected file edits. |
| `wrong_file` | The agent modifies the wrong file and fails to recover. |
| `retry_without_adaptation` | The agent retries the same failing action without changing strategy. |
| `same_tools_different_artifact` | The tool sequence stays the same, but the final artifact changes. |
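Because no live model is needed, it is cheap to generate one trace per scenario for side-by-side comparison in the UI. A small shell loop does the job:

```bash
for scenario in correct_edit wrong_file retry_without_adaptation same_tools_different_artifact; do
  uv run examples/coding_agent_debugger_demo.py --scenario "$scenario"
done
```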
Full-fidelity trace demo. It combines a model turn, tool retries, shell commands, file reads and writes, patches, artifacts, and derived decisions in a single run.
```bash
uv run examples/rich_demo_agent.py
```

Use this example when you want the densest possible run for inspecting the dashboard, timeline, and artifact views.
Each TraceAgent package can be installed independently via pip:
```bash
pip install trace-agent-sdk
pip install trace-agent-langchain
pip install trace-agent-server
pip install trace-agent-ui
```

Or install all packages at once:

```bash
pip install trace-agent-sdk trace-agent-langchain trace-agent-server trace-agent-ui
```

- Bug reports and feature requests - GitHub Issues
- Server-side debugging - See the Server README for log locations and how to interpret them.
- UI not connecting? - Verify that `TRACE_AGENT_SERVER_URL` points to the correct backend host.
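A quick sanity check before digging into logs is to confirm the backend answers at all; any HTTP response (even a 404) means the host and port are reachable:

```bash
curl -i http://localhost:8000/
```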
MIT - see the LICENSE file for details.