OpenTelemetry instrumentation plugin for OpenCode. Automatically traces every AI coding session — LLM calls, tool executions, file edits, and context compactions — and exports them via OTLP/HTTP (protobuf) to any OpenTelemetry-compatible backend.
```bash
npm install opencode-otel-plugin
```

In your `opencode.json`:

```json
{
  "plugin": ["opencode-otel-plugin"]
}
```

Set the OTLP endpoint:

```bash
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
```

Open an OpenCode session as usual. Traces and metrics are exported automatically — no code changes needed.
The fastest way to see your traces is with Jaeger running in Docker:
```bash
docker run -d --name jaeger \
  -p 16686:16686 \
  -p 4318:4318 \
  jaegertracing/all-in-one:latest
```

Set the endpoint and start OpenCode:

```bash
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
opencode
```

Open http://localhost:16686, select `opencode` from the service dropdown, and click **Find Traces**. You'll see a trace tree for each coding session:
```
invoke_agent opencode                  ← root span (session)
├── chat claude-sonnet-4-20250514     ← LLM request
├── execute_tool edit                 ← tool call (includes code.language)
├── execute_tool bash                 ← tool call
└── session_compaction                ← context compaction
```
Configuration uses standard OpenTelemetry environment variables; the only plugin-specific variable is the optional `OTEL_OPENCODE_FILTERED_TOOLS`.
| Variable | Description | Default |
|---|---|---|
| `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP/HTTP base URL | `http://localhost:4318` |
| `OTEL_EXPORTER_OTLP_HEADERS` | Auth headers (`key=value`, comma-separated) | — |
| `OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE` | Metric temporality (`cumulative`, `delta`, `lowmemory`) | `cumulative` |
| `OTEL_OPENCODE_FILTERED_TOOLS` | Comma-separated list of tool names to exclude from span generation (see Tool Span Filtering) | — (no filtering) |
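The `OTEL_EXPORTER_OTLP_HEADERS` format is the standard OTel one: comma-separated `key=value` pairs. As a minimal sketch of how such a value decomposes — `parseOtlpHeaders` is a hypothetical helper for illustration, not part of this plugin's API:

```typescript
// Illustrative only: decompose an OTEL_EXPORTER_OTLP_HEADERS value into
// individual headers. Values may themselves contain '=' (e.g. base64 padding),
// so split on the first '=' only.
function parseOtlpHeaders(raw: string): Record<string, string> {
  const headers: Record<string, string> = {};
  for (const pair of raw.split(",")) {
    const idx = pair.indexOf("=");
    if (idx === -1) continue; // skip malformed entries
    const key = pair.slice(0, idx).trim();
    const value = pair.slice(idx + 1).trim();
    if (key) headers[key] = value;
  }
  return headers;
}

// parseOtlpHeaders("x-honeycomb-team=abc123")
//   => { "x-honeycomb-team": "abc123" }
```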
By default, the plugin creates a span for every tool execution (read, glob, grep, edit, bash, etc.). In busy sessions, this can generate hundreds of low-value spans that clutter your traces.
Use `OTEL_OPENCODE_FILTERED_TOOLS` to exclude specific tool types from span generation while preserving metrics:

```bash
# Filter out noisy tools (read, glob, grep) — reduces trace volume by ~70%
export OTEL_OPENCODE_FILTERED_TOOLS="read,glob,grep"

# Filter a single tool
export OTEL_OPENCODE_FILTERED_TOOLS="read"

# Filter multiple tools with whitespace (trimmed automatically)
export OTEL_OPENCODE_FILTERED_TOOLS="read, glob, grep, bash"

# Disable filtering (default behavior)
unset OTEL_OPENCODE_FILTERED_TOOLS
```

Behavior (see the sketch after this list):

- Filtered tools: no span created, but the `opencode.tool.invocations` metric is still recorded
- Non-filtered tools: span created + metric recorded (unchanged behavior)
- Critical spans (`edit`, `write`, `git-commit`, `chat`) are never filtered
- Case-sensitive matching (`read` ≠ `Read`)
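A minimal sketch of these rules, with hypothetical helper names (`parseFilteredTools`, `shouldCreateSpan`, and the critical-tool set are illustrative, not the plugin's actual internals):

```typescript
// Illustrative sketch of the documented filtering rules; names are ours.
const CRITICAL_TOOLS = new Set(["edit", "write", "git-commit", "chat"]);

function parseFilteredTools(raw: string | undefined): Set<string> {
  if (!raw) return new Set(); // unset => no filtering
  return new Set(
    raw
      .split(",")
      .map((name) => name.trim()) // whitespace is trimmed automatically
      .filter((name) => name.length > 0),
  );
}

function shouldCreateSpan(toolName: string, filtered: Set<string>): boolean {
  if (CRITICAL_TOOLS.has(toolName)) return true; // never filtered
  return !filtered.has(toolName); // case-sensitive: "read" !== "Read"
}

const filtered = parseFilteredTools(process.env.OTEL_OPENCODE_FILTERED_TOOLS);
shouldCreateSpan("read", filtered); // false when "read" is filtered
shouldCreateSpan("edit", filtered); // always true
// Either way, the opencode.tool.invocations metric is still recorded.
```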
Example trace tree with filtering:
```
# Without filtering: 50+ execute_tool spans per session
invoke_agent opencode
├── chat claude-sonnet-4-20250514
├── execute_tool read           ← 50+ of these
├── execute_tool glob           ← 20+ of these
├── execute_tool edit           ← high-signal
└── execute_tool git-commit     ← high-signal

# With OTEL_OPENCODE_FILTERED_TOOLS="read,glob,grep":
invoke_agent opencode
├── chat claude-sonnet-4-20250514
├── execute_tool edit           ← high-signal preserved
└── execute_tool git-commit     ← high-signal preserved
```
**Grafana Cloud**

```bash
export OTEL_EXPORTER_OTLP_ENDPOINT="https://otlp-gateway-prod-us-central-0.grafana.net/otlp"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic $(echo -n '<instance-id>:<api-key>' | base64)"
```

**Honeycomb**

```bash
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.honeycomb.io"
export OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=<your-api-key>"
```

**Dynatrace**

```bash
export OTEL_EXPORTER_OTLP_ENDPOINT="https://{your-environment-id}.live.dynatrace.com/api/v2/otlp"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Api-Token {your-api-token}"
export OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE="delta"
```

Create an API token in Dynatrace with the `openTelemetryTrace.ingest` and `metrics.ingest` scopes.

Note: Dynatrace requires delta temporality for metrics — cumulative metrics are silently dropped. The `OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE=delta` setting is mandatory.

**Datadog**

```bash
# Requires the Datadog Agent with OTLP ingestion enabled
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
```

**OTel Collector**

```bash
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
```

Use an OpenTelemetry Collector to fan out to multiple backends.
Each OpenCode session produces a trace tree with explicit parent-child relationships:
```
invoke_agent opencode              ← root span (one per session)
├── chat {model}                   ← child span (one per LLM request)
├── execute_tool {tool_name}       ← child span (one per tool call)
└── session_compaction             ← child span (one per compaction)
```
The `invoke_agent` span is created when a session starts and ended on `session.idle`. One per coding session.

| Attribute | Type | Description |
|---|---|---|
| `gen_ai.operation.name` | string | Always `"invoke_agent"` |
| `gen_ai.agent.name` | string | Always `"opencode"` |
| `gen_ai.conversation.id` | string | OpenCode session ID |
| `service.version` | string | OpenCode version (set when `installation.updated` fires) |
| `vcs.repository.ref.name` | string | Current git branch |
| `enduser.id` | string | Git author email (`git config user.email`) |
| `vcs.repository.url.full` | string | Git remote URL |
| `opencode.session.request_count` | number | Total LLM requests in the session (set when the span ends) |
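For orientation, here is a rough sketch of how a root span with these names and attributes could be opened using the standard `@opentelemetry/api`. The attribute values are placeholders, and the plugin's real wiring may differ:

```typescript
import { trace, type Span } from "@opentelemetry/api";

// Illustrative: open a root span matching the documented name and attributes.
const tracer = trace.getTracer("opencode-otel-plugin");

const sessionSpan: Span = tracer.startSpan("invoke_agent opencode", {
  attributes: {
    "gen_ai.operation.name": "invoke_agent",
    "gen_ai.agent.name": "opencode",
    "gen_ai.conversation.id": "ses_example123", // placeholder session ID
    "vcs.repository.ref.name": "main",
    "enduser.id": "dev@example.com",
  },
});

// On session.idle: record the request count, then close the span.
sessionSpan.setAttribute("opencode.session.request_count", 7);
sessionSpan.end();
```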
The `chat` span is created on the `chat.params` hook and ended when the assistant message arrives with token counts.

| Attribute | Type | Description |
|---|---|---|
| `gen_ai.operation.name` | string | Always `"chat"` |
| `gen_ai.request.model` | string | Model ID sent in the request (e.g., `claude-sonnet-4-20250514`) |
| `gen_ai.provider.name` | string | Provider identifier (e.g., `anthropic`, `openai`) |
| `gen_ai.conversation.id` | string | OpenCode session ID |
| `vcs.repository.ref.name` | string | Current git branch |
| `enduser.id` | string | Git author email |
| `vcs.repository.url.full` | string | Git remote URL |
| `gen_ai.usage.input_tokens` | number | Input tokens consumed (set on completion) |
| `gen_ai.usage.output_tokens` | number | Output tokens generated (set on completion) |
| `gen_ai.response.model` | string | Model ID from the response |
| `gen_ai.response.finish_reasons` | string[] | Finish reasons array (e.g., `["end_turn"]`) |
| `error.type` | string | Error class name (set only on failure) |
The `execute_tool` span is created on `tool.execute.before` and ended on `tool.execute.after`. It includes flattened tool output metadata.

| Attribute | Type | Description |
|---|---|---|
| `gen_ai.operation.name` | string | Always `"execute_tool"` |
| `gen_ai.tool.name` | string | Tool name (e.g., `edit`, `write`, `bash`, `glob`) |
| `gen_ai.tool.call.id` | string | Unique tool call identifier |
| `gen_ai.conversation.id` | string | OpenCode session ID |
| `vcs.repository.ref.name` | string | Current git branch |
| `enduser.id` | string | Git author email |
| `vcs.repository.url.full` | string | Git remote URL |
| `gen_ai.tool.output.title` | string | Tool output title (set on completion) |
| `gen_ai.tool.output.metadata.*` | string | Flattened tool output metadata (max 32 keys, depth 3, strings truncated to 256 chars) |
| `code.language` | string | Detected programming language (`edit`, `write`, and `apply_patch` tools only; derived from file extension) |
| `opencode.file.additions` | number | Lines added (`edit`, `write`, and `apply_patch` tools only; omitted when zero) |
| `opencode.file.deletions` | number | Lines removed (`edit`, `write`, and `apply_patch` tools only; omitted when zero) |
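To make the metadata limits concrete, here is a sketch of flattening under the documented constraints. `flattenMetadata` is our illustration, not the plugin's exported API:

```typescript
// Illustrative: flatten nested tool output metadata into dotted keys,
// honoring the documented limits (max 32 keys, depth 3, 256-char strings).
function flattenMetadata(
  obj: Record<string, unknown>,
  prefix = "gen_ai.tool.output.metadata",
  depth = 1,
  out: Record<string, string> = {},
): Record<string, string> {
  for (const [key, value] of Object.entries(obj)) {
    if (Object.keys(out).length >= 32) break; // max 32 keys
    const flatKey = `${prefix}.${key}`;
    if (value !== null && typeof value === "object" && depth < 3) {
      flattenMetadata(value as Record<string, unknown>, flatKey, depth + 1, out);
    } else {
      out[flatKey] = String(value).slice(0, 256); // truncate long strings
    }
  }
  return out;
}

// flattenMetadata({ diff: { additions: 3 }, title: "edit" }) =>
//   { "gen_ai.tool.output.metadata.diff.additions": "3",
//     "gen_ai.tool.output.metadata.title": "edit" }
```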
The `session_compaction` span is created as an instant span when OpenCode compacts the conversation context.

| Attribute | Type | Description |
|---|---|---|
| `gen_ai.conversation.id` | string | OpenCode session ID |
| `vcs.repository.ref.name` | string | Current git branch |
| `enduser.id` | string | Git author email |
| `vcs.repository.url.full` | string | Git remote URL |
Histogram measuring token consumption per LLM call. Unit: `{token}`.

| Attribute | Type | Description |
|---|---|---|
| `gen_ai.operation.name` | string | Always `"chat"` |
| `gen_ai.provider.name` | string | Provider identifier |
| `gen_ai.request.model` | string | Model ID |
| `gen_ai.token.type` | string | `"input"` or `"output"` — recorded as two separate data points per call |
Histogram measuring LLM request latency. Unit: `s` (seconds).

| Attribute | Type | Description |
|---|---|---|
| `gen_ai.operation.name` | string | Always `"chat"` |
| `gen_ai.provider.name` | string | Provider identifier |
| `gen_ai.request.model` | string | Model ID |
| `error.type` | string | Error class name (present only on failed requests) |
Counter tracking total LLM requests. Unit: `{request}`.

| Attribute | Type | Description |
|---|---|---|
| `gen_ai.request.model` | string | Model ID |
| `gen_ai.provider.name` | string | Provider identifier |
Counter tracking context compaction events. Unit: `{compaction}`. No attributes.
Counter tracking lines of code added or removed by the `edit`, `write`, and `apply_patch` tools. Unit: `{line}`.

| Attribute | Type | Description |
|---|---|---|
| `code.language` | string | Detected programming language (omitted for unknown file extensions) |
| `opencode.change.type` | string | `"added"` or `"removed"` |
Counter (`opencode.tool.invocations`) tracking tool executions. Unit: `{invocation}`. Recorded for every tool, including those excluded via `OTEL_OPENCODE_FILTERED_TOOLS`.

| Attribute | Type | Description |
|---|---|---|
| `gen_ai.tool.name` | string | Tool name (e.g., `edit`, `bash`, `glob`, `read`) |
Counter tracking git commits and PR mutations performed during sessions. Unit: `{operation}`.

| Attribute | Type | Description |
|---|---|---|
| `opencode.vcs.operation` | string | Operation type: `"commit"`, `"pr_create"`, `"pr_merge"`, `"pr_close"`, `"pr_reopen"`, `"pr_review"`, or `"pr_edit"` |
| `opencode.vcs.source` | string | Detection source: `"cli"` (bash commands) or `"mcp"` (MCP tool names) |
| `vcs.repository.url.full` | string | Git remote URL |
| `vcs.repository.ref.name` | string | Current git branch |
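The `"cli"` source implies classifying bash command strings. A speculative sketch of such a classifier (the patterns and the `classifyVcsCommand` helper are our illustration, not the plugin's actual detection logic):

```typescript
// Speculative illustration: map a bash command line to the documented
// opencode.vcs.operation values. The real plugin's patterns may differ.
type VcsOperation = "commit" | "pr_create" | "pr_merge" | "pr_close";

function classifyVcsCommand(command: string): VcsOperation | null {
  if (/\bgit\s+commit\b/.test(command)) return "commit";
  if (/\bgh\s+pr\s+create\b/.test(command)) return "pr_create";
  if (/\bgh\s+pr\s+merge\b/.test(command)) return "pr_merge";
  if (/\bgh\s+pr\s+close\b/.test(command)) return "pr_close";
  return null; // not a VCS mutation; nothing is counted
}

classifyVcsCommand('git commit -m "fix: handle empty input"'); // "commit"
classifyVcsCommand("gh pr merge 42 --squash");                 // "pr_merge"
```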
Resource attributes are attached to all exported signals (traces and metrics), identifying the session environment. They are set once at plugin initialization.

| Attribute | Type | Description |
|---|---|---|
| `service.name` | string | Always `"opencode"` |
| `host.name` | string | Machine hostname |
| `enduser.id` | string | Git author email (`git config user.email`) |
| `opencode.project.name` | string | Project identifier from OpenCode |
| `vcs.repository.url.full` | string | Git remote URL |
| `vcs.repository.ref.name` | string | Current git branch |
| `opencode.worktree` | string | Git worktree path |
| `opencode.directory` | string | Current working directory |
- Check the endpoint is reachable:

  ```bash
  curl -s -o /dev/null -w "%{http_code}" http://localhost:4318/v1/traces
  ```

  Expect `200` or `405`. Connection refused = endpoint is down.

- Verify the env var is set in the OpenCode process:

  ```bash
  echo $OTEL_EXPORTER_OTLP_ENDPOINT
  ```

  Must be set before starting OpenCode. The plugin reads it at init time.

- Check for auth errors (cloud backends): look for `401` or `403` in your collector logs. Ensure `OTEL_EXPORTER_OTLP_HEADERS` is set correctly.
Metrics are exported on a 30-second interval. Wait at least 30s after activity, or end the session (which triggers a flush).
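The 30-second cadence matches a periodic metric reader in the OTel JS SDK. A sketch of how such an export loop is commonly configured (illustrative; not copied from the plugin):

```typescript
import { PeriodicExportingMetricReader, MeterProvider } from "@opentelemetry/sdk-metrics";
import { OTLPMetricExporter } from "@opentelemetry/exporter-metrics-otlp-proto";

// Illustrative: export metrics every 30 seconds over OTLP/HTTP (protobuf).
// The exporter reads OTEL_EXPORTER_OTLP_ENDPOINT from the environment.
const reader = new PeriodicExportingMetricReader({
  exporter: new OTLPMetricExporter(),
  exportIntervalMillis: 30_000,
});

const meterProvider = new MeterProvider({ readers: [reader] });

// Ending a session would flush any buffered data points immediately:
await meterProvider.forceFlush();
```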
If the plugin can't initialize (e.g., missing OTel packages), it returns no-op hooks and OpenCode continues normally. Check that `opencode-otel-plugin` appears in your installed packages:
```bash
npm ls opencode-otel-plugin
```
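The no-op fallback is a standard resilience pattern. A minimal sketch of the idea, with a hypothetical `initTelemetry` setup function and simplified hook types (not OpenCode's exact hook API):

```typescript
// Illustrative: if telemetry setup throws (e.g., missing OTel packages),
// return hooks that do nothing so OpenCode keeps working untraced.
interface Telemetry {
  startToolSpan(toolName: string): void;
  endToolSpan(toolName: string): void;
}

declare function initTelemetry(): Telemetry; // hypothetical; may throw

type Hooks = {
  "tool.execute.before": (toolName: string) => void;
  "tool.execute.after": (toolName: string) => void;
};

const noopHooks: Hooks = {
  "tool.execute.before": () => {},
  "tool.execute.after": () => {},
};

function createHooks(): Hooks {
  try {
    const telemetry = initTelemetry();
    return {
      "tool.execute.before": (t) => telemetry.startToolSpan(t),
      "tool.execute.after": (t) => telemetry.endToolSpan(t),
    };
  } catch {
    return noopHooks; // OpenCode continues normally, just untraced
  }
}
```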
This plugin follows the OpenTelemetry GenAI Semantic Conventions where applicable:

- Span names: `{operation} {target}` (e.g., `chat claude-sonnet-4-20250514`, `execute_tool bash`)
- `gen_ai.*` attributes for LLM operations
- `gen_ai.client.*` metric names for token usage and operation duration
- Custom `opencode.*` attributes for plugin-specific signals
```bash
git clone https://github.com/felixti/opencode-otel-plugin.git
cd opencode-otel-plugin
bun install

bun test          # 116 tests, 180 assertions
bun run typecheck # tsc --noEmit
bun run build     # dist/index.js + dist/index.d.ts
```

MIT