### Description
Feature request
Add a small attribute-enrichment hook to `@ai-sdk/otel`'s `OpenTelemetry` integration so downstream integrations can add provider-specific OpenTelemetry attributes while the AI SDK still owns span creation.

Something like:
```ts
import { OpenTelemetry } from "@ai-sdk/otel";

registerTelemetry(
  new OpenTelemetry({
    enrichSpanAttributes: ({ spanType, operationId, callId, runtimeContext }) => {
      return {
        ...getCustomAttributes(runtimeContext, spanType),
      };
    },
  }),
);
```
Or more concretely:
```ts
type EnrichSpanAttributesContext = {
  callId: string;
  operationId: string;
  spanName: string;
  spanType: "operation" | "model" | "tool" | "embedding" | "rerank";
  runtimeContext?: Record<string, unknown>;
  functionId?: string;
  existingAttributes: Readonly<Attributes>;
};

type EnrichSpanAttributes = (
  context: EnrichSpanAttributesContext,
) => Attributes | undefined;
```
The hook would run immediately before `@ai-sdk/otel` calls `tracer.startSpan(...)`:

```ts
const attributes = selectAttributes(telemetry, {
  ...aiSdkAttributes,
});

const extraAttributes = options.enrichSpanAttributes?.({
  callId: event.callId,
  operationId,
  spanName,
  spanType,
  runtimeContext,
  functionId: telemetry?.functionId,
  existingAttributes: attributes,
});

const span = tracer.startSpan(
  spanName,
  {
    attributes: {
      ...attributes,
      ...extraAttributes,
    },
  },
  parentContext,
);
```
### Problem
AI SDK v7 has a clean telemetry architecture: the AI SDK emits semantic lifecycle callbacks, and `@ai-sdk/otel` turns those into OpenTelemetry spans. That works well when the only desired attributes are the AI SDK's built-in attributes.
The gap is that external observability systems sometimes need to add additional span attributes that are specific to their ingestion pipeline.
For Langfuse, examples are:

- `"langfuse.observation.metadata.feature": "chat"`
- `"langfuse.observation.prompt.name": "support-agent"`
- `"langfuse.observation.prompt.version": 3`
These are not AI SDK attributes and should not become AI SDK-owned semantics. But they need to be present on the OTel span when it is exported.
The user already has a natural place to provide this data: `runtimeContext`.
```ts
await generateText({
  model,
  prompt,
  runtimeContext: {
    langfuse: {
      metadata: { feature: "chat" },
      prompt: { name: "support-agent", version: 3 },
    },
  },
  telemetry: {
    functionId: "chat-route",
  },
});
```
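With a hook like the one proposed above, the Langfuse integration could translate that `runtimeContext` shape directly into the `langfuse.observation.*` attributes shown earlier. A minimal sketch, assuming simplified stand-in types (`Attributes`, `LangfuseContext`) rather than the real OpenTelemetry and AI SDK types:

```typescript
// Simplified stand-in for OpenTelemetry's Attributes type.
type Attributes = Record<string, string | number | boolean>;

// Assumed shape of the user-provided runtimeContext.langfuse object,
// matching the generateText example above.
type LangfuseContext = {
  metadata?: Record<string, string>;
  prompt?: { name: string; version: number };
};

// Sketch of an enrichSpanAttributes implementation: flatten
// runtimeContext.langfuse into the "langfuse.observation.*" namespace.
function enrichSpanAttributes({
  runtimeContext,
}: {
  runtimeContext?: Record<string, unknown>;
}): Attributes | undefined {
  const langfuse = runtimeContext?.langfuse as LangfuseContext | undefined;
  if (!langfuse) return undefined;

  const attributes: Attributes = {};
  for (const [key, value] of Object.entries(langfuse.metadata ?? {})) {
    attributes[`langfuse.observation.metadata.${key}`] = value;
  }
  if (langfuse.prompt) {
    attributes["langfuse.observation.prompt.name"] = langfuse.prompt.name;
    attributes["langfuse.observation.prompt.version"] = langfuse.prompt.version;
  }
  return attributes;
}
```

The integration stays a pure function from public inputs to extra attributes; it never touches span creation.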
The issue is that @ai-sdk/otel currently fully owns span creation and does not expose a supported way to add attributes at span creation time.
### Current Workaround
Because there is no enrichment hook, Langfuse has to wrap the OpenTelemetry tracer.
Today the workaround looks conceptually like this:
```text
@ai-sdk/otel creates span
  -> calls tracer.startSpan(name, options, parentContext)
  -> Langfuse tracer wrapper intercepts startSpan
  -> adds Langfuse attributes
  -> forwards to real tracer
```
This works, but it is awkward because the OpenTelemetry `Tracer` API only receives:

- `name`
- `options`
- `parentContext`

It does not receive AI SDK concepts like:

- `callId`
- `operationId`
- `runtimeContext`
- span type
- tool call id
So Langfuse has to reconstruct enough AI SDK context from the outside. That is why we maintain bookkeeping maps like:
- `callId -> Langfuse runtime context`
- `spanId -> callId`
This is exactly the kind of cleverness we would rather avoid. It is not business logic; it is a workaround for not having an official enrichment point.
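For reference, the wrapper workaround can be sketched roughly like this. This is an illustrative sketch with simplified stand-in types, not Langfuse's actual implementation; note that the wrapper only ever sees `(name, options, parentContext)`:

```typescript
// Simplified stand-ins for the OpenTelemetry Tracer surface.
type Attributes = Record<string, string | number | boolean>;
type SpanOptions = { attributes?: Attributes };
type Span = { attributes: Attributes };

interface TracerLike {
  startSpan(name: string, options?: SpanOptions, parentContext?: unknown): Span;
}

// Wrap the real tracer and inject extra attributes on every startSpan
// call. None of the AI SDK concepts (callId, operationId,
// runtimeContext) are available here, which is why external
// bookkeeping maps are needed to look anything up by span name/id.
function wrapTracer(
  real: TracerLike,
  extraAttributesFor: (spanName: string) => Attributes,
): TracerLike {
  return {
    startSpan(name, options, parentContext) {
      return real.startSpan(
        name,
        {
          ...options,
          attributes: { ...options?.attributes, ...extraAttributesFor(name) },
        },
        parentContext,
      );
    },
  };
}
```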
### Why AI SDK Is The Right Place
`@ai-sdk/otel` already has the information at the perfect moment.

When it creates the root span, it knows:

- `event.callId`
- `event.operationId`
- `event.runtimeContext`
- `telemetry.functionId`

When it creates the model-call span, it knows:

- `callId`
- `operationId`
- step context
- model call attributes
- parent span context

When it creates tool spans, it knows:

- `callId`
- `toolCallId`
- `toolName`
- tool input
- tool span parent context
So the clean design is:
- AI SDK keeps owning the span shape.
- AI SDK keeps deciding when spans are created.
- AI SDK keeps deciding default attributes.
- Integrations can append additional attributes.
That is much safer than asking every integration to wrap tracers and infer span ownership.
### Why This Does Not Need To Expose Unstable Internals
The hook does not need to expose private `callState`, raw spans, internal maps, or mutable lifecycle state.
A stable minimal context would be enough:
```ts
{
  callId,
  operationId,
  spanName,
  spanType,
  runtimeContext,
  functionId,
  existingAttributes,
}
```
Most of these are already public concepts:
- `callId`: already present in telemetry events
- `operationId`: already present in telemetry events / attributes
- `runtimeContext`: user-provided public input
- `functionId`: public telemetry option
- `existingAttributes`: what AI SDK is already about to emit
`spanType` would be the only new abstraction, and it can be intentionally coarse:

```ts
"operation" | "model" | "tool" | "embedding" | "rerank"
```

That avoids forcing downstream integrations to parse span names like `.doGenerate` or `.doStream`.

If they want to be even more conservative, they could omit `spanType` initially and expose only `spanName` / `operationId`, but `spanType` would make the hook much more stable over time.
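Without `spanType`, each integration would end up maintaining name-parsing heuristics along these lines. This is a hypothetical sketch; the span-name patterns are illustrative examples, not an exhaustive or authoritative list of AI SDK span names:

```typescript
type SpanType = "operation" | "model" | "tool" | "embedding" | "rerank";

// Hypothetical heuristic an integration would otherwise have to
// maintain: classify AI SDK spans by parsing their names. Brittle by
// design -- any renamed span silently falls through to "operation".
function guessSpanType(spanName: string): SpanType {
  if (spanName.endsWith(".doGenerate") || spanName.endsWith(".doStream")) {
    return "model";
  }
  if (spanName.includes("toolCall")) return "tool";
  if (spanName.includes("embed")) return "embedding";
  if (spanName.includes("rerank")) return "rerank";
  return "operation";
}
```

A coarse `spanType` field in the hook context would make this kind of guesswork unnecessary.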
### Why This Benefits AI SDK Users Generally
This is not Langfuse-specific.
Any observability integration may need to add attributes such as:
- `"tenant.id"`
- `"app.route"`
- `"experiment.variant"`
- `"customer.plan"`
- `"deployment.region"`
- `"custom.trace.link"`
Users often already have those values in `runtimeContext`. The current API lets them pass runtime context to the AI SDK, but the official OTel integration does not let that context influence emitted span attributes except through AI SDK's own built-in mapping.
A hook would let users do:
```ts
new OpenTelemetry({
  enrichSpanAttributes: ({ runtimeContext }) => ({
    "tenant.id": runtimeContext?.tenantId,
    "experiment.variant": runtimeContext?.experiment,
  }),
});
```
That keeps custom application semantics out of AI SDK core while making the official OTel integration more extensible.
### AI SDK Version
v7 beta
### Code of Conduct