feat(vercel): add gen.ai.input.messages + gen.ai.output.messages #734
Conversation
Walkthrough

Adds two new span attributes (`gen_ai.input.messages`, `gen_ai.output.messages`), updates AI SDK transformations to populate them from prompts/responses (text, object, tool calls), extends unit and integration tests to assert these attributes, and adds a HAR recording fixture for the integration test.
Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant App as Application
  participant SDK as AI SDK
  participant TX as Transformations
  participant OT as Tracing/Span
  App->>SDK: generateText({ messages / prompt })
  SDK->>TX: emit prompt data for transform
  rect rgba(220,235,255,0.4)
    note right of TX: Input processing
    TX->>TX: parse messages/prompts → parts (type/text)
    TX->>OT: set gen_ai.input.messages (JSON string)
    TX->>OT: set LLM_PROMPTS.* and roles
  end
  SDK-->>App: provider returns response
  SDK->>TX: emit response data for transform
  rect rgba(220,255,220,0.4)
    note right of TX: Output processing
    TX->>TX: normalize response → assistant parts (text | object | tool_call)
    TX->>OT: set gen_ai.output.messages (JSON string)
    TX->>OT: set LLM_COMPLETIONS.* and tool_calls
  end
  TX-->>SDK: updated attributes
  SDK-->>App: span with new attributes
```
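The input half of this flow can be sketched in a few lines of TypeScript. This is a minimal, hypothetical illustration of the attribute mapping only: the helper name `buildInputMessages` and the use of plain string keys are illustrative, not the SDK's actual API (the real logic lives in `ai-sdk-transformations.ts`).

```typescript
// Hypothetical sketch: map AI SDK prompt attributes to gen_ai.input.messages.
type Part = { type: "text"; content: string };
type Message = { role: string; parts: Part[] };

function buildInputMessages(attributes: Record<string, unknown>): void {
  const raw = attributes["ai.prompt.messages"];
  if (typeof raw !== "string") return;
  try {
    const messages = JSON.parse(raw) as Array<{ role: string; content: string }>;
    const input: Message[] = messages.map((m) => ({
      role: m.role,
      parts: [{ type: "text", content: m.content }],
    }));
    // The value is stored as a JSON string, as the diagram notes.
    attributes["gen_ai.input.messages"] = JSON.stringify(input);
  } catch {
    // Ignore malformed JSON, mirroring the defensive parsing in the PR.
  }
}

const attrs: Record<string, unknown> = {
  "ai.prompt.messages": JSON.stringify([{ role: "user", content: "What is 2+2?" }]),
};
buildInputMessages(attrs);
```

The output path is analogous, but normalizes text, object, and tool-call responses into assistant-role parts before serializing.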
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Important
Looks good to me! 👍
Reviewed everything up to ae63671 in 3 minutes and 15 seconds.
- Reviewed 931 lines of code in 5 files
- Skipped 0 files when reviewing
- Skipped posting 7 draft comments; view those below
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. `packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:398`
   - Draft comment: Consider adding a test case where both `ai.prompt.messages` and `ai.prompt` are provided to confirm correct precedence and that only one transformation applies.
   - Reason not posted: the comment was not on a location in the diff, so it can't be submitted as a review comment.
2. `packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:863`
   - Draft comment: The tests for token conversion properly handle string values; consider adding edge cases for non-numeric or malformed token inputs to ensure robustness.
   - Reason not posted: the comment was not on a location in the diff, so it can't be submitted as a review comment.
3. `packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:408`
   - Draft comment: In the test for unescaping JSON escape sequences, verify that the expected unescaped string exactly matches the intended formatting. Consider clarifying expected line breaks to avoid potential discrepancies.
   - Reason not posted: the comment was not on a location in the diff, so it can't be submitted as a review comment.
4. `packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:360`
   - Draft comment: The test 'should preserve mixed content arrays' preserves the original JSON when mixed types are present. Confirm whether this behavior is intentional or whether non-text items should be filtered out.
   - Reason not posted: the comment was not on a location in the diff, so it can't be submitted as a review comment.
5. `packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:1260`
   - Draft comment: In the tool calls transformation tests, consider adding an extra check that validates the ordering of transformed tools, especially when the toolCalls array is empty or contains additional unexpected fields.
   - Reason not posted: the comment looked like it was already resolved.
6. `packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:1123`
   - Draft comment: Overall, the tests effectively verify that original AI SDK attributes are removed after transformation. Consider adding inline comments within complex test cases to delineate setup, action, and verification sections for improved readability.
   - Reason not posted: the comment was not on a location in the diff, so it can't be submitted as a review comment.
7. `packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:1538`
   - Draft comment: The span transformation tests (using `transformAiSdkSpan`) are comprehensive. Ensure that transformation functions handle spans with unexpected or extra attributes without side effects.
   - Reason not posted: the comment did not seem useful (confidence useful = 0% <= threshold 50%). The comment is purely informative and suggests ensuring that transformation functions handle spans with unexpected or extra attributes without side effects. This falls under the category of asking the PR author to ensure behavior is intended or tested, which is against the rules.
Workflow ID: wflow_Yzg0kVakXoTeVSHe
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Actionable comments posted: 0
🧹 Nitpick comments (8)
`packages/ai-semantic-conventions/src/SemanticAttributes.ts` (1)

Lines 25-26: New attributes look good; consider documenting the JSON shape.

Add a brief comment indicating these are JSON-serialized arrays of messages with `{ role, parts: [{ type, content | tool_call }] }` to guide instrumentation authors and keep consistency.

```diff
   LLM_COMPLETIONS: "gen_ai.completion",
+  // JSON string: [{ role: "user"|"assistant"|"system", parts: [{ type: "text", content: string }] }]
   LLM_INPUT_MESSAGES: "gen_ai.input.messages",
+  // JSON string: [{ role: "assistant", parts: [{ type: "text"|"tool_call", content?: string, tool_call?: { name: string, arguments: string } }] }]
   LLM_OUTPUT_MESSAGES: "gen_ai.output.messages",
```

`packages/instrumentation-openai/test/recordings/Test-OpenAI-instrumentation_1770406427/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har` (1)
Lines 197-205: Redact cookies in recordings to avoid committing transient identifiers.

The HAR stores `set-cookie` values. Recommend stripping `set-cookie` headers and response cookies before persisting to disk to reduce churn and avoid leaking identifiers. Outside this HAR, extend the Polly `beforePersist` hook to remove response cookies:

```diff
 server.any().on("beforePersist", (_req, recording) => {
   recording.request.headers = recording.request.headers.filter(
     ({ name }) => name !== "authorization",
   );
+  // Redact response cookies and Set-Cookie headers
+  if (Array.isArray(recording.response?.headers)) {
+    recording.response.headers = recording.response.headers.filter(
+      ({ name }) => name?.toLowerCase() !== "set-cookie",
+    );
+  }
+  if (Array.isArray(recording.response?.cookies)) {
+    recording.response.cookies = [];
+  }
 });
```

`packages/instrumentation-openai/test/instrumentation.test.ts` (2)
Lines 27-67: Test helper: be resilient to missing roles and iterate by content for outputs.

Some instrumentations may not set `${LLM_COMPLETIONS}.n.role`. Iterate using `.content` presence and default the role to `"assistant"` for outputs. Also, place imports before the helper for readability.

```diff
 // Minimal transformation function to test LLM_INPUT_MESSAGES and LLM_OUTPUT_MESSAGES
 const transformToStandardFormat = (attributes: any) => {
   // Transform prompts to LLM_INPUT_MESSAGES
   const inputMessages = [];
   let i = 0;
   while (attributes[`${SpanAttributes.LLM_PROMPTS}.${i}.role`]) {
     const role = attributes[`${SpanAttributes.LLM_PROMPTS}.${i}.role`];
     const content = attributes[`${SpanAttributes.LLM_PROMPTS}.${i}.content`];
     if (role && content) {
       inputMessages.push({ role, parts: [{ type: "text", content }] });
     }
     i++;
   }
   if (inputMessages.length > 0) {
     attributes[SpanAttributes.LLM_INPUT_MESSAGES] = JSON.stringify(inputMessages);
   }

   // Transform completions to LLM_OUTPUT_MESSAGES
   const outputMessages = [];
   let j = 0;
-  while (attributes[`${SpanAttributes.LLM_COMPLETIONS}.${j}.role`]) {
-    const role = attributes[`${SpanAttributes.LLM_COMPLETIONS}.${j}.role`];
-    const content =
-      attributes[`${SpanAttributes.LLM_COMPLETIONS}.${j}.content`];
-    if (role && content) {
+  while (
+    attributes[`${SpanAttributes.LLM_COMPLETIONS}.${j}.content`] !== undefined
+  ) {
+    const role =
+      attributes[`${SpanAttributes.LLM_COMPLETIONS}.${j}.role`] || "assistant";
+    const content = attributes[`${SpanAttributes.LLM_COMPLETIONS}.${j}.content`];
+    if (content) {
       outputMessages.push({ role, parts: [{ type: "text", content }] });
     }
     j++;
   }
   if (outputMessages.length > 0) {
     attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] = JSON.stringify(outputMessages);
   }
 };
```
Lines 923-971: Rename the test to reflect that the transformation, not the instrumentation, creates the attributes.

The title implies the instrumentation sets these attributes directly, but the helper populates them. Clarify to avoid confusion about coverage.

```diff
-it("should set LLM_INPUT_MESSAGES and LLM_OUTPUT_MESSAGES attributes for chat completions", async () => {
+it("should derive LLM_INPUT_MESSAGES and LLM_OUTPUT_MESSAGES from prompts/completions for chat completions", async () => {
```

`packages/traceloop-sdk/test/ai-sdk-transformations.test.ts` (1)
Lines 1183-1536: Good coverage for gen_ai input/output messages. Consider adding an `ai.prompt` variant.

Tests look solid for messages, object, and tool calls. Add a small case ensuring `ai.prompt` (single prompt) also populates `gen_ai.input.messages`.

```diff
+  it("should create gen_ai.input.messages for single ai.prompt", () => {
+    const attributes: any = {
+      "ai.prompt": JSON.stringify({ prompt: "Single prompt case" }),
+    };
+    transformAiSdkAttributes(attributes);
+    const input = JSON.parse(attributes[SpanAttributes.LLM_INPUT_MESSAGES]);
+    assert.strictEqual(input.length, 1);
+    assert.strictEqual(input[0].role, "user");
+    assert.strictEqual(input[0].parts[0].type, "text");
+    assert.strictEqual(input[0].parts[0].content, "Single prompt case");
+  });
```

`packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts` (3)
Lines 63-76: Avoid overwriting an existing `gen_ai.output.messages` when multiple response sources exist.

If both `ai.response.text` and `ai.response.toolCalls` are present, later transforms will clobber earlier output messages. Either append or only set when absent.

```diff
-  attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] = JSON.stringify([
-    outputMessage,
-  ]);
+  if (!attributes[SpanAttributes.LLM_OUTPUT_MESSAGES]) {
+    attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] = JSON.stringify([
+      outputMessage,
+    ]);
+  }
```
Lines 87-99: Mirror the non-clobbering behavior for object responses.

Same rationale as the text path.

```diff
-  attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] = JSON.stringify([
-    outputMessage,
-  ]);
+  if (!attributes[SpanAttributes.LLM_OUTPUT_MESSAGES]) {
+    attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] = JSON.stringify([
+      outputMessage,
+    ]);
+  }
```
Lines 111-142: Handle non-string `toolCalls` input and merge with existing output messages.

AI SDKs may store `ai.response.toolCalls` as an array. Also, if an output message already exists (e.g., from text), merge `tool_call` parts instead of replacing.

```diff
 if (AI_RESPONSE_TOOL_CALLS in attributes) {
   try {
-    const toolCalls = JSON.parse(
-      attributes[AI_RESPONSE_TOOL_CALLS] as string,
-    );
+    const raw = attributes[AI_RESPONSE_TOOL_CALLS];
+    const toolCalls = Array.isArray(raw) ? raw : JSON.parse(raw as string);
     attributes[`${SpanAttributes.LLM_COMPLETIONS}.0.role`] = ROLE_ASSISTANT;
     const toolCallParts: any[] = [];
     toolCalls.forEach((toolCall: any, index: number) => {
       if (toolCall.toolCallType === "function") {
         attributes[
           `${SpanAttributes.LLM_COMPLETIONS}.0.tool_calls.${index}.name`
         ] = toolCall.toolName;
         attributes[
           `${SpanAttributes.LLM_COMPLETIONS}.0.tool_calls.${index}.arguments`
         ] = toolCall.args;
         toolCallParts.push({
           type: TYPE_TOOL_CALL,
           tool_call: {
             name: toolCall.toolName,
             arguments: toolCall.args,
           },
         });
       }
     });
-    if (toolCallParts.length > 0) {
-      const outputMessage = {
-        role: ROLE_ASSISTANT,
-        parts: toolCallParts,
-      };
-      attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] = JSON.stringify([
-        outputMessage,
-      ]);
-    }
+    if (toolCallParts.length > 0) {
+      const outputMessage = { role: ROLE_ASSISTANT, parts: toolCallParts };
+      const existing = attributes[SpanAttributes.LLM_OUTPUT_MESSAGES];
+      if (existing) {
+        try {
+          const arr = JSON.parse(existing);
+          if (Array.isArray(arr) && arr[0]?.role === ROLE_ASSISTANT) {
+            arr[0].parts = [...(arr[0].parts || []), ...toolCallParts];
+            attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] = JSON.stringify(arr);
+          }
+        } catch {
+          attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] = JSON.stringify([outputMessage]);
+        }
+      } else {
+        attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] = JSON.stringify([outputMessage]);
+      }
+    }
     delete attributes[AI_RESPONSE_TOOL_CALLS];
   } catch {
     // Ignore parsing errors
   }
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (5)

- `packages/ai-semantic-conventions/src/SemanticAttributes.ts` (1 hunks)
- `packages/instrumentation-openai/test/instrumentation.test.ts` (2 hunks)
- `packages/instrumentation-openai/test/recordings/Test-OpenAI-instrumentation_1770406427/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har` (1 hunks)
- `packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts` (10 hunks)
- `packages/traceloop-sdk/test/ai-sdk-transformations.test.ts` (1 hunks)
🧰 Additional context used
📓 Path-based instructions (6)
packages/ai-semantic-conventions/src/SemanticAttributes.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Files:
packages/ai-semantic-conventions/src/SemanticAttributes.ts
packages/instrumentation-*/**
📄 CodeRabbit inference engine (CLAUDE.md)
Place each provider integration in its own package under packages/instrumentation-[provider]/
Files:
packages/instrumentation-openai/test/recordings/Test-OpenAI-instrumentation_1770406427/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har
packages/instrumentation-openai/test/instrumentation.test.ts
**/recordings/**
📄 CodeRabbit inference engine (CLAUDE.md)
Store HTTP interaction recordings for tests under recordings/ directories for Polly.js replay
Files:
packages/instrumentation-openai/test/recordings/Test-OpenAI-instrumentation_1770406427/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/instrumentation-openai/test/instrumentation.test.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}
: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/instrumentation-*/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/instrumentation-*/**/*.{ts,tsx}
: Instrumentation classes must extend InstrumentationBase and register hooks using InstrumentationModuleDefinition
Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Instrumentations must extract request/response data and token usage from wrapped calls
Instrumentations must capture and record errors appropriately
Do not implement anonymous telemetry collection in instrumentation packages; telemetry is collected only in the SDK
Files:
packages/instrumentation-openai/test/instrumentation.test.ts
🧠 Learnings (5)
📓 Common learnings
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
packages/ai-semantic-conventions/src/SemanticAttributes.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
packages/ai-semantic-conventions/src/SemanticAttributes.ts
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/instrumentation-openai/test/instrumentation.test.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
packages/ai-semantic-conventions/src/SemanticAttributes.ts
packages/instrumentation-openai/test/instrumentation.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk
Applied to files:
packages/instrumentation-openai/test/instrumentation.test.ts
🧬 Code graph analysis (3)

- `packages/traceloop-sdk/test/ai-sdk-transformations.test.ts` (2)
  - `packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts`: `transformAiSdkAttributes` (370-382)
  - `packages/ai-semantic-conventions/src/SemanticAttributes.ts`: `SpanAttributes` (17-61)
- `packages/instrumentation-openai/test/instrumentation.test.ts` (1)
  - `packages/ai-semantic-conventions/src/SemanticAttributes.ts`: `SpanAttributes` (17-61)
- `packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts` (1)
  - `packages/ai-semantic-conventions/src/SemanticAttributes.ts`: `SpanAttributes` (17-61)
🔇 Additional comments (4)

`packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts` (4)

- Lines 22-26: Replacing literals with constants is good. Using `TYPE_*` and `ROLE_*` improves consistency and reduces typos.
- Lines 154-158 (also applies to 169 and 183): Content normalization via `TYPE_TEXT` checks looks solid. Good defensive handling of arrays/objects/JSON strings and preserving non-text as JSON.
- Lines 262-287: Confirm intent: input messages drop non-text parts. `gen_ai.input.messages` flattens mixed content to text-only parts. If preserving non-text (e.g., images) is desired later, we'll need to carry typed parts here too. For now this aligns with tests.
- Lines 301-314: Nice touch: a single `ai.prompt` also emits standardized input messages. Keeps `LLM_PROMPTS` and `gen_ai.input.messages` in sync.
Important
Looks good to me! 👍
Reviewed 5cfd993 in 53 seconds.
- Reviewed 179 lines of code in 2 files
- Skipped 0 files when reviewing
- Skipped posting 2 draft comments; view those below
1. `packages/instrumentation-openai/test/instrumentation.test.ts:24`
   - Draft comment: Removed the `transformToStandardFormat` function; ensure the new standardized attributes are fully covered by integration tests.
   - Reason not posted: the comment did not seem useful (confidence useful = 0% <= threshold 50%). It asks the PR author to ensure that the new standardized attributes are fully covered by integration tests, which falls under the rule of not asking the author to ensure that the change is tested.
2. `packages/traceloop-sdk/test/ai-sdk-integration.test.ts:239`
   - Draft comment: Consider asserting the exact span name instead of using a prefix match (`startsWith`) for more precise validation.
   - Reason not posted: confidence changes required: 50% <= threshold 50%.
Workflow ID: wflow_fDigO34HBJONhC4S
Actionable comments posted: 1
🧹 Nitpick comments (3)
`packages/traceloop-sdk/test/ai-sdk-integration.test.ts` (3)

Lines 238-244: Make span selection deterministic. Filter by workflow name to avoid grabbing the wrong `ai.generateText` span when multiple exist.

```diff
-const aiSdkSpan = spans.find((span) => span.name.startsWith("ai.generateText"));
+const aiSdkSpan = spans.find(
+  (span) =>
+    span.name.startsWith("ai.generateText") &&
+    span.attributes["traceloop.workflow.name"] === "test_transformations_workflow",
+);
```

Lines 251-258: Harden assertions before indexing `parts[0]`. Assert non-empty parts arrays to avoid undefined access if the shape changes.

```diff
 assert.strictEqual(inputMessages[0].role, "user");
 assert.ok(Array.isArray(inputMessages[0].parts));
+assert.ok(inputMessages[0].parts.length > 0);
 assert.strictEqual(inputMessages[0].parts[0].type, "text");
```

Lines 268-274: Mirror the safety check for output parts. Same reasoning for the assistant parts array.

```diff
 assert.strictEqual(outputMessages[0].role, "assistant");
 assert.ok(Array.isArray(outputMessages[0].parts));
+assert.ok(outputMessages[0].parts.length > 0);
 assert.strictEqual(outputMessages[0].parts[0].type, "text");
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)

- `packages/traceloop-sdk/recordings/AI-SDK-Transformations_1770406427/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har` (1 hunks)
- `packages/traceloop-sdk/test/ai-sdk-integration.test.ts` (2 hunks)
✅ Files skipped from review due to trivial changes (1)
- packages/traceloop-sdk/recordings/AI-SDK-Transformations_1770406427/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har
🧰 Additional context used
📓 Path-based instructions (2)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/traceloop-sdk/test/ai-sdk-integration.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}
: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
packages/traceloop-sdk/test/ai-sdk-integration.test.ts
🧠 Learnings (5)
📓 Common learnings
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
packages/traceloop-sdk/test/ai-sdk-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
packages/traceloop-sdk/test/ai-sdk-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
packages/traceloop-sdk/test/ai-sdk-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk
Applied to files:
packages/traceloop-sdk/test/ai-sdk-integration.test.ts
🧬 Code graph analysis (1)

- `packages/traceloop-sdk/test/ai-sdk-integration.test.ts` (1)
  - `packages/ai-semantic-conventions/src/SemanticAttributes.ts`: `SpanAttributes` (17-61)
🪛 GitHub Actions: CI
packages/traceloop-sdk/test/ai-sdk-integration.test.ts
[error] 1-1: Prettier formatting check failed (exit code 1). Code style issues found in the file. Run 'pnpm prettier --write' to fix.
🔇 Additional comments (2)

`packages/traceloop-sdk/test/ai-sdk-integration.test.ts` (2)

- Line 22: Good: using semantic attribute constants. Importing `SpanAttributes` prevents string drift and aligns with conventions.
- Lines 221-274: Fix CI: the Prettier formatting check failed — run Prettier and commit the formatting fixes. Verification couldn't run in the sandbox (pnpm errored: no package.json — the repo wasn't cloned). Run `pnpm prettier --write packages/traceloop-sdk/test/ai-sdk-integration.test.ts` and `pnpm prettier --check .` locally and commit the formatting fixes, or re-run verification without `skip_cloning`.
Important
Looks good to me! 👍
Reviewed a68d951 in 45 seconds.
- Reviewed 457 lines of code in 3 files
- Skipped 0 files when reviewing
- Skipped posting 2 draft comments; view those below
1. `packages/traceloop-sdk/recordings/Test-AI-SDK-Integration-with-Recording_156038438/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har:29`
   - Draft comment: Ensure the HAR recording's `postData` structure (using the `input` array with `input_text`) correctly simulates the expected data format for generating LLM_INPUT_MESSAGES. Verify consistency with transformation logic.
   - Reason not posted: the comment asks the PR author to verify consistency and ensure correctness, which violates the rule against asking the author to confirm or ensure behavior. It doesn't provide a specific suggestion or point out a clear issue.
2. `packages/traceloop-sdk/test/ai-sdk-integration.test.ts:93`
   - Draft comment: The updated prompt 'What is 2+2? Give a brief answer.' is consistently applied in the test validations for both LLM_INPUT_MESSAGES and LLM_OUTPUT_MESSAGES. The test correctly parses and asserts the expected structure.
   - Reason not posted: the comment was not on a location in the diff, so it can't be submitted as a review comment.
Important
Looks good to me! 👍
Reviewed 49a4534 in 46 seconds.
- Reviewed 15 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 1 draft comment; view it below
1. `packages/traceloop-sdk/test/ai-sdk-integration.test.ts:239`
   - Draft comment: The change appears to be a reformatting of the arrow function callback. Ensure this multi-line style (with a trailing comma) is consistent with the project's style guidelines.
   - Reason not posted: confidence changes required: 0% <= threshold 50%.

Workflow ID: wflow_uq0hiouo56VAKOse
Actionable comments posted: 0
♻️ Duplicate comments (1)
packages/traceloop-sdk/test/ai-sdk-integration.test.ts (1)
Lines 238-241: Flush before reading the exporter to prevent flakiness. Force-flush spans before calling `memoryExporter.getFinishedSpans()`. This race has been observed earlier in this suite.

```diff
 assert.ok(result);
 assert.ok(result.text);
-const spans = memoryExporter.getFinishedSpans();
+// Ensure all spans are exported before assertions
+await traceloop.forceFlush();
+const spans = memoryExporter.getFinishedSpans();
 const aiSdkSpan = spans.find((span) => span.name.startsWith("ai.generateText"));
```
🧹 Nitpick comments (3)
packages/traceloop-sdk/test/ai-sdk-integration.test.ts (3)
22-22
: Good: using semantic attribute constantsImporting SpanAttributes from @traceloop/ai-semantic-conventions is correct. Consider using these constants consistently (e.g.,
${SpanAttributes.LLM_PROMPTS}.0.role
) instead of hardcoded strings elsewhere in this file to avoid drift.
239-241
: Tighten span selection to the current workflowFilter by traceloop.workflow.name to avoid accidental matches if multiple ai.generateText spans exist.
- const aiSdkSpan = spans.find((span) => span.name.startsWith("ai.generateText")); + const aiSdkSpan = spans.find( + (span) => + span.name.startsWith("ai.generateText") && + span.attributes["traceloop.workflow.name"] === "test_transformations_workflow", + );
244-247: Assert attribute type before JSON.parse for clearer failures
Add a quick typeof check so parse errors surface with a precise message.
- assert.ok(aiSdkSpan.attributes[SpanAttributes.LLM_INPUT_MESSAGES]);
+ assert.ok(aiSdkSpan.attributes[SpanAttributes.LLM_INPUT_MESSAGES]);
+ assert.strictEqual(typeof aiSdkSpan.attributes[SpanAttributes.LLM_INPUT_MESSAGES], "string");
  const inputMessages = JSON.parse(
    aiSdkSpan.attributes[SpanAttributes.LLM_INPUT_MESSAGES] as string,
  );
@@
- assert.ok(aiSdkSpan.attributes[SpanAttributes.LLM_OUTPUT_MESSAGES]);
+ assert.ok(aiSdkSpan.attributes[SpanAttributes.LLM_OUTPUT_MESSAGES]);
+ assert.strictEqual(typeof aiSdkSpan.attributes[SpanAttributes.LLM_OUTPUT_MESSAGES], "string");
  const outputMessages = JSON.parse(
    aiSdkSpan.attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] as string,
  );
Also applies to: 262-264
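The typeof-before-parse pattern the reviewer suggests can be factored into a small helper. This is a sketch under assumptions: the `parseMessagesAttribute` helper and the bare `span` object are illustrative, not part of the SDK; only the `gen_ai.input.messages` attribute name comes from the PR.

```typescript
// Sketch: validate that a span attribute is a JSON string before
// parsing it, so a failure names the real problem instead of
// surfacing as a generic SyntaxError deep inside JSON.parse.
type AttributeValue = string | number | boolean | undefined;

function parseMessagesAttribute(
  attributes: Record<string, AttributeValue>,
  key: string,
): unknown {
  const raw = attributes[key];
  if (typeof raw !== "string") {
    throw new Error(`expected ${key} to be a JSON string, got ${typeof raw}`);
  }
  return JSON.parse(raw);
}

// Hypothetical span-like object standing in for a finished span.
const span: { attributes: Record<string, AttributeValue> } = {
  attributes: {
    "gen_ai.input.messages": JSON.stringify([
      { role: "user", parts: [{ type: "text", content: "Hi" }] },
    ]),
  },
};

const messages = parseMessagesAttribute(
  span.attributes,
  "gen_ai.input.messages",
) as Array<{ role: string }>;

console.log(messages[0].role); // prints "user"
```

A missing or non-string attribute now fails with "expected gen_ai.input.messages to be a JSON string, got undefined" rather than a parse error on `undefined`.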
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- packages/traceloop-sdk/recordings/Test-AI-SDK-Integration-with-Recording_156038438/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har (1 hunks)
- packages/traceloop-sdk/test/ai-sdk-integration.test.ts (2 hunks)
✅ Files skipped from review due to trivial changes (1)
- packages/traceloop-sdk/recordings/Test-AI-SDK-Integration-with-Recording_156038438/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har
🧰 Additional context used
📓 Path-based instructions (2)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/traceloop-sdk/test/ai-sdk-integration.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them. For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
packages/traceloop-sdk/test/ai-sdk-integration.test.ts
🧠 Learnings (4)
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
packages/traceloop-sdk/test/ai-sdk-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
packages/traceloop-sdk/test/ai-sdk-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
packages/traceloop-sdk/test/ai-sdk-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk
Applied to files:
packages/traceloop-sdk/test/ai-sdk-integration.test.ts
🧬 Code graph analysis (1)
packages/traceloop-sdk/test/ai-sdk-integration.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes (17-61)
Actionable comments posted: 0
♻️ Duplicate comments (1)
packages/traceloop-sdk/test/ai-sdk-integration.test.ts (1)
235-239: Force-flush spans before reading the exporter to avoid flaky tests.
Add an explicit flush before calling getFinishedSpans().
Apply:
  assert.ok(result);
  assert.ok(result.text);
- const spans = memoryExporter.getFinishedSpans();
+ await traceloop.forceFlush();
+ const spans = memoryExporter.getFinishedSpans();
🧹 Nitpick comments (3)
packages/traceloop-sdk/test/ai-sdk-integration.test.ts (3)
22-22: Use SpanAttributes consistently (avoid hardcoded attribute keys).
Good import. However, elsewhere in this file assertions still use string literals like "gen_ai.system" and "gen_ai.prompt.0.role". Prefer `${SpanAttributes.LLM_SYSTEM}`, `${SpanAttributes.LLM_PROMPTS}.0.role`, `${SpanAttributes.LLM_COMPLETIONS}.0.content`, etc., for consistency and future-proofing.
Example pattern:
  assert.strictEqual(
    generateTextSpan.attributes[SpanAttributes.LLM_SYSTEM],
    "OpenAI",
  );
  assert.strictEqual(
    generateTextSpan.attributes[`${SpanAttributes.LLM_PROMPTS}.0.role`],
    "user",
  );
239-241: Filter the target span by workflow to ensure you assert on the right one.
Narrow the search using the workflow attribute.
Apply:
- const aiSdkSpan = spans.find((span) =>
-   span.name.startsWith("ai.generateText"),
- );
+ const aiSdkSpan = spans.find(
+   (span) =>
+     span.name.startsWith("ai.generateText") &&
+     span.attributes[SpanAttributes.TRACELOOP_WORKFLOW_NAME] ===
+       "test_transformations_workflow",
+ );
270-276: Strengthen validation: assert output text equals result.text.
This tightens the contract for output message content.
Apply:
  assert.strictEqual(outputMessages[0].parts[0].type, "text");
  assert.ok(outputMessages[0].parts[0].content);
  assert.ok(typeof outputMessages[0].parts[0].content === "string");
+ assert.strictEqual(outputMessages[0].parts[0].content, result.text);
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
- packages/traceloop-sdk/test/ai-sdk-integration.test.ts (2 hunks)
Important
Add structured input/output message attributes for AI spans to capture chat prompts and responses in a standardized format.
- Add gen_ai.input.messages and gen_ai.output.messages attributes in SemanticAttributes.ts for structured AI span messages.
- Update transformResponseText, transformResponseObject, transformResponseToolCalls, and transformPrompts in ai-sdk-transformations.ts to handle the new message attributes.
- Extend ai-sdk-transformations.test.ts with tests for the new message attributes.
- Extend ai-sdk-integration.test.ts to verify the LLM_INPUT_MESSAGES and LLM_OUTPUT_MESSAGES attributes.
- Add an integration-test recording fixture, recording.har.
This description was created by for 49a4534. You can customize this summary. It will automatically update as commits are pushed.
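The structured payloads this PR attaches to spans can be sketched as plain JSON before serialization. This is an assumption-laden illustration: the `GenAiMessage`/`MessagePart` type names and exact field layout are hypothetical stand-ins for whatever the SDK transformations emit; only the two attribute keys come from the PR itself.

```typescript
// Sketch of the structured message payloads described above, carried
// on a span as JSON-string attributes. Field names are assumed, not
// the SDK's exact output shape.
type MessagePart =
  | { type: "text"; content: string }
  | { type: "tool_call"; name: string; arguments: string };

interface GenAiMessage {
  role: "user" | "assistant" | "system" | "tool";
  parts: MessagePart[];
}

const inputMessages: GenAiMessage[] = [
  {
    role: "user",
    parts: [{ type: "text", content: "What is the capital of France?" }],
  },
];

const outputMessages: GenAiMessage[] = [
  { role: "assistant", parts: [{ type: "text", content: "Paris." }] },
];

// Each attribute key holds one JSON-serialized message array.
const attributes = {
  "gen_ai.input.messages": JSON.stringify(inputMessages),
  "gen_ai.output.messages": JSON.stringify(outputMessages),
};

console.log(JSON.parse(attributes["gen_ai.output.messages"])[0].role); // prints "assistant"
```

Serializing to a single string per key keeps the attributes within OpenTelemetry's primitive attribute-value types while still letting backends recover the full role/parts structure with one JSON.parse.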
Summary by CodeRabbit
New Features
Tests
Chores