
Conversation

Contributor

@nina-kollman nina-kollman commented Sep 17, 2025

Important

Add structured input/output message attributes for AI spans to capture chat prompts and responses in a standardized format.

  • Features:
    • Added gen_ai.input.messages and gen_ai.output.messages attributes in SemanticAttributes.ts for structured AI span messages.
    • Supports multi-turn conversations and mixed content, preserving roles and text content.
  • Transformations:
    • Updated transformResponseText, transformResponseObject, transformResponseToolCalls, and transformPrompts in ai-sdk-transformations.ts to handle new message attributes.
  • Tests:
    • Added unit tests in ai-sdk-transformations.test.ts for new message attributes.
    • Added integration test in ai-sdk-integration.test.ts to verify LLM_INPUT_MESSAGES and LLM_OUTPUT_MESSAGES attributes.
    • Included HAR recording fixture for chat completion scenarios in recording.har.

This description was created by Ellipsis for 49a4534. You can customize this summary. It will automatically update as commits are pushed.


Summary by CodeRabbit

  • New Features

    • Added standardized LLM input/output message attributes (gen_ai.input.messages, gen_ai.output.messages) to capture chat prompts, assistant replies, and tool-call parts.
    • Populates these attributes for prompts, completions, objects, and tool-call responses, preserving roles and text parts for multi-turn conversations.
  • Tests

    • Added unit and integration tests validating input/output message attributes across text, object, and tool-call scenarios.
  • Chores

    • Added a HAR recording fixture to support integration testing.


coderabbitai bot commented Sep 17, 2025

Walkthrough

Adds two new span attributes (gen_ai.input.messages, gen_ai.output.messages), updates AI SDK transformations to populate them from prompts/responses (text, object, tool calls), extends unit and integration tests to assert these attributes, and adds a HAR recording fixture for the integration test.
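As a rough illustration of the attribute shape this walkthrough describes (the Part/Message type names and the example payloads below are assumptions inferred from the summary, not code from the PR):

```typescript
// Illustrative only: Part and Message are assumed shapes inferred from the
// walkthrough, not types exported by the PR.
type Part =
  | { type: "text"; content: string }
  | { type: "tool_call"; tool_call: { name: string; arguments: string } };

type Message = { role: "system" | "user" | "assistant"; parts: Part[] };

const inputMessages: Message[] = [
  {
    role: "user",
    parts: [{ type: "text", content: "What is the weather in Paris?" }],
  },
];

const outputMessages: Message[] = [
  {
    role: "assistant",
    parts: [
      {
        type: "tool_call",
        tool_call: { name: "getWeather", arguments: '{"city":"Paris"}' },
      },
    ],
  },
];

// Both attributes hold JSON strings, not nested objects, since OpenTelemetry
// span attributes are limited to primitives and arrays of primitives.
const attributes: Record<string, string> = {
  "gen_ai.input.messages": JSON.stringify(inputMessages),
  "gen_ai.output.messages": JSON.stringify(outputMessages),
};

console.log(Object.keys(attributes));
```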

Changes

Cohort / File(s): Summary

  • Semantic Conventions (packages/ai-semantic-conventions/src/SemanticAttributes.ts): Added LLM_INPUT_MESSAGES: "gen_ai.input.messages" and LLM_OUTPUT_MESSAGES: "gen_ai.output.messages" to the exported SpanAttributes.
  • AI SDK Transformations (packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts): Introduced constants for types/roles; updated response transforms to produce gen_ai.output.messages (assistant text, object, and tool_call parts) and remove legacy keys; updated prompt transforms to produce gen_ai.input.messages (a stringified JSON array of role/parts) and remove legacy prompt keys; preserved existing LLM_PROMPTS/LLM_COMPLETIONS mappings.
  • Unit Tests (packages/traceloop-sdk/test/ai-sdk-transformations.test.ts): Added comprehensive tests for mapping prompts → gen_ai.input.messages and responses → gen_ai.output.messages (text, object, tool calls, multi-turn). Note: the new test suite block appears duplicated in the diff.
  • Integration Test (packages/traceloop-sdk/test/ai-sdk-integration.test.ts): Added an integration test asserting that LLM_INPUT_MESSAGES and LLM_OUTPUT_MESSAGES are present and correctly structured on the ai.generateText span.
  • Recording Fixture (packages/traceloop-sdk/recordings/.../recording.har): Added a HAR recording capturing an OpenAI chat completion used by the integration test.
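A minimal sketch of the prompt-side mapping the transformations cohort describes; transformPromptsSketch is an illustrative stand-in for the real transformPrompts, which handles more input shapes:

```typescript
// Simplified sketch of the prompt transformation; the "ai.prompt.messages"
// attribute key mirrors the AI SDK convention, and transformPromptsSketch is
// an illustrative name, not the PR's transformPrompts.
type Attributes = Record<string, string>;

function transformPromptsSketch(attributes: Attributes): void {
  const raw = attributes["ai.prompt.messages"];
  if (!raw) return;
  try {
    const messages: { role: string; content: unknown }[] = JSON.parse(raw);
    const inputMessages = messages.map((m) => ({
      role: m.role,
      parts: [
        {
          type: "text",
          content:
            typeof m.content === "string"
              ? m.content
              : JSON.stringify(m.content),
        },
      ],
    }));
    attributes["gen_ai.input.messages"] = JSON.stringify(inputMessages);
    // Legacy key removed, matching the "remove legacy prompt keys" note.
    delete attributes["ai.prompt.messages"];
  } catch {
    // Ignore parsing errors, mirroring the defensive style described above
  }
}

const attrs: Attributes = {
  "ai.prompt.messages": JSON.stringify([{ role: "user", content: "Hi" }]),
};
transformPromptsSketch(attrs);
```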

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant App as Application
  participant SDK as AI SDK
  participant TX as Transformations
  participant OT as Tracing/Span

  App->>SDK: generateText({ messages / prompt })
  SDK->>TX: emit prompt data for transform
  rect rgba(220,235,255,0.4)
    note right of TX: Input processing
    TX->>TX: parse messages/prompts → parts (type/text)
    TX->>OT: set gen_ai.input.messages (JSON string)
    TX->>OT: set LLM_PROMPTS.* and roles
  end

  SDK-->>App: provider returns response
  SDK->>TX: emit response data for transform
  rect rgba(220,255,220,0.4)
    note right of TX: Output processing
    TX->>TX: normalize response → assistant parts (text | object | tool_call)
    TX->>OT: set gen_ai.output.messages (JSON string)
    TX->>OT: set LLM_COMPLETIONS.* and tool_calls
  end

  TX-->>SDK: updated attributes
  SDK-->>App: span with new attributes

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested reviewers

  • avivhalfon

Poem

I hop through prompts and messages bold,
I tuck their parts in JSON fold.
Inputs, outputs, tools that chime,
I log them neat, one message time.
Hooray — traced tales in tidy rhyme 🐇✨

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Check name / Status / Explanation

  • Description Check: ✅ Passed. Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title Check: ✅ Passed. The title succinctly and accurately summarizes the primary change (adding gen.ai input and output message attributes), matches the code and test updates in the PR, and remains concise and scoped (feat(vercel):).
  • Docstring Coverage: ✅ Passed. No functions found in the changes; docstring coverage check skipped.
✨ Finishing touches
  • 📝 Generate Docstrings
🧪 Generate unit tests
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch nk/input_messages_2

Comment @coderabbitai help to get the list of available commands and usage tips.

@nina-kollman nina-kollman changed the title Nk/input messages 2 feat(vercel): add gen.ai.input.messages + gen.ai.output.messages Sep 17, 2025
@nina-kollman nina-kollman marked this pull request as ready for review September 17, 2025 11:40
Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed everything up to ae63671 in 3 minutes and 15 seconds.
  • Reviewed 931 lines of code in 5 files
  • Skipped 0 files when reviewing.
  • Skipped posting 7 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:398
  • Draft comment:
    Consider adding a test case where both 'ai.prompt.messages' and 'ai.prompt' are provided to confirm correct precedence and that only one transformation applies.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:863
  • Draft comment:
    The tests for token conversion properly handle string values; consider adding edge cases for non-numeric or malformed token inputs to ensure robustness.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
3. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:408
  • Draft comment:
    In the test for unescaping JSON escape sequences, verify that the expected unescaped string exactly matches the intended formatting. Consider clarifying expected line breaks to avoid potential discrepancies.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
4. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:360
  • Draft comment:
    The test 'should preserve mixed content arrays' preserves the original JSON when mixed types are present. Confirm if this behavior is intentional or if non-text items should be filtered out.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
5. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:1260
  • Draft comment:
    In the tool calls transformation tests, consider adding an extra check that validates the ordering of transformed tools, especially when the toolCalls array is empty or contains additional unexpected fields.
  • Reason this comment was not posted:
    Comment looked like it was already resolved.
6. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:1123
  • Draft comment:
    Overall, the tests effectively verify that original AI SDK attributes are removed after transformation. Consider adding inline comments within complex test cases to delineate setup, action, and verification sections for improved readability.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
7. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:1538
  • Draft comment:
    The span transformation tests (using transformAiSdkSpan) are comprehensive. Ensure that transformation functions handle spans with unexpected or extra attributes without side effects.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment is purely informative and suggests ensuring that transformation functions handle spans with unexpected or extra attributes without side effects. This falls under the category of asking the PR author to ensure behavior is intended or tested, which is against the rules.

Workflow ID: wflow_Yzg0kVakXoTeVSHe

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (8)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)

25-26: New attributes look good; consider documenting JSON shape.

Add a brief comment indicating these are JSON-serialized arrays of messages with { role, parts: [{ type, content | tool_call }] } to guide instrumentation authors and keep consistency.

   LLM_COMPLETIONS: "gen_ai.completion",
+  // JSON string: [{ role: "user"|"assistant"|"system", parts: [{ type: "text", content: string }] }]
   LLM_INPUT_MESSAGES: "gen_ai.input.messages",
+  // JSON string: [{ role: "assistant", parts: [{ type: "text"|"tool_call", content?: string, tool_call?: { name: string, arguments: string } }] }]
   LLM_OUTPUT_MESSAGES: "gen_ai.output.messages",
packages/instrumentation-openai/test/recordings/Test-OpenAI-instrumentation_1770406427/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har (1)

197-205: Redact cookies in recordings to avoid committing transient identifiers.

The HAR stores set-cookie values. Recommend stripping set-cookie and response cookies before persisting to disk to reduce churn and avoid leaking identifiers.

Outside this HAR, extend the Polly beforePersist hook to remove response cookies:

 server.any().on("beforePersist", (_req, recording) => {
   recording.request.headers = recording.request.headers.filter(
     ({ name }) => name !== "authorization",
   );
+  // Redact response cookies and Set-Cookie headers
+  if (Array.isArray(recording.response?.headers)) {
+    recording.response.headers = recording.response.headers.filter(
+      ({ name }) => name?.toLowerCase() !== "set-cookie",
+    );
+  }
+  if (Array.isArray(recording.response?.cookies)) {
+    recording.response.cookies = [];
+  }
 });
packages/instrumentation-openai/test/instrumentation.test.ts (2)

27-67: Test helper: be resilient to missing roles and iterate by content for outputs.

Some instrumentations may not set ${LLM_COMPLETIONS}.n.role. Iterate using .content presence and default role to "assistant" for outputs. Also, place imports before helper for readability.

-// Minimal transformation function to test LLM_INPUT_MESSAGES and LLM_OUTPUT_MESSAGES
-const transformToStandardFormat = (attributes: any) => {
+// Minimal transformation function to test LLM_INPUT_MESSAGES and LLM_OUTPUT_MESSAGES
+const transformToStandardFormat = (attributes: any) => {
   // Transform prompts to LLM_INPUT_MESSAGES
   const inputMessages = [];
   let i = 0;
   while (attributes[`${SpanAttributes.LLM_PROMPTS}.${i}.role`]) {
     const role = attributes[`${SpanAttributes.LLM_PROMPTS}.${i}.role`];
     const content = attributes[`${SpanAttributes.LLM_PROMPTS}.${i}.content`];
     if (role && content) {
       inputMessages.push({
         role,
         parts: [{ type: "text", content }],
       });
     }
     i++;
   }
   if (inputMessages.length > 0) {
     attributes[SpanAttributes.LLM_INPUT_MESSAGES] =
       JSON.stringify(inputMessages);
   }
 
   // Transform completions to LLM_OUTPUT_MESSAGES
   const outputMessages = [];
   let j = 0;
-  while (attributes[`${SpanAttributes.LLM_COMPLETIONS}.${j}.role`]) {
-    const role = attributes[`${SpanAttributes.LLM_COMPLETIONS}.${j}.role`];
-    const content =
-      attributes[`${SpanAttributes.LLM_COMPLETIONS}.${j}.content`];
-    if (role && content) {
+  while (
+    attributes[`${SpanAttributes.LLM_COMPLETIONS}.${j}.content`] !== undefined
+  ) {
+    const role =
+      attributes[`${SpanAttributes.LLM_COMPLETIONS}.${j}.role`] || "assistant";
+    const content = attributes[`${SpanAttributes.LLM_COMPLETIONS}.${j}.content`];
+    if (content) {
       outputMessages.push({
         role,
         parts: [{ type: "text", content }],
       });
     }
     j++;
   }
   if (outputMessages.length > 0) {
     attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] =
       JSON.stringify(outputMessages);
   }
 };

923-971: Rename test to reflect that transformation, not instrumentation, creates the attributes.

Title implies the instrumentation sets these directly, but the helper populates them. Clarify to avoid confusion about coverage.

-it("should set LLM_INPUT_MESSAGES and LLM_OUTPUT_MESSAGES attributes for chat completions", async () => {
+it("should derive LLM_INPUT_MESSAGES and LLM_OUTPUT_MESSAGES from prompts/completions for chat completions", async () => {
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (1)

1183-1536: Good coverage for gen_ai input/output messages. Consider adding ai.prompt variant.

Tests look solid for messages, object, and tool calls. Add a small case ensuring ai.prompt (single prompt) also populates gen_ai.input.messages.

+    it("should create gen_ai.input.messages for single ai.prompt", () => {
+      const attributes: any = {
+        "ai.prompt": JSON.stringify({ prompt: "Single prompt case" }),
+      };
+      transformAiSdkAttributes(attributes);
+      const input = JSON.parse(attributes[SpanAttributes.LLM_INPUT_MESSAGES]);
+      assert.strictEqual(input.length, 1);
+      assert.strictEqual(input[0].role, "user");
+      assert.strictEqual(input[0].parts[0].type, "text");
+      assert.strictEqual(input[0].parts[0].content, "Single prompt case");
+    });
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (3)

63-76: Avoid overwriting existing gen_ai.output.messages when multiple response sources exist.

If both ai.response.text and ai.response.toolCalls are present, later transforms will clobber earlier output messages. Either append or only set when absent.

-    attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] = JSON.stringify([
-      outputMessage,
-    ]);
+    if (!attributes[SpanAttributes.LLM_OUTPUT_MESSAGES]) {
+      attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] = JSON.stringify([
+        outputMessage,
+      ]);
+    }

87-99: Mirror the non-clobbering behavior for object responses.

Same rationale as text path.

-    attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] = JSON.stringify([
-      outputMessage,
-    ]);
+    if (!attributes[SpanAttributes.LLM_OUTPUT_MESSAGES]) {
+      attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] = JSON.stringify([
+        outputMessage,
+      ]);
+    }

111-142: Handle non-string toolCalls input and merge with existing output messages.

AI SDKs may store ai.response.toolCalls as an array. Also, if an output message already exists (e.g., from text), merge tool_call parts instead of replacing.

-  if (AI_RESPONSE_TOOL_CALLS in attributes) {
+  if (AI_RESPONSE_TOOL_CALLS in attributes) {
     try {
-      const toolCalls = JSON.parse(
-        attributes[AI_RESPONSE_TOOL_CALLS] as string,
-      );
+      const raw = attributes[AI_RESPONSE_TOOL_CALLS];
+      const toolCalls = Array.isArray(raw) ? raw : JSON.parse(raw as string);
 
       attributes[`${SpanAttributes.LLM_COMPLETIONS}.0.role`] = ROLE_ASSISTANT;
 
       const toolCallParts: any[] = [];
       toolCalls.forEach((toolCall: any, index: number) => {
         if (toolCall.toolCallType === "function") {
           attributes[
             `${SpanAttributes.LLM_COMPLETIONS}.0.tool_calls.${index}.name`
           ] = toolCall.toolName;
           attributes[
             `${SpanAttributes.LLM_COMPLETIONS}.0.tool_calls.${index}.arguments`
           ] = toolCall.args;
 
           toolCallParts.push({
             type: TYPE_TOOL_CALL,
             tool_call: {
               name: toolCall.toolName,
               arguments: toolCall.args,
             },
           });
         }
       });
 
-      if (toolCallParts.length > 0) {
-        const outputMessage = {
-          role: ROLE_ASSISTANT,
-          parts: toolCallParts,
-        };
-        attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] = JSON.stringify([
-          outputMessage,
-        ]);
-      }
+      if (toolCallParts.length > 0) {
+        const outputMessage = { role: ROLE_ASSISTANT, parts: toolCallParts };
+        const existing = attributes[SpanAttributes.LLM_OUTPUT_MESSAGES];
+        if (existing) {
+          try {
+            const arr = JSON.parse(existing);
+            if (Array.isArray(arr) && arr[0]?.role === ROLE_ASSISTANT) {
+              arr[0].parts = [...(arr[0].parts || []), ...toolCallParts];
+              attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] = JSON.stringify(arr);
+            }
+          } catch {
+            attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] = JSON.stringify([outputMessage]);
+          }
+        } else {
+          attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] = JSON.stringify([outputMessage]);
+        }
+      }
 
       delete attributes[AI_RESPONSE_TOOL_CALLS];
     } catch {
       // Ignore parsing errors
     }
   }
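Standing alone, the merge behavior this suggestion aims for can be sketched as follows (mergeToolCallParts is an illustrative helper, not code from the PR):

```typescript
// Illustrative helper: append tool_call parts to an existing assistant
// message in gen_ai.output.messages instead of overwriting it.
type Part = {
  type: string;
  content?: string;
  tool_call?: { name: string; arguments: string };
};
type Message = { role: string; parts: Part[] };

function mergeToolCallParts(
  attributes: Record<string, string>,
  parts: Part[],
): void {
  const key = "gen_ai.output.messages";
  const existing = attributes[key];
  if (existing) {
    try {
      const arr: Message[] = JSON.parse(existing);
      if (Array.isArray(arr) && arr[0]?.role === "assistant") {
        arr[0].parts = [...(arr[0].parts ?? []), ...parts];
        attributes[key] = JSON.stringify(arr);
        return;
      }
    } catch {
      // Fall through and replace malformed values
    }
  }
  attributes[key] = JSON.stringify([{ role: "assistant", parts }]);
}

const attrs: Record<string, string> = {
  "gen_ai.output.messages": JSON.stringify([
    {
      role: "assistant",
      parts: [{ type: "text", content: "Checking weather" }],
    },
  ]),
};
mergeToolCallParts(attrs, [
  { type: "tool_call", tool_call: { name: "getWeather", arguments: "{}" } },
]);
```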
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 768ad76 and ae63671.

📒 Files selected for processing (5)
  • packages/ai-semantic-conventions/src/SemanticAttributes.ts (1 hunks)
  • packages/instrumentation-openai/test/instrumentation.test.ts (2 hunks)
  • packages/instrumentation-openai/test/recordings/Test-OpenAI-instrumentation_1770406427/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har (1 hunks)
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (10 hunks)
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (6)
packages/ai-semantic-conventions/src/SemanticAttributes.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts

Files:

  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
packages/instrumentation-*/**

📄 CodeRabbit inference engine (CLAUDE.md)

Place each provider integration in its own package under packages/instrumentation-[provider]/

Files:

  • packages/instrumentation-openai/test/recordings/Test-OpenAI-instrumentation_1770406427/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har
  • packages/instrumentation-openai/test/instrumentation.test.ts
**/recordings/**

📄 CodeRabbit inference engine (CLAUDE.md)

Store HTTP interaction recordings for tests under recordings/ directories for Polly.js replay

Files:

  • packages/instrumentation-openai/test/recordings/Test-OpenAI-instrumentation_1770406427/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings

Files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/instrumentation-openai/test/instrumentation.test.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk

Files:

  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/instrumentation-*/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

packages/instrumentation-*/**/*.{ts,tsx}: Instrumentation classes must extend InstrumentationBase and register hooks using InstrumentationModuleDefinition
Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Instrumentations must extract request/response data and token usage from wrapped calls
Instrumentations must capture and record errors appropriately
Do not implement anonymous telemetry collection in instrumentation packages; telemetry is collected only in the SDK

Files:

  • packages/instrumentation-openai/test/instrumentation.test.ts
🧠 Learnings (5)
📓 Common learnings
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts

Applied to files:

  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings

Applied to files:

  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
  • packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
  • packages/instrumentation-openai/test/instrumentation.test.ts
  • packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap

Applied to files:

  • packages/ai-semantic-conventions/src/SemanticAttributes.ts
  • packages/instrumentation-openai/test/instrumentation.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk

Applied to files:

  • packages/instrumentation-openai/test/instrumentation.test.ts
🧬 Code graph analysis (3)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
  • transformAiSdkAttributes (370-382)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (17-61)
packages/instrumentation-openai/test/instrumentation.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (17-61)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (17-61)
🔇 Additional comments (4)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (4)

22-26: Replacing literals with constants is good.

Using TYPE_* and ROLE_* improves consistency and reduces typos.


154-158: Content normalization via TYPE_TEXT checks looks solid.

Good defensive handling of arrays/objects/JSON strings and preserving non-text as JSON.

Also applies to: 169-169, 183-183


262-287: Confirm intent: input messages drop non-text parts.

gen_ai.input.messages flattens mixed content to text-only parts. If preserving non-text (e.g., images) is desired later, we’ll need to carry typed parts here too. For now this aligns with tests.


301-314: Nice touch: single ai.prompt also emits standardized input messages.

Keeps LLM_PROMPTS and gen_ai.input.messages in sync.

Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed 5cfd993 in 53 seconds.
  • Reviewed 179 lines of code in 2 files
  • Skipped 0 files when reviewing.
  • Skipped posting 2 draft comments. View those below.
1. packages/instrumentation-openai/test/instrumentation.test.ts:24
  • Draft comment:
    Removed transformToStandardFormat function; ensure the new standardized attributes are fully covered by integration tests.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment is asking the PR author to ensure that the new standardized attributes are fully covered by integration tests. This falls under the rule of not asking the author to ensure that the change is tested, which is not allowed.
2. packages/traceloop-sdk/test/ai-sdk-integration.test.ts:239
  • Draft comment:
    Consider asserting the exact span name instead of using a prefix match (startsWith) for more precise validation.
  • Reason this comment was not posted:
    Confidence changes required: 50% <= threshold 50% None

Workflow ID: wflow_fDigO34HBJONhC4S



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (3)
packages/traceloop-sdk/test/ai-sdk-integration.test.ts (3)

238-244: Make span selection deterministic.

Filter by workflow name to avoid grabbing the wrong ai.generateText span when multiple exist.

-  const aiSdkSpan = spans.find((span) => span.name.startsWith("ai.generateText"));
+  const aiSdkSpan = spans.find(
+    (span) =>
+      span.name.startsWith("ai.generateText") &&
+      span.attributes["traceloop.workflow.name"] === "test_transformations_workflow",
+  );

251-258: Harden assertions before indexing parts[0].

Assert non‑empty parts arrays to avoid undefined access if the shape changes.

   assert.strictEqual(inputMessages[0].role, "user");
   assert.ok(Array.isArray(inputMessages[0].parts));
+  assert.ok(inputMessages[0].parts.length > 0);
   assert.strictEqual(inputMessages[0].parts[0].type, "text");

268-274: Mirror safety check for output parts.

Same reasoning for assistant parts array.

   assert.strictEqual(outputMessages[0].role, "assistant");
   assert.ok(Array.isArray(outputMessages[0].parts));
+  assert.ok(outputMessages[0].parts.length > 0);
   assert.strictEqual(outputMessages[0].parts[0].type, "text");
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between ae63671 and 5cfd993.

📒 Files selected for processing (2)
  • packages/traceloop-sdk/recordings/AI-SDK-Transformations_1770406427/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har (1 hunks)
  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts (2 hunks)
✅ Files skipped from review due to trivial changes (1)
  • packages/traceloop-sdk/recordings/AI-SDK-Transformations_1770406427/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har
🧰 Additional context used
📓 Path-based instructions (2)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings

Files:

  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk

Files:

  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
🧠 Learnings (5)
📓 Common learnings
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
🧬 Code graph analysis (1)
packages/traceloop-sdk/test/ai-sdk-integration.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (17-61)
🪛 GitHub Actions: CI
packages/traceloop-sdk/test/ai-sdk-integration.test.ts

[error] 1-1: Prettier formatting check failed (exit code 1). Code style issues found in the file. Run 'pnpm prettier --write' to fix.

🔇 Additional comments (2)
packages/traceloop-sdk/test/ai-sdk-integration.test.ts (2)

22-22: Good: using semantic attribute constants.

Importing SpanAttributes prevents string drift and aligns with conventions.
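The structure behind these constants can be sketched as follows. This is a minimal stand-in, not the SDK: the attribute keys are hardcoded here only for illustration (production code should reference the `SpanAttributes` constants), and the message shape assumes the structure asserted later in this test.

```typescript
// Stand-in types mirroring the message shape that
// gen_ai.input.messages / gen_ai.output.messages are expected to carry.
interface MessagePart {
  type: string;
  content: string;
}

interface Message {
  role: string;
  parts: MessagePart[];
}

// Simulated span attributes: both values are JSON-serialized message arrays.
const attributes: Record<string, string> = {
  "gen_ai.input.messages": JSON.stringify([
    {
      role: "user",
      parts: [{ type: "text", content: "What is 2+2? Give a brief answer." }],
    },
  ]),
  "gen_ai.output.messages": JSON.stringify([
    { role: "assistant", parts: [{ type: "text", content: "4" }] },
  ]),
};

// Consumers parse the JSON string back into structured messages.
const inputMessages: Message[] = JSON.parse(
  attributes["gen_ai.input.messages"],
);
const outputMessages: Message[] = JSON.parse(
  attributes["gen_ai.output.messages"],
);

console.log(inputMessages[0].role); // "user"
console.log(outputMessages[0].parts[0].content); // "4"
```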


221-274: Fix CI: Prettier formatting failed — run Prettier and commit formatting fixes

File: packages/traceloop-sdk/test/ai-sdk-integration.test.ts (lines 221–274).

Verification couldn't run in the sandbox (pnpm errored: no package.json — repo wasn't cloned). Run pnpm prettier --write packages/traceloop-sdk/test/ai-sdk-integration.test.ts and pnpm prettier --check . locally and commit the formatting fixes, or re-run verification without skip_cloning.

Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed a68d951 in 45 seconds. Click for details.
  • Reviewed 457 lines of code in 3 files
  • Skipped 0 files when reviewing.
  • Skipped posting 2 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/recordings/Test-AI-SDK-Integration-with-Recording_156038438/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har:29
  • Draft comment:
    Ensure the HAR recording’s 'postData' structure (using the 'input' array with 'input_text') correctly simulates the expected data format for generating LLM_INPUT_MESSAGES. Verify consistency with transformation logic.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment is asking the PR author to verify consistency and ensure correctness, which violates the rule against asking the author to confirm or ensure behavior. It doesn't provide a specific suggestion or point out a clear issue.
2. packages/traceloop-sdk/test/ai-sdk-integration.test.ts:93
  • Draft comment:
    Updated prompt to 'What is 2+2? Give a brief answer.' is consistently applied in the test validations for both LLM_INPUT_MESSAGES and LLM_OUTPUT_MESSAGES. The test correctly parses and asserts the expected structure.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.

Workflow ID: wflow_Y9wVFENp3aOvWuFW

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.

Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed 49a4534 in 46 seconds. Click for details.
  • Reviewed 15 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 1 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/test/ai-sdk-integration.test.ts:239
  • Draft comment:
    The change appears to be a reformatting of the arrow function callback. Ensure this multi-line style (with a trailing comma) is consistent with the project's style guidelines.
  • Reason this comment was not posted:
    Confidence changes required: 0% <= threshold 50% None

Workflow ID: wflow_uq0hiouo56VAKOse

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
packages/traceloop-sdk/test/ai-sdk-integration.test.ts (1)

238-241: Flush before reading exporter to prevent flakiness

Force-flush spans before calling memoryExporter.getFinishedSpans(). This race has been observed earlier in this suite.

     assert.ok(result);
     assert.ok(result.text);
 
-    const spans = memoryExporter.getFinishedSpans();
+    // Ensure all spans are exported before assertions
+    await traceloop.forceFlush();
+    const spans = memoryExporter.getFinishedSpans();
     const aiSdkSpan = spans.find((span) => span.name.startsWith("ai.generateText"));
🧹 Nitpick comments (3)
packages/traceloop-sdk/test/ai-sdk-integration.test.ts (3)

22-22: Good: using semantic attribute constants

Importing SpanAttributes from @traceloop/ai-semantic-conventions is correct. Consider using these constants consistently (e.g., ${SpanAttributes.LLM_PROMPTS}.0.role) instead of hardcoded strings elsewhere in this file to avoid drift.


239-241: Tighten span selection to the current workflow

Filter by traceloop.workflow.name to avoid accidental matches if multiple ai.generateText spans exist.

-    const aiSdkSpan = spans.find((span) => span.name.startsWith("ai.generateText"));
+    const aiSdkSpan = spans.find(
+      (span) =>
+        span.name.startsWith("ai.generateText") &&
+        span.attributes["traceloop.workflow.name"] === "test_transformations_workflow",
+    );

244-247: Assert attribute type before JSON.parse for clearer failures

Add a quick typeof check so parse errors surface with a precise message.

-    assert.ok(aiSdkSpan.attributes[SpanAttributes.LLM_INPUT_MESSAGES]);
+    assert.ok(aiSdkSpan.attributes[SpanAttributes.LLM_INPUT_MESSAGES]);
+    assert.strictEqual(typeof aiSdkSpan.attributes[SpanAttributes.LLM_INPUT_MESSAGES], "string");
     const inputMessages = JSON.parse(
       aiSdkSpan.attributes[SpanAttributes.LLM_INPUT_MESSAGES] as string,
     );
@@
-    assert.ok(aiSdkSpan.attributes[SpanAttributes.LLM_OUTPUT_MESSAGES]);
+    assert.ok(aiSdkSpan.attributes[SpanAttributes.LLM_OUTPUT_MESSAGES]);
+    assert.strictEqual(typeof aiSdkSpan.attributes[SpanAttributes.LLM_OUTPUT_MESSAGES], "string");
     const outputMessages = JSON.parse(
       aiSdkSpan.attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] as string,
     );

Also applies to: 262-264
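
A hypothetical helper (not part of the SDK) illustrating this suggestion: verify the attribute is a string before calling `JSON.parse`, so a missing or mistyped attribute fails with a precise message instead of a generic `SyntaxError` or `undefined` error.

```typescript
// Guarded JSON-attribute accessor: fails fast with a descriptive
// TypeError when the attribute is absent or not a string.
function parseJsonAttribute(
  attrs: Record<string, unknown>,
  key: string,
): unknown {
  const value = attrs[key];
  if (typeof value !== "string") {
    throw new TypeError(
      `Expected attribute "${key}" to be a JSON string, got ${typeof value}`,
    );
  }
  return JSON.parse(value);
}

// Happy path: a well-formed serialized message array.
const messages = parseJsonAttribute(
  { "gen_ai.output.messages": '[{"role":"assistant","parts":[]}]' },
  "gen_ai.output.messages",
) as Array<{ role: string }>;

console.log(messages[0].role); // "assistant"
```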

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 5cfd993 and a68d951.

📒 Files selected for processing (2)
  • packages/traceloop-sdk/recordings/Test-AI-SDK-Integration-with-Recording_156038438/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har (1 hunks)
  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts (2 hunks)
✅ Files skipped from review due to trivial changes (1)
  • packages/traceloop-sdk/recordings/Test-AI-SDK-Integration-with-Recording_156038438/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har
🧰 Additional context used
📓 Path-based instructions (2)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings

Files:

  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk

Files:

  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
🧠 Learnings (4)
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
🧬 Code graph analysis (1)
packages/traceloop-sdk/test/ai-sdk-integration.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (17-61)


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
packages/traceloop-sdk/test/ai-sdk-integration.test.ts (1)

235-239: Force-flush spans before reading the exporter to avoid flaky tests.

Add an explicit flush before calling getFinishedSpans().

Apply:

     assert.ok(result);
     assert.ok(result.text);
 
-    const spans = memoryExporter.getFinishedSpans();
+    await traceloop.forceFlush();
+    const spans = memoryExporter.getFinishedSpans();
🧹 Nitpick comments (3)
packages/traceloop-sdk/test/ai-sdk-integration.test.ts (3)

22-22: Use SpanAttributes consistently (avoid hardcoded attribute keys).

Good import. However, elsewhere in this file assertions still use string literals like "gen_ai.system" and "gen_ai.prompt.0.role". Prefer ${SpanAttributes.LLM_SYSTEM}, ${SpanAttributes.LLM_PROMPTS}.0.role, ${SpanAttributes.LLM_COMPLETIONS}.0.content, etc., for consistency and future-proofing.

Example pattern:

assert.strictEqual(
  generateTextSpan.attributes[SpanAttributes.LLM_SYSTEM],
  "OpenAI",
);
assert.strictEqual(
  generateTextSpan.attributes[`${SpanAttributes.LLM_PROMPTS}.0.role`],
  "user",
);

239-241: Filter the target span by workflow to ensure you assert on the right one.

Narrow the search using the workflow attribute.

Apply:

-    const aiSdkSpan = spans.find((span) =>
-      span.name.startsWith("ai.generateText"),
-    );
+    const aiSdkSpan = spans.find(
+      (span) =>
+        span.name.startsWith("ai.generateText") &&
+        span.attributes[SpanAttributes.TRACELOOP_WORKFLOW_NAME] ===
+          "test_transformations_workflow",
+    );

270-276: Strengthen validation: assert output text equals result.text.

This tightens the contract for output message content.

Apply:

     assert.strictEqual(outputMessages[0].parts[0].type, "text");
-    assert.ok(outputMessages[0].parts[0].content);
-    assert.ok(typeof outputMessages[0].parts[0].content === "string");
+    assert.ok(outputMessages[0].parts[0].content);
+    assert.ok(typeof outputMessages[0].parts[0].content === "string");
+    assert.strictEqual(outputMessages[0].parts[0].content, result.text);
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between a68d951 and 49a4534.

📒 Files selected for processing (1)
  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings

Files:

  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

packages/traceloop-sdk/**/*.{ts,tsx}: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk

Files:

  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
🧠 Learnings (4)
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk

Applied to files:

  • packages/traceloop-sdk/test/ai-sdk-integration.test.ts
🧬 Code graph analysis (1)
packages/traceloop-sdk/test/ai-sdk-integration.test.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
  • SpanAttributes (17-61)

@galkleinman galkleinman merged commit 4d9f995 into main Sep 17, 2025
8 checks passed