
Conversation

@ataha322
Contributor

@ataha322 ataha322 commented Nov 21, 2025

  • openai.request.service_tier is captured
  • openai.response.service_tier is captured
  • Supported APIs: Responses, Chat
  • Corresponding tests are added
[screenshots attached]
  • I have added tests that cover my changes.
  • If adding a new instrumentation or changing an existing one, I've added screenshots from some observability platform showing the change.
  • PR name follows conventional commits format: feat(instrumentation): ... or fix(instrumentation): ....
  • [n/a] (If applicable) I have updated the documentation accordingly.

Summary by CodeRabbit

  • New Features

    • Instrumentation now captures and propagates openai.request.service_tier and openai.response.service_tier on request and response traces for improved observability.
  • Tests

    • Added automated tests and recorded fixtures validating service_tier propagation for chat and response flows (includes new trace cassettes).


@CLAassistant

CLAassistant commented Nov 21, 2025

CLA assistant check
All committers have signed the CLA.

@coderabbitai

coderabbitai bot commented Nov 21, 2025

Walkthrough

Adds propagation of OpenAI service_tier by importing OpenAIAttributes, adding request_service_tier and response_service_tier to TracedData, populating them across sync/async/stream paths, and writing them to span attributes; includes tests and HTTP cassette fixtures validating propagation.
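The propagation described above can be sketched as a minimal, self-contained model. All names here are illustrative stand-ins for the real `TracedData` and span API (the span is modeled as a plain dict), not the actual implementation:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative attribute keys matching the semconv names used in this PR.
OPENAI_REQUEST_SERVICE_TIER = "openai.request.service_tier"
OPENAI_RESPONSE_SERVICE_TIER = "openai.response.service_tier"

@dataclass
class TracedData:
    # New fields added by this PR (types per the review comments).
    request_service_tier: Optional[str] = None
    response_service_tier: Optional[str] = None

def _set_span_attribute(span: dict, name: str, value) -> None:
    # Mirrors the guard quoted later in this thread: skip None/empty values.
    if value is None or value == "":
        return
    span[name] = value

def set_data_attributes(traced: TracedData, span: dict) -> None:
    # Write both service-tier attributes onto the span.
    _set_span_attribute(span, OPENAI_REQUEST_SERVICE_TIER, traced.request_service_tier)
    _set_span_attribute(span, OPENAI_RESPONSE_SERVICE_TIER, traced.response_service_tier)

# Request side is populated from kwargs; response side from the parsed response.
traced = TracedData(request_service_tier="priority")
traced.response_service_tier = "priority"  # e.g. parsed_response.service_tier
span: dict = {}
set_data_attributes(traced, span)
```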

Changes

  • Semconv import — packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py: Import openai_attributes as OpenAIAttributes from opentelemetry.semconv._incubating.attributes.
  • Response wrappers — packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py: Add request_service_tier: Optional[str] and response_service_tier: Optional[str] to TracedData; propagate these fields from kwargs, existing data, and the parsed response across sync, async, streaming, and error paths; set_data_attributes() writes the OPENAI_REQUEST_SERVICE_TIER and OPENAI_RESPONSE_SERVICE_TIER span attributes.
  • Tests (traces) — packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py, packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py: Add tests asserting openai.request.service_tier and openai.response.service_tier are set (example value: "priority").
  • Test cassettes — packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_chat/test_chat_with_service_tier.yaml, packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_responses/test_responses_with_service_tier.yaml: New HTTP interaction fixtures including service_tier: "priority" in request bodies and full response metadata to exercise service_tier propagation.

Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant Client
  participant Instrumentation
  participant TracedData
  participant SpanExporter

  Client->>Instrumentation: call API (kwargs include service_tier)
  Instrumentation->>TracedData: construct traced_data (set request_service_tier from kwargs)
  Instrumentation->>Client: send HTTP request
  Client-->>Instrumentation: HTTP response (parsed, may include service_tier)
  Instrumentation->>TracedData: set response_service_tier from parsed response
  Instrumentation->>SpanExporter: set_data_attributes(traced_data) — write OPENAI_REQUEST/RESPONSE_SERVICE_TIER
  SpanExporter->>Instrumentation: span finished
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Focus review on responses_wrappers.py for all TracedData construction paths (sync, async, streaming, error).
  • Verify correct attribute keys OpenAIAttributes.OPENAI_REQUEST_SERVICE_TIER and OpenAIAttributes.OPENAI_RESPONSE_SERVICE_TIER are used and that values are propagated consistently.
  • Check tests and cassettes for duplicate insertion and correct assertion of attribute names/values.

Suggested reviewers

  • nirga
  • galkleinman

Poem

🐰 I hopped on spans with a tiny cheer,
I tucked the tier into traces near.
Request and response now both align,
Priority shines in every line.
A rabbit’s nibble — instrumentation clear!

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Description Check (✅ Passed): Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): The title 'fix(openai): record service_tier attribute' accurately describes the main change: capturing service_tier attributes in OpenAI spans.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py (1)

136-138: Consider the Literal type constraint for future service tier values.

The Literal["auto", "default", "flex", "scale", "priority"] constraint provides type safety but may cause issues if OpenAI introduces new service_tier values in the future. Consider whether a broader type like Optional[str] might be more maintainable, or add a comment noting that this list should be updated when OpenAI adds new tiers.
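A minimal illustration of that trade-off follows. The `ServiceTierLiteral` / `ServiceTierStr` aliases and `normalize_tier` helper are hypothetical names for this sketch; the actual field type lives in responses_wrappers.py:

```python
from typing import Literal, Optional

# Closed set: static type checkers reject any tier not in this list,
# so the alias must be updated whenever OpenAI adds a new tier.
ServiceTierLiteral = Optional[Literal["auto", "default", "flex", "scale", "priority"]]

# Open alternative: accepts any future tier string without code changes,
# at the cost of no static checking of the value.
ServiceTierStr = Optional[str]

def normalize_tier(value: ServiceTierStr) -> ServiceTierStr:
    # Runtime behavior is identical either way; the constraint only
    # matters for static analysis, not for span attribute values.
    return value
```

At runtime both annotations pass the string through unchanged, which is why the nitpick is purely about maintainability.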

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 2176fca and 5a2f29d.

📒 Files selected for processing (4)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py (3 hunks)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py (8 hunks)
  • packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py (1 hunks)
  • packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.py: Store API keys only in environment variables/secure vaults; never hardcode secrets in code
Use Flake8 for code linting and adhere to its rules

Files:

  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py
  • packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py
  • packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py
🧠 Learnings (1)
📚 Learning: 2025-08-17T15:06:48.109Z
Learnt from: CR
Repo: traceloop/openllmetry PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-17T15:06:48.109Z
Learning: Semantic conventions must follow the OpenTelemetry GenAI specification

Applied to files:

  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py
🧬 Code graph analysis (4)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py (1)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py (1)
  • _set_span_attribute (31-38)
packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py (2)
packages/opentelemetry-instrumentation-openai/tests/conftest.py (2)
  • instrument_legacy (134-149)
  • openai_client (41-42)
packages/traceloop-sdk/traceloop/sdk/utils/in_memory_span_exporter.py (1)
  • get_finished_spans (40-43)
packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py (1)
packages/traceloop-sdk/traceloop/sdk/utils/in_memory_span_exporter.py (2)
  • InMemorySpanExporter (22-61)
  • get_finished_spans (40-43)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py (1)
packages/opentelemetry-instrumentation-alephalpha/opentelemetry/instrumentation/alephalpha/__init__.py (1)
  • _set_span_attribute (63-67)
🪛 Ruff (0.14.5)
packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py

1501-1501: Unused function argument: instrument_legacy

(ARG001)

packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py

34-34: Unused function argument: instrument_legacy

(ARG001)

🔇 Additional comments (9)
packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py (1)

32-47: LGTM! Test coverage for service_tier is appropriate.

The test validates that both openai.request.service_tier and openai.response.service_tier attributes are correctly propagated through the instrumentation for the responses API.

Note: The static analysis hint about instrument_legacy being unused is a false positive—it's a pytest fixture that enables instrumentation for the test.
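The assertion shape described above can be summarized as follows. The helper name and the sample attribute dict are illustrative, not the test's literal code; in the real tests the attributes come from spans captured by an in-memory exporter:

```python
# Semconv attribute keys asserted by the new tests.
REQUEST_KEY = "openai.request.service_tier"
RESPONSE_KEY = "openai.response.service_tier"

def assert_service_tier(attributes: dict, expected: str = "priority") -> None:
    # Both the request-side and response-side attributes must be present
    # and carry the tier that was sent / returned.
    assert attributes[REQUEST_KEY] == expected
    assert attributes[RESPONSE_KEY] == expected

# e.g. with an in-memory exporter: assert_service_tier(spans[-1].attributes)
assert_service_tier({REQUEST_KEY: "priority", RESPONSE_KEY: "priority"})
```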

packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py (1)

1500-1519: LGTM! Service tier test for chat completions is correct.

The test properly validates that service_tier propagation works for chat completions, mirroring the coverage provided in test_responses.py for the responses API.

Note: The static analysis hint about instrument_legacy is a false positive—it's a pytest fixture required for instrumentation.

packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py (3)

145-147: Request service_tier attribute setting is correct.

The implementation properly captures the service_tier parameter from kwargs and sets it as a span attribute using the shared _set_span_attribute helper, which handles None, empty, and NOT_GIVEN values appropriately.


217-221: Response service_tier attribute setting is correct.

The implementation properly extracts service_tier from the response and sets it as a span attribute. The formatting is consistent with other response attributes in this function.


15-15: Verify OpenAIAttributes constants are accessible at runtime.

The OpenTelemetry semantic conventions define OpenAI attributes including openai.request.service_tier and openai.response.service_tier. The dependency opentelemetry-semantic-conventions >= 0.59b0 is properly declared in pyproject.toml, and the code successfully imports OpenAIAttributes alongside GenAIAttributes from the same source package.

However, the exact Python module structure for opentelemetry.semconv._incubating.attributes.openai_attributes and the presence of constants OPENAI_REQUEST_SERVICE_TIER and OPENAI_RESPONSE_SERVICE_TIER could not be confirmed from public documentation. While the constants are used in multiple locations (lines 146, 219 in shared/__init__.py and lines 197-198 in v1/responses_wrappers.py), you should verify that these constants are properly exported and accessible at runtime, particularly given they are part of the incubating (_incubating) module which may still be evolving.
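One hedged way to perform that runtime check is to attempt the import and fall back to the literal semconv strings if the incubating module is absent or reorganized (the helper name is illustrative):

```python
import importlib

def service_tier_attr_keys():
    """Return (request_key, response_key) attribute names, falling back to
    the literal semconv strings if the incubating module is unavailable."""
    try:
        mod = importlib.import_module(
            "opentelemetry.semconv._incubating.attributes.openai_attributes"
        )
        return (mod.OPENAI_REQUEST_SERVICE_TIER, mod.OPENAI_RESPONSE_SERVICE_TIER)
    except (ImportError, AttributeError):
        # The incubating module may move between releases; fall back to the
        # attribute names as defined by the OpenTelemetry GenAI semconv.
        return ("openai.request.service_tier", "openai.response.service_tier")

req_key, resp_key = service_tier_attr_keys()
```

Either branch yields the same attribute strings, so spans stay semconv-compliant even if the constants move.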

packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py (4)

197-198: Service tier span attributes are correctly set.

The implementation properly propagates both request and response service_tier values to span attributes using the appropriate OpenAIAttributes constants and the _set_span_attribute helper.


493-494: Sync path service_tier propagation is correct.

The implementation consistently handles service_tier in both error and normal execution paths:

  • Error path (lines 493-494): Captures request service_tier from kwargs and preserves response service_tier from existing_data
  • Normal path (lines 558-559): Prioritizes existing request service_tier but falls back to kwargs, and captures response service_tier from the parsed_response

Also applies to: 558-559
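The priority order described above amounts to the following, in sketch form. The `existing_data`, `kwargs`, and `parsed_response` names follow the review text; this is a model of the fallback logic, not the wrapper's literal code:

```python
def resolve_request_service_tier(existing_data: dict, kwargs: dict):
    # Normal path: prefer what earlier turns already recorded,
    # then fall back to the current call's kwargs.
    return existing_data.get("request_service_tier") or kwargs.get("service_tier")

def resolve_response_service_tier(existing_data: dict, parsed_response):
    # Error path (no parsed response): preserve whatever existing_data held.
    if parsed_response is None:
        return existing_data.get("response_service_tier")
    # Normal path: read the tier from the parsed response.
    return getattr(parsed_response, "service_tier", None)
```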


635-636: Async path service_tier propagation mirrors sync path correctly.

The async implementation maintains the same logic as the sync path for service_tier handling, ensuring consistency across both execution modes.

Also applies to: 701-702


809-810: ResponseStream initialization handles service_tier appropriately.

The streaming initialization correctly captures the request service_tier from request_kwargs and initializes response_service_tier to None, which will be populated when the complete response is received.

Copy link
Member

@nirga nirga left a comment


Thanks @ataha322 - can you fix the lint and tests?

@ataha322 ataha322 force-pushed the main branch 2 times, most recently from e23bdce to b980863 on November 23, 2025 at 15:13
@ataha322
Contributor Author

Hey @nirga, done


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_responses/test_responses_with_service_tier.yaml (1)

1-110: Cassette captures service_tier correctly; consider scrubbing cookies/IDs if needed

The cassette correctly records a responses call with service_tier: "priority" and no API keys or auth headers. If your cassette policy treats Cloudflare cookies and OpenAI org/project identifiers as sensitive, consider scrubbing those header values in line with your other fixtures.

packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py (2)

40-51: Remove unused Literal import

Literal is imported but never used, which Flake8 flags. It’s safe to drop it from the import list.

```diff
-from opentelemetry.semconv_ai import SpanAttributes
-from opentelemetry.semconv.attributes.error_attributes import ERROR_TYPE
-from opentelemetry.trace import SpanKind, Span, StatusCode, Tracer
-from typing import Any, Optional, Union, Literal
+from opentelemetry.semconv_ai import SpanAttributes
+from opentelemetry.semconv.attributes.error_attributes import ERROR_TYPE
+from opentelemetry.trace import SpanKind, Span, StatusCode, Tracer
+from typing import Any, Optional, Union
```

136-139: Streaming path doesn’t populate openai.response.service_tier yet

The new request_service_tier / response_service_tier fields on TracedData are wired through the sync and async wrappers and ultimately emitted in set_data_attributes, so non‑streaming calls correctly get both openai.request.service_tier and openai.response.service_tier.

For streaming (ResponseStream), you capture request_service_tier from request_kwargs, but _process_complete_response never sets self._traced_data.response_service_tier from the final parsed_response. As a result, openai.response.service_tier will remain unset on spans produced via streaming, even when the response includes a service tier.

Consider setting response_service_tier when you have the complete response:

```diff
     @dont_throw
     def _process_complete_response(self):
         """Process the complete response and emit span"""
         with self._cleanup_lock:
             if self._cleanup_completed:
                 return

             try:
                 if self._complete_response_data:
                     parsed_response = parse_response(self._complete_response_data)

                     self._traced_data.response_id = parsed_response.id
                     self._traced_data.response_model = parsed_response.model
                     self._traced_data.output_text = self._output_text

                     if parsed_response.usage:
                         self._traced_data.usage = parsed_response.usage

                     if parsed_response.output:
                         self._traced_data.output_blocks = {
                             block.id: block for block in parsed_response.output
                         }
+
+                    # Capture service tier from the final response, if available
+                    service_tier = getattr(parsed_response, "service_tier", None)
+                    if service_tier is None and isinstance(parsed_response, dict):
+                        service_tier = parsed_response.get("service_tier")
+                    self._traced_data.response_service_tier = service_tier

                     responses[parsed_response.id] = self._traced_data

                 set_data_attributes(self._traced_data, self._span)
```

This keeps streaming behavior consistent with non‑streaming calls for the new openai.response.service_tier attribute while remaining defensive for both object and dict response shapes.

Also applies to: 191-199, 493-495, 558-560, 635-637, 701-703, 793-811

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 5a2f29d and b980863.

📒 Files selected for processing (6)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py (3 hunks)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py (8 hunks)
  • packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_chat/test_chat_with_service_tier.yaml (1 hunks)
  • packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_responses/test_responses_with_service_tier.yaml (1 hunks)
  • packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py (1 hunks)
  • packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_chat/test_chat_with_service_tier.yaml
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.py: Store API keys only in environment variables/secure vaults; never hardcode secrets in code
Use Flake8 for code linting and adhere to its rules

Files:

  • packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py
  • packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py
**/cassettes/**/*.{yaml,yml,json}

📄 CodeRabbit inference engine (CLAUDE.md)

Never commit secrets or PII in VCR cassettes; scrub sensitive data

Files:

  • packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_responses/test_responses_with_service_tier.yaml
🧠 Learnings (1)
📚 Learning: 2025-08-17T15:06:48.109Z
Learnt from: CR
Repo: traceloop/openllmetry PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-17T15:06:48.109Z
Learning: Semantic conventions must follow the OpenTelemetry GenAI specification

Applied to files:

  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py
🧬 Code graph analysis (2)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py (8)
packages/opentelemetry-instrumentation-alephalpha/opentelemetry/instrumentation/alephalpha/__init__.py (1)
  • _set_span_attribute (63-67)
packages/opentelemetry-instrumentation-mistralai/opentelemetry/instrumentation/mistralai/__init__.py (1)
  • _set_span_attribute (74-78)
packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/__init__.py (1)
  • _set_span_attribute (105-109)
packages/opentelemetry-instrumentation-llamaindex/opentelemetry/instrumentation/llamaindex/custom_llm_instrumentor.py (1)
  • _set_span_attribute (71-75)
packages/opentelemetry-instrumentation-milvus/opentelemetry/instrumentation/milvus/wrapper.py (1)
  • _set_span_attribute (53-57)
packages/opentelemetry-instrumentation-chromadb/opentelemetry/instrumentation/chromadb/wrapper.py (1)
  • _set_span_attribute (26-30)
packages/opentelemetry-instrumentation-qdrant/opentelemetry/instrumentation/qdrant/wrapper.py (1)
  • _set_span_attribute (11-15)
packages/opentelemetry-instrumentation-weaviate/opentelemetry/instrumentation/weaviate/wrapper.py (1)
  • _set_span_attribute (26-30)
packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py (1)
packages/opentelemetry-instrumentation-openai/tests/conftest.py (2)
  • instrument_legacy (134-149)
  • openai_client (41-42)
🪛 Flake8 (7.3.0)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py

[error] 50-50: 'typing.Literal' imported but unused

(F401)

🪛 Ruff (0.14.5)
packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py

34-34: Unused function argument: instrument_legacy

(ARG001)

packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py

1501-1501: Unused function argument: instrument_legacy

(ARG001)

🔇 Additional comments (3)
packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py (1)

32-46: Nice focused coverage for Responses service_tier propagation

The test cleanly exercises the Responses path with service_tier="priority" and verifies both request/response attributes on the span; this aligns with the new semantics and mirrors existing test patterns in this file.

packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py (1)

15-16: Service tier attributes are wired in cleanly via OpenAIAttributes

Importing openai_attributes as OpenAIAttributes and setting request/response service_tier in _set_request_attributes and _set_response_attributes via _set_span_attribute is consistent with the existing pattern and keeps the attribute keys aligned with the OpenTelemetry semconv definitions. Based on learnings, this preserves GenAI semantic‑convention compliance.

Also applies to: 145-147, 217-221

packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py (1)

1500-1519: Good addition to cover Chat service_tier propagation

This test follows the existing chat patterns (using spans[-1]) and validates both request/response service_tier attributes, giving direct coverage of the new behavior for the Chat API.

Copy link

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between b980863 and 8a8612a.

📒 Files selected for processing (6)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py (3 hunks)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py (8 hunks)
  • packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_chat/test_chat_with_service_tier.yaml (1 hunks)
  • packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_responses/test_responses_with_service_tier.yaml (1 hunks)
  • packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py (1 hunks)
  • packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py
🧰 Additional context used
📓 Path-based instructions (2)
**/cassettes/**/*.{yaml,yml,json}

📄 CodeRabbit inference engine (CLAUDE.md)

Never commit secrets or PII in VCR cassettes; scrub sensitive data

Files:

  • packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_responses/test_responses_with_service_tier.yaml
  • packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_chat/test_chat_with_service_tier.yaml
**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.py: Store API keys only in environment variables/secure vaults; never hardcode secrets in code
Use Flake8 for code linting and adhere to its rules

Files:

  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py
  • packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py
  • packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py
🧬 Code graph analysis (3)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py (1)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py (1)
  • _set_span_attribute (31-38)
packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py (1)
packages/opentelemetry-instrumentation-openai/tests/conftest.py (2)
  • instrument_legacy (134-149)
  • openai_client (41-42)
packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py (1)
packages/traceloop-sdk/traceloop/sdk/utils/in_memory_span_exporter.py (2)
  • InMemorySpanExporter (22-61)
  • get_finished_spans (40-43)
🪛 Ruff (0.14.5)
packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py

1501-1501: Unused function argument: instrument_legacy

(ARG001)

packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py

34-34: Unused function argument: instrument_legacy

(ARG001)

🔇 Additional comments (9)
packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py (1)

32-47: LGTM! Service tier test correctly validates attribute propagation.

The test properly exercises the service_tier parameter and verifies both request and response attributes are captured in the span.

Note: The static analysis warning about unused instrument_legacy is a false positive - this is a pytest fixture used for instrumentation setup.

packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_responses/test_responses_with_service_tier.yaml (1)

1-110: LGTM! Cassette properly records service tier interaction.

The cassette correctly captures the request with service_tier: "priority" and the corresponding response. No secrets or PII detected in the recording.

As per coding guidelines, cassette data appears properly scrubbed.

packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_chat/test_chat_with_service_tier.yaml (1)

1-113: LGTM! Chat cassette properly captures service tier.

The cassette correctly records the chat completions request with service_tier: "priority". No secrets or PII detected.

packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py (1)

1500-1519: LGTM! Chat service tier test provides good coverage.

The test properly validates service_tier propagation for chat completions, complementing the responses test.

Note: The static analysis warning about unused instrument_legacy is a false positive - this is a pytest fixture.

packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py (5)

45-45: LGTM! Import correctly adds OpenAI attributes.

The import follows the existing pattern and enables access to the new service tier attribute constants.


136-138: LGTM! TracedData fields properly defined.

The new service tier fields follow the established pattern for request/response attributes and use appropriate types.


197-198: LGTM! Span attributes correctly written.

The service tier attributes are properly written to the span using the OpenAI attribute constants. The _set_span_attribute helper handles None values appropriately.


493-494: LGTM! Service tier propagation handles both success and error paths.

The implementation correctly:

  • Captures request service tier from kwargs in error scenarios
  • Uses fallback logic in success path (existing_data → current request/response)
  • Handles multi-turn responses where existing_data may already contain service tier

Also applies to: 558-559


635-636: LGTM! Async wrapper maintains consistency with sync implementation.

The async wrapper correctly implements the same service tier propagation logic as the synchronous version, maintaining consistency across both code paths.

Also applies to: 701-702

@nirga nirga changed the title from feat(instrumentation): record openai service_tier attribute to fix(openai): record service_tier attribute on Nov 23, 2025
Member

@nirga nirga left a comment


Sorry @ataha322 missed a small issue - commented

```python
    span, SpanAttributes.LLM_IS_STREAMING, kwargs.get("stream") or False
)
_set_span_attribute(
    span, OpenAIAttributes.OPENAI_REQUEST_SERVICE_TIER, kwargs.get("service_tier")
)
```
Member


You should guard against None

Contributor Author


@nirga Are you sure? It's already guarded in the setter:

```python
def _set_span_attribute(span, name, value):
    if value is None or value == "":
        return
```

Member


You’re right! We don’t have it in all instrumentations unfortunately so I forgot about that here. Thanks!

@nirga nirga merged commit 589e104 into traceloop:main Nov 23, 2025
9 checks passed
@ataha322
Contributor Author

tysm for quick cooperation!

@ataha322
Contributor Author

@nirga could you please make it a release

@nirga
Member

nirga commented Nov 24, 2025

Hey @ataha322 - I want to merge #3367 first as it's fixing a critical bug - will release later today
