fix(openai): record service_tier attribute #3458
**Walkthrough**

Adds propagation of OpenAI `service_tier` by importing `OpenAIAttributes` and wiring the request/response service tier through to span attributes.

**Changes**
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
    autonumber
    participant Client
    participant Instrumentation
    participant TracedData
    participant SpanExporter
    Client->>Instrumentation: call API (kwargs include service_tier)
    Instrumentation->>TracedData: construct traced_data (set request_service_tier from kwargs)
    Instrumentation->>Client: send HTTP request
    Client-->>Instrumentation: HTTP response (parsed, may include service_tier)
    Instrumentation->>TracedData: set response_service_tier from parsed response
    Instrumentation->>SpanExporter: set_data_attributes(traced_data), writing OPENAI_REQUEST/RESPONSE_SERVICE_TIER
    SpanExporter->>Instrumentation: span finished
```
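The flow in the diagram can be sketched in plain Python. This is an illustrative stand-in, not the real instrumentation: the `TracedData` shape, the attribute keys, and the `set_data_attributes` helper below are simplified mirrors of the names this review discusses.

```python
from dataclasses import dataclass
from typing import Optional

# Attribute keys mirroring the semconv constants referenced in this PR.
OPENAI_REQUEST_SERVICE_TIER = "openai.request.service_tier"
OPENAI_RESPONSE_SERVICE_TIER = "openai.response.service_tier"


@dataclass
class TracedData:
    request_service_tier: Optional[str] = None
    response_service_tier: Optional[str] = None


def set_data_attributes(traced: TracedData, span_attributes: dict) -> None:
    """Write service-tier attributes, skipping None/empty like _set_span_attribute."""
    for key, value in (
        (OPENAI_REQUEST_SERVICE_TIER, traced.request_service_tier),
        (OPENAI_RESPONSE_SERVICE_TIER, traced.response_service_tier),
    ):
        if value is None or value == "":
            continue
        span_attributes[key] = value


# 1. Request: service_tier arrives in the call kwargs.
kwargs = {"model": "gpt-4o", "service_tier": "priority"}
traced = TracedData(request_service_tier=kwargs.get("service_tier"))

# 2. Response: the parsed response may echo the tier back.
parsed_response = {"service_tier": "priority"}
traced.response_service_tier = parsed_response.get("service_tier")

# 3. Export: both attributes end up on the span.
attrs: dict = {}
set_data_attributes(traced, attrs)
print(attrs)
```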
**Estimated code review effort**: 🎯 3 (Moderate) | ⏱️ ~20 minutes
**Pre-merge checks and finishing touches**: ❌ Failed checks (1 warning) | ✅ Passed checks (2 passed)
Actionable comments posted: 0
🧹 Nitpick comments (1)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py (1)
136-138: **Consider the `Literal` type constraint for future service tier values.** The `Literal["auto", "default", "flex", "scale", "priority"]` constraint provides type safety but may cause issues if OpenAI introduces new service_tier values in the future. Consider whether a broader type like `Optional[str]` might be more maintainable, or add a comment noting that this list should be updated when OpenAI adds new tiers.
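For contrast, here is a small sketch (hypothetical names) of the two options the nitpick describes: the strict `Literal` list versus a forward-compatible `Optional[str]` pass-through.

```python
from typing import Literal, Optional, get_args

# Strict option: type checkers flag any tier outside this list.
ServiceTier = Literal["auto", "default", "flex", "scale", "priority"]
KNOWN_TIERS = get_args(ServiceTier)


def capture_service_tier(value: Optional[str]) -> Optional[str]:
    # Broad option: record whatever tier the API reports, even one
    # OpenAI adds after this list was written, instead of dropping it.
    return value
```

With the broad variant, a hypothetical future tier such as `"scale_v2"` would still be recorded on the span; with the `Literal` variant, the type checker would force the list to be updated first.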
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (4)
- `packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py` (3 hunks)
- `packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py` (8 hunks)
- `packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py` (1 hunks)
- `packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py` (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.py: Store API keys only in environment variables/secure vaults; never hardcode secrets in code
Use Flake8 for code linting and adhere to its rules
Files:
- `packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py`
- `packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py`
- `packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py`
- `packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py`
🧠 Learnings (1)
📚 Learning: 2025-08-17T15:06:48.109Z
Learnt from: CR
Repo: traceloop/openllmetry PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-17T15:06:48.109Z
Learning: Semantic conventions must follow the OpenTelemetry GenAI specification
Applied to files:
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py
🧬 Code graph analysis (4)
`packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py` (1)

- `packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py` (1)
  - `_set_span_attribute` (31-38)

`packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py` (2)

- `packages/opentelemetry-instrumentation-openai/tests/conftest.py` (2)
  - `instrument_legacy` (134-149)
  - `openai_client` (41-42)
- `packages/traceloop-sdk/traceloop/sdk/utils/in_memory_span_exporter.py` (1)
  - `get_finished_spans` (40-43)

`packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py` (1)

- `packages/traceloop-sdk/traceloop/sdk/utils/in_memory_span_exporter.py` (2)
  - `InMemorySpanExporter` (22-61)
  - `get_finished_spans` (40-43)

`packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py` (1)

- `packages/opentelemetry-instrumentation-alephalpha/opentelemetry/instrumentation/alephalpha/__init__.py` (1)
  - `_set_span_attribute` (63-67)
🪛 Ruff (0.14.5)
packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py
1501-1501: Unused function argument: instrument_legacy
(ARG001)
packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py
34-34: Unused function argument: instrument_legacy
(ARG001)
🔇 Additional comments (9)
packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py (1)
32-47: **LGTM! Test coverage for service_tier is appropriate.** The test validates that both `openai.request.service_tier` and `openai.response.service_tier` attributes are correctly propagated through the instrumentation for the responses API.

Note: the static analysis hint about `instrument_legacy` being unused is a false positive; it's a pytest fixture that enables instrumentation for the test.

`packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py` (1)
1500-1519: **LGTM! Service tier test for chat completions is correct.** The test properly validates that service_tier propagation works for chat completions, mirroring the coverage provided in test_responses.py for the responses API.

Note: the static analysis hint about `instrument_legacy` is a false positive; it's a pytest fixture required for instrumentation.

`packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py` (3)
145-147: **Request service_tier attribute setting is correct.** The implementation properly captures the `service_tier` parameter from kwargs and sets it as a span attribute using the shared `_set_span_attribute` helper, which handles None, empty, and NOT_GIVEN values appropriately.
217-221: **Response service_tier attribute setting is correct.** The implementation properly extracts `service_tier` from the response and sets it as a span attribute. The formatting is consistent with other response attributes in this function.
15-15: **Verify OpenAIAttributes constants are accessible at runtime.** The OpenTelemetry semantic conventions define OpenAI attributes including `openai.request.service_tier` and `openai.response.service_tier`. The dependency `opentelemetry-semantic-conventions >= 0.59b0` is properly declared in `pyproject.toml`, and the code successfully imports `OpenAIAttributes` alongside `GenAIAttributes` from the same source package.

However, the exact Python module structure for `opentelemetry.semconv._incubating.attributes.openai_attributes` and the presence of the constants `OPENAI_REQUEST_SERVICE_TIER` and `OPENAI_RESPONSE_SERVICE_TIER` could not be confirmed from public documentation. While the constants are used in multiple locations (lines 146 and 219 in `shared/__init__.py`, and lines 197-198 in `v1/responses_wrappers.py`), you should verify that these constants are properly exported and accessible at runtime, particularly given they are part of the incubating (`_incubating`) module, which may still be evolving.

`packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py` (4)
197-198: **Service tier span attributes are correctly set.** The implementation properly propagates both request and response service_tier values to span attributes using the appropriate `OpenAIAttributes` constants and the `_set_span_attribute` helper.
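Given the earlier concern about runtime availability of the incubating constants, one defensive pattern is to resolve them dynamically with a literal fallback. This is a hedged sketch, not code from the PR; the fallback string assumes the semconv key is `openai.request.service_tier`, as stated in this review.

```python
import importlib


def resolve_semconv_attr(module_path: str, name: str, fallback: str) -> str:
    # Resolve an incubating semconv constant at runtime; fall back to a
    # literal attribute key if the module or constant is unavailable.
    try:
        module = importlib.import_module(module_path)
        return getattr(module, name)
    except (ImportError, AttributeError):
        return fallback


REQUEST_TIER_KEY = resolve_semconv_attr(
    "opentelemetry.semconv._incubating.attributes.openai_attributes",
    "OPENAI_REQUEST_SERVICE_TIER",
    "openai.request.service_tier",
)
```

Either branch yields the same attribute key, so instrumentation keeps working even if the incubating module moves.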
493-494: **Sync path service_tier propagation is correct.** The implementation consistently handles service_tier in both error and normal execution paths:
- Error path (lines 493-494): Captures request service_tier from kwargs and preserves response service_tier from existing_data
- Normal path (lines 558-559): Prioritizes existing request service_tier but falls back to kwargs, and captures response service_tier from the parsed_response
Also applies to: 558-559
635-636: **Async path service_tier propagation mirrors the sync path correctly.** The async implementation maintains the same logic as the sync path for service_tier handling, ensuring consistency across both execution modes.
Also applies to: 701-702
809-810: **ResponseStream initialization handles service_tier appropriately.** The streaming initialization correctly captures the request service_tier from request_kwargs and initializes response_service_tier to None, which will be populated when the complete response is received.
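A stripped-down sketch of that streaming lifecycle (the class shape here is hypothetical; the real `ResponseStream` wraps the SDK's stream object):

```python
from typing import Optional


class ResponseStreamSketch:
    def __init__(self, request_kwargs: dict):
        # Known immediately from the outgoing call's kwargs.
        self.request_service_tier: Optional[str] = request_kwargs.get("service_tier")
        # Unknown until the complete response has been assembled.
        self.response_service_tier: Optional[str] = None

    def process_complete_response(self, parsed_response: dict) -> None:
        # Populated once the final response is available.
        self.response_service_tier = parsed_response.get("service_tier")


stream = ResponseStreamSketch({"model": "gpt-4o", "service_tier": "flex"})
assert stream.response_service_tier is None  # mid-stream: not yet known
stream.process_complete_response({"service_tier": "flex"})
```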
nirga
left a comment
Thanks @ataha322 - can you fix the lint and tests?
Force-pushed from `e23bdce` to `b980863`
Hey @nirga, done
* `openai.request.service_tier` is captured
* `openai.response.service_tier` is captured
* Supported APIs: Responses, Chat
* Corresponding tests are added
Actionable comments posted: 0
🧹 Nitpick comments (3)
packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_responses/test_responses_with_service_tier.yaml (1)
1-110: **Cassette captures service_tier correctly; consider scrubbing cookies/IDs if needed.** The cassette correctly records a `responses` call with `service_tier: "priority"` and no API keys or auth headers. If your cassette policy treats Cloudflare cookies and OpenAI org/project identifiers as sensitive, consider scrubbing those header values in line with your other fixtures.

`packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py` (2)
40-51: **Remove unused `Literal` import.** `Literal` is imported but never used, which Flake8 flags. It's safe to drop it from the import list.

```diff
-from opentelemetry.semconv_ai import SpanAttributes
-from opentelemetry.semconv.attributes.error_attributes import ERROR_TYPE
-from opentelemetry.trace import SpanKind, Span, StatusCode, Tracer
-from typing import Any, Optional, Union, Literal
+from opentelemetry.semconv_ai import SpanAttributes
+from opentelemetry.semconv.attributes.error_attributes import ERROR_TYPE
+from opentelemetry.trace import SpanKind, Span, StatusCode, Tracer
+from typing import Any, Optional, Union
```
136-139: **Streaming path doesn't populate `openai.response.service_tier` yet.** The new `request_service_tier`/`response_service_tier` fields on `TracedData` are wired through the sync and async wrappers and ultimately emitted in `set_data_attributes`, so non-streaming calls correctly get both `openai.request.service_tier` and `openai.response.service_tier`.

For streaming (`ResponseStream`), you capture `request_service_tier` from `request_kwargs`, but `_process_complete_response` never sets `self._traced_data.response_service_tier` from the final `parsed_response`. As a result, `openai.response.service_tier` will remain unset on spans produced via streaming, even when the response includes a service tier.

Consider setting `response_service_tier` when you have the complete response:

```diff
 @dont_throw
 def _process_complete_response(self):
     """Process the complete response and emit span"""
     with self._cleanup_lock:
         if self._cleanup_completed:
             return
         try:
             if self._complete_response_data:
                 parsed_response = parse_response(self._complete_response_data)
                 self._traced_data.response_id = parsed_response.id
                 self._traced_data.response_model = parsed_response.model
                 self._traced_data.output_text = self._output_text
                 if parsed_response.usage:
                     self._traced_data.usage = parsed_response.usage
                 if parsed_response.output:
                     self._traced_data.output_blocks = {
                         block.id: block for block in parsed_response.output
                     }
+
+                # Capture service tier from the final response, if available
+                service_tier = getattr(parsed_response, "service_tier", None)
+                if service_tier is None and isinstance(parsed_response, dict):
+                    service_tier = parsed_response.get("service_tier")
+                self._traced_data.response_service_tier = service_tier

                 responses[parsed_response.id] = self._traced_data
                 set_data_attributes(self._traced_data, self._span)
```

This keeps streaming behavior consistent with non-streaming calls for the new `openai.response.service_tier` attribute while remaining defensive for both object and dict response shapes.

Also applies to: 191-199, 493-495, 558-560, 635-637, 701-703, 793-811
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (6)
- `packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py` (3 hunks)
- `packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py` (8 hunks)
- `packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_chat/test_chat_with_service_tier.yaml` (1 hunks)
- `packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_responses/test_responses_with_service_tier.yaml` (1 hunks)
- `packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py` (1 hunks)
- `packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py` (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_chat/test_chat_with_service_tier.yaml
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.py: Store API keys only in environment variables/secure vaults; never hardcode secrets in code
Use Flake8 for code linting and adhere to its rules
Files:
- `packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py`
- `packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py`
- `packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py`
- `packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py`
**/cassettes/**/*.{yaml,yml,json}
📄 CodeRabbit inference engine (CLAUDE.md)
Never commit secrets or PII in VCR cassettes; scrub sensitive data
Files:
packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_responses/test_responses_with_service_tier.yaml
🧠 Learnings (1)
📚 Learning: 2025-08-17T15:06:48.109Z
Learnt from: CR
Repo: traceloop/openllmetry PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-08-17T15:06:48.109Z
Learning: Semantic conventions must follow the OpenTelemetry GenAI specification
Applied to files:
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py
🧬 Code graph analysis (2)
`packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py` (8)

- `packages/opentelemetry-instrumentation-alephalpha/opentelemetry/instrumentation/alephalpha/__init__.py` (1): `_set_span_attribute` (63-67)
- `packages/opentelemetry-instrumentation-mistralai/opentelemetry/instrumentation/mistralai/__init__.py` (1): `_set_span_attribute` (74-78)
- `packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/__init__.py` (1): `_set_span_attribute` (105-109)
- `packages/opentelemetry-instrumentation-llamaindex/opentelemetry/instrumentation/llamaindex/custom_llm_instrumentor.py` (1): `_set_span_attribute` (71-75)
- `packages/opentelemetry-instrumentation-milvus/opentelemetry/instrumentation/milvus/wrapper.py` (1): `_set_span_attribute` (53-57)
- `packages/opentelemetry-instrumentation-chromadb/opentelemetry/instrumentation/chromadb/wrapper.py` (1): `_set_span_attribute` (26-30)
- `packages/opentelemetry-instrumentation-qdrant/opentelemetry/instrumentation/qdrant/wrapper.py` (1): `_set_span_attribute` (11-15)
- `packages/opentelemetry-instrumentation-weaviate/opentelemetry/instrumentation/weaviate/wrapper.py` (1): `_set_span_attribute` (26-30)
packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py (1)
packages/opentelemetry-instrumentation-openai/tests/conftest.py (2)
- `instrument_legacy` (134-149)
- `openai_client` (41-42)
🪛 Flake8 (7.3.0)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py
[error] 50-50: 'typing.Literal' imported but unused
(F401)
🪛 Ruff (0.14.5)
packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py
34-34: Unused function argument: instrument_legacy
(ARG001)
packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py
1501-1501: Unused function argument: instrument_legacy
(ARG001)
🔇 Additional comments (3)
packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py (1)
32-46: **Nice focused coverage for Responses service_tier propagation.** The test cleanly exercises the Responses path with `service_tier="priority"` and verifies both request/response attributes on the span; this aligns with the new semantics and mirrors existing test patterns in this file.

`packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py` (1)
15-16: **Service tier attributes are wired in cleanly via OpenAIAttributes.** Importing `openai_attributes as OpenAIAttributes` and setting the request/response service_tier in `_set_request_attributes` and `_set_response_attributes` via `_set_span_attribute` is consistent with the existing pattern and keeps the attribute keys aligned with the OpenTelemetry semconv definitions. Based on learnings, this preserves GenAI semantic-convention compliance.

Also applies to: 145-147, 217-221
packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py (1)
1500-1519: **Good addition to cover Chat service_tier propagation.** This test follows the existing chat patterns (using `spans[-1]`) and validates both request/response service_tier attributes, giving direct coverage of the new behavior for the Chat API.
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (6)
- `packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py` (3 hunks)
- `packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py` (8 hunks)
- `packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_chat/test_chat_with_service_tier.yaml` (1 hunks)
- `packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_responses/test_responses_with_service_tier.yaml` (1 hunks)
- `packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py` (1 hunks)
- `packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- `packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py`
🧰 Additional context used
📓 Path-based instructions (2)
**/cassettes/**/*.{yaml,yml,json}
📄 CodeRabbit inference engine (CLAUDE.md)
Never commit secrets or PII in VCR cassettes; scrub sensitive data
Files:
packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_responses/test_responses_with_service_tier.yamlpackages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_chat/test_chat_with_service_tier.yaml
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.py: Store API keys only in environment variables/secure vaults; never hardcode secrets in code
Use Flake8 for code linting and adhere to its rules
Files:
- `packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py`
- `packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py`
- `packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py`
🧬 Code graph analysis (3)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py (1)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py (1)
_set_span_attribute(31-38)
packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py (1)
packages/opentelemetry-instrumentation-openai/tests/conftest.py (2)
- `instrument_legacy` (134-149)
- `openai_client` (41-42)
packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py (1)
packages/traceloop-sdk/traceloop/sdk/utils/in_memory_span_exporter.py (2)
- `InMemorySpanExporter` (22-61)
- `get_finished_spans` (40-43)
🪛 Ruff (0.14.5)
packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py
1501-1501: Unused function argument: instrument_legacy
(ARG001)
packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py
34-34: Unused function argument: instrument_legacy
(ARG001)
🔇 Additional comments (9)
packages/opentelemetry-instrumentation-openai/tests/traces/test_responses.py (1)
32-47: **LGTM! Service tier test correctly validates attribute propagation.** The test properly exercises the service_tier parameter and verifies both request and response attributes are captured in the span.

Note: the static analysis warning about unused `instrument_legacy` is a false positive; it is a pytest fixture used for instrumentation setup.

`packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_responses/test_responses_with_service_tier.yaml` (1)
1-110: **LGTM! Cassette properly records service tier interaction.** The cassette correctly captures the request with `service_tier: "priority"` and the corresponding response. No secrets or PII detected in the recording.

As per coding guidelines, cassette data appears properly scrubbed.
packages/opentelemetry-instrumentation-openai/tests/traces/cassettes/test_chat/test_chat_with_service_tier.yaml (1)
1-113: **LGTM! Chat cassette properly captures service tier.** The cassette correctly records the chat completions request with `service_tier: "priority"`. No secrets or PII detected.

`packages/opentelemetry-instrumentation-openai/tests/traces/test_chat.py` (1)
1500-1519: **LGTM! Chat service tier test provides good coverage.** The test properly validates service_tier propagation for chat completions, complementing the responses test.

Note: the static analysis warning about unused `instrument_legacy` is a false positive; it is a pytest fixture.

`packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py` (5)
45-45: **LGTM! Import correctly adds OpenAI attributes.** The import follows the existing pattern and enables access to the new service tier attribute constants.
136-138: **LGTM! TracedData fields properly defined.** The new service tier fields follow the established pattern for request/response attributes and use appropriate types.
197-198: **LGTM! Span attributes correctly written.** The service tier attributes are properly written to the span using the OpenAI attribute constants. The `_set_span_attribute` helper handles None values appropriately.
493-494: **LGTM! Service tier propagation handles both success and error paths.** The implementation correctly:
- Captures request service tier from kwargs in error scenarios
- Uses fallback logic in success path (existing_data → current request/response)
- Handles multi-turn responses where existing_data may already contain service tier
Also applies to: 558-559
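The fallback order described above can be condensed into a tiny helper (illustrative only, not the PR's actual code):

```python
from typing import Optional


def resolve_request_service_tier(
    existing_tier: Optional[str], kwargs: dict
) -> Optional[str]:
    # Prefer a tier already recorded for this conversation (multi-turn),
    # otherwise fall back to the current call's kwargs.
    if existing_tier is not None:
        return existing_tier
    return kwargs.get("service_tier")


# First turn: nothing recorded yet, so kwargs win.
first_turn = resolve_request_service_tier(None, {"service_tier": "flex"})
# Later turn: the previously recorded tier is preserved.
later_turn = resolve_request_service_tier("priority", {"service_tier": "flex"})
```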
635-636: **LGTM! Async wrapper maintains consistency with sync implementation.** The async wrapper correctly implements the same service tier propagation logic as the synchronous version, maintaining consistency across both code paths.
Also applies to: 701-702
...lemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py
nirga
left a comment
Sorry @ataha322 missed a small issue - commented
```python
    span, SpanAttributes.LLM_IS_STREAMING, kwargs.get("stream") or False
)
_set_span_attribute(
    span, OpenAIAttributes.OPENAI_REQUEST_SERVICE_TIER, kwargs.get("service_tier")
```
You should guard against None
@nirga Are you sure? It's already guarded in the setter:

```python
def _set_span_attribute(span, name, value):
    if value is None or value == "":
        return
```
You’re right! We don’t have it in all instrumentations unfortunately so I forgot about that here. Thanks!
tysm for quick cooperation!
@nirga could you please make it a release |
- `openai.request.service_tier` is captured
- `openai.response.service_tier` is captured
- `feat(instrumentation): ...` or `fix(instrumentation): ...`

**Summary by CodeRabbit**
New Features
Tests