Add example script to push traces to otel endpoint #243

darshana-v wants to merge 2 commits into main from
Conversation
Summary of Changes

Hello @darshana-v, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request delivers a new, self-contained example that guides developers in instrumenting their AI applications with OpenTelemetry. It demonstrates how to capture and export detailed tracing information, particularly for LLM interactions, to an OTLP-compatible monitoring system, thereby enhancing observability for AI-driven services.
Code Review
This pull request introduces a valuable example for sending OpenTelemetry traces to Highflame Workbench. The review provides suggestions to enhance the example's clarity, robustness, and adherence to standard practices. Key recommendations include using standard OpenTelemetry environment variables, improving error handling, refactoring hardcoded values, and clarifying the documentation. These changes will make the example more user-friendly and maintainable.
```python
resource = Resource.create(
    {
        "service.name": os.getenv("OTEL_SERVICE_NAME", "trace-generator"),
        "service.namespace": "javelin-cerberus",
        # ...
    }
)
# ...
tracer = init_tracer()
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
```
Accessing `os.environ["OPENAI_API_KEY"]` directly will raise a `KeyError` if the environment variable is not set, causing the script to crash. It's more user-friendly to use `os.getenv()` and check for the variable's existence, raising a descriptive error if it's missing.
```python
api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise ValueError("The OPENAI_API_KEY environment variable is not set.")
client = OpenAI(api_key=api_key)
```

- Example:
  ```bash
  export OTEL_EXPORTER_OTLP_HEADERS="your-otel-header"
  ```
The example value `your-otel-header` for `OTEL_EXPORTER_OTLP_HEADERS` is vague. Providing a concrete example of the expected `key=value` format would be more helpful for users.
Suggested change:

```diff
- export OTEL_EXPORTER_OTLP_HEADERS="your-otel-header"
+ export OTEL_EXPORTER_OTLP_HEADERS="Authorization=<your-token>"
```
**OTLP_ENDPOINT**

- OTEL endpoint URL
- Example:
  ```bash
  export OTLP_ENDPOINT="https://cerberus-http.api-dev.highflame.dev/v1/traces"
  ```
The example uses a custom environment variable, `OTLP_ENDPOINT`. To align with OpenTelemetry conventions, it's better to use the standard `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` variable. This makes the example more familiar to users experienced with OpenTelemetry and allows the Python script to be simplified.
Suggested change:

```diff
-**OTLP_ENDPOINT**
+**OTEL_EXPORTER_OTLP_TRACES_ENDPOINT**
 - OTEL endpoint URL
 - Example:
-  export OTLP_ENDPOINT="https://cerberus-http.api-dev.highflame.dev/v1/traces"
+  export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="https://cerberus-http.api-dev.highflame.dev/v1/traces"
```
1. Set your environment variables:

   ```bash
   export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic%20<your-credentials>"
   ```
The example for `OTEL_EXPORTER_OTLP_HEADERS` includes `%20`, which is likely incorrect. The OpenTelemetry exporter does not reliably URL-decode this value, so `%20` may be sent as a literal part of the header. A space should be used instead, with the value quoted in the shell.
Suggested change:

```diff
-export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic%20<your-credentials>"
+export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic <your-credentials>"
```
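To see why the quoted space is the safe choice: `OTEL_EXPORTER_OTLP_HEADERS` is structured as a comma-separated list of `key=value` pairs. Whether percent-escapes in values are decoded has varied across SDK versions, so a literal space (protected by shell quoting) arrives intact either way. A simplified, illustrative parser (not the library's actual implementation):

```python
def parse_otlp_headers(raw: str) -> dict:
    """Illustrative parse of OTEL_EXPORTER_OTLP_HEADERS: comma-separated
    key=value pairs. Real SDKs add validation and, in some versions,
    percent-decoding; this sketch takes values literally."""
    headers = {}
    for pair in raw.split(","):
        if not pair.strip():
            continue
        key, _, value = pair.partition("=")
        headers[key.strip()] = value.strip()
    return headers


# A space quoted in the shell survives parsing as part of the value:
print(parse_otlp_headers("Authorization=Basic abc123,x-tenant=demo"))
# → {'Authorization': 'Basic abc123', 'x-tenant': 'demo'}
```

With `%20`, the value would instead be the literal string `Basic%20abc123` in any SDK that does not decode escapes.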
```python
def generate_trace() -> None:
    with tracer.start_as_current_span("openai.chat.completions.create") as span:
        completion = client.chat.completions.create(
            model="gpt-4o",
            # ...
        )

        span.set_attribute("llm.model", "gpt-4o")
        span.set_attribute("prompt.user_question", "1 + 1 = ")
        span.set_attribute("response.id", completion.id)
        record_completion_attributes(span, getattr(completion, "usage", {}) or {})

        answer = completion.choices[0].message.content
        span.set_attribute("response.preview", answer)


if __name__ == "__main__":
    generate_trace()
```
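The excerpt calls `record_completion_attributes`, but its definition falls outside the diff shown. For readers following along, a plausible sketch of such a helper — the attribute names and the handling of both object-style and dict-style usage payloads are assumptions, not the PR's actual code:

```python
def record_completion_attributes(span, usage) -> None:
    """Hypothetical sketch: copy token-usage fields onto the span.
    Attribute names (llm.usage.*) are illustrative assumptions."""
    for field in ("prompt_tokens", "completion_tokens", "total_tokens"):
        value = getattr(usage, field, None)  # OpenAI SDK returns an object
        if value is None and isinstance(usage, dict):
            value = usage.get(field)  # the script's `or {}` fallback is a dict
        if value is not None:
            span.set_attribute(f"llm.usage.{field}", value)
```

Handling both shapes matters because the script's `getattr(completion, "usage", {}) or {}` expression can hand the helper either the SDK's usage object or an empty dict.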
Review Points:
Please consider this: if a user has an AI application, what changes will they need to make to push the traces from their application to our endpoint?
No description provided.