
Add example script to push traces to otel endpoint #243

Open

darshana-v wants to merge 2 commits into main from add/otel-example

Conversation

@darshana-v
Contributor

No description provided.

@darshana-v
Contributor Author

@gemini-code-assist
Contributor

Summary of Changes

Hello @darshana-v, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request delivers a new, self-contained example designed to guide developers in instrumenting their AI applications with OpenTelemetry. It provides a practical demonstration of how to capture and export detailed tracing information, particularly for LLM interactions, to an OTLP-compatible monitoring system, thereby enhancing observability for AI-driven services.

Highlights

  • New Example for OpenTelemetry Traces: Introduced a new example demonstrating how to push OpenTelemetry traces from an AI application to an OTLP endpoint, specifically targeting Highflame Workbench for analysis.
  • LLM Operation Tracing: The example showcases how to create spans for Large Language Model (LLM) operations (using OpenAI as an example) and enrich them with custom attributes like model, prompts, responses, and token usage metrics.
  • Comprehensive Documentation: A detailed README.md file has been added, covering prerequisites, installation, environment variable configuration, usage instructions, how to view traces, customization options, and troubleshooting steps.
  • Python Script for Trace Generation: A Python script (generate_traces.py) is included that initializes an OTLP tracer, performs an OpenAI API call, and meticulously records relevant LLM interaction data as span attributes.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces a valuable example for sending OpenTelemetry traces to Highflame Workbench. The review provides suggestions to enhance the example's clarity, robustness, and adherence to standard practices. Key recommendations include using standard OpenTelemetry environment variables, improving error handling, refactoring hardcoded values, and clarifying the documentation. These changes will make the example more user-friendly and maintainable.

```python
resource = Resource.create(
    {
        "service.name": os.getenv("OTEL_SERVICE_NAME", "trace-generator"),
        "service.namespace": "javelin-cerberus",
```

high

The service.namespace is hardcoded to "javelin-cerberus". This appears to be an internal or environment-specific value. For a general-purpose example, it's better to remove this line to make the script more broadly applicable without modification.
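A minimal sketch of that fix, with the hardcoded namespace dropped and, for users who still want one, sourced from an environment variable instead. The `build_resource_attributes` helper name and the `OTEL_SERVICE_NAMESPACE` variable are illustrative assumptions, not part of the PR:

```python
import os

def build_resource_attributes() -> dict:
    # Only service.name is set unconditionally; the namespace is no longer
    # hardcoded and is included only when the user opts in via an env var.
    attributes = {"service.name": os.getenv("OTEL_SERVICE_NAME", "trace-generator")}
    namespace = os.getenv("OTEL_SERVICE_NAMESPACE")
    if namespace:
        attributes["service.namespace"] = namespace
    return attributes
```

The resulting dict would then be passed to `Resource.create(...)` as in the original script.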



```python
tracer = init_tracer()
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
```

high

Accessing os.environ["OPENAI_API_KEY"] directly will raise a KeyError if the environment variable is not set, causing the script to crash. It's more user-friendly to use os.getenv() and check for the variable's existence, raising a descriptive error if it's missing.

```python
api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise ValueError("The OPENAI_API_KEY environment variable is not set.")
client = OpenAI(api_key=api_key)
```

Comment on lines +37 to +40
- Example:
```bash
export OTEL_EXPORTER_OTLP_HEADERS="your-otel-header"
```

medium

The example value your-otel-header for OTEL_EXPORTER_OTLP_HEADERS is a bit vague. Providing a more concrete example of the expected key=value format would be more helpful for users.

Suggested change:

- Example:
```bash
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=<your-token>"
```

Comment on lines +52 to +58
**OTLP_ENDPOINT**

- OTEL endpoint URL
- Example:
```bash
export OTLP_ENDPOINT="https://cerberus-http.api-dev.highflame.dev/v1/traces"
```

medium

The example uses a custom environment variable OTLP_ENDPOINT. To align with OpenTelemetry standards, it's better to use the standard OTEL_EXPORTER_OTLP_TRACES_ENDPOINT variable. This makes the example more familiar to users experienced with OpenTelemetry and allows for simplifying the Python script.

Suggested change:

**OTEL_EXPORTER_OTLP_TRACES_ENDPOINT**

- OTEL endpoint URL
- Example:
```bash
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="https://cerberus-http.api-dev.highflame.dev/v1/traces"
```

1. Set your environment variables:

```bash
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic%20<your-credentials>"
```

medium

The example for OTEL_EXPORTER_OTLP_HEADERS includes %20, which is likely incorrect. The OpenTelemetry exporter does not URL-decode this value, so %20 will be sent as part of the header. A space should be used instead, with the value quoted in the shell.

Suggested change:

```bash
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic <your-credentials>"
```

```python
def generate_trace() -> None:
    with tracer.start_as_current_span("openai.chat.completions.create") as span:
        completion = client.chat.completions.create(
            model="gpt-4o",
```

medium

The model name "gpt-4o" is hardcoded here and also when setting span attributes. To improve maintainability and avoid duplication, consider defining it as a constant at the module level and reusing it.
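A short sketch of that refactor; the `MODEL_NAME` constant and the `build_request` helper are hypothetical names chosen here for illustration, not code from the PR:

```python
# Define the model name once at module level and reuse it everywhere.
MODEL_NAME = "gpt-4o"

def build_request(prompt: str) -> dict:
    # Both the API call arguments and the span attribute would reference
    # the same constant, so switching models means editing a single line.
    return {"model": MODEL_NAME, "messages": [{"role": "user", "content": prompt}]}

def span_attributes() -> dict:
    return {"llm.model": MODEL_NAME}
```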

```python
        )

        span.set_attribute("llm.model", "gpt-4o")
        span.set_attribute("prompt.user_question", "1 + 1 = ")
```

medium

The span attribute prompt.user_question is redundant with the input attribute set on line 66. To simplify the example and reduce confusion, it's better to remove this line and use only the input attribute for the prompt.

```python
        span.set_attribute("llm.model", "gpt-4o")
        span.set_attribute("prompt.user_question", "1 + 1 = ")
        span.set_attribute("response.id", completion.id)
        record_completion_attributes(span, getattr(completion, "usage", {}) or {})
```

medium

The expression getattr(completion, "usage", {}) or {} is a bit complex. You can make this more readable by passing completion.usage directly to record_completion_attributes. The function already handles cases where usage is None.

```python
        record_completion_attributes(span, completion.usage)
```

```python
        record_completion_attributes(span, getattr(completion, "usage", {}) or {})

        answer = completion.choices[0].message.content
        span.set_attribute("response.preview", answer)
```

medium

The span attribute response.preview is redundant with the output attribute set on line 67. To simplify the example, consider removing this line and using only the output attribute for the response content.



```python
if __name__ == "__main__":
    generate_trace()
```
(No newline at end of file)

medium

The file is missing a final newline character. It's a standard convention to end files with a newline.

```python
    generate_trace()
```

@KunalJavelin
Contributor

KunalJavelin commented Jan 17, 2026

Review Points:

  1. Please address Gemini comments.
  2. This example shows how to push traces to the OTEL endpoint as a standalone file, which is OK but incomplete.
  3. Please use this agent, https://github.com/highflame-ai/highflame-python/tree/main/examples/customer_support_agent, and add steps in there to push traces from a user application to our OTEL endpoint.

Please consider this: if a user has an AI application, what changes will they need to make to push the traces from their application to our endpoint?
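As a rough sketch of an answer to that question: an application that already emits OpenTelemetry traces would typically only need to point the standard OTLP exporter variables at the endpoint. The values below are taken from earlier in this review and are illustrative, not authoritative:

```
# Point the standard OTLP exporter at the Highflame endpoint
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="https://cerberus-http.api-dev.highflame.dev/v1/traces"
# Auth header; note the literal space (not %20) after "Basic"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic <your-credentials>"
export OTEL_SERVICE_NAME="my-ai-app"
```

With these set, an OpenTelemetry SDK configured from the environment should export traces without further code changes.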
