
fix(google_genai): populate response_id and model_version in streaming accumulator#5914

Closed
NIK-TIGER-BILL wants to merge 1 commit into getsentry:master from NIK-TIGER-BILL:fix/google-genai-streaming-response-id


Conversation

@NIK-TIGER-BILL

Summary

Fixes #5812

Problem

`accumulate_streaming_response()` in `sentry_sdk/integrations/google_genai/streaming.py` initialises `response_id` and `model` to `None` and never reads them from the streaming chunks. As a result, streaming spans always report `null` for `gen_ai.response.id` and `gen_ai.response.model`.

The non-streaming code path in `utils.py` already captures both fields correctly:

if getattr(response, "response_id", None):
    span.set_data(SPANDATA.GEN_AI_RESPONSE_ID, response.response_id)
if getattr(response, "model_version", None):
    span.set_data(SPANDATA.GEN_AI_RESPONSE_MODEL, response.model_version)

Fix

Read chunk.response_id and chunk.model_version inside the chunk-accumulation loop, keeping the first non-empty value:

if not response_id:
    response_id = getattr(chunk, "response_id", None)
if not model:
    model = getattr(chunk, "model_version", None)

This mirrors how the Gemini SDK itself exposes metadata on streaming chunks and is consistent with the non-streaming code path.
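The loop above can be sketched end to end. This is a minimal, self-contained illustration of the accumulation behaviour, not the SDK's actual implementation: `SimpleNamespace` objects stand in for Gemini streaming chunks, and the function name is borrowed from the PR for readability.

```python
from types import SimpleNamespace


def accumulate_streaming_response(chunks):
    """Sketch of the fixed accumulation loop: keep the first
    non-empty response_id / model_version seen across chunks."""
    response_id = None
    model = None
    for chunk in chunks:
        if not response_id:
            response_id = getattr(chunk, "response_id", None)
        if not model:
            model = getattr(chunk, "model_version", None)
    return response_id, model


# Hypothetical chunks: only the first carries the metadata,
# later chunks report None for both fields.
chunks = [
    SimpleNamespace(response_id="resp-123", model_version="gemini-2.0-flash"),
    SimpleNamespace(response_id=None, model_version=None),
]
print(accumulate_streaming_response(chunks))
```

Because `getattr(chunk, "response_id", None)` supplies a default, the loop also tolerates chunks that lack the attributes entirely, which is why the fix is safe across SDK versions that may not expose per-chunk metadata.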

fix(google_genai): populate response_id and model_version in streaming accumulator

In `accumulate_streaming_response()`, `response_id` and `model` are
initialised to `None` and never updated from the streaming chunks, so
streaming spans always have `null` for
`gen_ai.response.id` and `gen_ai.response.model`.

The non-streaming path in `utils.py` already captures both fields via
`getattr(response, 'response_id', None)` and
`getattr(response, 'model_version', None)`.

Fix: read those same attributes from each chunk inside the accumulation
loop and keep the first non-empty value, which is the same behaviour
the Gemini SDK itself uses for per-chunk metadata.

Fixes getsentry#5812

Signed-off-by: NIK-TIGER-BILL <nik.tiger.bill@github.com>
@sdk-maintainer-bot sdk-maintainer-bot bot added missing-maintainer-discussion Used for automated community contribution checks. violating-contribution-guidelines Used for automated community contribution checks. labels Mar 30, 2026
@sdk-maintainer-bot

This PR has been automatically closed. The referenced issue does not show a discussion between you and a maintainer.

To avoid wasted effort on both sides, please discuss your proposed approach in the issue first and wait for a maintainer to respond before opening a PR.

Please review our contributing guidelines for more details.


github-actions bot commented Mar 30, 2026

Semver Impact of This PR

🟢 Patch (bug fixes)

📋 Changelog Preview

This is how your changes will appear in the changelog.
Entries from this PR are highlighted with a left border (blockquote style).


New Features ✨

Langchain

  • Set gen_ai.operation.name and gen_ai.pipeline.name on LLM spans by ericapisani in #5849
  • Broaden AI provider detection beyond OpenAI and Anthropic by ericapisani in #5707
  • Update LLM span operation to gen_ai.generate_text by ericapisani in #5796

Bug Fixes 🐛

Ci

  • Use gh CLI to convert PR to draft by stephanie-anderson in #5874
  • Use GitHub App token for draft PR enforcement by stephanie-anderson in #5871

Google Genai

  • Populate response_id and model_version in streaming accumulator by NIK-TIGER-BILL in #5914
  • Guard response extraction by alexander-alderman-webb in #5869

Openai

  • Always set gen_ai.response.streaming for Responses by alexander-alderman-webb in #5697
  • Simplify Responses input handling by alexander-alderman-webb in #5695
  • Use max_output_tokens for Responses API by alexander-alderman-webb in #5693
  • Always set gen_ai.response.streaming for Completions by alexander-alderman-webb in #5692
  • Simplify Completions input handling by alexander-alderman-webb in #5690
  • Simplify embeddings input handling by alexander-alderman-webb in #5688

Other

  • (workflow) Fix permission issue with github app and PR draft graphql endpoint by Jeffreyhung in #5887

Documentation 📚

  • Update CONTRIBUTING.md with contribution requirements and TOC by stephanie-anderson in #5896

Internal Changes 🔧

Langchain

  • Add text completion test by alexander-alderman-webb in #5740
  • Add tool execution test by alexander-alderman-webb in #5739
  • Add basic agent test with Responses call by alexander-alderman-webb in #5726
  • Replace mocks with httpx types by alexander-alderman-webb in #5724
  • Consolidate span origin assertion by alexander-alderman-webb in #5723
  • Consolidate available tools assertion by alexander-alderman-webb in #5721

Openai

  • Replace mocks with httpx types for streaming Responses by alexander-alderman-webb in #5882
  • Replace mocks with httpx types for streaming Completions by alexander-alderman-webb in #5879
  • Move input handling code into API-specific functions by alexander-alderman-webb in #5687

Other

  • (ai) Rename generate_text to text_completion by ericapisani in #5885
  • (asyncpg) Normalize query whitespace in integration by ericapisani in #5855
  • Merge PR validation workflows and add reason-specific labels by stephanie-anderson in #5898
  • Add workflow to close unvetted non-maintainer PRs by stephanie-anderson in #5895
  • Exclude compromised litellm versions by alexander-alderman-webb in #5876
  • Reactivate litellm tests by alexander-alderman-webb in #5853
  • Add note to coordinate with assignee before PR submission by sentrivana in #5868
  • Temporarily stop running litellm tests by alexander-alderman-webb in #5851

Other

  • ci+docs: Add draft PR enforcement by stephanie-anderson in #5867

🤖 This preview updates automatically when you update the PR.


Labels

missing-maintainer-discussion Used for automated community contribution checks. violating-contribution-guidelines Used for automated community contribution checks.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Bug: Streaming responses don't capture response_id or model_version

1 participant