
[rollout-trace] Include x-request-id in rollout trace. #20066

Merged
cassirer-openai merged 3 commits into main from cassirer/04-28/include-responses-API-request-ID-in-trace
Apr 28, 2026

Conversation

Contributor

cassirer-openai commented Apr 28, 2026

Why

Rollout traces need an identifier that can be used to correlate a Codex inference with upstream Responses API, proxy, and engine logs. The reduced trace model already exposed upstream_request_id, but it was being populated from the Responses API response.id. That value is useful for previous_response_id chaining, but it is not the transport request id that upstream systems key on.

This PR separates those concepts so trace consumers can reliably answer both questions:

  • which Responses API response did this inference produce?
  • which upstream request handled it?

Structure

The change keeps the upstream request id at the same lifecycle level as the provider stream:

  • codex-api captures the x-request-id HTTP response header when the SSE stream is created and exposes it on ResponseStream. Fixture and websocket streams set the field to None because they do not have that HTTP response header.
  • codex-core carries that stream-level id into InferenceTraceAttempt when recording terminal stream outcomes. Completed, failed, cancelled, dropped-stream, and pre-response error paths all record the id when it is available.
  • rollout-trace now records both identifiers in raw terminal inference events and response payloads: response_id for the Responses API response.id, and upstream_request_id for x-request-id.
  • The reducer stores both fields on InferenceCall. It also uses response_id for previous_response_id conversation linking, which removes the old accidental dependency on the misnamed upstream_request_id field.
  • Terminal inference reduction now consumes the full terminal payload (InferenceCompleted, InferenceFailed, or InferenceCancelled) in one place. That keeps status, partial payloads, response ids, and upstream request ids consistent across success, failure, cancellation, and late stream-mapper events.
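The capture-and-plumb flow above can be sketched in Rust. This is a minimal illustration only: `ResponseStream` and `open_stream` here are stand-ins for the actual codex-api types, which are not shown on this page; the one load-bearing detail from the PR is that `x-request-id` is read once from the HTTP response that opens the SSE stream, and is `None` for fixture and websocket streams.

```rust
use std::collections::HashMap;

/// Illustrative stand-in for the provider stream handle. In the real
/// codex-api, the id comes from the HTTP response that opens the SSE
/// stream; fixture and websocket streams leave it `None`.
#[derive(Debug)]
struct ResponseStream {
    /// `x-request-id` from the HTTP response envelope, when present.
    upstream_request_id: Option<String>,
}

/// Capture `x-request-id` once, at stream creation time, from the
/// response headers (modeled here as a plain map).
fn open_stream(response_headers: &HashMap<String, String>) -> ResponseStream {
    ResponseStream {
        upstream_request_id: response_headers.get("x-request-id").cloned(),
    }
}

fn main() {
    let mut headers = HashMap::new();
    headers.insert("x-request-id".to_string(), "req_abc123".to_string());
    let stream = open_stream(&headers);
    assert_eq!(stream.upstream_request_id.as_deref(), Some("req_abc123"));

    // Fixture/websocket case: no HTTP response headers, so the field is None.
    let fixture = open_stream(&HashMap::new());
    assert_eq!(fixture.upstream_request_id, None);
}
```

Because the id is attached to the stream handle itself, every terminal path (completed, failed, cancelled, dropped) can read it without re-deriving it from stream contents.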

Why This Shape

x-request-id is a property of the HTTP/provider response envelope, not an SSE event. Capturing it once in codex-api and plumbing it through terminal trace recording avoids trying to infer the value from stream contents, and it preserves the id even when the stream fails or is cancelled after only partial output.

Keeping response_id separate from upstream_request_id also makes the reduced trace model less surprising: response_id remains the conversation-continuation id, while upstream_request_id is the operational correlation id for upstream debugging.
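The two-field split can be made concrete with a small sketch. The field names follow the PR description, but the struct layout and the `previous_response_id` helper are illustrative, not the actual codex-rs reducer code:

```rust
/// Illustrative reduced-trace record (field names from the PR
/// description; the real `InferenceCall` layout may differ).
#[derive(Debug, Clone)]
struct InferenceCall {
    /// Responses API `response.id`; used for conversation chaining.
    response_id: Option<String>,
    /// Transport `x-request-id`; used only for upstream correlation.
    upstream_request_id: Option<String>,
}

/// Conversation linking depends on `response_id` alone, never on the
/// transport-level request id.
fn previous_response_id(prior: &InferenceCall) -> Option<&str> {
    prior.response_id.as_deref()
}

fn main() {
    let first = InferenceCall {
        response_id: Some("resp_1".into()),
        upstream_request_id: Some("req_9".into()),
    };
    assert_eq!(previous_response_id(&first), Some("resp_1"));
    // The operational id is carried alongside but never used for linking.
    assert_eq!(first.upstream_request_id.as_deref(), Some("req_9"));
}
```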

Validation

The PR updates trace and reducer coverage for:

  • reading x-request-id from SSE response headers;
  • storing the true upstream request id on completed inference calls;
  • preserving upstream request ids for cancelled and late-cancelled inference streams;
  • keeping previous_response_id reconstruction tied to response_id rather than transport request ids.
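The "one place handles every terminal payload" property the tests cover can be sketched as a single match over the terminal event kinds. The variant and field names mirror the PR description (`InferenceCompleted`, `InferenceFailed`, `InferenceCancelled`), but the enum itself is an assumed simplification:

```rust
/// Illustrative terminal payloads; each carries the stream-level id so
/// the reducer records it regardless of how the stream ended.
#[allow(dead_code)]
enum TerminalEvent {
    Completed { response_id: String, upstream_request_id: Option<String> },
    Failed { upstream_request_id: Option<String> },
    Cancelled { upstream_request_id: Option<String> },
}

/// One reduction site reads the upstream id for every outcome, so
/// success, failure, and cancellation stay consistent.
fn record_upstream_id(event: &TerminalEvent) -> Option<&str> {
    match event {
        TerminalEvent::Completed { upstream_request_id, .. }
        | TerminalEvent::Failed { upstream_request_id }
        | TerminalEvent::Cancelled { upstream_request_id } => {
            upstream_request_id.as_deref()
        }
    }
}

fn main() {
    let done = TerminalEvent::Completed {
        response_id: "resp_1".to_string(),
        upstream_request_id: Some("req_1".to_string()),
    };
    let cancelled = TerminalEvent::Cancelled {
        upstream_request_id: Some("req_2".to_string()),
    };
    assert_eq!(record_upstream_id(&done), Some("req_1"));
    assert_eq!(record_upstream_id(&cancelled), Some("req_2"));
}
```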

The x-request-id header holds the ID of the request in the Responses API. Storing it in our rollout traces makes it easier to trace a rollout through upstream systems.

The field upstream_request_id was already present in our reduced state, but it incorrectly held the response ID.
cassirer-openai force-pushed the cassirer/04-28/include-responses-API-request-ID-in-trace branch from 89a0fc4 to f57441f on April 28, 2026 19:11
cassirer-openai marked this pull request as ready for review April 28, 2026 19:24
cassirer-openai requested a review from a team as a code owner April 28, 2026 19:24
@cassirer-openai
Contributor Author

@codex review

Contributor

chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: f57441f28a


Comment thread codex-rs/rollout-trace/src/reducer/inference.rs
Collaborator

jif-oai left a comment


Beside my nit, lgtm

Comment thread codex-rs/core/src/client.rs
cassirer-openai enabled auto-merge (squash) April 28, 2026 20:23
cassirer-openai merged commit 89698ad into main Apr 28, 2026
35 of 36 checks passed
cassirer-openai deleted the cassirer/04-28/include-responses-API-request-ID-in-trace branch April 28, 2026 21:11
github-actions bot locked and limited conversation to collaborators Apr 28, 2026