[codex-analytics] centralize thread analytics state#20241
Merged
This was referenced Apr 29, 2026
rhan-oai added a commit that referenced this pull request on Apr 29, 2026
## Why

Codex analytics needs a typed seam for app-server-originated request/response traffic so future tool-approval analytics can consume those facts without adding bespoke callsite tracking each time. Server responses arrive as JSON-RPC `id + result` payloads, so analytics has to reconstruct the matching typed response from the original typed request while that request context still exists in app-server.

This also puts analytics on the app-server outbound path, which needs to avoid keeping the runtime alive during shutdown. The final ownership fix keeps the normal strong auth-manager retention in analytics and makes the external-auth refresh bridge hold a weak back-reference to `OutgoingMessageSender`, breaking the runtime cycle at the bridge boundary instead of exposing retention policy through the analytics client API.

## What changed

- Adds typed `ServerRequest` and `ServerResponse` analytics facts, plus `AnalyticsEventsClient::track_server_request` and `track_server_response`.
- Renames the existing client-side facts to `ClientRequest` and `ClientResponse` so reducers can distinguish client-to-server traffic from server-to-client traffic.
- Adds `ServerRequest::response_from_result`, allowing a stored typed request to decode the matching typed server response from a raw JSON-RPC result payload.
- Threads `AnalyticsEventsClient` through `OutgoingMessageSender` and records targeted server requests, replayed targeted requests, and matching targeted responses with the responding connection id needed for correlation.
- Intentionally leaves broadcast server requests/responses out of analytics for now because the current model is per connection, while broadcasts fan one logical request out across multiple connections.
- Breaks the app-server shutdown cycle by storing `Weak<OutgoingMessageSender>` in `ExternalAuthRefreshBridge` and upgrading it only when an external-auth refresh is actually requested.
- Keeps reducer ingestion of the new server-side facts as no-ops for now; this PR is plumbing for later tool-approval analytics work.

## Verification

- `cargo test -p codex-analytics`
- `cargo test -p codex-app-server outgoing_message::tests::`
- Covers typed-response reconstruction plus the targeted, replayed, broadcast-exclusion, and response-attribution analytics paths.

## Follow-up

This PR intentionally stops at ingestion plumbing, so `ServerRequest` and `ServerResponse` facts are still reducer no-ops. Once a follow-up PR adds real downstream analytics output for those facts:

- replace the temporary pre-reducer observation seam with reducer tests for the emitted event shape;
- add end-to-end coverage in `app-server/tests/suite/v2/analytics.rs` for the real app-server workflow and captured analytics payload;
- remove the temporary sender-level observer tests added here in favor of the real-output coverage above.

---

Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/openai/codex/pull/17088).

* #18748
* #18747
* #17090
* #17089
* #20241
* #20239
* __->__ #17088
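The `ServerRequest::response_from_result` idea above can be sketched in miniature: the stored typed request determines which response variant a raw JSON-RPC `result` payload should decode into. This is an illustrative sketch only; the variant names, field types, and decoding here are hypothetical stand-ins, not the actual codex-app-server API.

```rust
// Hypothetical sketch: a typed server request that knows how to decode the
// matching typed response from a raw JSON-RPC `result` payload. The request
// context picks the response variant; the payload supplies the data.

#[derive(Debug, PartialEq)]
enum ServerRequest {
    ApplyPatchApproval { call_id: String },
    ExecCommandApproval { call_id: String },
}

#[derive(Debug, PartialEq)]
enum ServerResponse {
    ApplyPatchApproval { decision: String },
    ExecCommandApproval { decision: String },
}

impl ServerRequest {
    // Reconstruct the typed response while the typed request still exists.
    // A real implementation would deserialize a JSON value; a plain string
    // stands in for the raw result payload here.
    fn response_from_result(&self, raw_result: &str) -> ServerResponse {
        match self {
            ServerRequest::ApplyPatchApproval { .. } => ServerResponse::ApplyPatchApproval {
                decision: raw_result.to_string(),
            },
            ServerRequest::ExecCommandApproval { .. } => ServerResponse::ExecCommandApproval {
                decision: raw_result.to_string(),
            },
        }
    }
}

fn main() {
    let req = ServerRequest::ApplyPatchApproval { call_id: "c1".into() };
    let resp = req.response_from_result("approved");
    assert_eq!(
        resp,
        ServerResponse::ApplyPatchApproval { decision: "approved".into() }
    );
    println!("ok");
}
```

The design point is that correlation lives in the type system: analytics never guesses a response shape from the wire, it asks the original request.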
## Why

Several analytics event families need the same reducer context: the app-server client/runtime associated with a thread and, for lifecycle-oriented events, the thread metadata captured during initialization. Keeping connection ids and thread metadata in separate maps makes each new event reconstruct the same join and duplicate the same missing-state handling.

## What changed

- Replaces the separate `thread_connections` and `thread_metadata` maps with one `threads` map that stores reducer-owned state for each thread.

## Verification

- `cargo test -p codex-analytics`

---

Stack created with Sapling. Best reviewed with ReviewStack.
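The consolidation described above can be sketched as a single map of reducer-owned per-thread state. The type and field names below (`Reducer`, `ThreadState`, string-typed ids and metadata) are illustrative assumptions, not the actual codex-analytics types.

```rust
use std::collections::HashMap;

// Hypothetical sketch: one `threads` map replaces separate
// `thread_connections` and `thread_metadata` maps, so every event family
// reads and writes per-thread state through a single entry point.

#[derive(Default, Debug)]
struct ThreadState {
    connection_id: Option<String>, // app-server client/runtime for the thread
    metadata: Option<String>,      // captured during thread initialization
}

#[derive(Default)]
struct Reducer {
    threads: HashMap<String, ThreadState>,
}

impl Reducer {
    // The missing-state handling lives here once, instead of being
    // duplicated by each event that used to join two maps.
    fn thread_mut(&mut self, thread_id: &str) -> &mut ThreadState {
        self.threads.entry(thread_id.to_string()).or_default()
    }

    fn record_connection(&mut self, thread_id: &str, conn: &str) {
        self.thread_mut(thread_id).connection_id = Some(conn.to_string());
    }

    fn record_metadata(&mut self, thread_id: &str, meta: &str) {
        self.thread_mut(thread_id).metadata = Some(meta.to_string());
    }
}

fn main() {
    let mut r = Reducer::default();
    r.record_connection("t1", "conn-7");
    r.record_metadata("t1", "init-meta");
    let state = &r.threads["t1"];
    assert_eq!(state.connection_id.as_deref(), Some("conn-7"));
    assert_eq!(state.metadata.as_deref(), Some("init-meta"));
    println!("ok");
}
```

Events arriving out of order simply find a partially populated `ThreadState` rather than a missing key in one of two maps.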