Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: d55bb75d24
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review"
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
```rust
    .send()
    .await
    .map_err(|e| format!("{e}"))?;
```
Treat non-2xx capture responses as failures
In posthog_capture, the request result is considered successful as soon as .send() returns, but reqwest treats HTTP 4xx/5xx as Ok(Response). That means invalid API keys, payload validation errors, or rate-limited responses are silently reported as success, so telemetry drop-offs won’t be surfaced and experiment metrics can be skewed without any error signal. Call .error_for_status() on the response before returning Ok(()).
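The suggested fix can be sketched as follows. This is a minimal illustration, not the PR's actual code: the function signature, URL parameter, and payload type are assumptions, but the key point is chaining `.error_for_status()` after `.send()` so that reqwest's `Ok(Response)` for 4xx/5xx statuses is converted into an error instead of being silently treated as success.

```rust
// Hypothetical shape of posthog_capture, assuming a reqwest::Client
// and a serde_json payload (names are illustrative).
async fn posthog_capture(
    client: &reqwest::Client,
    url: &str,
    payload: &serde_json::Value,
) -> Result<(), String> {
    client
        .post(url)
        .json(payload)
        .send()
        .await
        .map_err(|e| format!("{e}"))?
        // reqwest returns Ok(Response) even for HTTP 4xx/5xx;
        // error_for_status() turns those into Err so invalid API keys,
        // validation failures, and rate limits surface as errors.
        .error_for_status()
        .map_err(|e| format!("{e}"))?;
    Ok(())
}
```

Note that `.error_for_status()` consumes the response, which is fine here since the body is not read.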
```diff
-    .insert_prop("total_tokens", insights.total_tokens)
-    .ok();
+insert(&mut props, "total_sessions", insights.total_sessions as u64);
+insert(&mut props, "total_tokens", insights.total_tokens as u64);
```
Preserve signed token totals when serializing telemetry
insights.total_tokens is an i64, but this change casts it to u64 before sending. If a negative value is present (e.g., from imported/corrupted session rows, since DB/token fields are signed), the cast wraps to a huge positive integer and corrupts telemetry aggregates. Keep this field as a signed integer when inserting properties.
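The wraparound the comment warns about is easy to reproduce with a plain `as` cast (a minimal sketch; the variable name is illustrative, not from the PR):

```rust
fn main() {
    // A negative token total, e.g. from an imported/corrupted session row.
    let total_tokens: i64 = -5;

    // `as` casts between integer types never panic; a negative i64
    // wraps modulo 2^64 into a huge positive u64.
    let wrapped = total_tokens as u64;
    assert_eq!(wrapped, 18_446_744_073_709_551_611); // u64::MAX - 4

    // Keeping the field signed preserves the value as-is.
    let signed: i64 = total_tokens;
    assert_eq!(signed, -5);

    // If an unsigned property is unavoidable, a checked conversion
    // makes the bad value visible instead of corrupting aggregates.
    assert!(u64::try_from(total_tokens).is_err());
}
```

Inserting the property as an `i64` (or rejecting negatives via `u64::try_from`) avoids the silent corruption of telemetry aggregates.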
DOsinga
left a comment
my agent told me this was the way to go too
* main: (270 commits)
  - test(acp): align provider and server test parity (#7822)
  - fix(acp): register MCP extensions when resuming a session (#7806)
  - fix(goose): load .gitignore in prompt_manager for hint file filtering (#7795)
  - fix: remap max_completion_tokens to max_tokens for OpenAI-compatible providers (#7765)
  - fix(openai): preserve Responses API tool call/output linkage (#7759)
  - chore(deps): bump @hono/node-server from 1.19.9 to 1.19.11 in /evals/open-model-gym/mcp-harness (#7687)
  - fix: return ContextLengthExceeded when prompt exceeds effective KV cache size (#7815)
  - feat: MCP Roots support (#7790)
  - fix(google): use `includeThoughts/part.thought` for thinking handling (#7593)
  - refactor: simplify tokenizer initialization — remove unnecessary Result wrapper (#7744)
  - Fix model selector showing wrong model in tabs (#7784)
  - Stop collecting goosed stderr after startup (#7814)
  - fix: avoid word splitting by space for windows shell commands (#7781) (#7810)
  - Simplify and make it not break on linux (#7813)
  - Add preferred microphone selection (#7805)
  - Remove dependency on posthog-rs (#7811)
  - feat: load hints in nested subdirs (#7772)
  - feat(acp): add read tool and delegate filesystem I/O to ACP clients (#7668)
  - Support secret interpolation in streamable HTTP extension URLs (#7782)
  - More logging for command injection classifier model training (#7779)
  - ...
because of #7809