Add safety check notification and error handling #19055
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 3ef5031e9c
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```rust
#[serde(rename = "type")]
pub(crate) kind: String,
#[serde(rename = "openai_verification_recommendation")]
openai_model_verification: Option<Value>,
```
considering we fire an event already, why do we also need an extra field per ResponsesStreamEvent?
Yep, good point. I removed the field from ResponsesStreamEvent.
```rust
    message: Option<String>,
}

fn cyber_policy_message_from_body(body: &str) -> Option<String> {
```
Don't we have a helper for generic error shape parsing somewhere?
I couldn't find any that would work for this error shape. Let me know if you think I missed something.
yeah, we parse inline for other errors in this file.
```rust
)
.await;

if let Some(warning_message) = warning_message {
```
Why do we emit both Warning and ModelVerification? We should pick one
[codex] The two events serve different consumers: ModelVerification is the typed, app-server-visible signal that GUI clients can localize or render however they want, while Warning is the existing user-visible path the TUI renders today. The warning is intentionally not recorded into model-visible history, so it is only a client/UI notice. If you prefer, we can make the TUI render from the structured event and drop the paired warning, but that would mean adding client-side rendering behavior instead of reusing the existing warning path.
TUI/app can render whatever message it needs based on the event
Yep. I changed this so core only emits ModelVerification and then changed the TUI to handle it.
This dual-emit pattern exists all over the place currently. We should be able to remove a bunch of this redundancy once I do a few more turns of the crank on cleaning up the TUI.
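The single-event pattern agreed on above can be sketched as follows. The enum and rendering function here are hypothetical stand-ins, not the PR's actual types: the point is that core emits one typed event and the client derives whatever display text it needs from it.

```rust
// Hypothetical event type; the PR's real event enums differ.
enum AppEvent {
    ModelVerification { model: String, recommendation: String },
    Warning(String),
}

// The TUI derives a user-facing line from the typed event instead of
// relying on core to emit a parallel Warning alongside it.
fn render_line(event: &AppEvent) -> String {
    match event {
        AppEvent::ModelVerification { model, recommendation } => {
            format!("Model {model}: verification recommended ({recommendation})")
        }
        AppEvent::Warning(message) => message.clone(),
    }
}
```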
```rust
&explicitly_enabled_connectors,
skills_outcome,
&mut server_model_warning_emitted_for_turn,
&mut model_verification_emitted_for_turn,
```
is there something we can do to avoid carrying this state through the agent loop only to deduplicate warnings?
Yep. This was based on the way that the existing server_model_warning_emitted_for_turn was handled, but I agree that there are cleaner ways to handle this. I've removed the new model_verification_emitted_for_turn flag and the existing flag.
```rust
        continue;
    }
};
let model_verifications = model_verifications_from_event_value(&event_value);
```
should we add metadata to ResponsesStreamEvent and condition on event.kind() == "response.metadata"?
```rust
    .map(ToString::to_string);
let model_verifications = stream_response
    .headers
    .get(OPENAI_MODEL_VERIFICATION_HEADER)
```
I don't think we care about the header, we always stream.
Yep, good point. That simplifies things. Done.
```rust
    })
}

pub(crate) fn model_verifications_from_event_value(
```
this method is very permissive
Good point. Tightened it up.
```rust
    }
}

fn json_value_as_model_verifications(value: &Value) -> Vec<ModelVerification> {
```
in general this parsing can be simplified I think considering we know the exact format.
```diff
 trace!("SSE event: {}", &sse.data);

-let event: ResponsesStreamEvent = match serde_json::from_str(&sse.data) {
+let event_value: Value = match serde_json::from_str(&sse.data) {
```
probably should reuse ResponsesStreamEvent below.
All contributors have signed the CLA ✍️ ✅
Force-pushed from 1103252 to 5b565f9
💡 Codex Review
codex/codex-rs/codex-api/src/sse/responses.rs
Lines 71 to 79 in 5b565f9
spawn_response_stream extracts OpenAI-Model and other response headers but never reads OpenAI-Verification-Recommendation. On HTTP/SSE transport (including websocket fallback), that header can be the only verification signal, so ResponseEvent::ModelVerifications is not emitted and downstream model/verification notifications are silently skipped.
codex/codex-rs/codex-api/src/endpoint/responses_websocket.rs
Lines 389 to 399 in 5b565f9
connect_websocket() caches X-Reasoning-Included, X-Models-Etag, and OpenAI-Model from the handshake response, but never reads OpenAI-Verification-Recommendation. stream_request() only emits cached handshake-derived signals, so if verification is supplied only via upgrade headers, ResponseEvent::ModelVerifications is never produced.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 5873bd93c0
```rust
pub(crate) fn model_verifications(&self) -> Option<Vec<ModelVerification>> {
    if self.kind() != "response.metadata" {
        return None;
    }

    self.metadata
        .as_ref()
        .and_then(|metadata| metadata.get("openai_verification_recommendation"))
        .and_then(model_verifications_from_json_value)
```
Parse verification recommendations from header metadata
model_verifications() only reads metadata.openai_verification_recommendation and requires type == "response.metadata". It never inspects response.headers/top-level headers (unlike response_model()), so header-delivered verification signals are dropped and ResponseEvent::ModelVerifications is never emitted. Fresh evidence: spawn_response_stream_ignores_model_verification_header codifies this loss path.
This is intentional for streaming Responses traffic. The agreed carrier for HTTP streaming and websocket is the response.metadata event; OpenAI-Verification-Recommendation headers are not used on these paths. The test is codifying that streaming headers are ignored so we don’t accidentally support the wrong carrier.
Adds a new app-server notification that fires when a user account has been flagged for potential safety reasons.