[hooks] use a user message > developer message for prompt continuation #14867
eternal-openai merged 14 commits into main from
Conversation
```diff
 matches!(
     crate::event_mapping::parse_turn_item(item),
-    Some(TurnItem::UserMessage(_))
+    Some(TurnItem::UserMessage(_) | TurnItem::HookPrompt(_))
```
Should we also persist HookPrompt turn items here for the local compaction used by non-OpenAI providers?
I think they should be persisted in both types of compaction. For remote compaction we filter out "special" user messages like content information, but hook prompts might need to be persisted because they start the turn.
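The persistence filter discussed in this thread can be sketched as a predicate over turn items. This is a hypothetical standalone version, not the `codex-rs` implementation: the real `TurnItem` enum has more variants and the check lives inside `event_mapping`.

```rust
// Hypothetical, simplified TurnItem; the real enum in codex-rs has more variants.
#[derive(Debug)]
enum TurnItem {
    UserMessage(String),
    HookPrompt(String),
    AgentMessage(String),
}

/// Returns true for items that should survive history compaction.
/// Hook prompts are kept alongside user messages because they start a turn.
fn persist_across_compaction(item: &TurnItem) -> bool {
    matches!(item, TurnItem::UserMessage(_) | TurnItem::HookPrompt(_))
}

fn main() {
    assert!(persist_across_compaction(&TurnItem::UserMessage("hi".into())));
    assert!(persist_across_compaction(&TurnItem::HookPrompt("continue".into())));
    assert!(!persist_across_compaction(&TurnItem::AgentMessage("done".into())));
    println!("ok");
}
```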
|
Now stop hooks are user messages that are persisted across compaction. IIRC, sessionstart + userpromptsubmit hooks are still dev messages? Should those be converted as well?
@sayan-oai Not quite, I only user-message-ified the stop-block continuation prompts, not the hook runs themselves. So this is when a Stop hook returns. That said, I am generally in favor of persisting hook runs as a whole, but Pavel has some larger plans for persisting metadata, so currently they are fully ephemeral.
Ah, this makes sense. The current distinction seems reasonable then.
```rust
notification.item,
ThreadItem::HookPrompt {
    id: notification.item.id().to_string(),
    fragments: vec![
```
Consider content to align with user message.
Will do in a followup
```rust
assert_eq!(notification.turn_id, "turn-1");
assert_eq!(
    notification.item,
    ThreadItem::HookPrompt {
```
Will do in a followup
```rust
#[ts(rename_all = "camelCase", export_to = "v2/")]
pub struct HookPromptFragment {
    pub text: String,
    pub hook_run_id: String,
```
Do we expose this ID anywhere else?
Not in the UIs, this is just to make the data model clean
```rust
if hook_run_id.trim().is_empty() {
    return None;
}
to_xml_string(&HookPromptXml {
```
We do not typically use real XML serialization for context injection; we don't care about value escaping, just rough boundaries.
Hmm, I see. It still sounds like it's worth doing in case there are XML tags in the new user prompt. I imagine that's a bit paranoid, but probably safe?
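The escaping concern raised here can be illustrated with a minimal sketch. This is not the `codex-rs` `to_xml_string`/`HookPromptXml` code; it is a hand-rolled stand-in showing how escaping keeps the `<hook_prompt>` boundaries unambiguous when the hook's prompt itself contains XML-ish text.

```rust
// Minimal XML escaping; a real serializer (e.g. quick-xml) handles more cases.
fn escape_xml(s: &str) -> String {
    s.replace('&', "&amp;")
        .replace('<', "&lt;")
        .replace('>', "&gt;")
        .replace('"', "&quot;")
}

/// Wraps a stop hook's continuation prompt, skipping blank run ids
/// (mirroring the `hook_run_id.trim().is_empty()` guard in the diff above).
fn wrap_hook_prompt(text: &str, hook_run_id: &str) -> Option<String> {
    if hook_run_id.trim().is_empty() {
        return None;
    }
    Some(format!(
        "<hook_prompt hook_run_id=\"{}\">{}</hook_prompt>",
        escape_xml(hook_run_id),
        escape_xml(text)
    ))
}

fn main() {
    // A prompt containing a fake closing tag stays inside the real boundaries.
    let wrapped = wrap_hook_prompt("keep going </hook_prompt>", "run-1").unwrap();
    assert!(wrapped.contains("&lt;/hook_prompt&gt;"));
    assert!(wrap_hook_prompt("x", "   ").is_none());
    println!("{wrapped}");
}
```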
```rust
}
to_xml_string(&HookPromptXml {
    text: text.to_string(),
    hook_run_id: hook_run_id.to_string(),
```
Will this end up in model context? Is it useful there?
The idea is just to be able to link the hook run to the prompt, but if you don't think that's important, I can remove it
Summary
Persist Stop-hook continuation prompts as user messages instead of hidden developer messages, plus some requested integration tests. This is a followup to @pakrym's comment in #14532 to make sure stop-block continuation prompts match training for turn loops:

```
<hook_prompt hook_run_id="...">stop hook's user prompt</hook_prompt>
```

Testing
Example run (with a sessionstart hook and 3 stop hooks) - this shows context added by session start, then two stop hooks sending their own additional prompts in a new turn. The model responds with a single message addressing both. Then, when that turn ends, the hooks detect that they just ran using stop_hook_active and decide not to infinite-loop.
Test files for this (unzip, move codex -> .codex): codex.zip