Feature Request: Steer — Mid-Run Message Injection #386
Description
Summary
Add a steer mechanism to ECA that lets the user inject a new instruction into a
running agent turn between tool-call batches, without stopping the current run.
This is analogous to the steer feature in the Pi coding agent and is distinct from
the existing "queue" behaviour.
Motivation: The Problem With Queue
ECA's current behaviour when the user types while the agent is running:
- The prompt is queued (stored as a single deferred string).
- The queue is drained only after the current run fully completes.
- Multiple queued prompts are concatenated with a newline into one message.
This means if the agent is halfway through a 20-step task and you realise it has gone
in the wrong direction, you must:
- Press stop.
- Wait for the stop handshake.
- Type a corrective prompt.
- The agent resumes — but it has lost all the context of where it was mid-task.
Steer solves this: the agent finishes its current tool-call batch (so no work is
half-done), sees your new instruction as the next user message, and continues from
exactly where it was — with your correction incorporated.
What Pi Does: Reference Implementation
Conceptual model
```text
[assistant response streams in]
  → all tool calls execute
  → ──────────────────────────── ← synchronisation point
  → poll steeringQueue
  → if messages: inject as user message → next LLM call sees correction
  → if no steer and no more tool calls → poll followUpQueue
  → if no followUp → exit
```
Steer = scheduled injection between turns.
The in-flight LLM stream is never aborted. Tool calls always complete. The steer
message is delivered at the next clean synchronisation point.
Pi data structures
```typescript
// A steer is an ordinary user message pushed to a side queue
agent.steer({ role: "user", content: [{ type: "text", text: "Focus on X instead" }] })

// Queue drain behaviour (configurable)
steeringMode: "one-at-a-time" // default: one steer per turn boundary
steeringMode: "all"           // drain entire queue at once
```
Pi loop (simplified)
```typescript
let pendingMessages = await getSteeringMessages(); // pre-flight drain
while (true) {
  while (hasMoreToolCalls || pendingMessages.length > 0) {
    // inject any pending steers before next LLM call
    for (const msg of pendingMessages) {
      context.push(msg);
      emit({ type: "message_start", message: msg });
    }
    pendingMessages = [];
    const response = await streamLLM(context);      // LLM turn
    toolResults = await executeToolCalls(response); // tools complete
    pendingMessages = await getSteeringMessages();  // ← KEY: poll here
  }
  const followUps = await getFollowUpMessages();
  if (followUps.length > 0) { pendingMessages = followUps; continue; }
  break;
}
```
Pi UI feedback
```text
# Immediately when steer is called (before delivery)
queue_update event → shows "1 steering message pending" in footer

# When the steer is actually injected (after current tool batch)
message_start/message_end events → steer appears in transcript as user message
queue_update event → counter decrements to 0
```
AI Analysis below! 🤖
The details below were summarised with an AI agent; please take them, and the proposed implementation, with a grain of salt. The feature request itself is something I really feel the need for! Thank you for your work on ECA! 🙏
ECA Architecture Today
Server
| Concept | Current behaviour |
|---|---|
| Queuing | None. No server-side queue exists. |
| Concurrent prompt | A new chat/prompt with the same chat-id while running supersedes the old one: a new prompt-id UUID is written atomically; the old stream's cancelled? predicate returns true on its next callback and the old future exits silently. |
| Stop | chat/promptStop → sets :stopping → kills active tool calls via stop-requested transition → finish-chat-prompt! → emits {:type :progress :state :finished}. |
| Between-turn hook | None. The prompt-messages! loop calls the LLM and executes tool calls in one uninterrupted future*. There is no poll point between turns. |
Key server code:
```clojure
;; features/chat.clj — prompt-messages! → prompt loop
;; The LLM call and all tool-call handling are in one continuous future*
;; There is no synchronisation point between turns today.
(future* config
  (llm-api/sync-or-async-prompt!
    {:cancelled? (fn []
                   (or (identical? :stopping (:status chat))
                       (:prompt-finished? chat)
                       (not= prompt-id (:prompt-id chat)))) ; ← supersede check
     :on-tools-called (tc/on-tools-called! ...)
     ...}))
```
Client
| Concept | Current behaviour |
|---|---|
| Queue slot | Single eca-chat--queued-prompt buffer-local string. |
| Queue depth | Effectively 1 (multiple prompts are newline-joined). |
| Drain trigger | eca-chat--send-queued-prompt is called inside the ("progress" "finished") handler — i.e. only after the server confirms the prompt is fully complete. |
| Visual | "Queued: <first 40 chars>..." overlay above prompt field (italic). |
| Stop while queued | Queue is drained after stop confirms ('stopping → finished). |
```elisp
;; eca-chat.el — the single queue slot
(defvar-local eca-chat--queued-prompt nil)

;; queue-prompt concatenates rather than queuing separately
(defun eca-chat--queue-prompt (prompt)
  (setq-local eca-chat--queued-prompt
              (if eca-chat--queued-prompt
                  (concat eca-chat--queued-prompt "\n" prompt)
                prompt))
  (eca-chat--update-queued-area)
  (eca-chat--set-prompt ""))

;; drained only on "finished"
(defun eca-chat--send-queued-prompt (session)
  (when eca-chat--queued-prompt
    (eca-chat--send-prompt session eca-chat--queued-prompt)
    (setq-local eca-chat--queued-prompt nil)
    (eca-chat--update-queued-area)))
```
Proposed Changes
Concept
Introduce a steer pathway that is distinct from queue:
| | Queue (existing) | Steer (proposed) |
|---|---|---|
| Delivery | After run fully completes | After current tool-call batch completes |
| Effect on run | Starts a new independent run | Continues the current run with injected context |
| Interrupts LLM | No (waits for finish) | No (waits for turn boundary) |
| Aborts tool calls | No | No |
| User mental model | "Send this next" | "Redirect the agent now" |
| Keybinding idea | RET (current) | S-RET or C-RET |
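The delivery-timing difference can be made concrete with a toy run loop (purely illustrative; not ECA code). A correction typed during turn 5 of a 20-turn run is seen by the LLM at turn 6 with steer, but only after turn 20 with queue:

```typescript
// Returns the turn number whose LLM call first sees the user's correction.
function correctionSeenAtTurn(
  totalTurns: number,
  sentDuringTurn: number,
  mechanism: "queue" | "steer"
): number {
  const steerQueue: string[] = [];
  let queued: string | null = null;
  let seenAt = -1;

  for (let turn = 1; turn <= totalTurns; turn++) {
    if (turn === sentDuringTurn) {
      if (mechanism === "steer") steerQueue.push("correction");
      else queued = "correction";
    }
    // Turn boundary: the tool batch finished; poll the steer queue.
    if (seenAt === -1 && steerQueue.length > 0) {
      steerQueue.shift();
      seenAt = turn + 1; // injected before the next LLM call
    }
  }
  // A queued prompt starts a new run only after the whole run ends.
  if (seenAt === -1 && queued !== null) seenAt = totalTurns + 1;
  return seenAt;
}
```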
1. Server changes (src/eca/features/chat.clj)
Add a steer queue to the chat DB entry:
```clojure
;; db.clj — extend chat schema
:steer-queue [{:role "user" :content :string}] ; ordered list of pending steers
```
Expose a new RPC method chat/steer:
```clojure
;; handlers.clj
(defn chat-steer [{:keys [messenger db* metrics]} params]
  (let [{:keys [chat-id message]} params]
    ;; Push onto the steer queue atomically
    (swap! db* update-in [:chats chat-id :steer-queue]
           (fnil conj []) {:role "user" :content message})
    ;; Notify client immediately so it can show feedback
    (messenger/chat-content-received
      messenger
      {:chat-id chat-id
       :role "system"
       :content {:type :steerQueued :message message}})
    {:chat-id chat-id}))
```
Add a poll point between turns in prompt-messages!:
The critical change is in the on-tools-called callback — after each tool-call
batch completes, before the next LLM call, drain the steer queue:
```clojure
;; features/chat.clj — inside prompt-messages! → on-tools-called callback
:on-tools-called
(fn [...]
  ;; Existing: check if we should continue
  (let [steers (get-in @db* [:chats chat-id :steer-queue])]
    (when (seq steers)
      ;; Drain one steer (or all, depending on mode)
      (let [steer (first steers)]
        (swap! db* update-in [:chats chat-id :steer-queue] rest)
        ;; Inject the steer message into history
        (add-to-history! {:role "user" :content (:content steer)})
        ;; Notify client that the steer was delivered
        (lifecycle/send-content! chat-ctx :user
                                 {:type :text
                                  :text (:content steer)
                                  :steer? true}))))
  ;; Continue as normal — the injected user message is now in past-messages
  ...)
```
Key invariant: the steer queue is checked after on-tools-called returns,
before the next llm-api/sync-or-async-prompt! call. This ensures:
- No in-flight LLM stream is aborted
- No tool call is interrupted mid-execution
- The injected message is a first-class history entry (persisted, visible in replay)
cancelled? is unchanged — a steer does NOT supersede the current prompt-id,
so the existing stream continues normally to the next turn boundary.
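The contrast with the supersede mechanism can be sketched as follows (TypeScript for brevity; all names are illustrative, not ECA's real API). A steer leaves the prompt-id alone, so the run's `cancelled?` check stays false; only a new prompt rotates the id:

```typescript
type Msg = { role: "user"; content: string };

class RunningPrompt {
  steerQueue: Msg[] = [];
  history: Msg[] = [];
  constructor(private chat: { promptId: string }, private myPromptId: string) {}

  // Mirrors the cancelled? predicate: only a rotated prompt-id cancels us.
  cancelled(): boolean {
    return this.chat.promptId !== this.myPromptId;
  }

  // chat/steer: enqueue only — the prompt-id is untouched.
  steer(text: string): void {
    this.steerQueue.push({ role: "user", content: text });
  }

  // Poll point after each tool batch: drain one steer into history.
  afterToolBatch(): void {
    const next = this.steerQueue.shift();
    if (next) this.history.push(next);
  }
}
```

A new chat/prompt would instead write a fresh UUID into `chat.promptId`, flipping `cancelled()` to true on the old run's next callback.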
2. Client changes (eca-chat.el)
New buffer-local variable:
```elisp
(defvar-local eca-chat--steer-queue nil
  "List of pending steer messages waiting to be injected between turns.")
```
New eca-chat--steer function:
```elisp
(defun eca-chat--steer (session prompt)
  "Send PROMPT as a steer injection for the currently running SESSION.
Unlike queued prompts (delivered after the run completes), steers are
injected at the next turn boundary — after the current tool-call batch
finishes but before the next LLM call."
  (push prompt eca-chat--steer-queue)
  (eca-chat--update-steer-area) ; show visual feedback
  (eca-chat--set-prompt "")
  (eca-api-notify session
                  :method "chat/steer"
                  :params (list :chatId eca-chat--id
                                :message (eca-chat--normalize-prompt prompt))))
```
Route S-RET (or C-RET) to steer while loading:
```elisp
;; In eca-chat--key-pressed-return (or a new binding):
;; S-RET while loading → steer
;; RET while loading → queue (existing behaviour preserved)
((and (not (string-empty-p prompt))
      eca-chat--chat-loading
      (eq this-command 'eca-chat-steer-prompt)) ; new command
 (eca-chat--steer session prompt))
```
Handle steerQueued and steer delivery events:
```elisp
;; In eca-chat--render-content, pcase on content type:
("steerQueued"
 ;; Server confirmed the steer was enqueued — update visual counter
 (eca-chat--update-steer-area))

;; When the steer is delivered (appears as a user text message with :steer? t)
;; render it with a distinct face to indicate it was a steering injection
("text"
 (when (plist-get content :steer?)
   ;; render with eca-chat-steer-face (distinct from normal user message face)
   ...))
```
Visual indicator:
```text
[current assistant response streaming...]
────────────────────────────────────────
Steering: Fix the error handling first   ← eca-chat-steer-face (amber/orange)
> _                                      ← prompt field
```
When the steer is injected (consumed between turns), the indicator disappears and
the steer message appears in the transcript as a normal user turn.
3. New server RPC method registration
```clojure
;; src/eca/server.clj — in the method dispatch table
"chat/steer" (handlers/chat-steer components params)
```
Interaction With Existing Mechanisms
Steer + Queue coexistence
Both mechanisms remain independent:
- RET while running → queue (delivered after the run fully ends)
- S-RET while running → steer (delivered at the next turn boundary)
- If there are both steers and a queued prompt: steers fire first (mid-run); the queue fires after the run ends, as today.
Steer + Stop
If the user stops the run while a steer is pending:
- The stop handshake runs as today.
- On `finished`, the steer queue is cleared (the run ended before the steer could be injected).
- The existing queue is drained as today (sent as a new prompt).
- Optionally: move the steer queue contents into the regular queue so they are not lost.
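The optional "move steers into the regular queue" behaviour amounts to a small transfer at stop time. A sketch (illustrative names; ECA's real stop path lives in the `finished` handling):

```typescript
type Msg = { role: "user"; content: string };

// On stop-finished: pending steers can no longer be injected mid-run,
// so fold them into the regular prompt queue, preserving their order,
// to be sent as the next prompt instead of being discarded.
function onStopFinished(steerQueue: Msg[], promptQueue: string[]): void {
  while (steerQueue.length > 0) {
    promptQueue.push(steerQueue.shift()!.content);
  }
}
```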
Steer + Auto-compact
The existing auto-compact logic in on-tools-called checks
[:chats chat-id :compact-done?] to decide whether to compact. The steer poll
should happen after the compact check — steer is only meaningful if the run
is continuing.
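That ordering can be stated as a two-step hook (an illustrative sketch, not the real on-tools-called code; `betweenTurns` and `ChatState` are invented names):

```typescript
interface ChatState {
  compactDone: boolean;
  steerQueue: string[];
  history: string[];
}

// Between-turn hook: compact first, then drain steers — a steer only
// matters if the run is actually continuing past this boundary.
function betweenTurns(state: ChatState, compact: (h: string[]) => string[]): void {
  if (!state.compactDone) {
    state.history = compact(state.history);
    state.compactDone = true;
  }
  const steer = state.steerQueue.shift();
  if (steer) state.history.push(steer);
}
```

Draining the steer after compaction also guarantees the injected user message is never summarised away by the very compaction pass that precedes it.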
Steer + Session replay
Steer messages are injected as real "user" history entries (via add-to-history!)
so they survive session save/reload and /resume replays them as ordinary user
messages. The :steer? true annotation on the content event is ephemeral (UI only)
and need not be persisted.
Summary of Files to Change
Server
| File | Change |
|---|---|
| src/eca/db.clj | Add `:steer-queue` to chat schema; clear it in normalize-db-for-workspace-write (like `:tool-calls`) |
| src/eca/handlers.clj | Add chat-steer handler |
| src/eca/server.clj | Register "chat/steer" method |
| src/eca/features/chat.clj | Poll steer queue in on-tools-called callback inside prompt-messages!; call lifecycle/send-content! with the injected user message |
| src/eca/features/chat/lifecycle.clj | No changes needed |
| src/eca/messenger.clj | No changes needed (uses existing chat-content-received) |
Client (eca-emacs)
| File | Change |
|---|---|
| eca-chat.el | Add eca-chat--steer-queue var; eca-chat--steer fn; eca-chat--update-steer-area; route S-RET to steer; handle steerQueued content type; render steer messages with a distinct face; clear steer queue on stop |
| eca-chat.el (faces) | Add eca-chat-steer-face (distinct from the user message face — amber/orange suggests "redirect") |
Tests
| File | Change |
|---|---|
| test/eca/features/chat_test.clj | Test steer queue poll; steer delivery before next LLM call; steer + stop interaction |
Open Questions for Maintainer
- Steer mode: one-at-a-time vs. all?
  Pi defaults to `"one-at-a-time"` (one steer per turn boundary). Should ECA drain the whole queue at once or pace them? One-at-a-time gives the LLM a chance to "react" to each steer before seeing the next.
- Keybinding: `S-RET`, `C-RET`, or something else?
  `S-RET` is natural ("shift = send now, forcefully") but may conflict in some terminals. It could also be a dedicated `M-x eca-chat-steer` command with no default binding, leaving the choice to the user.
- What to do with pending steers on stop?
  Options: (a) discard silently, (b) move them to the queue so they become the next prompt, (c) prompt the user. Option (b) seems safest — the intent was "redirect the agent", and if the agent stopped, the next best thing is "start a new prompt with this".
- Steer vs. supersede: should a steer also rotate `prompt-id`?
  Currently the supersede mechanism (a new `chat/prompt` while running) kills the old stream. A steer explicitly does NOT want this — it wants the current stream to continue. The proposed implementation leaves `prompt-id` unchanged for steers.
- Visual distinction in transcript?
  Should delivered steer messages look different from normal user messages in the chat history? Pi renders them identically. ECA could add a small "⤷ steered" annotation, or keep them visually identical for a cleaner transcript.