
Separate SSE streaming from POST work submission #7834

Merged: jh-block merged 32 commits into main from jhugo/fix-sse-reconnect-ui on Mar 18, 2026
Conversation

jh-block (Collaborator) commented Mar 12, 2026

Summary

Fixes the root cause of duplicate agent tasks when SSE connections drop: long-lived per-session SSE connections with safe Last-Event-ID reconnection are now separated from work submission. The old POST /reply endpoint, which both created work and streamed responses, is split into three focused endpoints.

Type of Change

  • Feature
  • Refactor / Code quality
  • Bug fix

AI Assistance

  • This PR was created or reviewed with AI assistance

Testing

  • Unit tests: 4 new SessionEventBus tests (publish/subscribe, replay with Last-Event-ID, cancellation)
  • Integration: All 9 existing backend tests pass
  • TypeScript: No type errors

Manual testing needed:

  • Submit messages, verify responses stream in
  • Stop streaming, verify cancel works
  • Simulate network drop (browser devtools), verify auto-reconnection and no duplicate work
  • Elicitation flow and tool confirmations unchanged

Architecture Changes

Backend (3 new endpoints):

  • GET /sessions/{id}/events — Long-lived SSE stream with Last-Event-ID replay support
  • POST /sessions/{id}/reply — Submit work, returns immediately with request_id
  • POST /sessions/{id}/cancel — Cancel a specific request

Frontend (2 new files):

  • sessionSseClient.ts — Hand-written SSE client with auto-reconnect (exponential backoff)
  • useSessionEvents.ts — React hook wrapping the SSE client

Refactored: useChatStream.ts now uses client-generated UUIDv7 request IDs, registers SSE listeners before POSTing (eliminating the race condition), and processes individual events instead of async iterables.
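The ordering matters. A minimal sketch of why registering the listener before the POST avoids losing an immediate first event (names are hypothetical, not the real useChatStream.ts API):

```typescript
type SessionEvent = { requestId: string; data: string };
type Listener = (event: SessionEvent) => void;

// Minimal stand-in for the per-request listener registry the hook maintains.
class ListenerRegistry {
  private listeners = new Map<string, Listener>();
  register(requestId: string, fn: Listener) { this.listeners.set(requestId, fn); }
  dispatch(event: SessionEvent) { this.listeners.get(event.requestId)?.(event); }
}

const registry = new ListenerRegistry();
const received: string[] = [];
const requestId = "req-0001"; // the real code generates a UUIDv7 here

// 1. Register the SSE listener first...
registry.register(requestId, (e) => received.push(e.data));
// 2. ...then submit the POST; even if the server's first event arrives
//    immediately, there is already a listener to receive it.
registry.dispatch({ requestId, data: "first-token" });
```

If the POST were issued first, an event dispatched before registration would find no listener and be dropped.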

Related Issues

Relates to #7831 (which removed retries due to buggy logic; this PR brings retries back with correct logic)

@jh-block jh-block marked this pull request as ready for review March 16, 2026 16:08
@jh-block jh-block force-pushed the jhugo/fix-sse-reconnect-ui branch from 5cd2811 to 8643256 Compare March 16, 2026 16:09
@chatgpt-codex-connector (bot) commented:

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 5cd2811b9d

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you:

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

💡 Codex Review (reviewed commit: dd09c8c3d4)

💡 Codex Review (reviewed commit: 2a5724914d)

💡 Codex Review (reviewed commit: 6619781069)

jh-block added 10 commits March 17, 2026 09:53
Implement long-lived per-session SSE connections with safe reconnection via
Last-Event-ID. Split the old POST /reply endpoint into:
- GET /sessions/{id}/events: Long-lived SSE stream for events
- POST /sessions/{id}/reply: Submit work with client-generated UUIDv7 request_id
- POST /sessions/{id}/cancel: Cancel a specific request

This fixes the root cause of duplicate agent tasks: reconnecting SSE clients
no longer retry the POST that created the work. Frontend now generates unique
request_ids and registers SSE listeners before posting, eliminating race
conditions. Auto-reconnect with exponential backoff replaced manual retry logic.

Fixes: Prevents SSE connection drops from creating duplicate agent work
ESLint flagged onFinish as an unnecessary dependency in handleSubmit,
submitElicitationResponse, and onMessageUpdate — it's already captured
via the submitToSession closure.
- Preserve notification request_id: use entry().or_insert_with() so
  Notification events keep their tool-call request_id instead of being
  overwritten by the chat request UUID.
- Keep heartbeat pings out of replay buffer: send SSE comments directly
  instead of publishing through the bus, preventing ping events from
  evicting real message history.
- Fix duplicate events at replay/live handoff: snapshot the buffer
  before subscribing to live events, and skip live events with
  seq <= replay_max_seq.
- Add chat_request_id field to SSE JSON so notification events (which
  carry their own MCP tool-call request_id) can still be routed to the
  correct frontend handler via the chat UUID.
- Update useSessionEvents to dispatch on chat_request_id first, falling
  back to request_id.
- Clamp replay_max_seq to the actual buffer max so a stale
  Last-Event-ID after server restart doesn't suppress all live events.
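The replay/live handoff rule above can be sketched as a pure function (simplified event shape; not the actual server types):

```typescript
interface BusEvent { seq: number; payload: string }

// Merge a replayed buffer snapshot with live events, skipping any live
// event already covered by the replay (seq <= replayMaxSeq).
function mergeReplayAndLive(replay: BusEvent[], live: BusEvent[]): BusEvent[] {
  const replayMaxSeq = replay.length > 0 ? replay[replay.length - 1].seq : 0;
  return [...replay, ...live.filter((e) => e.seq > replayMaxSeq)];
}

const merged = mergeReplayAndLive(
  [{ seq: 1, payload: "a" }, { seq: 2, payload: "b" }],
  [{ seq: 2, payload: "b" }, { seq: 3, payload: "c" }], // seq 2 arrives on both paths
);
// merged contains seqs 1, 2, 3 with no duplicate
```

Snapshotting the buffer and filtering by sequence number makes the handoff idempotent: an event that straddles the replay/live boundary is delivered exactly once.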
…uses

- subscribe(None) now replays all buffered events so events published
  before the SSE stream establishes are not lost.
- Add remove_event_bus() to AppState and call it on session deletion
  to prevent unbounded memory growth from accumulated replay buffers.
- Add tests for first-connect replay and stale Last-Event-ID clamping.
@jh-block jh-block force-pushed the jhugo/fix-sse-reconnect-ui branch from 6619781 to 3620e5d Compare March 17, 2026 09:53
…reams

- Reorder subscribe() to create the broadcast receiver before
  snapshotting the buffer, so no event can fall into a gap between
  the two steps. The caller deduplicates via replay_max_seq.
- Close the SSE stream on RecvError::Lagged so the client reconnects
  with Last-Event-ID and replays missed events from the buffer,
  instead of silently losing them.
💡 Codex Review (reviewed commit: 06b2d513a0)

Store the originating session ID alongside the active request ID so
that stopStreaming sends the cancel to the correct session, even if
the user navigated to a different session while streaming was active.
💡 Codex Review (reviewed commit: d93522227e)

If the user clicks stop while the session_reply POST is still in
flight, the cancel request could arrive first and no-op on the
server. Use an AbortController to abort the fetch so the reply never
starts, then still send sessionCancel as a belt-and-suspenders measure.
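A sketch of that stop path (function names are hypothetical stand-ins for the real client calls):

```typescript
// Abort the in-flight reply POST so the work never starts server-side,
// then still send an explicit cancel in case it already did.
const controller = new AbortController();
const cancelsSent: string[] = [];

function sessionCancel(requestId: string): void {
  // Stand-in for POST /sessions/{id}/cancel.
  cancelsSent.push(requestId);
}

function stopStreaming(requestId: string): void {
  controller.abort();        // the fetch for POST .../reply uses controller.signal
  sessionCancel(requestId);  // belt-and-suspenders if the reply already landed
}

stopStreaming("req-0042");
```

Aborting covers the window where the POST is still in flight; the cancel covers the window where it has already been accepted.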
💡 Codex Review (reviewed commit: 6a52e0ed6a)

The backend intentionally closes SSE streams in some cases (e.g.
lagged subscribers). Previously the hook exited silently on normal
stream completion, leaving the UI stuck in streaming state. Wrap
the stream in a retry loop with exponential backoff so that both
normal EOF and errors trigger automatic reconnection.
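A sketch of what such a retry loop might look like (constants and shape are illustrative, not the values in sessionSseClient.ts):

```typescript
// Delay before reconnect attempt n: doubles each time, capped at maxMs.
function backoffDelayMs(attempt: number, baseMs = 500, maxMs = 30_000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Both a normal end-of-stream and a thrown error fall through to the same
// reconnect delay, so the UI is never left stuck in streaming state.
async function streamWithRetry(
  openStream: () => AsyncIterable<string>,
  onEvent: (e: string) => void,
  maxAttempts = 5,
): Promise<void> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      for await (const event of openStream()) {
        attempt = 0; // a real event proves the connection works; reset backoff
        onEvent(event);
      }
    } catch {
      // errors fall through to the same backoff as normal EOF
    }
    await new Promise((r) => setTimeout(r, backoffDelayMs(attempt)));
  }
}
```

Resetting the attempt counter only on a received event (not on connection setup) matches the later fix where backoff resets only after a real SSE event.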
💡 Codex Review (reviewed commit: a1d1709f9c)

Track the last event ID via the onSseEvent callback and pass it as a
Last-Event-ID header when reconnecting. This lets the server replay
only the events missed since the last received ID, preventing
duplicate Message/Notification/Finish events after a reconnect.
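A sketch of the cursor bookkeeping (names hypothetical):

```typescript
// Track the id: field of the last SSE event seen; frames emitted without
// an id (e.g. heartbeat comments) must not move the cursor.
let lastEventId: string | null = null;

function onSseEvent(id?: string): void {
  if (id !== undefined) lastEventId = id;
}

function reconnectHeaders(): Record<string, string> {
  return lastEventId !== null ? { "Last-Event-ID": lastEventId } : {};
}

onSseEvent("41");
onSseEvent(undefined); // id-less frame: cursor stays at 41
```

On reconnect the client sends this header so the server replays only events with a sequence number above the cursor.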
💡 Codex Review (reviewed commit: 6868e073c9)

💡 Codex Review (reviewed commit: f52cdc7e9a)

Comment on lines +58 to +59:

// Send on broadcast channel (ignore error if no subscribers)
let _ = self.tx.send(session_event.clone());

P1 Badge: Stop replies when every session-events subscriber disconnects

With the new POST+SSE split, publish() now ignores the broadcast error when there are no subscribers, and the worker loop in crates/goose-server/src/routes/session_events.rs:458-460 never acts on that condition. If the browser never establishes /sessions/{id}/events, or the last SSE connection goes away permanently (tab close, session switch, app crash), the reply task will still run to completion with no client able to observe or cancel it, so model tokens and tool side effects can continue after the user has effectively disconnected.

jh-block (Collaborator, Author) replied:

This is a design decision: the server completes the turn and buffers the response until the client reconnects.

The generated SSE client retries internally forever unless
sseMaxRetryAttempts is set. Pass sseMaxRetryAttempts: 1 so fetch
failures immediately surface to our outer reconnect loop, which
tracks consecutive errors and fires synthetic Error events to
unblock the UI from streaming state.
💡 Codex Review (reviewed commit: 27a52e4bf7)

When a client reconnects with a Last-Event-ID that has been evicted
from the replay buffer, the server now returns an error event
("Client too far behind") instead of silently skipping missed events.
The client detects this, exits streaming state, and reloads the full
conversation from the server so the UI reflects the actual state.
💡 Codex Review (reviewed commit: 2688b30cc6)

…r screen

- Only reset backoff/consecutiveErrors after receiving a real SSE event
  (not after generator creation, since the HTTP request hasn't happened
  yet). Streams that end with no events are now treated as errors with
  proper backoff.
- Use getSession instead of updateFromSession to reload conversation
  after buffer overflow, since updateFromSession returns no body.
- Call onFinish() without an error string for the "too far behind" case
  so it doesn't trigger the blocking "Failed to Load Session" screen.
💡 Codex Review (reviewed commit: 6da2476416)

Server: add ActiveRequests event variant to MessageEvent. When a new
SSE subscriber connects, the server checks for active (in-flight)
requests on the session bus and sends an ActiveRequests event listing
their IDs after replay but before live events.

Client: useSessionEvents exposes an ActiveRequests handler callback.
useChatStream registers this handler to detect in-flight requests on
mount — when ActiveRequests arrives and no local request is active, it
bootstraps an event processor and listener for the request so the UI
resumes showing the streaming reply.
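A sketch of the client-side reattach logic (simplified; the real hook also wires up an event processor):

```typescript
// When ActiveRequests arrives and nothing local is streaming, attach to
// the in-flight request so the UI resumes rendering the reply.
let activeRequestId: string | null = null;
const bootstrapped: string[] = [];

function onActiveRequests(requestIds: string[]): void {
  if (activeRequestId !== null || requestIds.length === 0) return;
  activeRequestId = requestIds[0];
  bootstrapped.push(activeRequestId); // stand-in for registering a processor
}

onActiveRequests(["req-7"]); // cold mount: attach to the in-flight request
onActiveRequests(["req-8"]); // a request is already active locally: ignored
```

The guard on activeRequestId is what makes the handler safe to fire on every reconnect, not just the first mount.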
💡 Codex Review (reviewed commit: 418cb63ce0)

Move the ActiveRequests event before replayed events so the client
registers listeners before buffered Message/Finish frames arrive.
Emit it without an SSE id: field so it doesn't regress the client's
Last-Event-ID cursor on reconnect.
💡 Codex Review

dispatch({
  type: 'SESSION_LOADED',
  payload: {
    session: loadedSession!,
    messages: loadedSession?.conversation || [],

P1 Badge: Preserve replayed output when resumeAgent() resolves later

On a remount with an in-flight reply, useSessionEvents(sessionId) starts immediately and can replay buffered Message/Finish events before the resumeAgent() call in this effect returns. This SESSION_LOADED dispatch then replaces state.messages with the older loadedSession.conversation snapshot, so the replayed tail disappears permanently if the request finishes before any later live event re-sends it.

Terminal-event handlers in listener closures now check that
activeRequestIdRef still matches their own request ID before clearing
global refs. This prevents an older request's finish from clobbering
a newer request's listener and cancel handles when requests overlap.

onFinish no longer clears SSE-specific refs (that's now solely the
listener's responsibility), avoiding the same clobber path.

When resumeAgent() resolves after an ActiveRequests reattach has
already started processing events, only session metadata is loaded —
the stale DB message snapshot is skipped so it doesn't overwrite the
fresh streaming content from the event processor.
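The guard in the first paragraph can be sketched as a closure check (names hypothetical):

```typescript
// Each terminal-event handler captures its own request ID and only clears
// the shared ref if it is still the active one, so an older request's
// Finish cannot clobber a newer request's state.
let activeRequestIdRef: string | null = "req-new";

function makeOnTerminal(ownRequestId: string): () => void {
  return () => {
    if (activeRequestIdRef === ownRequestId) {
      activeRequestIdRef = null; // safe: we are still the active request
    }
  };
}

const staleFinish = makeOnTerminal("req-old");
const currentFinish = makeOnTerminal("req-new");

staleFinish(); // no-op: "req-old" is no longer the active request
```

Only the handler whose request ID still matches the ref may clear it; any stale handler falls through harmlessly.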
DOsinga (Collaborator) commented Mar 18, 2026

This is creating an entirely parallel set of endpoints (/sessions/{id}/events, /sessions/{id}/reply, /sessions/{id}/cancel) that duplicate all the agent execution logic from reply.rs: the same session loading, the same agent setup, the same event loop, the same telemetry, just publishing to a bus instead of writing to an SSE response.

I would change how things work in reply.rs directly. The /sessions/{id}/ namespace is confusing because session.rs already handles session CRUD (database-backed lifecycle), and these new endpoints aren't about sessions; they're about reply streams. Instead, reply.rs should expose something like:

  • POST /reply/stream — create a named stream (returns a stream ID, kicks off agent work, publishes to the bus)
  • GET /reply/stream/{id} — connect/reconnect to that stream (with Last-Event-ID replay)
  • POST /reply/stream/{id}/cancel — cancel it
  • GET /reply/streams — optionally, list active streams for a session

💡 Codex Review (reviewed commit: 9529590e07)

When ActiveRequests fires before resumeAgent returns (cold mount),
defer the event processor setup until session load completes so it
starts with the full conversation history instead of an empty list.

Also clear sessionLoadError when reattaching so a prior SSE error
screen doesn't persist after connectivity recovers.
💡 Codex Review (reviewed commit: db31581cf3)

Register a buffering listener immediately when ActiveRequests fires
on a cold mount so replayed SSE events aren't lost while waiting for
resumeAgent. doReattach replays the buffer through the real processor
once the session loads.

Also guard POST-failure cleanup with a requestId check so an older
request's error handler can't clobber a newer request's refs.
💡 Codex Review

return () => {
  cancelled = true;
};

P1 Badge: Reset request-tracking refs when switching sessions

When sessionId changes this cleanup only flips cancelled, so activeRequestIdRef, pendingReattachRequestIdRef, and pendingReattachBufferRef keep the previous session's state. If the user leaves session A while its reply is streaming or in cold-mount reattach and opens session B, ActiveRequests for B is ignored because activeRequestIdRef.current is still set, and resumeAgent for B can even feed A's buffered events through doReattach(), corrupting B's view with the wrong request.

Comment on lines 309 to +310
onFinish();
return;
}
case 'ModelChange': {
break;
}
case 'UpdateConversation': {
currentMessages = event.conversation;
if (!reduceMotion) {
dispatch({ type: 'SET_MESSAGES', payload: event.conversation });
} else {
hasPendingUpdate = true;
}
break;
onReloadNeeded();


P1 Badge Avoid replaying buffered chunks after overflow recovery

In the "too far behind" recovery path we reload session.conversation, but the SSE loop still reconnects and reattaches to the same active request with the overflow stream's reset cursor. The next /events response then replays buffered Message deltas into a processor initialized from the already-reloaded transcript, and pushMessage() appends same-id chunks, so a lagged reconnect can duplicate the tail of the assistant response instead of cleanly recovering.


Comment on lines +543 to +545
// Reattach to the first (most recent) active request.
// Multiple concurrent requests per session aren't supported in the UI.
const requestId = requestIds[0];


P2 Badge Do not treat ActiveRequests[0] as the newest reply

If two replies are in flight in one session, requestIds[0] is not actually the “most recent” request. SessionEventBus::active_request_ids() returns HashMap::keys() in arbitrary order, so after a remount this code can reattach to an older request and ignore the newer one, leaving the visible response and stop button bound to the wrong in-flight reply.


@jh-block jh-block added this pull request to the merge queue Mar 18, 2026
Merged via the queue into main with commit 8d72344 Mar 18, 2026
21 checks passed
@jh-block jh-block deleted the jhugo/fix-sse-reconnect-ui branch March 18, 2026 17:08
michaelneale added a commit that referenced this pull request Mar 19, 2026
* origin/main:
  fix(openai): use Responses API for gpt-5.4 (#7982)
  Remove lead/worker provider (#7989)
  chore(release): release version 1.28.0 (#7991)
  Fix empty tool results from resource content (e.g. auto visualiser) (#7866)
  Separate SSE streaming from POST work submission (#7834)
  fix: include token usage in Databricks streaming responses (#7959)
  Optimize tool summarization (#7938)
lifeizhou-ap added a commit that referenced this pull request Mar 20, 2026
* main: (22 commits)
  feat: add gemini-acp provider, update docs on subscription models + improvements to codex (#8000)
  fix(openai): use Responses API for gpt-5.4 (#7982)
  Remove lead/worker provider (#7989)
  chore(release): release version 1.28.0 (#7991)
  Fix empty tool results from resource content (e.g. auto visualiser) (#7866)
  Separate SSE streaming from POST work submission (#7834)
  fix: include token usage in Databricks streaming responses (#7959)
  Optimize tool summarization (#7938)
  fix: overwrite the deprecated googledrive extension config (#7974)
  refactor: remove unnecessary Arc<Mutex> from tool execution pipeline (#7979)
  Revert message flush & test (#7966)
  docs: add Remote Access section with Telegram Gateway documentation (#7955)
  fix: update webmcp blog post metadata image URL (#7967)
  fix: clean up OAuth token cache on provider deletion (#7908)
  fix: hard-coded tool call id in code mode callback (#7939)
  Fix SSE parsers to accept optional space after data: prefix (#7929)
  docs: add GOOSE_INPUT_LIMIT to config-files.md (#7961)
  Add WebMCP for Beginners blog post (#7957)
  Fix download manager (#7933)
  Improve the formatting of tool calls, show thinking, treat Reasoning and Thinking as the same thing (sorry Kant) (#7626)
  ...
elijahsgh pushed a commit to elijahsgh/goose that referenced this pull request Mar 21, 2026
Signed-off-by: esnyder <elijah.snyder1@gmail.com>
elijahsgh pushed a commit to elijahsgh/goose that referenced this pull request Mar 21, 2026
Signed-off-by: esnyder <elijah.snyder1@gmail.com>
