
feat(continuity): harden adaptive reply threading and subagent delivery continuity #40377

Closed

KimGLee wants to merge 156 commits into openclaw:main from KimGLee:feat/continuity-pipeline-hardening-r2

Conversation

KimGLee (Contributor) commented Mar 9, 2026

Summary

Implements continuity hardening proposed in #37188 across three coupled layers:

  1. Telegram reply continuity (adaptive quote-bubble behavior)
  2. Message-bound subagent completion routing (preserve reply target provenance)
  3. Long-run subagent observability and queue behavior guards

The goal is to reduce conversational ambiguity in burst chats while keeping single-turn replies clean, and to preserve deterministic delivery context when worker/subagent completion messages are emitted.

What changed

1) Adaptive Telegram reply threading for burst conversations

  • Added adaptive reply-mode resolution in src/telegram/bot-message.ts.
  • Burst keying for groups now uses chat scope (not per-sender), so alternating users in busy groups still participate in the same burst streak.
  • Added configurable adaptive behavior via channels.telegram.replyAdaptive:
    • Fixed-window mode (default, backward-compatible):
      • baseWindowMs (default 10s)
      • denseWindowMs (default 20s)
      • veryDenseWindowMs (default 25s)
      • denseShortMinCount (default 2)
      • veryDenseShortMinCount (default 4)
      • shortMessageMaxChars (default 48)
    • Scope configuration by chat type:
      • scope.private = sender (default)
      • scope.group = chat (default)
      • scope.supergroup = chat (default)
  • Added optional EMA learning mode (replyAdaptive.learning.enabled, default false); a sketch of the window computation follows this list:
    • Maintains per-burst-key emaGapMs + emaShortRatio
    • Computes dynamic base window:
      • base = clamp(emaGapMs * (1 + shortMessageWeight * emaShortRatio), baseMinMs, baseMaxMs)
    • Derives dense windows from multipliers:
      • dense = base * denseMultiplier
      • veryDense = base * veryDenseMultiplier
    • Defaults:
      • alphaGap=0.25, alphaShort=0.2, shortMessageWeight=0.8
      • baseMinMs=6000, baseMaxMs=30000
      • denseMultiplier=2.0, veryDenseMultiplier=2.5
  • Added/updated tests in src/telegram/bot-message.test.ts:
    • single turn => no forced threading
    • rapid consecutive turns => threading enabled
    • 10s gap resets burst
    • dense short-message expansion behavior
    • group cross-sender burst behavior
    • custom fixed-window overrides
    • EMA learning behavior
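
A minimal sketch of the learning-mode window computation, assembled from the formulas and defaults listed above. The LearningState shape, clamp helper, and updateLearnedWindows name are illustrative assumptions, not the actual code in src/telegram/bot-message.ts:

```ts
// Illustrative only: mirrors the EMA formulas above, not the exact shape of
// the implementation in src/telegram/bot-message.ts.
interface LearningState {
  emaGapMs: number; // smoothed gap between consecutive inbound messages
  emaShortRatio: number; // smoothed fraction of short messages
}

const clamp = (value: number, min: number, max: number) =>
  Math.min(max, Math.max(min, value));

function updateLearnedWindows(
  state: LearningState,
  gapMs: number,
  isShort: boolean,
  cfg = {
    alphaGap: 0.25,
    alphaShort: 0.2,
    shortMessageWeight: 0.8,
    baseMinMs: 6_000,
    baseMaxMs: 30_000,
    denseMultiplier: 2.0,
    veryDenseMultiplier: 2.5,
  },
) {
  // EMA updates on each inbound message.
  state.emaGapMs = cfg.alphaGap * gapMs + (1 - cfg.alphaGap) * state.emaGapMs;
  state.emaShortRatio =
    cfg.alphaShort * (isShort ? 1 : 0) + (1 - cfg.alphaShort) * state.emaShortRatio;

  // base = clamp(emaGapMs * (1 + shortMessageWeight * emaShortRatio), baseMinMs, baseMaxMs)
  const base = clamp(
    state.emaGapMs * (1 + cfg.shortMessageWeight * state.emaShortRatio),
    cfg.baseMinMs,
    cfg.baseMaxMs,
  );
  return {
    base,
    dense: base * cfg.denseMultiplier,
    veryDense: base * cfg.veryDenseMultiplier,
  };
}
```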

2) Subagent completion reply-target preservation

  • Threaded requester message-id across subagent announce flow so completion announcements can bind to the originating message when appropriate.
  • Propagated currentMessageId support through agent gateway request path used by announce delivery.
  • Preserved delivery provenance fields in announce calls (source session/channel/tool).
  • Kept broadcast/proactive path isolation semantics in queue/announce handling.

3) Long-run wait observability and queue-path hardening

  • Added long-run announce/wait signal tests and related queue assertions.
  • Restored the subagent default max-concurrent limit to 8 (the expected local default from before earlier experiments).

Files touched (high-level)

  • Telegram reply behavior:
    • src/telegram/bot-message.ts
    • src/telegram/bot-message.test.ts
    • src/config/types.telegram.ts
    • src/config/zod-schema.providers-core.ts
  • Subagent announce + routing:
    • src/agents/subagent-announce.ts
    • src/agents/subagent-announce-queue.ts
    • src/agents/subagent-announce.timeout.test.ts
    • src/agents/subagent-registry.ts
    • src/agents/subagent-registry.types.ts
    • src/agents/subagent-spawn.ts
    • src/agents/tools/sessions-spawn-tool.ts
    • src/agents/openclaw-tools.ts
  • Gateway/agent request plumbing:
    • src/commands/agent.ts
    • src/commands/agent/types.ts
    • src/gateway/protocol/schema/agent.ts
    • src/gateway/server-methods/agent.ts
  • Concurrency default + queue tests:
    • src/config/agent-limits.ts
    • src/process/command-queue.test.ts

Validation

Executed:

```
pnpm -s vitest run src/telegram/bot-message.test.ts src/telegram/bot/delivery.test.ts src/agents/subagent-announce.timeout.test.ts
```

Result:

  • 3 files passed
  • 44 tests passed

Notes

openclaw-barnacle Bot added labels channel: telegram, app: macos, app: web-ui, gateway, commands, agents, size: M on Mar 9, 2026
greptile-apps (Bot) commented Mar 9, 2026

Greptile Summary

This PR hardens three coupled delivery layers: adaptive Telegram reply threading via burst detection, currentMessageId propagation through the subagent announce/spawn path to bind completion announcements to their originating messages, and a refactored waitForSubagentCompletion that replaces a single long-lived agent.wait call with a 60-second polling loop for better observability of long-running subagent tasks.

Key changes and observations:

  • Telegram burst detection (src/telegram/bot-message.ts): Clean implementation with good test coverage. The adaptive window expansion based on short-message frequency is well-designed and properly tested.

  • waitForSubagentCompletion polling loop (src/agents/subagent-registry.ts): The loop correctly handles hard timeouts, progress logging at 15-minute intervals, and stall warnings at 20 minutes. However, the return on unexpected/null gateway responses permanently exits the perpetual poll loop — a broader blast radius than the old single-shot design, as one transient protocol anomaly can orphan a long-running run indefinitely.

  • buildAnnounceQueueKey isolation change (src/agents/subagent-announce.ts): Including requesterMessageId in the queue key moves isolation from per-session to per-session-per-message. The previous coalescing behavior is intentionally replaced with message-bound routing. The cron-job guard (announceType === "cron job") is type-safe thanks to the SubagentAnnounceType union. A sketch of the keying change follows this list.

  • Swift model updates: Both apps/macos and apps/shared GatewayModels.swift correctly add currentmessageid with the proper CodingKey mapping.

  • Test quality: All new tests are well-structured, use vi.useFakeTimers() appropriately for time-dependent scenarios, and cover the key edge cases.
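
A hypothetical sketch of the keying change described in the buildAnnounceQueueKey bullet above; the real function's signature and separator in src/agents/subagent-announce.ts are not shown on this page, so the details below are assumptions:

```ts
// Assumed shape: appending the requester message id (when present) narrows
// queue isolation from per-session to per-session-per-message.
function buildAnnounceQueueKey(
  requesterSessionId: string,
  requesterMessageId?: string,
): string {
  return requesterMessageId
    ? `${requesterSessionId}:${requesterMessageId}`
    : requesterSessionId;
}
```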

Confidence Score: 3/5

  • Mostly safe to merge with awareness of the polling loop's early-exit behavior on transient gateway failures.
  • The changes are well-tested (40 tests passing across 3 files) and the core logic is sound. The score is held at 3 primarily because the new while(true) polling loop in waitForSubagentCompletion exits permanently on unexpected/null gateway responses rather than retrying with backoff. For runs configured with no hard deadline, a single transient protocol anomaly will orphan the run indefinitely. This is a larger blast radius than the previous single-shot design. The Telegram burst logic and currentMessageId propagation are well-implemented and not of concern.
  • src/agents/subagent-registry.ts — specifically the waitForSubagentCompletion polling loop's handling of unexpected/null gateway responses at lines 1241-1243.

Last reviewed commit: 239293c

Comment thread src/agents/subagent-registry.ts Outdated
Comment on lines +1241 to +1243
```ts
      if (wait?.status !== "ok" && wait?.status !== "error") {
        return;
      }
```

**Unexpected status causes permanent exit in perpetual poll loop**

When callGateway returns null or an unexpected status string, the condition on line 1241 is true and the entire while(true) loop exits silently via return. For runs without a hard deadline (!hasHardTimeout), this permanently abandons monitoring — the subagent run will remain stuck in a pending state with no further completion signaling.

The previous single-shot implementation had the same early-exit behavior, but its impact was bounded. In the new perpetual polling paradigm, a single transient protocol anomaly (e.g., a partial gateway response) permanently stalls the run.

Consider using continue (with a brief backoff) instead of return for this defensive guard, so transient unexpected responses don't terminate long-running waits:

```ts
      if (wait?.status !== "ok" && wait?.status !== "error") {
        // Transient or unknown response — don't permanently abandon the run.
        // Log and retry on next poll cycle instead of exiting.
        defaultRuntime.log(`[warn] Unexpected agent.wait status run=${runId} status=${wait?.status}`);
        await new Promise((resolve) => setTimeout(resolve, 5_000));
        continue;
      }
```

chatgpt-codex-connector (Bot) left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 239293c93f

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread src/telegram/bot-message.ts Outdated
```ts
function buildReplyBurstKey(ctx: TelegramContext): string {
  const msg = ctx.message;
  const chatId = String(msg.chat.id);
  const senderId = msg.from?.id != null ? String(msg.from.id) : "unknown";
```

P2: Detect Telegram bursts at chat scope, not per sender

Including msg.from.id in the burst-state key means each participant in a group chat gets an independent streak, so alternating rapid messages from different users never reach streak >= 2 and adaptive threading stays off. In busy group conversations this removes reply bubbles exactly when disambiguation is most needed, even though the chat itself is in a burst.


KimGLee (Contributor, Author) replied:

Good catch — fixed in 1b82a0f49.

I changed burst-key scoping so group/supergroup chats are tracked at chat scope (not per-sender), while private chats still keep sender-specific scope. This allows alternating rapid messages from different participants to build the same burst streak and enables adaptive reply threading as intended.

Also added a regression test in src/telegram/bot-message.test.ts that simulates two rapid group messages from different sender IDs and verifies reply mode flips from off to first on the second message.
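
A hypothetical sketch of the scoping described above (private chats keyed per sender, group/supergroup chats keyed per chat); the actual buildReplyBurstKey reads these values from a TelegramContext and may differ:

```ts
// Illustrative scoping rule, not the verified implementation.
function buildBurstKeyForScope(
  chatType: "private" | "group" | "supergroup",
  chatId: string,
  senderId: string,
): string {
  // Chat-scoped keys let alternating senders extend the same burst streak.
  return chatType === "private" ? `${chatId}:${senderId}` : chatId;
}
```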

KimGLee (Contributor, Author) commented Mar 9, 2026

Implemented in follow-up commit f507e2ecd: configurable adaptive burst windows + optional EMA learning.

What changed

  • Added channels.telegram.replyAdaptive config surface (typed + schema validated).
  • Adaptive burst logic now supports:
    • Fixed windows (default behavior, backward-compatible)
    • EMA learning windows (opt-in)
  • Added tests for:
    • custom fixed windows behavior
    • EMA learning expanding burst eligibility
    • existing compatibility behavior retained

New config (high level)

  • replyAdaptive.enabled (default true)
  • Fixed-window params:
    • baseWindowMs (default 10000)
    • denseWindowMs (default 20000)
    • veryDenseWindowMs (default 25000)
    • denseShortMinCount (default 2)
    • veryDenseShortMinCount (default 4)
    • shortMessageMaxChars (default 48)
    • scope by chat type (private/group/supergroup: sender|chat)
  • Learning params (replyAdaptive.learning.*, default disabled):
    • enabled (default false)
    • alphaGap (default 0.25)
    • alphaShort (default 0.2)
    • shortMessageWeight (default 0.8)
    • baseMinMs / baseMaxMs (default 6000 / 30000)
    • denseMultiplier / veryDenseMultiplier (default 2.0 / 2.5)
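
For illustration, the full surface as a TypeScript object literal using the documented defaults; the field names follow this description and should be checked against src/config/types.telegram.ts before use:

```ts
// Hypothetical channels.telegram.replyAdaptive value with the documented
// defaults; verify field names against the typed schema.
const replyAdaptive = {
  enabled: true,
  baseWindowMs: 10_000,
  denseWindowMs: 20_000,
  veryDenseWindowMs: 25_000,
  denseShortMinCount: 2,
  veryDenseShortMinCount: 4,
  shortMessageMaxChars: 48,
  scope: { private: "sender", group: "chat", supergroup: "chat" },
  learning: {
    enabled: false,
    alphaGap: 0.25,
    alphaShort: 0.2,
    shortMessageWeight: 0.8,
    baseMinMs: 6_000,
    baseMaxMs: 30_000,
    denseMultiplier: 2.0,
    veryDenseMultiplier: 2.5,
  },
};
```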

EMA window logic (when learning enabled)

Per burst-key state keeps emaGapMs and emaShortRatio.
On each inbound message:

  1. Update EMAs:
    • emaGapMs = alphaGap * gap + (1-alphaGap) * prevEmaGapMs
    • emaShortRatio = alphaShort * isShort + (1-alphaShort) * prevEmaShortRatio
  2. Compute learned base window:
    • base = clamp( emaGapMs * (1 + shortMessageWeight * emaShortRatio), baseMinMs, baseMaxMs )
  3. Compute dense/very-dense from multipliers.
  4. Select active burst window by short-message count thresholds (same selection model as before).
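
Worked example with the defaults: if the previous state is emaGapMs=8000 and emaShortRatio=0.375, and a short message arrives 4000ms after the last one, then emaGapMs = 0.25*4000 + 0.75*8000 = 7000 and emaShortRatio = 0.2*1 + 0.8*0.375 = 0.5, so base = clamp(7000 * (1 + 0.8*0.5), 6000, 30000) = 9800ms, dense = 19600ms, and veryDense = 24500ms.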

Default remains fixed-window mode unless learning.enabled=true, so this is safe to roll out incrementally.

chatgpt-codex-connector (Bot) left a comment

💡 Codex Review

Reviewed commit: f507e2ecdc

Comment thread src/telegram/bot-message.ts Outdated
Comment on lines +163 to +165
```ts
  if (params.configuredMode === "off" || !params.adaptiveConfig.enabled) {
    return "off";
  }
```

P2: Preserve configured reply mode when adaptive logic is disabled

When replyAdaptive.enabled is set to false, this branch forces "off" and bypasses the configured replyToMode ("first"/"all"). That makes it impossible to disable burst adaptation without also disabling reply threading entirely, which is a behavior regression from the pre-adaptive path and contradicts the config’s role as an adaptive toggle.


KimGLee (Contributor, Author) replied:

Great catch — fixed in f3be474c6.

replyAdaptive.enabled=false now preserves configured replyToMode (first/all) instead of forcing off.

I also added a regression test (preserves configured reply mode when adaptive logic is disabled) in src/telegram/bot-message.test.ts to lock this behavior.
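
A minimal sketch of the fixed branch, assuming the resolver shape shown in the review snippet above; the real resolver in src/telegram/bot-message.ts takes more inputs than shown here:

```ts
type ReplyMode = "off" | "first" | "all";

// Illustrative fix: disabling adaptation falls back to the configured
// replyToMode instead of forcing "off".
function resolveBaseReplyMode(params: {
  configuredMode: ReplyMode;
  adaptiveConfig: { enabled: boolean };
}): ReplyMode {
  if (params.configuredMode === "off") {
    return "off";
  }
  if (!params.adaptiveConfig.enabled) {
    return params.configuredMode; // preserve "first"/"all"
  }
  // ...adaptive burst resolution continues here in the real implementation.
  return params.configuredMode;
}
```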

chatgpt-codex-connector (Bot) left a comment

💡 Codex Review

Reviewed commit: f3be474c66

Comment thread src/telegram/bot-message.ts Outdated
```ts
  sweepExpiredReplyBurstState({
    now,
    burstState: params.burstState,
    ttlMs: Math.max(veryDenseWindowMs, params.adaptiveConfig.veryDenseWindowMs),
```

P2: Keep adaptive burst TTL independent of current chat window

When EMA learning is enabled, ttlMs is derived from the current message’s veryDenseWindowMs, so a fast-paced chat can trigger sweepExpiredReplyBurstState with a small TTL and evict state for other chats that previously learned a much larger window. In practice, a chat that should still be in-burst (e.g., learned 60–75s window) can lose its streak after ~25s as soon as another chat receives traffic, causing reply threading to flip back to off unexpectedly.


KimGLee (Contributor, Author) replied:

Confirmed; fixed in d3a9441ca.

I changed burst-state sweeping to use per-key TTL (state.ttlMs) instead of a single global TTL from the current message context. This prevents traffic in one chat from evicting learned state in another chat that has a larger learned window.

Also added a regression test (does not evict another chat using a larger learned ttl) in src/telegram/bot-message.test.ts.
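
A minimal sketch of the per-key TTL sweep, assuming burst state is held in a Map with a lastAt timestamp and a learned ttlMs per entry; the actual state shape may differ:

```ts
// Illustrative per-key sweep: each entry expires on its own learned TTL, so a
// fast-paced chat can no longer evict another chat's larger learned window.
function sweepExpiredReplyBurstState(
  now: number,
  burstState: Map<string, { lastAt: number; ttlMs: number }>,
) {
  for (const [key, state] of burstState) {
    if (now - state.lastAt > state.ttlMs) {
      burstState.delete(key);
    }
  }
}
```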

chatgpt-codex-connector (Bot) left a comment

💡 Codex Review

Reviewed commit: c7f32ff1f4

Comment thread src/telegram/bot-message.ts Outdated
Comment on lines +177 to +179
```ts
  const recentShortMessageAt = (previous?.recentShortMessageAt ?? []).filter(
    (ts) => now - ts <= params.adaptiveConfig.veryDenseWindowMs,
  );
```

P2: Use learned burst window for short-message history

When EMA learning is enabled, this pruning still uses the fixed adaptiveConfig.veryDenseWindowMs instead of the learned veryDenseWindowMs, so short-message history is truncated too aggressively once the learned window grows. In chats where learned windows expand (for example with higher baseMaxMs and multipliers), this prevents shortCount from reaching dense/very-dense thresholds and can flip adaptive reply threading back to off even though the learned burst window should still treat the conversation as active.


chatgpt-codex-connector (Bot) left a comment

💡 Codex Review

Reviewed commit: f68d7a7196

Comment thread src/agents/subagent-announce.ts Outdated
```diff
@@ -607,6 +607,7 @@ async function sendAnnounce(item: AnnounceQueueItem) {
       threadId: requesterIsSubagent ? undefined : threadId,
       deliver: !requesterIsSubagent,
       internalEvents: item.internalEvents,
+      currentMessageId: item.requesterMessageId,
```

P2: Drop reply binding when batching queued announces

sendAnnounce now always forwards currentMessageId from the queued item, but in collect mode scheduleAnnounceDrain (in src/agents/subagent-announce-queue.ts) collapses multiple queued announces into one prompt by sending ...last, so mixed batches inherit only the last item’s requesterMessageId. When the same requester session accumulates queued completions from different turns before drain, the synthesized multi-item completion message is forced to reply to the wrong inbound message, which breaks the provenance this change is trying to preserve; this path should either partition by message id or clear currentMessageId for aggregated sends.


chatgpt-codex-connector (Bot) left a comment

💡 Codex Review

Reviewed commit: f79ef73fa6

Comment thread src/agents/subagent-announce-queue.ts Outdated
Comment on lines +162 to +164
```ts
      items
        .map((item) => item.requesterMessageId?.trim())
        .filter((value): value is string => Boolean(value)),
```

P2: Clear batch reply target when any item is unbound

In collect mode, this set is built only from truthy requesterMessageId values, so items without an ID are ignored. When a batch mixes a user-triggered announce (msg-...) with a cron/unbound announce (undefined), requesterMessageIds.size is still 1 and the merged prompt is sent with that message ID, incorrectly replying unrelated content to the user’s inbound message. The aggregate binding should only be kept when every queued item has the same non-empty ID.


chatgpt-codex-connector (Bot) left a comment

💡 Codex Review

Reviewed commit: 8f11505ba4

Comment thread src/agents/subagent-registry.ts Outdated
Comment on lines +1285 to +1289
```ts
      if (wait?.status !== "ok" && wait?.status !== "error") {
        defaultRuntime.log(
          `[warn] Unexpected agent.wait status run=${runId} status=${wait?.status}`,
        );
        await new Promise((resolve) => setTimeout(resolve, 5_000));
```

P2: Stop polling after run removal on unknown wait statuses

When agent.wait returns a non-ok/error status, this branch always sleeps and retries without checking whether the run is still present in subagentRuns. If the run was already completed/removed by another path (for example the lifecycle listener) and runTimeoutSeconds=0 (no hard deadline), the loop can continue indefinitely and emit warning logs every 5s. Add the same subagentRuns.has(runId) guard here before retrying so orphaned wait loops terminate.
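
A hedged sketch combining the earlier backoff suggestion with the guard this comment asks for; it follows the same fragment shape as the snippet above, and subagentRuns is assumed to be the registry map referenced in this thread:

```ts
      // Illustrative guard: retry on unknown statuses, but stop once the run
      // has been finalized elsewhere (e.g., by the lifecycle listener).
      if (wait?.status !== "ok" && wait?.status !== "error") {
        if (!subagentRuns.has(runId)) {
          return; // run already completed/removed; end the orphaned wait loop
        }
        defaultRuntime.log(`[warn] Unexpected agent.wait status run=${runId} status=${wait?.status}`);
        await new Promise((resolve) => setTimeout(resolve, 5_000));
        continue;
      }
```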


openclaw-barnacle Bot added the cli label on Mar 11, 2026
chatgpt-codex-connector (Bot) left a comment

💡 Codex Review

Reviewed commit: d6fc860b29

Comment thread src/agents/subagent-registry.ts Outdated
```ts
    timeoutMs: timeoutMs + 10_000,
  });

  if (wait?.status === "timeout") {
```

P1: Treat terminal timeout status as completed run

This branch now unconditionally treats wait.status === "timeout" as a poll timeout and loops, but agent.wait also uses "timeout" as a terminal status for aborted/timed-out runs (see gateway lifecycle snapshot handling), so timed-out subagents no longer reach completeSubagentRun. In practice this can leave completion cleanup/announcements stuck, and with runTimeoutSeconds=0 (no hard deadline) the loop can persist indefinitely instead of finalizing the run.


Comment thread src/config/types.gateway.ts Outdated
Comment on lines +189 to +190
```ts
  /** Whether remote gateway mode is enabled. */
  enabled?: boolean;
```

P2: Keep gateway remote config type and validator aligned

This adds gateway.remote.enabled to the TypeScript config type, but the strict runtime schema for gateway.remote still does not accept enabled, so using this newly-declared option will fail config validation as an unknown key. That makes the advertised field unusable at runtime and introduces a type/runtime contract mismatch.
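
A minimal sketch of the alignment fix, assuming the strict runtime schema is built with zod as elsewhere in src/config; the schema object name below is hypothetical:

```ts
import { z } from "zod";

// Hypothetical alignment: accept the newly-declared field in the strict
// runtime schema so it no longer fails validation as an unknown key.
const gatewayRemoteSchema = z
  .object({
    enabled: z.boolean().optional(), // mirrors gateway.remote.enabled in the TS type
    // ...existing remote fields elided
  })
  .strict();
```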


chatgpt-codex-connector (Bot) left a comment

💡 Codex Review

Reviewed commit: ebbf5da2d0

Comment on lines 5 to 7
```yaml
    types: [opened, edited, labeled]
  issue_comment:
    types: [created]
```

P1: Restore PR label trigger for auto-response workflow

Removing the pull_request_target labeled trigger means this workflow now runs only on issues and issue_comment, so applying PR labels like dirty, invalid, or r: too-many-prs no longer invokes the moderation paths in this script (which are gated on labeled events). In practice, maintainers can add those labels and nothing happens, so the PR auto-close/comment behavior silently stops.


chatgpt-codex-connector (Bot) left a comment

💡 Codex Review

Reviewed commit: ee4c22006e

Comment on lines +161 to +167
```ts
  const requesterMessageIds = items.map((item) => item.requesterMessageId?.trim());
  const hasUnboundItem = requesterMessageIds.some((value) => !value);
  const nonEmptyRequesterMessageIds = new Set(
    requesterMessageIds.filter((value): value is string => Boolean(value)),
  );
  const aggregateRequesterMessageId =
    !hasUnboundItem && nonEmptyRequesterMessageIds.size === 1
```

P2: Clear reply binding when collect summary includes dropped items

In collect mode, aggregateRequesterMessageId is derived only from items, but the final prompt can also include overflowed content via summary (from droppedCount/summaryLines). When the queue cap is exceeded and dropped entries came from a different or unbound requester message, this code can still keep a single retained requesterMessageId and send a mixed-summary batch as a reply to the wrong inbound message. This misthreads unrelated completion content in high-backlog sessions; the binding should be cleared whenever summarized dropped entries are present unless their message IDs are also proven identical.


openclaw-barnacle Bot removed the cli label on Mar 11, 2026
chatgpt-codex-connector (Bot) left a comment

💡 Codex Review

Reviewed commit: 414e00bf2b

Comment thread src/agents/subagent-announce.ts Outdated
```diff
@@ -607,6 +607,7 @@ async function sendAnnounce(item: AnnounceQueueItem) {
       threadId: requesterIsSubagent ? undefined : threadId,
       deliver: !requesterIsSubagent,
       internalEvents: item.internalEvents,
+      currentMessageId: item.requesterMessageId,
```

P2: Drop reply binding for non-collect overflow summaries

sendAnnounce now always forwards item.requesterMessageId as currentMessageId, but non-collect queue modes still emit overflow summaries by reusing one queued item with a synthesized summary prompt. If the dropped items came from a different or unbound requester message, that synthesized summary inherits the surviving item's message ID and is posted as a reply to the wrong inbound message, which misthreads unrelated completion content under backlog conditions (for example mode: "followup" with dropPolicy: "summarize").


openclaw-barnacle Bot added and then removed the cli label on Mar 13, 2026
KimGLee added 29 commits March 29, 2026 19:38
KimGLee (Contributor, Author) commented Mar 31, 2026

Closing this PR: the proposed continuity hardening no longer aligns with the project's current mainline direction, so it is not a necessary update at this point. If this area becomes a priority again, it should be revisited against the latest upstream baseline rather than continued from this branch.
