chore: batch update main from gastown-staging#2130
Merged
…tion, and bug fixes

- Add Workers AI (Gemma 4 26B) to classify unresolved PR review threads as blocking vs non-blocking for auto-merge decisions. Informational comments (LGTM, bot status reports) no longer block auto-merge.
- Fix mergePR to try squash/merge/rebase in order instead of hardcoding the merge method (repos with a squash-only policy were failing with 405).
- Fix resetAgent to also zero dispatch_attempts so agents recover immediately after container evictions instead of being stuck in exponential backoff.
- Fix code_review=false bypass: fast-track ALL open MR beads (not just those with pr_url) to prevent the refinery from being dispatched for code review when code_review is disabled.
- Fix cross-tick race in pr_feedback_detected: re-verify the PR is still open before creating feedback beads, to prevent duplicate PRs on merged branches.
- Add AI binding to wrangler.jsonc for both production and dev environments.
- Add diagnostic logging for the poll_pr auto-merge flow (allGreen, readySince, elapsed/delay, convoy dispatch target branch).
- Update local-debug-testing.md with Workers AI documentation.
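The mergePR fallback described above can be sketched as follows. This is a minimal sketch, not the actual implementation: `tryMerge` is a hypothetical stand-in for the GitHub "merge a pull request" API call, which returns HTTP 405 when the requested merge method is disallowed by repository policy.

```typescript
// Sketch: try each merge method in order and stop at the first one the
// repo's merge policy allows. `tryMerge` stands in for the GitHub API
// call (PUT .../pulls/:n/merge with a merge_method field) and resolves
// to the HTTP status code of the attempt.
type MergeMethod = "squash" | "merge" | "rebase";

async function mergePRWithFallback(
  tryMerge: (method: MergeMethod) => Promise<number>,
): Promise<MergeMethod> {
  const methods: MergeMethod[] = ["squash", "merge", "rebase"];
  for (const method of methods) {
    const status = await tryMerge(method);
    if (status === 200) return method; // merged successfully
    if (status !== 405) throw new Error(`merge failed with HTTP ${status}`);
    // 405 means this method is disallowed by repo policy; try the next one.
  }
  throw new Error("no merge method allowed by repository policy");
}
```

With this ordering, a squash-only repo succeeds on the first attempt, while a repo that only allows merge commits falls through 405 on squash and then merges.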
Moves PR status checking and PR feedback analysis from TownDO into a new town-scm submodule. Instruments the Workers AI call for checking review threads with a new api.external_request analytics event.
* fix(gastown): filter closed beads from re-escalation query

  Add a status != 'closed' filter to the reEscalateStaleEscalations query to prevent phantom Re-Escalation messages for already-acknowledged (and thus closed) escalation beads. Fixes #2123

* fix(gastown): exclude failed beads from re-escalation query

  Exclude beads with status='failed' in addition to 'closed' from the re-escalation query, preventing phantom Re-Escalation messages for already-failed escalation beads.

Co-authored-by: John Fawcett <john@kilcoode.ai>
* feat(gastown): isolate agent SQLite DB via KILO_TEST_HOME (#2094)

  Adds the KILO_TEST_HOME env var to buildAgentEnv() to ensure @kilocode/sdk isolates the kilo.db file per agent instead of sharing the container's default path.

  Co-authored-by: Shadow-polecat-d1e0e21b@5f5fda7f
  Co-authored-by: John Fawcett <john@kilcoode.ai>

* feat(gastown): add AGENT_DB_SNAPSHOTS_KV binding for agent DB snapshots (#2096)

  - Add KV namespace binding to wrangler.jsonc
  - Add AGENT_DB_SNAPSHOTS_KV: KVNamespace to the Env interface in worker-configuration.d.ts (both DevEnv and Env)

* fix(gastown): set AGENT_DB_SNAPSHOTS_KV id to empty string

  The placeholder id '<your-kv-namespace-id>' would break deployments. Set to empty string to allow wrangler to provision the namespace.

  Co-authored-by: John Fawcett <john@kilcoode.ai>

* feat(gastown): add process registry RPC to TownContainerDO (#2098)

  Co-authored-by: John Fawcett <john@kilcoode.ai>

* feat(gastown): add container-registry and db-snapshot worker endpoints

  Add GET/POST /api/towns/:townId/container-registry proxied to TownContainerDO.getRegistry() and .updateRegistry(). Add GET/POST /api/towns/:townId/rigs/:rigId/agents/:agentId/db-snapshot backed by AGENT_DB_SNAPSHOTS_KV. BEAD=fca2ee4e

* feat(gastown): pass GASTOWN_TOWN_ID to container on provision (#2099)

  Ensures the container knows its town identity on cold boot by reading process.env.GASTOWN_TOWN_ID.
Co-authored-by: John Fawcett <john@kilcoode.ai>

* feat(gastown): hydrate DB on startAgent and resume agents on boot (#2103)

  - Add hydrateDbFromSnapshot() that fetches the agent's DB from KV and writes it to /tmp/agent-home-<agentId>/.local/share/kilo/kilo.db
  - Call hydrateDbFromSnapshot() in startAgent() before ensureSDKServer()
  - Add saveDbSnapshot() that reads kilo.db and POSTs it to the worker KV
  - Call saveDbSnapshot() on stopAgent(), exitAgent(), drainAll() stragglers, and stopAll()
  - Add bootHydration() that fetches the container registry and resumes all registered agents
  - Call bootHydration() from main.ts after control server startup

* fix(gastown): move container-registry and db-snapshot routes before kiloAuthMiddleware

  Container-token requests to /container-registry and /db-snapshot were returning 401 Unauthorized because these routes were registered after the kiloAuthMiddleware wildcard. Moved them before that middleware so they are protected by authMiddleware instead, which accepts container JWTs. Fixes PR #2103 review comments from kilo-code-bot.

* fix(gastown): use authMiddleware for container-registry and db-snapshot routes

  Co-authored-by: John Fawcett <john@kilcoode.ai>

* fix: skip kiloAuthMiddleware for container-registry routes; add placeholder KV namespace id

  - Container-registry routes use authMiddleware, which accepts container JWTs, but the global /api/towns/:townId/* middleware was also applying kiloAuthMiddleware, which rejected container tokens. Now the global middleware skips container-registry.
  - Changed the AGENT_DB_SNAPSHOTS_KV namespace id from empty string to 'placeholder' to indicate it requires configuration at deploy time.

  Co-authored-by: John Fawcett <john@kilcoode.ai>
Code Review Summary

Status: 2 Issues Found | Recommendation: Address before merge

No new issues found on the changed lines in this incremental review. Other observations: issues were found in unchanged code that cannot receive inline comments.

Files Reviewed (1 file)
Reviewed by gpt-5.4-2026-03-05 · 210,273 tokens
…ing docs

- Replace placeholder KV namespace ID with real ID (5ffb8f362e7b4d869fe2f48293a9f0c2) in both the top-level and dev env wrangler config
- Add kv_namespaces to the dev env (wrangler doesn't inherit from top-level when env-specific bindings are declared)
- Add Section 8: KV-backed agent session persistence testing guide (container registry, db snapshots, boot hydration, drain snapshots)
- Add Section 9: Re-escalation filtering verification guide
…umping SDK
- Set XDG_DATA_HOME in buildAgentEnv() so the kilo CLI writes kilo.db
to the same path that saveDbSnapshot/hydrateDbFromSnapshot use
(/tmp/agent-home-{agentId}/.local/share/kilo/kilo.db). Previously
KILO_TEST_HOME was set but only affects Global.Path.home, not the
XDG data directory where kilo.db lives.
- Bump @kilocode/sdk and @kilocode/plugin from 7.0.37 to 7.1.23
- Bump container plugin SDK deps from ^1.0.23 to 7.1.23
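The path alignment described above can be sketched as a pure helper. This is illustrative only: the function names and the exact env shape are assumptions based on the commit message, not the real buildAgentEnv signature.

```typescript
// Sketch: derive the agent's env so the kilo CLI and the snapshot code
// agree on where kilo.db lives. Under the XDG base directory convention
// the CLI writes to $XDG_DATA_HOME/kilo/kilo.db, so pointing
// XDG_DATA_HOME at <home>/.local/share lines up with the snapshot path.
function buildAgentEnv(agentId: string): Record<string, string> {
  const home = `/tmp/agent-home-${agentId}`;
  return {
    KILO_TEST_HOME: home,                  // isolates Global.Path.home only
    XDG_DATA_HOME: `${home}/.local/share`, // isolates the XDG data dir too
  };
}

// The path that saveDbSnapshot/hydrateDbFromSnapshot read and write:
function kiloDbPath(agentId: string): string {
  return `/tmp/agent-home-${agentId}/.local/share/kilo/kilo.db`;
}
```

The bug was that KILO_TEST_HOME alone left XDG_DATA_HOME at its default, so the CLI and the snapshot code disagreed about where kilo.db lived.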
…orten drain idle timers
Three bugs fixed:
1. Container registry never written — agents weren't persisted to the
registry on start/stop, so bootHydration always found an empty
registry after container eviction. Added syncRegistry() that
serializes the running agents Map to the TownContainerDO via POST
/container-registry, called on agent start, exit, stop, and failure.
Added startupRequest field to ManagedAgent to preserve the original
StartAgentRequest for registry serialization.
2. Refinery dispatched despite code_review=false — Rules 5-6 in the
reconciler were wrapped in a bare block { } instead of
if (refineryCodeReview) { }. The fast-track code appended
transition_bead actions but hadn't mutated the DB yet, so Rules 5-6
re-queried and still saw MR beads as open, dispatching the refinery.
Changed the bare block to a proper if guard.
3. Drain waits 120s-600s for already-idle agents — agents that received
session.idle before drain started had long idle timers pending.
drainAll() now replaces those with 10s timers in Phase 1b by storing
the onExit callback alongside each idle timer.
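A minimal sketch of the registry sync from fix 1 above. The ManagedAgent and StartAgentRequest shapes and the injected `post` callback are assumptions for illustration; only the overall flow (serialize the running-agents Map, POST it on start/exit/stop/failure) comes from the description.

```typescript
// Sketch of syncRegistry(): serialize the running-agents Map into the
// payload POSTed to the TownContainerDO's /container-registry endpoint.
// Field names here are illustrative, not the real interfaces.
interface StartAgentRequest { agentId: string; rigId: string; model?: string }
interface ManagedAgent { startupRequest: StartAgentRequest; pid: number }

function serializeRegistry(agents: Map<string, ManagedAgent>): string {
  // Persist only what is needed to restart an agent after eviction:
  // the original StartAgentRequest, keyed by agent id.
  const entries = [...agents.entries()].map(([id, a]) => [id, a.startupRequest]);
  return JSON.stringify(Object.fromEntries(entries));
}

async function syncRegistry(
  agents: Map<string, ManagedAgent>,
  post: (body: string) => Promise<void>, // e.g. fetch POST /container-registry
): Promise<void> {
  // Called on agent start, exit, stop, and failure so bootHydration on
  // the next container sees a current registry instead of an empty one.
  await post(serializeRegistry(agents));
}
```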
…eset, conversation history

- Fix db-snapshot routes hitting kiloAuthMiddleware: add '/db-snapshot' to the skip condition alongside '/container-registry' so container JWT auth works correctly for snapshot endpoints.
- Clear the container registry at the end of drainAll() so bootHydration on the next container doesn't resurrect force-saved agents.
- Reset bead dispatch_attempts and last_dispatch_attempt_at in resetAgent so the reconciler doesn't skip the bead due to accumulated cooldown.
- Remove legacy conversation history injection from mayor dispatch paths (sendMayorMessage, _ensureMayor, updateMayorModel) — kilo.db persistence now handles session continuity across evictions.
…essions

Two fixes for agent session persistence across container evictions:

1. WAL checkpoint before snapshot: SQLite in WAL mode stores recent writes in -wal/-shm files. saveDbSnapshot now runs PRAGMA wal_checkpoint(TRUNCATE) via a bun:sqlite subprocess to merge the WAL into the main .db file before reading it. Without this, the snapshot was missing recent session data (messages, parts, etc.).
2. Resume existing sessions: startAgent now calls session.list() after hydrating the DB and starting the SDK server. If sessions exist from the hydrated kilo.db, it resumes the most recently updated one instead of always creating a new session via session.create().
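The resume decision in the second fix can be sketched as a pure selection function. The Session shape and the idea that session.list() returns such records are assumptions for illustration; the selection rule (most recently updated, else create fresh) is from the description.

```typescript
// Sketch: after hydrating kilo.db, pick the most recently updated
// session if any exist; a null result means the caller should fall
// back to session.create(). Field names are illustrative.
interface Session { id: string; updatedAt: number }

function pickSessionToResume(sessions: Session[]): Session | null {
  if (sessions.length === 0) return null; // nothing hydrated: create fresh
  // Resume the most recently updated session from the hydrated DB.
  return sessions.reduce((a, b) => (b.updatedAt > a.updatedAt ? b : a));
}
```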
Resolved 4 conflicts:

- docs/local-debug-testing.md: kept our new sections 8-9
- Town.do.ts: removed duplicate SCM methods that were already extracted to town-scm.ts (resolveGitHubToken, checkPRStatus, checkPRFeedback, areThreadsBlocking, mergePR)
- worker-configuration.d.ts: kept the AGENT_DB_SNAPSHOTS_KV binding
- wrangler.jsonc: kept kv_namespaces in both the top-level and dev env
Non-mayor agents (polecats, refineries, triage) always get fresh sessions since they work on a new bead each dispatch. The session resume logic was applying to all agents, causing polecats to inherit stale sessions from the hydrated kilo.db instead of starting clean.
When a town is in idle alarm cadence (5 min interval), creating beads or convoys via the mayor didn't wake it up. armAlarmIfNeeded() only sets an alarm if none is scheduled, so it's a no-op when the idle alarm is already set minutes in the future. Added escalateToActiveCadence(), which unconditionally reschedules the alarm to fire in 5s, and called it from the work-creation paths:

- slingBead (mayor creates a bead)
- slingConvoy (mayor creates a convoy, non-staged)
- startConvoy (staged convoy transitions to active)
- submitToReviewQueue (polecat submits work, creates MR bead)
- requestChanges (creates rework bead)

Lifecycle paths (initialize, configureRig, heartbeat, agentDone, agentCompleted, sendMayorMessage) keep using armAlarmIfNeeded() since they shouldn't clobber an active alarm schedule.
…n code_review=false

When code_review=false, the fast-track blindly moved ALL open MR beads to in_progress, including convoy review-and-merge beads that need the refinery. Rules 5-6 were then gated behind refineryCodeReview, so the refinery was never dispatched for these beads — they got stuck in in_progress with no assignee. Two changes:

1. The fast-track now excludes MR beads belonging to a review-and-merge convoy (checked via parent_bead_id → convoy_metadata.merge_mode).
2. The Rules 5-6 block is no longer gated behind refineryCodeReview — it runs unconditionally but naturally only finds open MR beads, which after the fast-track are only convoy review-and-merge beads when code_review=false.
…arvation, refinery filter

1. updateAgentModel now resumes existing mayor sessions (via session.list) instead of always creating new ones, matching the startAgent fix. Model swaps no longer lose conversation history.
2. escalateToActiveCadence now only shortens alarms — it checks whether the current alarm is already nearer than ACTIVE_ALARM_INTERVAL before overwriting. Prevents reconciler starvation during bursts of work creation.
3. The Rules 5-6 refinery dispatch now applies a convoy-only filter when code_review=false. Because reconciliation emits actions without mutating SQL, the fast-track's transition_bead actions haven't been applied yet, so ordinary PR beads are still 'open' in the DB. Without the filter, Rules 5-6 would re-dispatch the refinery for beads that should be skipped.
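The "only shorten, never push back" alarm rule can be sketched as a pure function over epoch-millisecond timestamps. The 5s active interval comes from the commit messages above; the function name and signature are illustrative, not the real DO code (which would call ctx.storage.getAlarm()/setAlarm() around this decision).

```typescript
// Sketch: compute where the DO alarm should be after a work-creation
// event. currentAlarm is the scheduled alarm time in epoch ms, or null
// if no alarm is set.
const ACTIVE_ALARM_INTERVAL_MS = 5_000;

function nextAlarmTime(now: number, currentAlarm: number | null): number {
  const target = now + ACTIVE_ALARM_INTERVAL_MS;
  // If an alarm is already scheduled sooner than the active cadence,
  // keep it; overwriting it on every burst of work creation would keep
  // pushing the reconciler tick out and starve it.
  if (currentAlarm !== null && currentAlarm <= target) return currentAlarm;
  return target; // otherwise pull the alarm forward to ~5s from now
}
```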
Both startAgent and updateAgentModel were sending the startup prompt into resumed sessions, creating duplicate turns. Now the initial session.prompt() call is skipped when a session was resumed from the hydrated kilo.db — the conversation history is already there.
gt:pr-feedback beads (address review comments, fix CI) were falling through to the default submitToReviewQueue path in agentDone, creating a redundant MR bead. The polecat pushes to the existing PR branch — no new PR or MR bead is needed. Added a gt:pr-feedback handler that mirrors the existing gt:rework and gt:pr-fixup patterns: close the feedback bead directly, which unblocks the parent MR bead so poll_pr can re-check CI or the reconciler can re-dispatch the refinery for re-review.
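The agentDone routing above can be sketched as a small dispatch function. The bead type strings come from the description; the return values and function name are illustrative stand-ins for the real handler logic.

```typescript
// Sketch: gt:pr-feedback now mirrors gt:rework / gt:pr-fixup and is
// closed directly instead of falling through to submitToReviewQueue,
// which would mint a redundant MR bead for work that was pushed to the
// existing PR branch.
function agentDoneAction(beadType: string): "close_bead" | "submit_to_review_queue" {
  switch (beadType) {
    case "gt:rework":
    case "gt:pr-fixup":
    case "gt:pr-feedback": // new: polecat pushed to the existing PR branch
      // Closing the feedback bead unblocks the parent MR bead so
      // poll_pr can re-check CI or the refinery can re-review.
      return "close_bead";
    default:
      return "submit_to_review_queue";
  }
}
```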
…n settings page (#2195)

* feat(gastown): Add Small Model configuration field to user-facing town settings page
* fix: Add clear button to Small Model field

Co-authored-by: John Fawcett <john@kilcoode.ai>
…GUPP triggers

When an idle agent is re-dispatched, dispatchAgent updates last_activity_at but leaves last_event_at with the timestamp from the previous session's last SDK event (potentially hours or days old). The GUPP patrol uses last_event_at as the primary activity signal, so it would see the agent as unresponsive for >2h and immediately force-stop it — even though the agent was just dispatched 5 seconds ago.

Fix: NULL out last_event_at and last_event_type in the dispatch UPDATE so the GUPP falls back to last_activity_at (freshly set) until new SDK events arrive from the current session.
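The activity-signal fallback after this fix can be sketched as a pure predicate. The >2h threshold and the field names follow the description above; the function itself is an illustrative model of the patrol's check, not the real GUPP code.

```typescript
// Sketch of the GUPP activity signal: prefer last_event_at, fall back
// to last_activity_at when it is NULL (as it now is right after a
// dispatch, before the new session emits any SDK events).
const UNRESPONSIVE_MS = 2 * 60 * 60 * 1000; // >2h counts as unresponsive

function isUnresponsive(
  now: number,
  lastEventAt: number | null, // NULLed in the dispatch UPDATE
  lastActivityAt: number,     // freshly set at dispatch time
): boolean {
  const signal = lastEventAt ?? lastActivityAt;
  return now - signal > UNRESPONSIVE_MS;
}
```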
When a dispatch fails (container didn't start, OOM, etc.), agentCompleted sets the agent to idle but doesn't always unhook it. This leaves the agent idle+hooked to a live bead — a dead-end state where GUPP doesn't target it (not working), scheduling doesn't pick up the bead (already in_progress with an assignee), and the agent can't take new work (has a hook). Added a reconciler rule in reconcileAgents: idle agents hooked to open/in_progress beads get unhooked, and the bead is reset to open so scheduling can re-dispatch.
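The dead-end state above can be sketched as a detector over simplified agent and bead rows. The row shapes and names are assumptions for illustration; the rule (idle agent hooked to an open/in_progress bead gets unhooked, bead reset to open) is from the description.

```typescript
// Sketch of the reconciler rule: an idle agent still hooked to a live
// bead is a dead end. GUPP ignores it (not working), scheduling ignores
// the bead (already assigned), and the hook blocks new work.
interface AgentRow { status: "idle" | "working"; hookedBeadId: string | null }
interface BeadRow { id: string; status: "open" | "in_progress" | "closed" }

function isDeadEnd(agent: AgentRow, bead: BeadRow | undefined): boolean {
  return (
    agent.status === "idle" &&
    agent.hookedBeadId !== null &&
    bead !== undefined &&
    (bead.status === "open" || bead.status === "in_progress")
  );
}
// Remedy when true: unhook the agent and reset the bead to 'open' so
// scheduling can re-dispatch it.
```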
The code_review=false fast-track was transitioning ALL open MR beads to in_progress, including those without a pr_url. MR beads without a PR have no poll_pr target, so Rule 2 would detect them as stuck and reset to open — causing an open→in_progress→open oscillation every reconciler tick. Now the fast-track JOIN's review_metadata and only matches beads where pr_url IS NOT NULL. MR beads without a PR stay open so Rules 5-6 can dispatch the refinery to create the PR.
… code_review=false

The convoyOnlyFilter was too restrictive — when code_review=false, it only allowed Rules 5-6 to dispatch the refinery for convoy review-and-merge beads. MR beads without a pr_url (where the polecat didn't create a PR) were excluded, leaving them stuck in 'open' with no one to create the PR.

Renamed to refineryNeededFilter with an OR condition: dispatch the refinery when the MR bead has no pr_url (needs PR creation) OR when it belongs to a review-and-merge convoy. MR beads WITH a pr_url are handled by the fast-track → poll_pr pipeline as before.
…false

When code_review=false and merge_strategy=pr, the polecat is responsible for creating the PR. If it doesn't provide a pr_url in gt_done, the MR bead is orphaned: there is no PR to poll, and the refinery shouldn't be dispatched (code review is disabled). Previously this left the MR bead stuck in 'open' forever.

Now the reconciler fails orphaned MR beads (no pr_url, not convoy review-and-merge) and reopens the source bead so a polecat can retry. Also reverted the refineryNeededFilter to only match convoy review-and-merge beads — MR beads without pr_url don't need the refinery when code review is disabled.
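The final code_review=false routing for MR beads, combining the fast-track, the refineryNeededFilter, and the orphan cleanup from the commits above, can be sketched as one classification function. The bead shape and route names are illustrative; the three-way outcome is from the descriptions.

```typescript
// Sketch of MR-bead routing when code_review=false:
//  - pr_url present: fast-track to in_progress and let poll_pr watch CI
//  - convoy review-and-merge bead: dispatch the refinery to merge it
//  - otherwise: orphan (no PR, review disabled), fail it and reopen the
//    source bead so a polecat can retry
interface MrBead { prUrl: string | null; inReviewAndMergeConvoy: boolean }

type Route = "fast_track_poll_pr" | "dispatch_refinery" | "fail_and_reopen_source";

function routeMrBead(bead: MrBead): Route {
  if (bead.prUrl !== null) return "fast_track_poll_pr";
  if (bead.inReviewAndMergeConvoy) return "dispatch_refinery";
  return "fail_and_reopen_source";
}
```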
pandemicsyn approved these changes on Apr 8, 2026
Summary
This batch PR merges recent feature and bugfix work from
`gastown-staging` into `main`.

Original constituent PRs
- Isolates the agent SQLite DB via `KILO_TEST_HOME` and `XDG_DATA_HOME`.
- Adds the `AGENT_DB_SNAPSHOTS_KV` binding for agent DB snapshots, with a WAL checkpoint before save.
- Hydrates the DB on `startAgent` and resumes mayor sessions on boot, allowing state to survive container evictions.
- Adds a process registry RPC to `TownContainerDO` and passes `GASTOWN_TOWN_ID` to the container on provision.
- `syncRegistry()` persists running agents to the container registry on start/stop/exit/failure.
- Moves PR status checking and PR feedback analysis from `TownDO` into a new `town-scm` submodule.
- Instruments the Workers AI review-thread call with `api.external_request` analytics.
- Fixes `mergePR` to try squash/merge/rebase gracefully.
- Fixes `resetAgent` to zero both agent and bead `dispatch_attempts` for immediate recovery after evictions.
- Fixes the cross-tick race in `pr_feedback_detected` to prevent duplicate PRs.

Session persistence fixes
- `saveDbSnapshot` now runs `PRAGMA wal_checkpoint(TRUNCATE)` via bun:sqlite before reading kilo.db, ensuring recent writes aren't lost.
- `startAgent` and `updateAgentModel` call `session.list()` to resume existing sessions for the mayor. Non-mayor agents always get fresh sessions.
- Removed legacy `conversationHistory` injection from all mayor dispatch paths — kilo.db persistence handles session continuity.

Container lifecycle fixes
- Added `syncRegistry()`, which POSTs running agents to `TownContainerDO` on agent start/exit/stop/failure. The registry is cleared at the end of `drainAll()`.
- `idleTimers` now store `{ timer, onExit }`. `drainAll()` Phase 1b replaces long idle timers (120s/600s) with 10s timers so already-idle agents exit promptly.
- Added `/db-snapshot` to the `kiloAuthMiddleware` skip condition alongside `/container-registry`.
- `@kilocode/sdk` and `@kilocode/plugin` bumped from 7.0.37 to 7.1.23.

Reconciler fixes
- `code_review=false` fast-track: only fast-tracks MR beads that have a `pr_url`. MR beads without a PR stay open for the refinery (or get failed if code review is disabled and no PR was created).
- `refineryNeededFilter`: when `code_review=false`, only dispatches for convoy review-and-merge beads. Prevents refinery dispatch for ordinary PR beads that should be handled by `poll_pr`.
- Convoy review-and-merge membership is checked via `bead_dependencies` → `convoy_metadata.merge_mode` lookups.
- Orphaned MR beads with no `pr_url` when `code_review=false` are failed, and the source bead is reopened for retry.
- `dispatchAgent` NULLs `last_event_at`/`last_event_type` so GUPP falls back to `last_activity_at` until new SDK events arrive.
- Added `escalateToActiveCadence()` on the work-creation paths (slingBead, slingConvoy, startConvoy, submitToReviewQueue, requestChanges). Only shortens alarms, never pushes back.

Review queue fixes
- `gt:pr-feedback` beads now close directly in `agentDone` (matching the `gt:rework` and `gt:pr-fixup` patterns), preventing redundant MR bead creation.

Documentation
- Updated `local-debug-testing.md` (Workers AI usage, plus the Sections 8-9 testing guides).
Visual Changes
Small model added to settings
Reviewer Notes
- The `AGENT_DB_SNAPSHOTS_KV` binding requires a real KV namespace ID (set in both the top-level and dev env in `wrangler.jsonc`).
- Workers AI thread classification requires the `AI` binding.
- The `code_review=false` reconciler logic has been significantly reworked — the fast-track, Rules 5-6, and orphan cleanup all interact carefully. See `reconcileReviewQueue` in `reconciler.ts`.
- Session resume relies on `session.list()` — if the SDK changes how sessions are stored, this may need updating.