feat(gastown): stop convoy landing-MR respawn loop when PR awaiting approval #3138

Merged

kilo-code-bot[bot] merged 2 commits into gastown-staging on May 8, 2026
Conversation
feat(gastown): stop convoy landing-MR respawn loop when PR awaiting approval

Add GitHub reviewDecision and mergeStateStatus to PR feedback checks, gating auto-merge on awaitingApproval/changesRequested flags. Persist awaiting_approval in MR bead metadata, suppress the Rule 4 timeout and Rule 6 refinery re-dispatch when set, and prevent convoy-level respawn of landing MRs for approval-blocked PRs.

Fixes: 7ff83865
Contributor
Code Review Summary — Status: No Issues Found | Recommendation: Merge | Files Reviewed: 1 file
Resolved Findings
Reviewed by gpt-5.5-2026-04-23 · 369,456 tokens
fix(gastown): hoist reviewDecision/mergeStateStatus/isDraft outside if block

Block-scoped variables were used outside their declaration scope, causing a TypeScript compilation failure. The variables are now declared with let and default values before the try block and assigned inside the if block.
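As a hedged sketch of the hoisting pattern this fix describes (the helper, field names, and return shape below are assumptions for illustration, not the actual gastown code):

```typescript
type PullRequest = {
  reviewDecision: string | null;
  mergeStateStatus: string | null;
  isDraft: boolean;
};

// Stand-in for the real GraphQL fetch (hypothetical helper).
async function fetchPullRequest(): Promise<PullRequest | null> {
  return { reviewDecision: "REVIEW_REQUIRED", mergeStateStatus: "BLOCKED", isDraft: false };
}

async function checkPr(): Promise<[string | null, string | null, boolean]> {
  // Declared with `let` and default values BEFORE the try block, so the
  // variables stay in scope after it; `const` declarations inside the if
  // block would be invisible to later code and fail to compile.
  let reviewDecision: string | null = null;
  let mergeStateStatus: string | null = null;
  let isDraft = false;

  try {
    const pr = await fetchPullRequest();
    if (pr != null) {
      reviewDecision = pr.reviewDecision;
      mergeStateStatus = pr.mergeStateStatus;
      isDraft = pr.isDraft;
    }
  } catch {
    // defaults stand in when the fetch fails
  }

  return [reviewDecision, mergeStateStatus, isDraft];
}
```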
jrf0110 added a commit that referenced this pull request on May 9, 2026
* fix(gastown): point dev GIT_TOKEN_SERVICE binding at git-token-service-dev

  git-token-service's wrangler env.dev overrides the worker name to 'git-token-service-dev', but gastown's env.dev.services binding was still referencing the base 'git-token-service' name. Wrangler's local dev registry does exact-name matching, so the binding showed as [not connected] whenever both workers were running side by side. Every other consumer in the repo (cloud-agent-next, security-sync, security-auto-analysis) already uses 'git-token-service-dev' in their env.dev block; gastown was the outlier.

* fix(gastown): push new model onto resumed mayor session on hot-swap (#2999)

  When a user changes the mayor's model in town settings, updateAgentModel restarts the SDK server with new KILO_CONFIG_CONTENT and resumes the existing session from kilo.db. Commit 9785570 intentionally stopped sending any session.prompt on resume to avoid duplicating the MAYOR_STARTUP_PROMPT, but that also dropped the model param — so the resumed session kept its prior per-session model until the user ran /model manually. Extract the fresh vs. resumed session-prompt logic into applyModelToSession and on resume send a noReply:true prompt carrying only the new model param. This updates the SDK server's per-session model without replaying the startup prompt. Errors on the resume path are swallowed so the hot-swap still succeeds; the SDK server fell back to the config-loaded model at startup, which was already updated. Add container tests covering both fresh and resumed paths.

  Co-authored-by: John Fawcett <john@kilcoode.ai>

* fix(gastown): stop reconciler log spam from orphaned bead_cancelled events (#3047)

  Two independent bugs compose to flood production logs every alarm tick with 'Bead <id> not found' errors:

  1. deleteBead / deleteBeads did not clean up the town_events queue, leaving bead_cancelled and container_status rows pointing at deleted beads/agents.
  2. applyEvent threw on missing beads and the drain loop never marked the failing event processed — so it retried forever.

  Fix 1: purge town_events rows (by bead_id OR agent_id, since agents are beads) from deleteBead and the deleteBeads bulk path. Fix 2a: reconciler.applyEvent('bead_cancelled') checks for the target bead up front and returns (with a warn) when it's missing, instead of throwing. Fix 2b: the Town.do.ts drain loop recognises 'Bead/Agent <uuid> not found' terminal errors, logs them at warn, and marks the offending event processed so it stops retrying. Adds debug RPCs (debugTownEvents, debugInsertTownEvent, debugRecordContainerStatus) and integration coverage in event-cleanup.test.ts.

  Co-authored-by: John Fawcett <john@kilcoode.ai>

* feat(gastown-container): add crash visibility + per-agent start mutex (#3055)

  * feat(gastown-container): add crash visibility + per-agent start mutex

    Diagnostic changes to investigate frequent container restarts for town 4d82f099-ccb7-4eaf-8676-73562e0a27eb (~1.5–2 min boot-hydration loops).

    - main.ts: add unhandledRejection listener that logs full error/stack without exiting (Bun/Node silently drop rejections without a handler, making fire-and-forget failures like void saveDbSnapshot()/void subscribeToEvents() invisible). Include uptime and active-agent count for correlation.
    - main.ts: improve uncaughtException log with name/uptime/agent count.
    - main.ts: 30s periodic container.memory_usage log (rss/heap/external) so OOM-class failures (external SIGKILL from Cloudflare Containers runtime when the memory ceiling is hit) become observable — these leave no exception behind.
    - main.ts: wrap bootHydration() in try/catch so a rare synchronous throw before the first await doesn't crash the process.
    - process-manager.ts: add per-agentId mutex for startAgent. Production logs show two /agents/start requests for the same agentId logged at the same millisecond; both pass the re-entrancy check before either commits a 'starting' record, then race on startupAbortController, session creation, idle timers, and SDK sessionCount. Serialising per agentId makes the re-entrant path observe a consistent snapshot.
    - process-manager.test.ts: three tests for the mutex — same-id serialisation, different-id concurrency, lock release on throw.

  * fix(container): replace Promise.withResolvers with explicit new Promise

    Promise.withResolvers is a newer API not available on older Bun runtimes. Since process-manager.ts is imported during container startup, a missing global would throw before crash handlers are registered and prevent the control server from starting. Use the same explicit new Promise pattern as the existing sdkServerLock.

  * feat(gastown/container): include townId in crash and memory logs

    Per review feedback, attach the container's GASTOWN_TOWN_ID to unhandled_rejection, uncaught_exception, cold_start, memory_usage, and boot_hydration_failed log entries so production crash logs can be correlated with a specific town without needing to also have an agent registered.

  Co-authored-by: John Fawcett <john@kilcoode.ai>

* chore(gastown): fix format and lint CI failures (#3072)

  chore(gastown): fix format and lint CI failures on staging

  Co-authored-by: John Fawcett <john@kilcoode.ai>

* chore(gastown): drop unused promise param from unhandledRejection handler (#3074)

  Co-authored-by: John Fawcett <john@kilcoode.ai>

* feat(gastown): add proactive idle-container stop in TownDO alarm (#3113)

  * feat(gastown): add proactive idle-container stop in TownDO alarm

    When a town has no active work and the mayor has been idle for >5min, the alarm handler now calls container.stop() to force a graceful SIGTERM drain instead of waiting for Cloudflare's port-idle timer (which gets reset by PTY WebSocket keep-alives). This targets the root cause of 300+ active containers for ~100 active users.

    - Add stopContainerIfIdle() with dependency-injected logic in town/container-idle-stop.ts for testability
    - Wire into alarm handler just before the re-arm block
    - Emit container.idle_stop event with reason for observability
    - 2min throttle prevents thrash; failed stops are retried next tick
    - 13 unit tests covering all branches

  * fix(gastown): remove non-null assertions from container-idle-stop

    Replace mayor.last_activity_at! with null-safe checks using mayor.last_activity_at != null guards, consistent with the coding style that forbids ! non-null assertions.

  * fix: allow stopping healthy containers in idle-stop guard

    The state guard only checked for 'running', but containers can also report 'healthy' as an active state (consistent with gastown.worker.ts). Added 'healthy' to the guard and a corresponding test.

  Co-authored-by: John Fawcett <john@kilcoode.ai>

* fix(onboarding): redirect back to onboarding after GitHub app install (#3119)

  * feat(onboarding): redirect back to onboarding after GitHub app install

  * fix: address PR review feedback — stable effect deps, parsed error state, URIError guard

    - OnboardingStepRepo: use stable refetch reference and scalar param instead of full query object to prevent duplicate toasts/refetches
    - GitHub callback: parse owner token from state in error handler so |return= suffix doesn't leak into org redirect URLs
    - validate-return-path: catch URIError from malformed percent-encoding and treat as invalid return path (null) instead of throwing

  Co-authored-by: John Fawcett <john@kilcoode.ai>

* feat(gastown): speed up mayor session startup (#3122)

  * feat(gastown): prewarm mayor SDK server in bootHydration

    Add mayor SDK server prewarming to bootHydration so the mayor's kilo serve instance is already running when the user's first /agents/start arrives after a container restart. Previously, the mayor was only resumed if it was in the registry (running/starting at shutdown), but idle-stop and stream-error teardowns leave the mayor unregistered.

    - Export mayorWorkdirForTown() from agent-runner.ts
    - Add prewarmMayorSDK() to process-manager.ts that fetches the mayor agent ID from a new worker endpoint, hydrates kilo.db from KV snapshot, and starts the SDK server
    - Add GET /api/towns/:townId/mayor-id endpoint to gastown.worker.ts (uses authMiddleware like container-registry/db-snapshot)
    - Add getMayorAgentId() RPC method to Town.do.ts
    - Add warm-cache detection in startAgentImpl: log phaseMs: 0 and prewarmed: true when the SDK instance was already cached
    - bootHydration no longer returns early on empty registry so the mayor prewarm always runs

  * feat(gastown): seed getMayorStatus cache from ensureMayor response

    Instead of only invalidating the getMayorStatus query after ensureMayor succeeds (which forces a 3s polling wait before useXtermPty can start connecting), seed the React Query cache directly from the mutation result. The agentId and sessionStatus are already available in the ensureMayor response, so the terminal can begin connecting within ~50ms instead of waiting for the next poll tick. Still invalidate after seeding so the next poll catches up to authoritative state.

  * fix(gastown): detect torn-down SDK in _ensureMayor short-circuit

    When the container reports the mayor as 'running'/'starting' but the SDK instance has no serverPort or sessionId (torn down after stream errors or drain), _ensureMayor now falls through to a fresh dispatch instead of returning early. This eliminates the 'refresh fixes it' class of failures where the PTY gets a 404 because there's no SDK port to attach to. Also extend checkAgentContainerStatus to surface serverPort and sessionId from the container's agent status response.

  * feat(gastown): add AE telemetry events for mayor startup optimization

    Add three Analytics Engine event streams to measure the impact of the mayor startup optimizations:

    1. agent.startup_phase — emitted for db_hydrated, sdk_ready, and session_created phases. Includes elapsedMs and phaseMs so we can P50/P95 per-town. The sdk_ready event includes phaseMs: 0 when the SDK was prewarmed (warm-cache hit).
    2. mayor.prewarm_complete — emitted when the mayor SDK server is prewarmed during bootHydration, with durationMs.
    3. mayor.ensure_decision — tracks the _ensureMayor decision tree: short_circuit_warm, short_circuit_idle, sdk_dead_redispatch, or fresh_dispatch. Measures the rate of the SDK-dead case that Change 3 fixes.

    Container-side events are proxied to AE via a new POST /api/towns/:townId/container-events worker endpoint, since the container can't call writeEvent directly.

  * test(gastown): add integration test for torn-down-SDK fall-through

    Test that _ensureMayor falls through when the container status doesn't indicate a live SDK (no serverPort or sessionId). Covers:

    1. Container not available in test env (baseline behavior)
    2. sdkAlive validation logic: zero port, empty session, valid values
    3. checkAgentContainerStatus returns 404 for unknown agents

  * fix(gastown): set per-agent KILO_TEST_HOME/XDG_DATA_HOME in prewarm env

    The prewarm function was copying KILO_TEST_HOME and XDG_DATA_HOME from process.env, but those are typically absent at the container level. Normal agent startup sets them per-agent via buildAgentEnv(). Without these, the prewarmed SDK server boots against the default data directory and bypasses the hydrated kilo.db snapshot. Now buildPrewarmEnv() sets KILO_TEST_HOME and XDG_DATA_HOME based on the mayorAgentId, matching what buildAgentEnv() does for regular agents.

  * fix(gastown): generate KILO_CONFIG_CONTENT in prewarm env and handle config mismatch on cache hit

    Prewarm now generates KILO_CONFIG_CONTENT/OPENCODE_CONFIG_CONTENT using buildKiloConfigContent() with the kilocode token and default models instead of copying them from process.env (where they're absent on cold start). When ensureSDKServer() finds a cached instance whose config differs from the incoming env, it evicts the old server and creates a new one so the SDK picks up the correct model/provider config. Also extracts PERSIST_ENV_KEYS to module-level and updates process.env for those keys on cache hit when configs match.

  Co-authored-by: John Fawcett <john@kilcoode.ai>

* feat(gastown): stop convoy landing-MR respawn loop when PR awaiting approval (#3138)

  * feat(gastown): stop convoy landing-MR respawn loop when PR awaiting approval

    Add GitHub reviewDecision and mergeStateStatus to PR feedback checks, gating auto-merge on awaitingApproval/changesRequested flags. Persist awaiting_approval in MR bead metadata, suppress Rule 4 timeout and Rule 6 refinery re-dispatch when set, and prevent convoy-level respawn of landing MRs for approval-blocked PRs. Fixes: 7ff83865

  * fix(gastown): hoist reviewDecision/mergeStateStatus/isDraft outside if block

    Block-scoped variables were used outside their declaration scope, causing TypeScript compilation failure. Declared with let and default values before the try block, assigned inside the if block.

  Co-authored-by: John Fawcett <john@kilcoode.ai>

* chore(gastown): fix CI failures after staging merge from main

  Co-authored-by: John Fawcett <john@kilcoode.ai>
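The per-agentId startAgent mutex described in #3055 can be sketched as a per-key async lock built on an explicit `new Promise` (mirroring the compatibility fix for older Bun runtimes that lack Promise.withResolvers). `withAgentLock` and the map shape below are assumptions for illustration, not the actual process-manager.ts code:

```typescript
// Tail promise per agentId: the next caller queues behind the current holder.
const locks = new Map<string, Promise<void>>();

async function withAgentLock<T>(agentId: string, fn: () => Promise<T>): Promise<T> {
  const prev = locks.get(agentId) ?? Promise.resolve();
  let release: () => void = () => {};
  const gate = new Promise<void>((resolve) => {
    release = resolve; // explicit resolver capture, no Promise.withResolvers
  });
  // Later callers for the same agentId wait for both the previous holder
  // and our own critical section to finish.
  const tail = prev.then(() => gate);
  locks.set(agentId, tail);

  await prev; // wait for earlier holders of this agentId
  try {
    return await fn();
  } finally {
    release(); // release even when fn throws, so the queue never wedges
    if (locks.get(agentId) === tail) locks.delete(agentId); // avoid unbounded map growth
  }
}
```

Different agentIds never contend (each key has its own tail), which matches the "different-id concurrency" test the commit mentions.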
Summary
Add GitHub reviewDecision and mergeStateStatus fields to PR feedback checks to detect when a PR is blocked by branch protection requiring human approval. This stops the convoy landing-MR respawn loop that creates up to 5 failed merge_request beads against a single green PR that is simply waiting on a human reviewer.

Six targeted changes in services/gastown/src/dos/town:

1. Extend the PR feedback check (town-scm.ts) to fetch reviewDecision, mergeStateStatus, and isDraft, and surface awaitingApproval/changesRequested on PRFeedbackCheckResult
2. Gate auto-merge (actions.ts) on !awaitingApproval && !changesRequested — prevents the "merge attempt → 405 → retry" loop
3. Persist awaiting_approval on MR bead metadata so the reconciler can check the flag without re-issuing a GraphQL call
4. Suppress the Rule 4 timeout (reconciler.ts) — don't fail MR beads that are correctly waiting on human approval
5. Suppress the Rule 6 refinery re-dispatch when awaiting_approval=1
6. Suppress the convoy-level create_landing_mr respawn for approval-blocked PRs

Three telemetry events added:
pr.awaiting_approval_detected, pr.awaiting_approval_resolved, reconciler.respawn_suppressed.

Verification
Integration tests in test/integration/awaiting-approval.test.ts cover: Rule 4 timeout suppression, Rule 6 re-dispatch suppression, convoy respawn suppression, approval clearing, and the REVIEW_REQUIRED vs CHANGES_REQUESTED distinction.

Visual Changes
N/A
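As a hedged sketch of the approval gating described in the Summary: the enum strings below are GitHub GraphQL values (reviewDecision, mergeStateStatus), but the exact predicate gastown uses and the flag-derivation helper are assumptions, not the actual town-scm.ts code:

```typescript
interface ApprovalFlags {
  awaitingApproval: boolean;
  changesRequested: boolean;
}

function deriveApprovalFlags(
  reviewDecision: string | null, // "APPROVED" | "CHANGES_REQUESTED" | "REVIEW_REQUIRED" | null
  mergeStateStatus: string | null, // e.g. "BLOCKED" | "CLEAN" | "BEHIND"
): ApprovalFlags {
  const changesRequested = reviewDecision === "CHANGES_REQUESTED";
  // A green PR held only by branch protection reports REVIEW_REQUIRED,
  // typically alongside mergeStateStatus BLOCKED (assumed pairing here).
  const awaitingApproval =
    reviewDecision === "REVIEW_REQUIRED" && mergeStateStatus === "BLOCKED";
  return { awaitingApproval, changesRequested };
}

// The auto-merge gate: proceed only when neither flag is set, avoiding the
// "merge attempt → 405 → retry" loop against an unapproved PR.
function canAutoMerge(flags: ApprovalFlags): boolean {
  return !flags.awaitingApproval && !flags.changesRequested;
}
```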
Reviewer Notes
The merge_pr action also gates on awaitingApproval and changesRequested (pre-merge fresh feedback check at actions.ts:1280-1287), so even if the auto-merge timer fires, the merge will be aborted if approval is still pending. The convoy respawn suppression (Change 6) is defensive — with Change 4 in place, MRs awaiting approval shouldn't reach failed status in the first place.
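The layered suppression (Rules 4 and 6 plus the convoy-level guard) can be sketched as below; the bead metadata shape and helper names are hypothetical, since the actual reconciler schema is not shown in this PR:

```typescript
// Hypothetical MR bead metadata; awaiting_approval is the persisted flag
// from Change 3, stored as 0/1 so the reconciler avoids a GraphQL round trip.
interface MrBeadMeta {
  awaiting_approval?: 0 | 1;
  status: "open" | "failed" | "merged";
}

// Rule 4: don't time out an MR bead that is only waiting on a human reviewer.
function shouldApplyTimeout(meta: MrBeadMeta): boolean {
  return meta.awaiting_approval !== 1;
}

// Rule 6: likewise skip the refinery re-dispatch while approval is pending.
function shouldRedispatch(meta: MrBeadMeta): boolean {
  return meta.awaiting_approval !== 1;
}

// Convoy level: never respawn a landing MR for an approval-blocked PR.
// Defensive, per the note above: with Rule 4 suppressed, such beads
// shouldn't reach "failed" in the first place.
function shouldRespawnLandingMr(meta: MrBeadMeta): boolean {
  return meta.status === "failed" && meta.awaiting_approval !== 1;
}
```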