feat(gastown): auto-resolve merge conflicts on PRs (#2427) #2484
Merged
jrf0110 merged 5 commits into gastown-staging on Apr 18, 2026
Conversation
feat(gastown): add auto_resolve_merge_conflicts setting to schemas and UI (#2473)

- Add auto_resolve_merge_conflicts to TownConfigSchema refinery sub-object (default: true)
- Add auto_resolve_merge_conflicts to RigOverrideConfigSchema
- Add auto_resolve_merge_conflicts to TownConfigUpdateSchema
- Wire auto_resolve_merge_conflicts into EffectiveConfig and resolveRigConfig()
- Wire into updateTownConfig() refinery merge path
- Add toggle to town settings Refinery section (TownSettingsPageClient.tsx)
- Add toggle to rig settings Refinery section (RigSettingsPageClient.tsx) with inherit-from-town pattern

Co-authored-by: John Fawcett <john@kilcoode.ai>
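A minimal sketch of how the flag could sit in the schemas and resolve through the inherit-from-town pattern. The trimmed-down shapes and the resolveRigConfig body below are assumptions for illustration, not the actual gastown definitions:

```typescript
import { z } from "zod";

// Hypothetical, trimmed-down shapes; the real TownConfigSchema and
// RigOverrideConfigSchema carry many more fields.
const TownConfigSchema = z.object({
  refinery: z
    .object({
      auto_resolve_merge_conflicts: z.boolean().default(true),
    })
    .default({ auto_resolve_merge_conflicts: true }),
});

const RigOverrideConfigSchema = z.object({
  // undefined means "inherit from town", mirroring the rig settings toggle
  auto_resolve_merge_conflicts: z.boolean().optional(),
});

type TownConfig = z.infer<typeof TownConfigSchema>;
type RigOverrideConfig = z.infer<typeof RigOverrideConfigSchema>;

interface EffectiveConfig {
  auto_resolve_merge_conflicts: boolean;
}

// Rig override wins when present; otherwise fall back to the town-level value.
function resolveRigConfig(town: TownConfig, rig?: RigOverrideConfig): EffectiveConfig {
  return {
    auto_resolve_merge_conflicts:
      rig?.auto_resolve_merge_conflicts ?? town.refinery.auto_resolve_merge_conflicts,
  };
}
```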
feat(gastown): extend GitHubPRStatusSchema with mergeable_state and wire conflict detection (#2474)

* feat(gastown): extend GitHubPRStatusSchema with mergeable_state and wire conflict detection

  - Add mergeable and mergeable_state fields to GitHubPRStatusSchema
  - Update checkPRStatus return type to PRStatusResult { status, mergeable_state }
  - Update poll_pr action to detect dirty mergeable_state and emit pr_conflict_detected exactly once per conflict episode (idempotent via has_conflicts bead metadata flag)
  - Clear has_conflicts flag when mergeable_state transitions to clean/unknown
  - Add applyEvent('pr_conflict_detected') handler in reconciler.ts that creates gt:pr-conflict issue beads (or escalation beads when auto_resolve_merge_conflicts=false)
  - Handle gt:pr-conflict beads in review-queue.ts agentDone path (close directly, skip review)
  - Add pr_conflict_detected to TownEventType enum

* fix(gastown): always emit pr_conflict_detected and reset auto-merge timer on dirty PRs

  - Remove the wantsAutoResolveConflicts guard so pr_conflict_detected is emitted regardless of config; the reconciler already branches on that setting to create either a conflict bead (auto-resolve on) or an escalation bead (auto-resolve off).
  - Return early from the dirty branch after resetting auto_merge_ready_since to NULL, preventing a dirty PR from keeping or completing its auto-merge grace period.

Co-authored-by: John Fawcett <john@kilcoode.ai>
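A rough sketch of the once-per-episode emission described above. PRStatusResult and the has_conflicts metadata flag come from this PR; the bead metadata shape, emit callback, and handler name are invented for the example:

```typescript
// Assumed simplified shapes; the real types live in the gastown service.
type MergeableState = "clean" | "dirty" | "blocked" | "behind" | "unstable" | "unknown";

interface PRStatusResult {
  status: "open" | "merged" | "closed";
  mergeable_state: MergeableState;
}

interface MrBeadMetadata {
  has_conflicts?: boolean;
  auto_merge_ready_since?: number | null;
}

type TownEvent = { type: "pr_conflict_detected"; bead_id: string };

// Emit pr_conflict_detected exactly once per conflict episode: the has_conflicts
// flag is set on the first dirty poll and cleared once the PR is no longer dirty,
// which re-arms the emission for the next conflict episode.
function handlePollPr(
  beadId: string,
  meta: MrBeadMetadata,
  pr: PRStatusResult,
  emit: (event: TownEvent) => void
): void {
  if (pr.mergeable_state === "dirty") {
    // A dirty PR must not keep or complete its auto-merge grace period.
    meta.auto_merge_ready_since = null;
    if (!meta.has_conflicts) {
      meta.has_conflicts = true;
      emit({ type: "pr_conflict_detected", bead_id: beadId });
    }
    return; // early return from the dirty branch
  }
  // Conflict episode over (clean/unknown in this commit; tightened in a later fix).
  meta.has_conflicts = false;
}
```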
feat(gastown): add pr_conflict_context to PrimeContext and consolidate conflict+feedback into single agent dispatch (#2477)

* feat(gastown): add pr_conflict_context to PrimeContext and consolidate conflict+feedback beads

  - Add pr_conflict_context field to PrimeContext type with pr_url, branch, target_branch, and has_feedback fields
  - Populate pr_conflict_context in prime() for gt:pr-conflict beads, and also for gt:pr-feedback beads that have accumulated has_conflicts metadata
  - Add PR Conflict Resolution Workflow section to polecat system prompt so agents know to fetch, rebase, resolve, force-push, and call gt_done
  - In reconciler pr_conflict_detected: before creating a new gt:pr-conflict bead, check if an open gt:pr-feedback bead already exists for the same MR — if so, merge has_conflicts into its metadata instead of creating a separate bead
  - In reconciler pr_feedback_detected: before creating a new gt:pr-feedback bead, check if an open gt:pr-conflict bead already exists for the same MR — if so, merge has_feedback into its metadata instead
  - Refactor hasExistingPrConflictBead / hasExistingPrFeedbackBead to delegate to new getExisting* helpers that return the bead_id

* fix: handle SQLite integer 1 for has_feedback and extend conflict workflow to pr-feedback beads

  - agents.ts: check has_feedback === 1 (SQLite integer) in addition to === true so consolidated conflict+feedback beads correctly surface the has_feedback flag
  - polecat-system.prompt.ts: conflict resolution workflow now triggers for gt:pr-feedback beads that have pr_conflict_context, not only gt:pr-conflict beads

Co-authored-by: John Fawcett <john@kilcoode.ai>
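An illustrative shape for the consolidated context and the SQLite-boolean check. Only pr_conflict_context and its fields come from this PR; the helper names and the consolidation wrapper are assumptions:

```typescript
// Hypothetical slice of PrimeContext; only the conflict-related part is shown.
interface PrConflictContext {
  pr_url: string;
  branch: string;
  target_branch: string;
  has_feedback: boolean;
}

interface PrimeContext {
  pr_conflict_context?: PrConflictContext;
}

// SQLite has no boolean column type, so a flag persisted in bead metadata can come
// back as the integer 1 rather than true; treat both as "set".
function readHasFeedback(raw: unknown): boolean {
  return raw === true || raw === 1;
}

// Consolidation sketch: when a conflict lands on an MR that already has an open
// gt:pr-feedback bead, fold has_conflicts into that bead instead of opening a
// second one (the feedback path mirrors this for an existing gt:pr-conflict bead).
function consolidateConflict(
  existingFeedbackBeadId: string | null,
  markHasConflicts: (beadId: string) => void,
  createConflictBead: () => string
): string {
  if (existingFeedbackBeadId !== null) {
    markHasConflicts(existingFeedbackBeadId);
    return existingFeedbackBeadId;
  }
  return createConflictBead();
}
```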
Code Review Summary
Status: No Issues Found | Recommendation: Merge
Files reviewed: 4
Reviewed by gpt-5.4-20260305 · 1,498,552 tokens
Added 2 commits on April 17, 2026 at 20:46

fix(reconciler): use resolveRigConfig for auto_resolve_merge_conflicts, guard unknown mergeable_state, add branch to gt_done prompt
- In pr_conflict_detected handler, use resolveRigConfig(townConfig, rig.config) to get the effective config so town-level auto_resolve_merge_conflicts is respected even when the rig has no override. Pass townConfig from Town.do.ts into applyEvent and fetch it once before Phase 0 to share with Phase 1.
- In poll_pr handler, guard against mergeable_state === 'unknown' by returning early (GitHub is still computing). Only clear has_conflicts when state is definitively clean ('clean', 'blocked', or 'has_hooks'). Only emit pr_conflict_detected when state is definitively 'dirty'.
- In buildConflictResolutionPrompt, include the branch argument in the gt_done instruction so polecats don't fail the required-field validation.
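A small sketch of the guarded state handling this commit describes. The state strings are GitHub mergeable_state values named in the commit; the classifier function itself is an assumption about how the branch might be factored:

```typescript
type MergeableState =
  | "clean"
  | "dirty"
  | "blocked"
  | "behind"
  | "unstable"
  | "has_hooks"
  | "unknown";

// 'unknown' means GitHub is still computing mergeability, so neither set nor clear
// the conflict flag; wait for the next poll. Only definitively conflict-free states
// clear has_conflicts, and only 'dirty' emits pr_conflict_detected.
function classifyMergeableState(
  state: MergeableState
): "emit_conflict" | "clear_conflict" | "wait" {
  if (state === "unknown") return "wait";
  if (state === "dirty") return "emit_conflict";
  if (state === "clean" || state === "blocked" || state === "has_hooks") {
    return "clear_conflict";
  }
  return "wait";
}
```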
kilo-code-bot (bot) pushed a commit that referenced this pull request on Apr 20, 2026
jrf0110 added a commit that referenced this pull request on Apr 20, 2026
jrf0110 added a commit that referenced this pull request on Apr 23, 2026
jrf0110 added a commit that referenced this pull request on Apr 23, 2026
jrf0110 added a commit that referenced this pull request on Apr 23, 2026
…e fix, landing MR loop fix, env var propagation, container tooling, triage dispatch, dead code cleanup, UI polish (#2374)
Summary
Closes #2427
- Extend GitHubPRStatusSchema to capture mergeable_state from the GitHub API; wire a pr_conflict_detected event into the poll_pr action when a PR is detected as dirty
- Add an auto_resolve_merge_conflicts setting (default true) to TownConfigSchema, RigOverrideConfigSchema, and the town/rig settings UI alongside the existing auto-resolve PR feedback toggle
- Dispatch gt:pr-conflict polecats with rebase context via PrimeContext; consolidate conflict + review feedback into a single agent dispatch when both are present on the same PR

Convoy
144e79d7 — feat: Auto-resolve merge conflicts on PRs (#2427)

PRs included
- Add auto_resolve_merge_conflicts setting to schemas and UI (#2473)
- Extend GitHubPRStatusSchema with mergeable_state and wire conflict detection (#2474)
- Add pr_conflict_context to PrimeContext and consolidate conflict+feedback into single agent dispatch (#2477)