
fix: completionPrompt continues existing session instead of recursive answer()#475

Merged
buger merged 3 commits into main from fix/completion-prompt-session-continuity on Mar 4, 2026

Conversation

@buger buger (Collaborator) commented Mar 4, 2026

Summary

  • Bug: completionPrompt called this.answer() recursively, which created a fresh TaskManager and iteration counter, wiping all task context from the main turn. The follow-up turn had no memory of what Phase 1 did — it saw already-modified files, concluded "no changes needed", and returned an empty result that overwrote the successful structured output (pr_urls, files_changed, summary).
  • Fix: Replace the recursive this.answer() with a single streamText continuation — append the completion prompt as a user message to currentMessages and run one more AI pass with the same tools, TaskManager, and conversation history. Falls back to the original result if the follow-up produces nothing useful.
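The continuation approach described above can be sketched as follows; `runCompletionPrompt` and `streamOnce` are illustrative stand-ins for the actual ProbeAgent internals, not real names from the codebase:

```javascript
// Illustrative sketch: continue the same session instead of recursing.
async function runCompletionPrompt(currentMessages, originalResult, completionPrompt, streamOnce) {
  // Append the completion prompt as a user message to the shared history,
  // so the follow-up turn keeps the TaskManager and conversation context.
  currentMessages.push({ role: 'user', content: completionPrompt });

  let followUp = null;
  try {
    // One more AI pass with the same tools and history (iterations capped elsewhere).
    followUp = await streamOnce(currentMessages);
  } catch {
    // A failed follow-up must not clobber a successful main-turn result.
    return originalResult;
  }

  // Fall back to the original result if the follow-up produced nothing useful.
  return followUp && followUp.trim() ? followUp : originalResult;
}
```

The key difference from the recursive approach is that `currentMessages` is the same array the main turn built up, so the follow-up sees everything Phase 1 did.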

What changed

| Before | After |
| --- | --- |
| this.answer() — full new agentic loop | streamText() — one more turn in the same session |
| Fresh TaskManager — tasks wiped | Same TaskManager — tasks preserved |
| Fresh iteration counter (up to max_iterations) | Capped at 5 extra iterations |
| finalResult = completionResult unconditionally | Keeps original if follow-up is empty |

Test plan

  • All 2631 existing tests pass (106 suites)
  • completion-prompt unit tests pass (26 tests)
  • Manual test: run engineer workflow with completion_prompt configured — verify PR URLs are preserved in output

🤖 Generated with Claude Code

buger and others added 3 commits March 4, 2026 14:19
…sive answer()

Previously, completionPrompt spawned a recursive this.answer() call which created
a fresh TaskManager and iteration counter, losing all context from the main turn.
This caused structured outputs (pr_urls, files_changed) to be overwritten with
empty results from the follow-up turn that had no memory of Phase 1's work.

Now completionPrompt appends a user message to the existing conversation and runs
one more streamText pass with the same tools, TaskManager, and history — just one
more message in the same session rather than a new agent execution.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Tell the AI to respond with its previous answer as-is if everything
checks out, and only modify + re-respond with the full answer if
something actually needs fixing.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
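The instruction from this commit might be framed, hypothetically, as part of the appended follow-up message; the wording and the `buildCompletionMessage` helper are illustrative, not the actual prompt text:

```javascript
// Hypothetical framing of the completion-prompt follow-up message.
function buildCompletionMessage(completionPrompt) {
  return {
    role: 'user',
    content: [
      completionPrompt,
      'If your previous answer already satisfies the above, respond with it as-is.',
      'Otherwise, fix what is needed and re-respond with the full answer.',
    ].join('\n'),
  };
}
```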
Tests verify:
- streamText called twice (not recursive answer()) when completionPrompt is set
- Original result preserved when completion prompt returns empty
- Completion prompt skipped when _completionPromptProcessed flag is set
- Original result preserved when completion prompt throws
- Updated result used when completion prompt calls attempt_completion again
- No completion prompt when none is configured
- Updated message format test for new footer text

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@buger buger merged commit 1f2e7f7 into main Mar 4, 2026
14 checks passed
probelabs bot (Contributor) commented Mar 4, 2026

I was unable to complete your request due to reaching the maximum number of tool iterations.


Powered by Visor from Probelabs

Last updated: 2026-03-04T15:05:43.336Z | Triggered by: pr_updated | Commit: 0a50d65

💡 TIP: You can chat with Visor using /visor ask <your question>

probelabs bot (Contributor) commented Mar 4, 2026

Architecture Issues (1)

| Severity | Location | Issue |
| --- | --- | --- |
| 🟠 Error | contract:0 | Output schema validation failed: must have required property 'issues' |

Performance Issues (5)
🟡 Warning · npm/src/agent/ProbeAgent.js:3620-3622
The completion prompt follow-up mutates currentMessages by pushing the user message directly, then passes it to prepareMessagesWithImages, which creates a shallow copy. This results in O(n) array copying, where n is the total message count, happening twice (once in the main loop, once in the completion prompt). For long conversations this creates unnecessary memory pressure.
💡 Suggestion: Consider batching the completion prompt message addition with the existing messages array construction to avoid multiple array copies. The current approach is acceptable for typical conversation lengths but could be optimized for very long sessions.
🟡 Warning · npm/src/agent/ProbeAgent.js:3626-3627
The code resets completionResult = null and completionAttempted = false before the follow-up turn. While this is correct for the new logic, it means the completion tracking state is mutated in place. If the follow-up streamText call fails or times out, these reset values persist, though the code correctly falls back to originalResult.
💡 Suggestion: The current implementation is correct; the fallback to originalResult handles failure cases. Consider adding a telemetry event to track when the fallback is triggered, for monitoring purposes.
🟡 Warning · npm/src/agent/ProbeAgent.js:3630-3648
The completionStreamOptions object is constructed with properties similar to the main streamOptions (model, messages, tools, maxTokens, temperature, onStepFinish). This partially duplicates the options construction logic. While not a runtime performance issue, it increases the maintenance burden.
💡 Suggestion: Consider extracting common streamText options into a helper method, or spreading from a base options object, to reduce duplication. This is a minor code quality improvement, not a performance concern.
🟡 Warning · npm/src/agent/ProbeAgent.js:3656-3661
The code iterates over cpResult.response?.messages and pushes each to currentMessages individually. For responses with many messages this is O(m), where m is the message count. While m is typically small, the spread operator would be more idiomatic: currentMessages.push(...cpMessages).
💡 Suggestion: Replace the for-of loop with spread: if (cpMessages?.length) currentMessages.push(...cpMessages);
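The suggested replacement can be sketched as a runnable helper; the `cpMessages` name comes from the review comment, while the function wrapper is illustrative:

```javascript
// Spread push instead of a per-item for-of loop over follow-up messages.
function appendFollowUpMessages(currentMessages, cpMessages) {
  // Optional chaining guards against a missing or empty response.
  if (cpMessages?.length) currentMessages.push(...cpMessages);
  return currentMessages;
}
```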
🟡 Warning · npm/src/agent/ProbeAgent.js:3631
The completion prompt follow-up uses a hardcoded maximum of 5 iterations (completionMaxIterations = 5). This is a reasonable limit but is not configurable. For complex follow-up tasks it may be insufficient, while for simple validations it may be excessive.
💡 Suggestion: Consider making the completion prompt iteration limit configurable via a new option such as completionPromptMaxIterations, defaulting to 5. This allows tuning for different use cases without code changes.
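The proposed option could be resolved as sketched below; `completionPromptMaxIterations` is the reviewer's suggested name, not an existing ProbeAgent option:

```javascript
// Hypothetical option resolution with the current hardcoded cap as the default.
function resolveCompletionMaxIterations(options = {}) {
  const n = options.completionPromptMaxIterations;
  // Only accept positive integers; anything else falls back to 5.
  return Number.isInteger(n) && n > 0 ? n : 5;
}
```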

Quality Issues (1)

| Severity | Location | Issue |
| --- | --- | --- |
| 🟠 Error | contract:0 | Output schema validation failed: must have required property 'issues' |


Last updated: 2026-03-04T15:01:12.256Z | Triggered by: pr_updated | Commit: 0a50d65


