feat: add AI-powered prompt enhancement #144
Conversation
    }
    await Session.updateMessage(summaryUserMsg)
    await Session.updatePart({
      id: Identifier.ascending("part"),
      id: PartID.ascending(),
      messageID: summaryUserMsg.id,
      sessionID,
Bug: The removal of the compactionAttempts counter can lead to an infinite loop if session compaction repeatedly fails to reduce the context size sufficiently.
Severity: HIGH
Suggested Fix
Reintroduce the compactionAttempts counter and a MAX_COMPACTION_ATTEMPTS constant (e.g., set to 3). Before continuing the loop for another compaction, check if the attempt count has exceeded the maximum. If it has, break the loop and surface an error to the user.
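The suggested guard can be sketched roughly as follows. This is an illustrative sketch only: `runWithCompaction`, `isOverflowing`, and `compact` are hypothetical stand-ins for the real loop in `prompt.ts`, not the actual code.

```typescript
// Hypothetical sketch of the suggested compaction guard.
const MAX_COMPACTION_ATTEMPTS = 3

async function runWithCompaction(
  isOverflowing: () => boolean,
  compact: () => Promise<void>,
): Promise<void> {
  let compactionAttempts = 0
  while (isOverflowing()) {
    if (compactionAttempts >= MAX_COMPACTION_ATTEMPTS) {
      // Surface an error instead of looping forever on a context
      // that compaction cannot shrink.
      throw new Error(
        "context still overflows after " + MAX_COMPACTION_ATTEMPTS + " compaction attempts",
      )
    }
    compactionAttempts++
    await compact()
  }
}
```

The key point is that the attempt counter is checked before each retry, so a context that never shrinks terminates with a user-visible error rather than burning API credits indefinitely.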
Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent.
Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not
valid.
Location: packages/opencode/src/session/prompt.ts#L548-L553
Potential issue: The code change removes a previously implemented safeguard that limited
consecutive session compaction attempts. Without the `compactionAttempts` counter and
`MAX_COMPACTION_ATTEMPTS` guard, a scenario can arise where a session's context
repeatedly overflows even after compaction. This will cause an infinite loop, consuming
server resources and API credits without terminating or notifying the user. While
`SessionCompaction.process()` can set an error, the calling code in `prompt.ts` does not
check for this error and unconditionally continues the loop, creating a resource
exhaustion risk.
Force-pushed from f1addf4 to 49c28a9
feat: add prompt enhancement feature

Add AI-powered prompt enhancement that rewrites rough user prompts into clearer, more specific versions before sending to the main model.

- Add `enhancePrompt()` utility using a small/cheap model to polish prompts
- Register `prompt.enhance` TUI command with `<leader>i` keybind
- Show "enhance" hint in the bottom bar alongside agents/commands
- Add `prompt_enhance` keybind to config schema
- Add unit tests for the `clean()` text sanitization function

Inspired by KiloCode's prompt enhancement feature.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
feat: improve enhancement prompt with research-backed approach and add auto-enhance config

- Rewrite system prompt based on AutoPrompter research (5 missing info categories: specifics, action plan, scope, verification, intent)
- Add few-shot examples for data engineering tasks (dbt, SQL, migrations)
- Add `experimental.auto_enhance_prompt` config flag (default: false)
- Auto-enhance normal prompts on submit when enabled (skips shell/slash)
- Export `isAutoEnhanceEnabled()` for config-driven behavior

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
fix: address code review findings for prompt enhancement

- Add 15s timeout via `AbortController` to prevent indefinite hangs
- Extract `ENHANCE_ID` constant and document synthetic `as any` casts
- Fix `clean()` regex to match full-string code fences only (avoids stripping inner code blocks)
- Export `stripThinkTags()` as separate utility for testability
- Move auto-enhance before extmark expansion (prevents sending expanded paste content to the small model)
- Add toast feedback and error logging for auto-enhance path
- Update `store.prompt.input` after enhancement so history is accurate
- Add outer try/catch with logging to `enhancePrompt()`
- Expand tests from 9 to 30: `stripThinkTags()`, `clean()` edge cases, combined pipeline tests

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
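The 15-second timeout pattern described in the first bullet can be sketched as below. `withTimeout` is an illustrative helper written for this example; the actual wiring of `AbortController` into `LLM.stream()` in opencode may differ.

```typescript
// Hedged sketch: race a cancellable piece of work against a timeout
// using AbortController, as the commit describes for the LLM call.
async function withTimeout<T>(
  work: (signal: AbortSignal) => Promise<T>,
  ms: number,
): Promise<T> {
  const controller = new AbortController()
  // Abort the in-flight work if it has not settled within `ms`.
  const timer = setTimeout(() => controller.abort(), ms)
  try {
    return await work(controller.signal)
  } finally {
    // Always clear the timer so a fast result does not leak a pending abort.
    clearTimeout(timer)
  }
}
```

The caller passes the signal to whatever async API supports cancellation; if the work finishes first, the timer is cleared and no abort ever fires.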
Force-pushed from 49c28a9 to ea70958
fix: handle unclosed `<think>` tags from truncated model output

When the small model hits its token limit mid-generation, `<think>` tags may not have a closing `</think>`. The previous regex required a closing tag, which would leak the entire reasoning block into the enhanced prompt. Now `stripThinkTags()` matches both closed and unclosed think blocks.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
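A minimal sketch of the behavior this commit describes; the regex below is an assumption written for illustration, not the actual `stripThinkTags()` implementation in `enhance-prompt.ts`.

```typescript
// Remove <think> reasoning blocks, whether or not the closing tag
// arrived before the model hit its token limit.
function stripThinkTags(text: string): string {
  // Lazy match up to either a closing </think> or end-of-string ($),
  // so a truncated, unclosed block is also stripped.
  return text.replace(/<think>[\s\S]*?(<\/think>|$)/g, "").trim()
}
```

The `$` alternative is what handles the truncation case: without it, an unclosed block would fail to match and the entire reasoning text would leak into the enhanced prompt.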
        if (enhanced !== inputText) {
          inputText = enhanced
          setStore("prompt", "input", enhanced)
        }
      }
    } catch (err) {
      // Enhancement failure should never block prompt submission
      console.error("auto-enhance failed, using original prompt", err)
    }
  }
  // altimate_change end
fix: address remaining review findings — history, debounce, tests

- Fix history storing original text instead of enhanced text by passing `inputText` explicitly to `history.append()` instead of spreading `store.prompt` which may contain stale state
- Add concurrency guard (`enhancingInProgress` flag) to prevent multiple concurrent auto-enhance LLM calls from rapid submissions
- Consolidate magic string into `ENHANCE_NAME` constant used across agent name, user agent, log service, and ID derivation
- Add justifying comment for `as any` cast on synthetic IDs explaining why branded types are safely bypassed
- Add `isAutoEnhanceEnabled()` tests (5 cases): config absent, present but missing flag, false, true, undefined
- Add `enhancePrompt()` tests (10 cases): empty input, whitespace, successful enhancement, think tag stripping, code fence stripping, stream.text failure, stream init failure, empty LLM response, think tags with no content, combined pipeline

Test count: 32 -> 48

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
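The `enhancingInProgress` guard described above can be sketched like this; `guardedEnhance` and its shape are assumptions for illustration, not the real `index.tsx` code.

```typescript
// Module-level flag: only one enhancement LLM call may be in flight.
let enhancingInProgress = false

// Returns undefined when a call is already running, so rapid submissions
// do not fan out into concurrent LLM requests.
async function guardedEnhance(
  enhance: () => Promise<string>,
): Promise<string | undefined> {
  if (enhancingInProgress) return undefined
  enhancingInProgress = true
  try {
    return await enhance()
  } finally {
    // Reset even on failure so the next submission is not blocked.
    enhancingInProgress = false
  }
}
```

Because the flag is set synchronously before the first `await`, a second call arriving while the first is still pending sees the flag and bails out immediately.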
    onSelect: async (dialog) => {
      if (!store.prompt.input.trim()) return
      dialog.clear()
      const original = store.prompt.input
      toast.show({
        message: "Enhancing prompt...",
        variant: "info",
        duration: 2000,
      })
      try {
fix: address Sentry findings — stream consumption and race condition

- Explicitly consume `stream.fullStream` before awaiting `stream.text` to prevent potential hangs from Vercel AI SDK stream not being drained
- Add race condition guard to manual enhance command: if user edits the prompt while enhancement is in-flight, discard the stale result
- Add same guard to auto-enhance path in `submit()` for consistency
- Update LLM mock to include `fullStream` async iterable

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
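The drain-before-await pattern from the first bullet can be sketched as follows. The `TextStream` interface here is a stand-in for illustration; it mirrors the shape the commit mentions (`fullStream`, `text`) but is not the Vercel AI SDK's actual type.

```typescript
// Stand-in for the stream object shape the commit describes.
interface TextStream {
  fullStream: AsyncIterable<unknown>
  text: Promise<string>
}

async function drainAndRead(stream: TextStream): Promise<string> {
  // Consume every chunk first, so the underlying stream is fully
  // drained before we wait on the aggregated result...
  for await (const _chunk of stream.fullStream) {
    // discard; only the side effect of consumption matters here
  }
  // ...then the text promise can settle without the stream backing up.
  return stream.text
}
```

The point is ordering: awaiting the aggregate `text` promise without consuming the chunk stream is what the commit identifies as the potential hang.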
    // Discard if user changed the prompt during enhancement
    if (store.prompt.input !== inputText) return
    if (enhanced !== inputText) {
Bug: An early return in the auto-enhance logic silently abandons prompt submission if the user edits the input while enhancement is running, orphaning the session.
Severity: HIGH
Suggested Fix
Remove the early return statement at line 629 within the submit() function. This will allow the submission process to continue with the original inputText that initiated the enhancement, preventing the silent failure and ensuring the user's prompt is processed as intended, even if they have typed additional text since.
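One way to express the intended behavior is a small pure helper that picks which text to submit; this is a hypothetical sketch for illustration, not the actual `submit()` code, and the function name is invented.

```typescript
// Decide what text to submit after an enhancement attempt finishes.
function resolveSubmitText(
  current: string, // what is in the prompt box right now
  original: string, // what the user had when enhancement started
  enhanced: string, // what the small model returned
): string {
  // User edited while enhancement was in flight: their latest text wins,
  // and submission continues instead of silently returning.
  if (current !== original) return current
  // Otherwise prefer the enhanced version when it actually changed.
  return enhanced !== original ? enhanced : original
}
```

With this shape there is no code path that abandons submission, which is the orphaned-session failure mode the comment describes.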
Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent.
Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not
valid.
Location: packages/opencode/src/cli/cmd/tui/component/prompt/index.tsx#L628-L630
Potential issue: When the `auto_enhance_prompt` feature is enabled, if a user modifies
their prompt text while the enhancement is in progress, the `submit()` function will
exit prematurely. This occurs because a check `if (store.prompt.input !== inputText)` at
line 629 triggers an early `return`. This happens after a new session has been created
but before the prompt is sent to the server, resulting in an orphaned session with no
message. The user receives no feedback that the submission failed, and the input text
remains in the prompt field, leading to a silent failure of a core user action.
- Use unique `MessageID`/`SessionID` instead of hardcoded `"enhance-prompt"` string for synthetic LLM calls (prevents future API validation breakage)
- Add `enhancingInProgress` guard to manual enhance handler to prevent concurrent enhancement race with auto-enhance on submit
- Fix auto-enhance early `return` that orphaned sessions: now uses latest user text and continues submission instead of silently abandoning it

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
What does this PR do?
Adds an AI-powered prompt enhancement feature that uses a small/cheap model (e.g. Haiku) to rewrite rough user prompts into clearer, more specific versions before sending to the main model. Inspired by KiloCode's implementation.
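As a rough illustration of the `clean()` sanitization step the commits mention, here is a hedged sketch; the regex is an assumption written for this example, not the actual `enhance-prompt.ts` implementation.

```typescript
// Strip a code fence only when it wraps the ENTIRE model response,
// so code blocks inside the enhanced prompt survive untouched.
function clean(text: string): string {
  let out = text.trim()
  const fenced = out.match(/^```[a-zA-Z]*\n([\s\S]*?)\n```$/)
  if (fenced) out = fenced[1].trim()
  return out
}
```

Anchoring the pattern to both ends of the string is what distinguishes "the model wrapped its whole answer in a fence" from "the enhanced prompt legitimately contains a code block", which is the bug the review findings commit fixed.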
How it works:
- The user presses `<leader>i` (or selects "Enhance prompt" from the command palette)

Files changed:

- `packages/opencode/src/altimate/enhance-prompt.ts` — Core enhancement logic using `Provider.getSmallModel()` + `LLM.stream()`
- `packages/opencode/src/cli/cmd/tui/component/prompt/index.tsx` — TUI command registration + bottom bar hint
- `packages/opencode/src/config/config.ts` — `prompt_enhance` keybind config (`<leader>i` default)
- `packages/opencode/test/altimate/enhance-prompt.test.ts` — Unit tests for text cleaning

Type of change
Issue for this PR
Closes #143
How did you verify your code works?
- Typecheck passes (`bun turbo typecheck`)
- `bun test test/altimate/enhance-prompt.test.ts`

Checklist
🤖 Generated with Claude Code