
feat: add AI-powered prompt enhancement#144

Merged
anandgupta42 merged 6 commits into main from feat/prompt-enhancement
Mar 15, 2026
Conversation

@anandgupta42
Contributor

What does this PR do?

Adds an AI-powered prompt enhancement feature that uses a small/cheap model (e.g. Haiku) to rewrite rough user prompts into clearer, more specific versions before sending to the main model. Inspired by KiloCode's implementation.

How it works:

  1. User types a rough prompt like "fix the auth bug"
  2. Presses <leader>i (or selects "Enhance prompt" from command palette)
  3. A small model rewrites it into a more specific, structured prompt
  4. The enhanced version replaces the input text, ready to submit
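
The round-trip above can be sketched as follows. This is an illustrative sketch, not the PR's actual implementation: `callSmallModel` is a hypothetical stand-in for the real `Provider.getSmallModel()` + `LLM.stream()` pipeline, and the system prompt text is invented for the example.

```typescript
// Illustrative sketch of the enhancement round-trip.
// `callSmallModel` stands in for Provider.getSmallModel() + LLM.stream().
type SmallModel = (system: string, user: string) => Promise<string>;

const SYSTEM_PROMPT =
  "Rewrite the user's rough prompt into a clearer, more specific version. " +
  "Return only the rewritten prompt.";

async function enhancePrompt(input: string, callSmallModel: SmallModel): Promise<string> {
  const trimmed = input.trim();
  if (!trimmed) return input; // nothing to enhance
  try {
    const enhanced = (await callSmallModel(SYSTEM_PROMPT, trimmed)).trim();
    return enhanced || input; // fall back if the model returns nothing
  } catch {
    return input; // enhancement failure must never block submission
  }
}
```

With this shape, a rough prompt like "fix the auth bug" comes back as a more specific rewrite, and any model failure silently falls back to the original text.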

Files changed:

  • packages/opencode/src/altimate/enhance-prompt.ts — Core enhancement logic using Provider.getSmallModel() + LLM.stream()
  • packages/opencode/src/cli/cmd/tui/component/prompt/index.tsx — TUI command registration + bottom bar hint
  • packages/opencode/src/config/config.ts — prompt_enhance keybind config (<leader>i default)
  • packages/opencode/test/altimate/enhance-prompt.test.ts — Unit tests for text cleaning

Type of change

  • New feature (non-breaking change that adds functionality)

Issue for this PR

Closes #143

How did you verify your code works?

  • Typecheck passes (bun turbo typecheck)
  • Unit tests pass (9/9) — bun test test/altimate/enhance-prompt.test.ts
  • Code follows existing patterns (modeled after title agent's small model usage)

Checklist

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • New and existing unit tests pass locally with my changes

🤖 Generated with Claude Code

Comment on lines 548 to 553
```diff
   }
   await Session.updateMessage(summaryUserMsg)
   await Session.updatePart({
-    id: Identifier.ascending("part"),
+    id: PartID.ascending(),
     messageID: summaryUserMsg.id,
     sessionID,
```

Bug: The removal of the compactionAttempts counter can lead to an infinite loop if session compaction repeatedly fails to reduce the context size sufficiently.
Severity: HIGH

Suggested Fix

Reintroduce the compactionAttempts counter and a MAX_COMPACTION_ATTEMPTS constant (e.g., set to 3). Before continuing the loop for another compaction, check if the attempt count has exceeded the maximum. If it has, break the loop and surface an error to the user.
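
A minimal sketch of the suggested guard, with the loop body abstracted away: `compactOnce` is a hypothetical stand-in for one `SessionCompaction.process()` round that reports whether the context now fits, since the real call sites live in `prompt.ts`.

```typescript
// Sketch of the suggested fix: bound consecutive compaction attempts and
// surface an error instead of looping forever when compaction keeps failing.
const MAX_COMPACTION_ATTEMPTS = 3;

async function compactWithLimit(compactOnce: () => Promise<boolean>): Promise<void> {
  for (let attempt = 1; attempt <= MAX_COMPACTION_ATTEMPTS; attempt++) {
    // compactOnce() returns true once the context fits again
    if (await compactOnce()) return;
  }
  // Break out and tell the user instead of burning API credits indefinitely
  throw new Error(`compaction failed after ${MAX_COMPACTION_ATTEMPTS} attempts`);
}
```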

Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent.
Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not
valid.

Location: packages/opencode/src/session/prompt.ts#L548-L553

Potential issue: The code change removes a previously implemented safeguard that limited
consecutive session compaction attempts. Without the `compactionAttempts` counter and
`MAX_COMPACTION_ATTEMPTS` guard, a scenario can arise where a session's context
repeatedly overflows even after compaction. This will cause an infinite loop, consuming
server resources and API credits without terminating or notifying the user. While
`SessionCompaction.process()` can set an error, the calling code in `prompt.ts` does not
check for this error and unconditionally continues the loop, creating a resource
exhaustion risk.

anandgupta42 force-pushed the feat/prompt-enhancement branch from f1addf4 to 49c28a9 on March 15, 2026 15:56
anandgupta42 and others added 3 commits March 15, 2026 11:12
feat: add prompt enhancement feature

Add AI-powered prompt enhancement that rewrites rough user prompts into
clearer, more specific versions before sending to the main model.

- Add `enhancePrompt()` utility using a small/cheap model to polish prompts
- Register `prompt.enhance` TUI command with `<leader>i` keybind
- Show "enhance" hint in the bottom bar alongside agents/commands
- Add `prompt_enhance` keybind to config schema
- Add unit tests for the `clean()` text sanitization function

Inspired by KiloCode's prompt enhancement feature.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
feat: improve enhancement prompt with research-backed approach and add auto-enhance config

- Rewrite system prompt based on AutoPrompter research (5 missing info
  categories: specifics, action plan, scope, verification, intent)
- Add few-shot examples for data engineering tasks (dbt, SQL, migrations)
- Add `experimental.auto_enhance_prompt` config flag (default: false)
- Auto-enhance normal prompts on submit when enabled (skips shell/slash)
- Export `isAutoEnhanceEnabled()` for config-driven behavior

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
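
The `isAutoEnhanceEnabled()` check can be sketched like this; the config shape is assumed from the commit message (`experimental.auto_enhance_prompt`, default false), not copied from the repository's schema.

```typescript
// Sketch of the config-driven flag check. Only an explicit `true`
// enables auto-enhance; absent or malformed config defaults to off.
interface Config {
  experimental?: { auto_enhance_prompt?: boolean };
}

function isAutoEnhanceEnabled(config: Config | undefined): boolean {
  return config?.experimental?.auto_enhance_prompt === true;
}
```
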
fix: address code review findings for prompt enhancement

- Add 15s timeout via `AbortController` to prevent indefinite hangs
- Extract `ENHANCE_ID` constant and document synthetic `as any` casts
- Fix `clean()` regex to match full-string code fences only (avoids
  stripping inner code blocks)
- Export `stripThinkTags()` as separate utility for testability
- Move auto-enhance before extmark expansion (prevents sending
  expanded paste content to the small model)
- Add toast feedback and error logging for auto-enhance path
- Update `store.prompt.input` after enhancement so history is accurate
- Add outer try/catch with logging to `enhancePrompt()`
- Expand tests from 9 to 30: `stripThinkTags()`, `clean()` edge
  cases, combined pipeline tests

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
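
The 15s `AbortController` timeout can be wired up roughly as below. This is a generic sketch: the assumption is that the callee (in the real code, the LLM stream request) accepts an `AbortSignal`; `withTimeout` is an illustrative helper name.

```typescript
// Sketch of a bounded LLM call: abort the signal after `ms` and always
// clear the timer so a fast response doesn't leave a pending timeout.
async function withTimeout<T>(
  run: (signal: AbortSignal) => Promise<T>,
  ms = 15_000,
): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    return await run(controller.signal);
  } finally {
    clearTimeout(timer); // runs on both success and abort
  }
}
```
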
anandgupta42 force-pushed the feat/prompt-enhancement branch from 49c28a9 to ea70958 on March 15, 2026 18:13
fix: handle unclosed `<think>` tags from truncated model output

When the small model hits its token limit mid-generation, `<think>` tags
may not have a closing `</think>`. The previous regex required a closing
tag, which would leak the entire reasoning block into the enhanced prompt.

Now `stripThinkTags()` matches both closed and unclosed think blocks.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
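
The fix described above amounts to making the closing tag optional in the pattern. A sketch of the behavior (the regex here is my reconstruction from the commit message, not the PR's exact pattern):

```typescript
// Strip <think>…</think> reasoning blocks. The closing tag is optional
// so a block truncated at the token limit is still removed instead of
// leaking into the enhanced prompt.
function stripThinkTags(text: string): string {
  return text.replace(/<think>[\s\S]*?(?:<\/think>|$)/g, "").trim();
}
```
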
Comment on lines +622 to +632
```typescript
      if (enhanced !== inputText) {
        inputText = enhanced
        setStore("prompt", "input", enhanced)
      }
    }
  } catch (err) {
    // Enhancement failure should never block prompt submission
    console.error("auto-enhance failed, using original prompt", err)
  }
}
// altimate_change end
```


fix: address remaining review findings — history, debounce, tests

- Fix history storing original text instead of enhanced text by passing
  `inputText` explicitly to `history.append()` instead of spreading
  `store.prompt` which may contain stale state
- Add concurrency guard (`enhancingInProgress` flag) to prevent multiple
  concurrent auto-enhance LLM calls from rapid submissions
- Consolidate magic string into `ENHANCE_NAME` constant used across
  agent name, user agent, log service, and ID derivation
- Add justifying comment for `as any` cast on synthetic IDs explaining
  why branded types are safely bypassed
- Add `isAutoEnhanceEnabled()` tests (5 cases): config absent, present
  but missing flag, false, true, undefined
- Add `enhancePrompt()` tests (10 cases): empty input, whitespace,
  successful enhancement, think tag stripping, code fence stripping,
  stream.text failure, stream init failure, empty LLM response, think
  tags with no content, combined pipeline

Test count: 32 -> 48

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Comment on lines +208 to +217
```typescript
onSelect: async (dialog) => {
  if (!store.prompt.input.trim()) return
  dialog.clear()
  const original = store.prompt.input
  toast.show({
    message: "Enhancing prompt...",
    variant: "info",
    duration: 2000,
  })
  try {
```


fix: address Sentry findings — stream consumption and race condition

- Explicitly consume `stream.fullStream` before awaiting `stream.text`
  to prevent potential hangs from Vercel AI SDK stream not being drained
- Add race condition guard to manual enhance command: if user edits the
  prompt while enhancement is in-flight, discard the stale result
- Add same guard to auto-enhance path in `submit()` for consistency
- Update LLM mock to include `fullStream` async iterable

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
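
The drain-then-await pattern can be sketched as below. The `StreamResult` interface mirrors the relevant slice of the Vercel AI SDK's `streamText` result (a `fullStream` async iterable plus a `text` promise); the object used here is a stand-in, not the real SDK type.

```typescript
// Minimal shape of the stream result we care about.
interface StreamResult {
  fullStream: AsyncIterable<unknown>;
  text: Promise<string>;
}

async function readEnhanced(stream: StreamResult): Promise<string> {
  // Consume every stream event so the underlying stream is fully drained;
  // the individual events are ignored, only the aggregate text is needed.
  for await (const _event of stream.fullStream) {
    // intentionally empty
  }
  // With the stream drained, the aggregate promise is guaranteed to settle.
  return stream.text;
}
```
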
anandgupta42 merged commit c189028 into main on Mar 15, 2026
7 of 8 checks passed
Comment on lines +628 to +630
```typescript
// Discard if user changed the prompt during enhancement
if (store.prompt.input !== inputText) return
if (enhanced !== inputText) {
```

Bug: An early return in the auto-enhance logic silently abandons prompt submission if the user edits the input while enhancement is running, orphaning the session.
Severity: HIGH

Suggested Fix

Remove the early return statement at line 629 within the submit() function. This will allow the submission process to continue with the original inputText that initiated the enhancement, preventing the silent failure and ensuring the user's prompt is processed as intended, even if they have typed additional text since.
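
The repaired decision can be sketched as a pure helper. This is illustrative only: the real fix (per the follow-up commit) continues submission with the latest user text, and `resolveSubmittedText` and its parameter names are hypothetical.

```typescript
// Sketch of the fixed race handling: never abandon the submission.
// If the user edited the prompt while enhancement was in flight, their
// latest text wins; otherwise use the enhanced text when it changed.
function resolveSubmittedText(
  original: string,   // text captured when enhancement started
  enhanced: string,   // small-model rewrite of `original`
  currentInput: string, // prompt contents at submit time
): string {
  if (currentInput !== original) return currentInput; // user edit wins
  return enhanced !== original ? enhanced : original;
}
```

Either way a non-empty string is returned and the prompt is sent, so no session is left orphaned.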

Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent.
Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not
valid.

Location: packages/opencode/src/cli/cmd/tui/component/prompt/index.tsx#L628-L630

Potential issue: When the `auto_enhance_prompt` feature is enabled, if a user modifies
their prompt text while the enhancement is in progress, the `submit()` function will
exit prematurely. This occurs because a check `if (store.prompt.input !== inputText)` at
line 629 triggers an early `return`. This happens after a new session has been created
but before the prompt is sent to the server, resulting in an orphaned session with no
message. The user receives no feedback that the submission failed, and the input text
remains in the prompt field, leading to a silent failure of a core user action.

anandgupta42 added a commit that referenced this pull request Mar 15, 2026
- Use unique `MessageID`/`SessionID` instead of hardcoded `"enhance-prompt"`
  string for synthetic LLM calls (prevents future API validation breakage)
- Add `enhancingInProgress` guard to manual enhance handler to prevent
  concurrent enhancement race with auto-enhance on submit
- Fix auto-enhance early `return` that orphaned sessions: now uses latest
  user text and continues submission instead of silently abandoning it

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
anandgupta42 added a commit that referenced this pull request Mar 15, 2026
- Use unique `MessageID`/`SessionID` instead of hardcoded `"enhance-prompt"`
  string for synthetic LLM calls (prevents future API validation breakage)
- Add `enhancingInProgress` guard to manual enhance handler to prevent
  concurrent enhancement race with auto-enhance on submit
- Fix auto-enhance early `return` that orphaned sessions: now uses latest
  user text and continues submission instead of silently abandoning it

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
anandgupta42 added a commit that referenced this pull request Mar 17, 2026
* feat: add prompt enhancement feature

Add AI-powered prompt enhancement that rewrites rough user prompts into
clearer, more specific versions before sending to the main model.

- Add `enhancePrompt()` utility using a small/cheap model to polish prompts
- Register `prompt.enhance` TUI command with `<leader>i` keybind
- Show "enhance" hint in the bottom bar alongside agents/commands
- Add `prompt_enhance` keybind to config schema
- Add unit tests for the `clean()` text sanitization function

Inspired by KiloCode's prompt enhancement feature.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: improve enhancement prompt with research-backed approach and add auto-enhance config

- Rewrite system prompt based on AutoPrompter research (5 missing info
  categories: specifics, action plan, scope, verification, intent)
- Add few-shot examples for data engineering tasks (dbt, SQL, migrations)
- Add `experimental.auto_enhance_prompt` config flag (default: false)
- Auto-enhance normal prompts on submit when enabled (skips shell/slash)
- Export `isAutoEnhanceEnabled()` for config-driven behavior

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address code review findings for prompt enhancement

- Add 15s timeout via `AbortController` to prevent indefinite hangs
- Extract `ENHANCE_ID` constant and document synthetic `as any` casts
- Fix `clean()` regex to match full-string code fences only (avoids
  stripping inner code blocks)
- Export `stripThinkTags()` as separate utility for testability
- Move auto-enhance before extmark expansion (prevents sending
  expanded paste content to the small model)
- Add toast feedback and error logging for auto-enhance path
- Update `store.prompt.input` after enhancement so history is accurate
- Add outer try/catch with logging to `enhancePrompt()`
- Expand tests from 9 to 30: `stripThinkTags()`, `clean()` edge
  cases, combined pipeline tests

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: handle unclosed `<think>` tags from truncated model output

When the small model hits its token limit mid-generation, `<think>` tags
may not have a closing `</think>`. The previous regex required a closing
tag, which would leak the entire reasoning block into the enhanced prompt.

Now `stripThinkTags()` matches both closed and unclosed think blocks.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address remaining review findings — history, debounce, tests

- Fix history storing original text instead of enhanced text by passing
  `inputText` explicitly to `history.append()` instead of spreading
  `store.prompt` which may contain stale state
- Add concurrency guard (`enhancingInProgress` flag) to prevent multiple
  concurrent auto-enhance LLM calls from rapid submissions
- Consolidate magic string into `ENHANCE_NAME` constant used across
  agent name, user agent, log service, and ID derivation
- Add justifying comment for `as any` cast on synthetic IDs explaining
  why branded types are safely bypassed
- Add `isAutoEnhanceEnabled()` tests (5 cases): config absent, present
  but missing flag, false, true, undefined
- Add `enhancePrompt()` tests (10 cases): empty input, whitespace,
  successful enhancement, think tag stripping, code fence stripping,
  stream.text failure, stream init failure, empty LLM response, think
  tags with no content, combined pipeline

Test count: 32 -> 48

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address Sentry findings — stream consumption and race condition

- Explicitly consume `stream.fullStream` before awaiting `stream.text`
  to prevent potential hangs from Vercel AI SDK stream not being drained
- Add race condition guard to manual enhance command: if user edits the
  prompt while enhancement is in-flight, discard the stale result
- Add same guard to auto-enhance path in `submit()` for consistency
- Update LLM mock to include `fullStream` async iterable

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
anandgupta42 added a commit that referenced this pull request Mar 17, 2026
- Use unique `MessageID`/`SessionID` instead of hardcoded `"enhance-prompt"`
  string for synthetic LLM calls (prevents future API validation breakage)
- Add `enhancingInProgress` guard to manual enhance handler to prevent
  concurrent enhancement race with auto-enhance on submit
- Fix auto-enhance early `return` that orphaned sessions: now uses latest
  user text and continues submission instead of silently abandoning it

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
anandgupta42 deleted the feat/prompt-enhancement branch on March 17, 2026 00:59