
feat(extract): allow changing LLM model before rerunning extraction#560

Merged
cpcloud merged 1 commit into main from worktree-shimmying-coalescing-tulip
Feb 28, 2026


Conversation

@cpcloud
Collaborator

@cpcloud cpcloud commented Feb 27, 2026

Summary

  • Pressing `r` on a completed LLM step opens an inline model picker instead of immediately rerunning
  • Fuzzy filter by typing, navigate with arrow keys, select with Enter, dismiss with Esc
  • Selecting a local model switches the extraction client and reruns; non-local models trigger a pull first
  • Auto-reruns extraction after the pull completes if the overlay is still open
  • Filter prompt shows the typed text with a blinking block cursor
  • Hint text updated from "r to rerun" to "r model"
  • Refactors `renderModelCompleter` and `refilterCompleter` into reusable functions shared between the chat and extraction overlays
  • Adds a `BlinkCursor` style to the styles singleton
  • 10 user-flow tests covering picker activation, dismissal, selection, filtering, navigation, step preservation, and guard rails

closes #510
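The fuzzy filter described above can be sketched as a simple case-insensitive subsequence match. This is not the PR's actual implementation (the real `refilterCompleter` is not shown here); the function names and the model list below are illustrative assumptions.

```go
package main

import (
	"fmt"
	"strings"
)

// fuzzyMatch reports whether every character of pattern appears in s
// in order (a minimal case-insensitive subsequence match; the real
// picker's matching logic may differ).
func fuzzyMatch(pattern, s string) bool {
	p := strings.ToLower(pattern)
	t := strings.ToLower(s)
	i := 0
	for _, r := range t {
		if i < len(p) && rune(p[i]) == r {
			i++
		}
	}
	return i == len(p)
}

// refilter keeps only the models whose names fuzzy-match the typed text,
// mimicking the picker narrowing as the user types.
func refilter(models []string, typed string) []string {
	var out []string
	for _, m := range models {
		if fuzzyMatch(typed, m) {
			out = append(out, m)
		}
	}
	return out
}

func main() {
	// Hypothetical model names for illustration only.
	models := []string{"llama3.2", "qwen2.5", "mistral-nemo"}
	fmt.Println(refilter(models, "lm3")) // matches "llama3.2"
}
```

Typing "lm3" narrows the list to entries containing `l`, `m`, `3` in order, which is why a few keystrokes are enough to isolate one model.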

Pressing r on a completed LLM step now opens an inline model picker
instead of immediately rerunning. Users can fuzzy-filter, navigate with
arrow keys, and select a model -- the extraction client is switched and
the LLM step reruns with the chosen model. Non-local models trigger a
pull before rerunning.
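The select-then-pull flow above amounts to a small state machine: a local selection reruns immediately, a non-local selection starts a pull, and the pull's completion handler reruns only if the overlay is still open. The sketch below is a hedged illustration under that reading; the type and method names are invented, not the PR's actual code.

```go
package main

import "fmt"

// pickerState tracks a hypothetical inline model-picker overlay.
type pickerState struct {
	open        bool
	pullPending string // model currently being pulled, "" if none
}

// selectModel decides what happens when a model is chosen:
// local models switch the client and rerun immediately; others
// must be pulled first, leaving the overlay open.
func (p *pickerState) selectModel(name string, isLocal func(string) bool) string {
	if isLocal(name) {
		p.open = false
		return "rerun:" + name
	}
	p.pullPending = name
	return "pull:" + name
}

// onPullComplete auto-reruns extraction only if the overlay is
// still open; a dismissed overlay turns the completion into a no-op.
func (p *pickerState) onPullComplete() string {
	name := p.pullPending
	p.pullPending = ""
	if !p.open {
		return "noop"
	}
	p.open = false
	return "rerun:" + name
}

func main() {
	local := func(m string) bool { return m == "llama3.2" } // illustrative
	p := &pickerState{open: true}
	fmt.Println(p.selectModel("qwen2.5", local)) // pull:qwen2.5
	fmt.Println(p.onPullComplete())              // rerun:qwen2.5
}
```

Gating the auto-rerun on `open` is what keeps a pull that outlives a dismissed overlay from rerunning extraction behind the user's back.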

Refactors renderModelCompleter and refilterCompleter into reusable
functions shared between chat and extraction overlays.

closes #510

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@cpcloud cpcloud force-pushed the worktree-shimmying-coalescing-tulip branch from 7239016 to 3b7c6ab on February 28, 2026 10:57
@cpcloud cpcloud merged commit 680efde into main Feb 28, 2026
15 checks passed
@cpcloud cpcloud deleted the worktree-shimmying-coalescing-tulip branch February 28, 2026 11:03
cpcloud added a commit that referenced this pull request Mar 19, 2026
…560)

