
refactor(picker): unify list pickers onto async stream, fix load-more block#306

Merged
barrettruth merged 3 commits into main from fix/async-load-more
Apr 17, 2026

Conversation

@barrettruth
Owner

Problem

Pressing <cr> on the Load more... row in a list picker froze nvim's entire event loop until the underlying CLI (gh run list, glab mr list, tea actions runs list, gh release list) returned. The load-more path called fetch_json_now, which bottomed out at vim.system(...):wait() — a full synchronous wait. LSP callbacks, async IO, and UI redraws all stalled.

The picker already had an async streaming architecture for the initial fetch (picker_session.pick_json + fzf-lua's libuv pipe source command), but the load-more action bypassed it because the reload flow needed current_prs / current_runs / etc. updated before fzf re-invoked the source.

Solution

Unify all four list pickers (pr, issue, ci, release) onto a single cache-aware async stream function, and drop the sync fetch helper.

The key realization is that fzf-lua's stringify pipe wrapper keeps the source command open until our Lua side writes nil to the emit callback — it doesn't close when the Lua function returns. So a stream function can:

  1. Emit synchronously from cache (fast, no IO) when data is fresh.
  2. Call picker_session.request_json with an async callback, return early, and let the callback emit rows into the still-open pipe when the fetch completes. Nvim's event loop keeps pumping the whole time.
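The two paths above can be sketched as follows. This is an illustrative closure, not the plugin's actual code: `cmd`, `current_runs`, `runs_stale`, and `render_line` stand in for the real per-picker state and helpers; only `picker_session.request_json` and the emit-`nil`-to-close contract come from this PR.

```lua
-- Hypothetical stream closure for the ci picker.
local function stream(emit)
  if current_runs and not runs_stale then
    -- Path 1: cache is fresh — emit synchronously, no IO.
    for i, run in ipairs(current_runs) do
      emit(render_line(run, i))
    end
    emit(nil) -- final nil closes the pipe
    return
  end
  -- Path 2: kick off the async fetch and return immediately.
  picker_session.request_json(cmd, function(runs)
    current_runs, runs_stale = runs, false
    for i, run in ipairs(runs) do
      emit(render_line(run, i)) -- pipe is still open: fzf accepts late rows
    end
    emit(nil)
  end)
  -- Returning here does NOT close the pipe; nvim's event loop keeps pumping.
end
```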

Picker surface

  • forge.PickerOpts.entry_source and initial_stream_only are gone. picker.pick now just takes stream, which is invoked on every source-command run (initial open and every reload). tracked and reload = true are both driven by stream ~= nil.
  • picker_session.request_json is now public, so list pickers can kick off async fetches directly from their stream closures.
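A call site under the new surface reduces to something like this (illustrative; option names other than `stream` are assumptions):

```lua
-- `stream` is now the only entry-producing option.
picker.pick({
  prompt = 'PRs> ', -- assumed option name, for illustration only
  stream = stream,  -- invoked on initial open and on every reload
  -- `tracked` and `reload = true` are derived internally from stream ~= nil
})
```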

Per-picker state

Each list picker holds three locals: current_&lt;kind&gt;s (cache), current_limit (visible limit), and &lt;kind&gt;s_stale (bool). The stream function checks the stale flag and either emits synchronously from cache or fetches. The load-more action only bumps current_limit and sets &lt;kind&gt;s_stale = true; fzf's reload then re-enters the stream, and the async fetch runs without blocking nvim.
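A minimal sketch of that action, assuming a hypothetical PAGE_SIZE constant and the ci picker's locals:

```lua
-- Hypothetical load-more action: mutate state only; no IO happens here.
local function load_more_action()
  current_limit = current_limit + PAGE_SIZE -- PAGE_SIZE is an assumption
  runs_stale = true
  -- fzf's reload bind then re-runs the source command; the stream sees
  -- runs_stale and fetches via picker_session.request_json asynchronously.
end
```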

Scope & ordering fixes

maybe_prefetch_next is now forward-declared before the stream closures that call it (caught in review: the old declaration order crashed with attempt to call global 'maybe_prefetch_next' (a nil value) when the callback finally fired). Prefetch runs inside the stream success callback or, for cache-hit opens, right after picker.pick — no longer before the main fetch, where it could race the initial response.
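The fix is the standard Lua forward-declaration pattern: declare the local before the closures that capture it, assign it afterwards. A sketch (bodies are illustrative; only the maybe_prefetch_next name and the failure mode come from this PR):

```lua
local maybe_prefetch_next -- declared up front so the closure below captures the local

local function stream(emit)
  picker_session.request_json(cmd, function(data)
    -- Without the forward declaration, this name resolved to a nil global
    -- by the time the async callback fired.
    maybe_prefetch_next()
  end)
end

maybe_prefetch_next = function()
  -- prefetch the next picker state here
end
```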

Picker backend

  • picker/fzf.lua: entries is reset per lines() invocation; the emit wrapper increments next_index and honors track_redirect, and line rendering keeps the hidden header column appended at the tail so the {3}/{2} field_index reload binds continue to resolve the index.
  • picker/session.lua: pick_json still supports the older caller shape (cmd + build_entries + open), but now prefers opts.stream when the caller constructs a cache-aware stream directly.

Tests

  • spec/fzf_picker_spec.lua: the tests that exercised entry_source / initial_stream_only are rewritten against the stream-only contract. Tracked layout is now <track_id>\t<text>\t<index>[\t<header>] on every live picker.
  • spec/pickers_spec.lua: the picker.pick stub wraps opts.stream so invoking it populates captured.entries through a capturing emit, resetting entries per invocation so stale-response tests can drive two fetches in a row without state bleed. Cache-hit tests drive the stream explicitly via captured.stream(function() end). Load-more tests drive the stream once more after the action fn bumps the limit and assert the second request_json fires.
  • spec/helpers.lua: gains a drive_stream convenience and updates action_labels to resolve function labels.
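One possible shape of that capturing stub, inferred from the spec description above (a sketch, not the actual test code):

```lua
-- picker.pick stub: expose the stream and capture everything it emits.
picker.pick = function(opts)
  captured.stream = function(emit)
    captured.entries = {} -- reset per invocation: no state bleed across fetches
    opts.stream(function(line)
      if line ~= nil then
        table.insert(captured.entries, line)
      end
      emit(line)
    end)
  end
end

-- Cache-hit tests can then drive it with a no-op emit:
-- captured.stream(function() end)
```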

LuaCATS

All new stream_*, emit_cached*, and the extended render_line / sanitize_header signatures gained proper ---@param / ---@return annotations so lua-language-server stays sharp at the new call sites.
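For illustration, the annotation shape looks roughly like this; parameter names and types are assumptions, and only the helper names come from the diff:

```lua
---@param emit fun(line: string|nil) # row writer; nil closes the pipe
local function stream_runs(emit) end

---Extended render_line signature (types illustrative).
---@param item table
---@param index integer
---@param header? string
---@return string
local function render_line(item, index, header) end
```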

Verification

Full scripts/ci.sh green: stylua, selene, prettier, nix fmt, lua-language-server, vimdoc-language-server, and 502 busted tests.

Not verified: real-runtime behavior of the reload-reload-fetch sequence in a live picker against a genuinely slow CLI. The streaming architecture is the same one the initial fetch has been using for a while, so the machinery itself is exercised, but the specific "press Load more, keep nvim responsive, entries appear asynchronously" round-trip should be smoke-tested once before merging.

refactor(picker): unify list pickers onto async stream, fix load-more block

Pressing <cr> on "Load more..." used to call fetch_json_now which
issued vim.system():wait() synchronously, blocking nvim's entire
event loop until the underlying CLI (gh run list, glab mr list, etc.)
returned. The picker already had an async streaming architecture
for the initial fetch, but load-more bypassed it.

Unify all four list pickers (pr, issue, ci, release) onto a single
cache-aware async stream function:

- Drop forge.PickerOpts.entry_source and initial_stream_only. The
  picker backend now calls opts.stream on every invocation of the
  source command (initial open and every reload). The tracked
  (fzf --track) and reload flags are derived from stream ~= nil.
- Expose picker_session.request_json so picker closures can start
  their own async fetches.
- Each list picker holds three locals: current_<kind>s (cache),
  current_limit (visible limit), <kind>s_stale (bool). The stream
  function emits from cache synchronously when data is fresh, or
  kicks off request_json and emits from the async callback when it
  is stale. fzf-lua's pipe-based stringify keeps the source command
  open until the final emit(nil), so the callback can write new
  rows into the same reload without blocking nvim.
- The load-more action now just bumps current_limit and sets the
  stale flag; fzf's reload re-enters the stream which handles the
  async fetch. No more fetch_json_now, no more vim.system():wait().
- Move maybe_prefetch_next above its caller and fire it only after
  the main fetch succeeds (or when a seeded cache already fills the
  picker) so the next-state prefetch does not race the main fetch.

Picker-side cleanup:
- picker/fzf.lua: entries is reset per lines() invocation; the emit
  wrapper increments next_index and honors track_redirect; line
  rendering keeps the hidden header column appended at the tail so
  the {3}/{2} field_index reload binds continue to resolve the index.
- picker/session.lua: pick_json still supports the older caller
  shape (cmd + build_entries + open), but now prefers opts.stream
  when the caller constructs a cache-aware stream directly.

Tests:
- spec/fzf_picker_spec.lua: rewrite the entry_source and
  initial_stream_only tests to exercise the stream-only
  contract. Tracked
  layout is now <track_id>\t<text>\t<index>[\t<header>] for every
  live picker.
- spec/pickers_spec.lua: the picker.pick stub wraps opts.stream so
  invoking it populates captured.entries through a capturing emit,
  resetting entries per invocation so stale-response tests can drive
  two fetches in a row without duplicate state. Cache-hit tests
  drive the stream explicitly via captured.stream(function() end).
  Load-more tests drive the stream once more after the action fn
  bumps the limit and assert the second request_json fires.
- spec/helpers.lua: gains a drive_stream convenience and carries
  action_labels across both string and function labels.

LuaCATS: annotate the new stream / emit helpers and the extended
render_line and sanitize_header signatures so lua-language-server
stays sharp at the new call sites.

Three of the vim.wait predicates in fzf_picker_spec existed only
because the fzf-wrapped action_fn uses vim.schedule when the action
closes. Marking the test actions as close = false makes action_fn
take the direct-call path, so the assertion runs synchronously and
the vim.wait comes out.

The fourth vim.wait (the release load-more test) stays because it is
genuinely waiting on a vim.system -> vim.schedule -> emit chain that
cannot be made synchronous without compromising request_json.

Also drop the drive_stream helper from spec/helpers.lua. It was added
during the refactor but no caller ended up using it; tests invoke
captured.stream(function() end) inline where they need to.
barrettruth merged commit 0280aa3 into main Apr 17, 2026
8 checks passed
barrettruth deleted the fix/async-load-more branch April 17, 2026 23:19