feat(ui): more efficient redraw#515

Open
sed-i wants to merge 5 commits into kdash-rs:main from sed-i:feature/handle-before-draw

Conversation

@sed-i (Contributor) commented Apr 13, 2026

Problem

In the UI, when I press <d> to obtain the output of kubectl describe and scroll through the output, the response feels a bit laggy and sluggish. The lagginess is also noticeable when moving the selection up/down.

Solution

  • Handle-before-draw
  • Borrow instead of clone
  • Drain pending events
  • Cache wrapped output
  • Cache rows when data unchanged

AI usage:

  • Analysis and implementation - Opus 4.6.

@sed-i (Contributor, Author) commented Apr 13, 2026

For completeness, here is the accompanying LLM output for the first commit of this PR.


1. 🔴 Draw happens before event handling (1-event visual lag)

This is the biggest architectural issue. Look at the main loop:

      /// kdash/src/main.rs#L271-L278
      // draw the UI layout
      terminal.draw(|f| ui::draw(f, &mut app))?;

      // handle events
      match event {
        event::Event::Input(key_event) => {
          info!("Input event received: {:?}", key_event);
          // quit on CTRL + C

The sequence on every keypress is:

  1. Receive Input(Down) event
  2. Draw → renders the old scroll position (wasted frame!)
  3. Handle the key → scroll state updates
  4. Wait for next event…
  5. Draw → finally shows the updated scroll position

So every scroll action is visually delayed by one event cycle. If no other events arrive quickly, that's up to 250ms (one tick) before you see the result of your keypress. This alone explains the "sluggish" feeling when pressing keys.

Fix: Move terminal.draw() to after the event handler, or ideally draw-then-handle-then-draw-again-if-changed.

Fix 1 — Draw after handle (src/main.rs)

The terminal.draw() call was moved from before event handling to after. Previously, every keypress rendered a stale frame first, causing up to 250ms visual lag. Now the rendered frame always reflects the latest state.

An initial draw before the loop was added so the UI appears immediately on startup.
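The effect of the reordering can be illustrated with a toy loop (plain Rust, no ratatui; "drawing" is simulated by recording what each frame would show, and the state is just a scroll offset):

```rust
/// Old loop: draw first, then handle the event. The frame rendered for a
/// keypress still shows the state from *before* that keypress.
fn draw_then_handle(state: &mut u16, events: &[u16]) -> Vec<u16> {
    let mut frames = Vec::new();
    for &delta in events {
        frames.push(*state); // draw (stale!)
        *state += delta;     // handle the event
    }
    frames
}

/// New loop: handle the event first, then draw. Every frame reflects the
/// event that triggered it.
fn handle_then_draw(state: &mut u16, events: &[u16]) -> Vec<u16> {
    let mut frames = Vec::new();
    for &delta in events {
        *state += delta;     // handle the event
        frames.push(*state); // draw (up to date)
    }
    frames
}

fn main() {
    let mut s1 = 0u16;
    let mut s2 = 0u16;
    // Three "scroll down" keypresses.
    assert_eq!(draw_then_handle(&mut s1, &[1, 1, 1]), vec![0, 1, 2]); // always one event behind
    assert_eq!(handle_then_draw(&mut s2, &[1, 1, 1]), vec![1, 2, 3]); // always current
    println!("ok");
}
```

In the old ordering every frame trails the input by one event, which is exactly the "up to 250ms" lag described above when the next event is a tick.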


2. 🔴 highlighted_lines.clone() on every render frame

    /// kdash/src/ui/utils.rs#L698-L703
    let paragraph = Paragraph::new(app.data.describe_out.highlighted_lines.clone())
      .block(block)
      .wrap(Wrap { trim: false })
      .scroll((
        app.data.describe_out.offset.min(u16::MAX as usize) as u16,
        0,
      ));

Every single render frame clones the entire Vec<Line<'static>> — that's every line of describe output, with every styled span within each line. For a kubectl describe of a pod with 200+ lines, this is a significant allocation happening 4+ times per second (on ticks) and on every keypress.

Fix: Use a borrow (&app.data.describe_out.highlighted_lines) or wrap in Rc<Vec<...>> to avoid the clone. Since Paragraph::new() accepts Text<'_> which can be borrowed, this should be achievable.

Fix 2 — Visible-slice describe rendering (src/ui/utils.rs)

Instead of highlighted_lines.clone() (cloning the entire Vec every frame), only a window of lines around the visible region is cloned:

  • slice_start = one screen-height before the current offset
  • slice_end = three screen-heights after the offset
  • The scroll offset is adjusted relative to the slice

This reduces both the clone cost and the wrapping cost from O(total_lines) to O(visible_window).
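The window arithmetic above can be captured as a small pure function. This is a sketch with hypothetical names (`visible_window`); the real code computes these values inline in `draw_yaml_block`:

```rust
/// Compute the slice bounds and the scroll offset relative to the slice.
/// `view_h` is the visible height (area height minus the block border).
fn visible_window(offset: usize, total: usize, view_h: usize) -> (usize, usize, u16) {
    // One screen-height before the current offset...
    let slice_start = offset.saturating_sub(view_h);
    // ...and three screen-heights after it, clamped to the line count.
    let slice_end = total.min(offset + view_h * 3);
    // Scroll offset re-expressed relative to slice_start.
    let adjusted = (offset - slice_start).min(u16::MAX as usize) as u16;
    (slice_start, slice_end, adjusted)
}

fn main() {
    // Mid-document: window spans [offset - h, offset + 3h).
    assert_eq!(visible_window(100, 500, 30), (70, 190, 30));
    // Near the top: the slice starts at 0 and the adjusted offset equals the offset.
    assert_eq!(visible_window(10, 500, 30), (0, 100, 10));
    println!("ok");
}
```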


3. 🟡 Paragraph with Wrap is O(total_lines), not just visible lines

ratatui's Paragraph with Wrap { trim: false } computes word-wrapping for all lines, not just the visible viewport. For a large describe output, the paragraph widget has to:

  1. Wrap every source line into visual rows
  2. Count total visual rows
  3. Skip to the scroll offset
  4. Render only the visible rows

Steps 1–2 are O(n) on every frame, even though only ~30 rows are visible. This compounds with issue #2 (the clone).

Fix: Pre-wrap the text once (when the data arrives) and use a simpler widget for scrolling, or at minimum avoid the re-wrap when the terminal width hasn't changed.
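One way to realize the "avoid the re-wrap when the width hasn't changed" idea is a width-keyed cache. A minimal sketch (all names hypothetical; the wrap here is a naive character chunker, whereas ratatui's wrapping is word-aware):

```rust
/// Width-keyed wrap cache: re-wrap only when the source or the width changes.
struct WrapCache {
    width: u16,
    source_len: usize, // cheap change-detection proxy; a hash would be more robust
    rows: Vec<String>,
}

impl WrapCache {
    fn new() -> Self {
        Self { width: 0, source_len: 0, rows: Vec::new() }
    }

    /// Return wrapped rows, rebuilding only on a cache miss.
    fn rows(&mut self, source: &str, width: u16) -> &[String] {
        if self.width != width || self.source_len != source.len() {
            self.width = width;
            self.source_len = source.len();
            // Naive wrap: split each line into width-sized character chunks.
            // (Empty lines are dropped in this simplified version.)
            self.rows = source
                .lines()
                .flat_map(|l| {
                    let chars: Vec<char> = l.chars().collect();
                    chars
                        .chunks(width.max(1) as usize)
                        .map(|c| c.iter().collect::<String>())
                        .collect::<Vec<_>>()
                })
                .collect();
        }
        &self.rows
    }
}

fn main() {
    let mut cache = WrapCache::new();
    let text = "abcdefghij\nkl";
    assert_eq!(cache.rows(text, 4).join("|"), "abcd|efgh|ij|kl");
    // Same source and width: served from the cache, identical result.
    assert_eq!(cache.rows(text, 4).len(), 4);
    println!("ok");
}
```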

Fix 3 — Reduced Paragraph wrapping cost (src/ui/utils.rs)

This came free with Fix 2. Since Paragraph with Wrap computes word-wrapping for all lines passed to it, passing only the visible slice (~100-150 lines) instead of the entire document means wrapping cost dropped proportionally.


4. 🟡 No event batching/coalescing

When you hold a key down, macOS key repeat sends events at ~30-40 per second. Each event triggers the full loop: lock mutex → draw (stale!) → handle event → unlock. There is no coalescing — if 5 key events have queued up, each one individually triggers a full render of stale state before processing.

Fix: After receiving an event, drain any additional pending events from the channel before drawing. Something like:

let event = events.next()?; // blocking
let mut events_batch = vec![event];
while let Ok(event) = events.try_recv() {
    events_batch.push(event);
}
// handle all events, then draw once

Fix 4 — Event batching (src/main.rs + src/event/events.rs)

Added Events::try_next() (non-blocking receive). After processing the blocking event, the loop drains up to 20 additional pending events before drawing. This means rapid key-repeats (macOS sends ~30-40/sec) are coalesced into a single render pass instead of each triggering an individual draw.
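The drain-before-draw pattern can be sketched with std::sync::mpsc. This is only a sketch: the PR's actual wrapper is Events::try_next(), and next_batch here is a hypothetical name:

```rust
use std::sync::mpsc;

/// Block for one event, then drain up to `max_extra` already-queued events,
/// so a burst of key repeats is handled in one pass before the next draw.
fn next_batch<T>(rx: &mpsc::Receiver<T>, max_extra: usize) -> Vec<T> {
    let mut batch = Vec::new();
    if let Ok(first) = rx.recv() {
        batch.push(first);
        while batch.len() <= max_extra {
            match rx.try_recv() {
                Ok(ev) => batch.push(ev),
                Err(_) => break, // channel empty (or disconnected): stop draining
            }
        }
    }
    batch
}

fn main() {
    let (tx, rx) = mpsc::channel();
    // Simulate five queued key-repeat events.
    for i in 0..5 {
        tx.send(i).unwrap();
    }
    assert_eq!(next_batch(&rx, 20), vec![0, 1, 2, 3, 4]);
    println!("ok");
}
```

Capping the drain (20 here, matching the PR description) keeps the UI from starving the draw loop if events arrive faster than they can be handled.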


5. 🟢 Table row reconstruction on every frame

In draw_resource_table, every frame iterates all items, applies filtering, and constructs Row widgets:

    /// kdash/src/ui/utils.rs#L723-L740
    let rows: Vec<Row<'a>> = table_props
      .resource
      .items
      .iter()
      .enumerate()
      .filter_map(|(idx, c)| {
        let mapper = row_cell_mapper(c);
        if filter.is_empty() || filter_by_name(&filter, c) {
          Some((idx, mapper))
        } else {
          None
        }
      })
      .map(|(idx, row)| {
        if has_filter {
          filtered_indices.push(idx);
        }
        row
      })
      .collect();

This is typical for immediate-mode UIs and is fine for small lists, but with hundreds of pods/events, the per-frame cost adds up.

Fix 5 — Table row construction optimization (src/ui/utils.rs)

Two improvements in draw_resource_table:

  1. Filter before row construction: Previously row_cell_mapper was called for every item, even those rejected by the filter. Now the cheap filter_by_name check runs first.
  2. Viewport-aware row building: Only items in/near the visible viewport call the expensive row_cell_mapper. Off-screen rows use Row::default(). Total row count is preserved so ratatui's scroll/selection logic remains correct.
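The viewport-aware idea can be sketched as a pure function (names hypothetical; `expensive` stands in for row_cell_mapper, and strings stand in for Row widgets):

```rust
/// Build one entry per item, but only pay the expensive per-row cost for
/// indices near the visible viewport. Off-screen entries get a cheap
/// placeholder, so the total row count (and thus ratatui's scroll/selection
/// arithmetic) stays correct.
fn build_rows(
    n_items: usize,
    selected: usize,
    view_h: usize,
    expensive: impl Fn(usize) -> String,
) -> Vec<String> {
    // One screen of margin above the selection, two below.
    let lo = selected.saturating_sub(view_h);
    let hi = (selected + 2 * view_h).min(n_items);
    (0..n_items)
        .map(|i| if i >= lo && i < hi { expensive(i) } else { String::new() })
        .collect()
}

fn main() {
    let rows = build_rows(1000, 500, 30, |i| format!("row {i}"));
    assert_eq!(rows.len(), 1000);     // total count preserved
    assert_eq!(rows[500], "row 500"); // visible rows fully built
    assert_eq!(rows[0], "");          // off-screen rows are placeholders
    println!("ok");
}
```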

@sed-i sed-i marked this pull request as ready for review April 13, 2026 03:55
@sed-i (Contributor, Author) commented Apr 13, 2026

@deepu105 I tested this on arm64 over ssh, and there was a noticeable speedup, particularly when using a trackpad for scrolling.

@deepu105 (Contributor) commented Apr 15, 2026 via email

Copilot AI left a comment

Pull request overview

Improves UI responsiveness by reducing redraw work per frame and by batching input handling so state updates are reflected immediately in the next render.

Changes:

  • Reorders the UI loop to process events before drawing, drains pending events, and performs an initial draw on startup.
  • Reduces per-frame allocations by caching syntax-highlighted output and rendering only a window around the visible region.
  • Avoids expensive row mapping work for off-screen table rows and refactors resource title help-line construction (with new tests).

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.

Files changed:

  • src/ui/utils.rs — Adds highlight-cache helper, slices rendered YAML lines to a viewport window, virtualizes expensive row creation, and extracts help-line builder with tests.
  • src/main.rs — Adds process_event, draws once before the loop, processes/drains events before draw to reduce visual lag.
  • src/event/events.rs — Adds non-blocking try_next() for draining pending events and a unit test.


Comment thread src/main.rs
Comment on lines +340 to 344
    // Draw the UI layout AFTER processing events so the frame is up-to-date
    terminal.draw(|f| ui::draw(f, &mut app))?;

    // handle events
    match event {
      event::Event::Input(key_event) => {
        info!("Input event received: {:?}", key_event);
        // quit on CTRL + C
        let key = Key::from(key_event);

        if key == Key::Ctrl('c') {
          break;
        }
        // handle all other keys
        handlers::handle_key_events(key, key_event, &mut app).await
      }
      // handle mouse events
      event::Event::MouseInput(mouse) => handlers::handle_mouse_events(mouse, &mut app).await,
      // handle tick events
      event::Event::Tick => {
        app.on_tick(is_first_render).await;
      }
      // handle kubeconfig file changes (live sync)
      event::Event::KubeConfigChange => {
        info!("Kubeconfig change detected, reloading");
        app.dispatch(IoEvent::GetKubeConfig).await;
      }
    }

    is_first_render = false;
    let pending_shell_exec = app.take_pending_shell_exec();

Copilot AI Apr 23, 2026


is_first_render is set to false after the draw regardless of whether a Tick has occurred. Since App::on_tick(first_render) uses this flag to skip RefreshClient on the initial refresh, any early Input/MouseInput/KubeConfigChange before the first Tick will cause the first tick to run with first_render=false and dispatch IoEvent::RefreshClient/IoStreamEvent::RefreshClient unexpectedly (resetting app state). Consider only flipping this flag inside the Tick path (or introducing a separate has_ticked flag) so the first tick remains first_render=true even if other events arrive first.

Comment thread src/ui/utils.rs
Comment on lines +704 to +709
    // Subtract 2 for the top-border of the block.
    let view_h = area.height.saturating_sub(2) as usize;
    // Take a generous window around the visible region.
    let slice_start = offset.saturating_sub(view_h);
    let slice_end = total.min(offset + view_h * 3);
    let adjusted_offset = (offset - slice_start).min(u16::MAX as usize) as u16;

Copilot AI Apr 23, 2026


draw_yaml_block slices highlighted_lines using slice_start..slice_end derived from offset and total, but there is no guard that slice_start <= slice_end or that offset <= total. If highlighting yields fewer (or zero) cached lines than the scroll offset (e.g., highlight_line errors are filtered out, or line counting differs), this will panic at runtime when taking the slice. Clamp the effective offset to total.saturating_sub(1) and handle total == 0 by falling back to loading/empty rendering before slicing.

Suggested change

    -// Subtract 2 for the top-border of the block.
    -let view_h = area.height.saturating_sub(2) as usize;
    -// Take a generous window around the visible region.
    -let slice_start = offset.saturating_sub(view_h);
    -let slice_end = total.min(offset + view_h * 3);
    -let adjusted_offset = (offset - slice_start).min(u16::MAX as usize) as u16;
    +if total == 0 {
    +  loading(f, block, area, app.is_loading(), app.light_theme);
    +  return;
    +}
    +let effective_offset = offset.min(total.saturating_sub(1));
    +// Subtract 2 for the top-border of the block.
    +let view_h = area.height.saturating_sub(2) as usize;
    +// Take a generous window around the visible region.
    +let slice_start = effective_offset.saturating_sub(view_h);
    +let slice_end = total.min(effective_offset + view_h * 3);
    +let adjusted_offset = (effective_offset - slice_start).min(u16::MAX as usize) as u16;

Comment thread src/event/events.rs Outdated
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@deepu105 (Contributor) commented:
Hi @sed-i the change looks good. Could you address the issues raised by copilot here 🙏
