
feat: FrameBuffer diff rendering, input box, cursor fix #205

Merged
Sunrisepeak merged 40 commits into main from feat/agent-tui-framebuffer-inputbox
Mar 15, 2026

Conversation

@Sunrisepeak
Member

Summary

  • FrameBuffer diff rendering: render to off-screen buffer, only rewrite changed lines — eliminates full-screen flicker
  • Cursor positioning fix: track prev_cursor_line_ so cursor_up moves correct distance (was overshooting by old_count - cursor_line lines, corrupting scrollback)
  • Input box component: bordered input area (╭╮╰╯│─) replacing flat separators, amber borders for approval prompts, cursor hidden in approval mode
  • Agent system expansion: CancellationToken, ContextManager 3-level cache, TokenTracker, OutputBuffer, ResourceTools, manage_tree, multi-provider support, download progress with speed/ETA
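The diff-rendering idea in the first bullet can be sketched as follows (a minimal illustration, not the actual FrameBuffer API in this PR: the struct layout and escape sequences are assumptions). The renderer keeps the lines currently on screen and, per frame, emits cursor-move/clear/write sequences only for lines whose content changed:

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of diff-based rendering: compare the new frame
// against what is already on screen and rewrite only changed lines.
struct FrameBuffer {
    std::vector<std::string> prev;   // lines currently on screen
    std::string out;                 // ANSI bytes produced this frame

    // Render `next`, returning how many lines were actually rewritten.
    int render(const std::vector<std::string>& next) {
        int rewritten = 0;
        out.clear();
        for (size_t i = 0; i < next.size(); ++i) {
            if (i < prev.size() && prev[i] == next[i])
                continue;            // unchanged line: skip, no flicker
            // Move to row i+1 column 1, clear the line, write new content.
            out += "\x1b[" + std::to_string(i + 1) + ";1H\x1b[2K" + next[i];
            ++rewritten;
        }
        prev = next;
        return rewritten;
    }
};
```

An unchanged frame therefore emits zero writes, which is what keeps the input-box borders static during typing.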

Test plan

  • rm -rf build && xmake build — compiles clean
  • xmake run xlings_tests — 209 tests pass
  • Manual: cursor blinks correctly at > inside input box
  • Manual: input box borders stay static during typing (diff rendering)
  • Manual: no scrollback corruption during LLM streaming
  • Manual: turn completion prints to scrollback without jumping
  • Manual: terminal resize triggers correct full repaint
  • Manual: approval prompt shows amber-bordered box

🤖 Generated with Claude Code

Sunrisepeak and others added 30 commits March 12, 2026 08:59
- Add FrameBuffer off-screen line buffer for diff-based rendering:
  only rewrite changed lines instead of full clear+redraw each frame
- Fix cursor positioning bug: track prev_cursor_line_ so cursor_up
  moves the correct distance (was overshooting by old_count - cursor_line)
- Fix flush_to_scrollback and loop cleanup to use tracked cursor position
- Add bordered input box component (╭╮╰╯│─) replacing flat separators
- Input box uses amber border color during approval prompts
- Hide cursor when no cursor position set (approval mode)
- Add CancellationToken, utf8 module, platform RunCommand support
- Add ContextManager 3-level cache, TokenTracker, OutputBuffer
- Add ResourceTools (RunCommand/ViewOutput/SearchContent capabilities)
- Expand agent loop with manage_tree, multi-provider support
- Add download progress with speed/ETA in tree nodes
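The cursor fix in the list above amounts to this bookkeeping (the member name `prev_cursor_line_` is from the commit message; the surrounding class is an illustrative assumption). Before the fix, `cursor_up` moved by `old_count - cursor_line`, overshooting into scrollback; the fix is to remember exactly where the cursor ended after the previous frame:

```cpp
// Hypothetical sketch of the cursor-position tracking fix.
class Renderer {
    int prev_cursor_line_ = 0;  // line the cursor rested on after last frame
public:
    // Distance cursor_up must travel to return to the frame's top row.
    int cursor_up_distance() const { return prev_cursor_line_; }

    // After drawing, record the cursor's resting line for the next frame.
    void end_frame(int cursor_line) { prev_cursor_line_ = cursor_line; }
};
```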

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
When a turn completes and the tree is printed to scrollback, the root
node title (user's message) was duplicated — already shown as the
UserMsg block above. Now TurnTree prints only state icon + duration
on a summary line, then children directly, skipping the root title.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ameters

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…mplate

Extract llm_call_worker<Provider, ProviderConfig> template before
run_one_turn to unify the Anthropic/OpenAI worker-thread blocks, which
were identical except for their provider/config types.
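The unification described above can be sketched like this (provider types and the `call` signature are stand-ins, not the real llmapi interface): the two worker-thread blocks differed only in their provider/config types, so a function template collapses them into one:

```cpp
#include <string>

// Illustrative provider stand-ins (not the real Anthropic/OpenAI types).
struct AnthropicConfig { std::string model = "claude"; };
struct Anthropic {
    static std::string call(const AnthropicConfig& c, const std::string& p) {
        return c.model + ":" + p;
    }
};
struct OpenAIConfig { std::string model = "gpt"; };
struct OpenAI {
    static std::string call(const OpenAIConfig& c, const std::string& p) {
        return c.model + ":" + p;
    }
};

// Shared worker logic (retry, cancellation, streaming) lives here once,
// parameterized over the provider instead of being duplicated per branch.
template <typename Provider, typename ProviderConfig>
std::string llm_call_worker(const ProviderConfig& cfg, const std::string& prompt) {
    return Provider::call(cfg, prompt);
}
```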

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…t capabilities

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Create a MemoryStore per session (loaded from AgentFS) and build the
agent-specific capability registry with both memory and context tools,
shadowing the outer non-agent registry that lacks these extensions.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Bump dependency to llmapi 0.2.3 which adds cacheCreationTokens and
cacheReadTokens fields to Usage. Add cache_read_tokens/cache_write_tokens
to TurnResult, accumulate them in run_one_turn, and pass them through
to TokenTracker::record() so session cache stats are tracked correctly.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds MemorySummary struct to loop.cppm and extends build_system_prompt()
to accept memory entries, appending a Remembered Context section when
memories are present (capped at 20 with overflow notice). Updates the
call site in cli.cppm to build summaries from MemoryStore.all_entries().

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…in tree

Two bugs fixed:
1. complete_task() now calls mark_running_as() on the completed node's
   children, so Response/Thinking nodes under a finished task stop timing
2. Done Response nodes now show the first line of the model's reply as
   a "◆ summary" in the tree instead of being auto-hidden

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Introduce LuaSandbox (lua_engine.cppm) that lets the LLM generate Lua code
to orchestrate multiple capability calls in a single tool invocation. The
sandbox exposes pkg/sys/ver modules via upvalue trampolines into the
CapabilityRegistry, runs in a worker thread with debug.sethook for
timeout/cancel protection, and returns structured ExecutionLog JSON.

Wire execute_lua as a virtual tool in loop.cppm (same pattern as
manage_tree), inject Lua API docs into system prompt, and create the
sandbox instance in cli.cppm.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Remove leftover [tui] debug fprintf to stderr. Hide completed Response
nodes in tree rendering (same as Thinking/Detail) since the full reply
is already printed as AssistantText in scrollback — avoids duplicate
display.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…anges

When manage_tree creates subtasks, active_parent moves to a subtask node.
The Response node created earlier at root level was never closed because
the close logic only checked active_parent's children. Now also check
the root node to close any lingering Running Response/Thinking nodes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Update system prompt to emphasize that run_command and execute_lua
should only be used when no other built-in tool can accomplish the task.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace FrameBuffer/diff rendering with print-stream + 2-line fixed
area model to eliminate root duplication bug and reduce complexity.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add initialization step, multi-line text handling, completions/approval,
download progress, flash messages, manage_tree sync, terminal resize.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace tinytui print-stream + 2-line fixed area with ftxui fullscreen
layout: scrollable history area with recursive task tree, fixed bottom
input box + status bar, mouse wheel scrolling, and 200ms timer for
real-time elapsed time display.

- tui.cppm: replace flat TaskEntry/TaskList with recursive TreeNode/TurnNode
- ftxui_tui.cppm: new AgentScreen class with render_tree_node (Unicode
  connectors ├─└─│), render_turn, render_status_bar, mouse wheel support
- loop.cppm: adapt handle_manage_tree for recursive tree with parent_id
- cli.cppm: migrate from tinytui::Screen to AgentScreen, callbacks now
  operate on tree structure instead of print_line

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Implement render_input_box() that splits editor content at cursor
position, renders the cursor character with inverted style, and wraps
the input line between two separator lines for visual distinction.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…nage_tree

Replace the LLM-driven task tree (manage_tree virtual tool) with a
system-driven recursive model where LLM decides execute-or-decompose
via the `decide` virtual tool, and the system handles DFS traversal,
backtracking, and failure recovery.

Key changes:
- BehaviorNode: kind→type system (TypeRoot/Decompose/Execute/DirectExec/
  ToolCall/Response), add result_summary/tool/tool_args fields
- ABehaviorTree: replace plan management (add_plan/start_plan/complete_plan)
  with system-driven API (set_root/add_child/set_state/set_result)
- loop.cppm: add run_task_tree entry point with process_node (recursive DFS),
  ask_decision, run_execute, run_tool_loop (extracted inner loop)
- DirectExec: LLM specifies tool+args at decompose time, system calls
  directly with zero LLM overhead; tool name validated against registry
- Failure recovery: children continue on failure, system adds LLM
  verification node to review results and decide final outcome
- Cross-turn context: shared conversation injected at root level (depth 0)
  only, nested nodes use sibling_results + ancestor_path
- ToolBridge: check exitCode in results, set isError when non-zero
- Context budget: use per-node input tokens instead of stale session tracker
- ftxui_tui: ⚙ icon for DirectExec nodes, type-based rendering
- prompt_builder: add build_scoped() for per-node scoped prompts

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…lures

- Verify node now uses a tool-less single LLM call (just summarize,
  no fix attempts). Prompt explicitly says "Do NOT attempt to fix".
- Parent node state is Failed when any child failed, not always Done.
- Previously verify ran run_execute with full tool access, causing it
  to try fixing failures instead of reviewing them, and always returned
  Done even when summary reported failures.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
DirectExec nodes now display the actual tool call like:
  ⚙ search_packages("nodejs")  41ms
  ✗ use_version("nodejs", "18")  0ms
instead of the LLM-generated task title. Makes it clear which
nodes are system-direct atomic operations vs LLM-driven tasks.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace the 6-type node system with just 2 types:
- TypeAtom(0): has tool+args, system direct execute, zero LLM
- TypePlan(1): needs LLM, ask_decision decomposes into children

Key changes:
- Delete TypeRoot/TypeDecompose/TypeExecute/TypeToolCall/TypeResponse
- Delete run_tool_loop, run_execute, ToolLoopConfig, NodeResult
- process_node: Atom branch (direct tool call) + Plan branch
  (ask_decision → create children → DFS → re-plan if failures)
- Re-plan: after child failures, ask_decision again with results
  context. LLM can accept (done) or add new subtasks (decompose).
  Max MAX_REPLAN=3 rounds.
- decide protocol: "decompose" (split) or "done" (accept/finish)
- Atom approval: inline policy check before bridge.execute
- execute_lua: handled as special Atom in process_node
- TUI: Atom shows ⚙ tool(args), Plan shows ○/⟳/✓/✗ title
- Remove streaming/think_filter (no tool-use loop = no streaming)

Tree-shaped streaming reasoning: each ask_decision sees scoped
sibling_results, dynamic node addition = streaming reasoning in
tree form. No capability loss vs flat tool-use loop.
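The two-type DFS above can be sketched in a few lines (all names and signatures are illustrative, not the real loop.cppm API; re-plan rounds and approval checks are omitted): an Atom executes its tool directly, a Plan asks the LLM to decompose and then recurses depth-first, continuing past child failures:

```cpp
#include <functional>
#include <string>
#include <vector>

enum NodeType { TypeAtom, TypePlan };

struct Node {
    NodeType type;
    std::string tool;            // Atom: tool to call directly
    std::vector<Node> children;  // Plan: filled by decomposition
};

using ToolFn   = std::function<bool(const std::string&)>;
using DecideFn = std::function<std::vector<Node>(const Node&)>;

// Hypothetical sketch of process_node: Atom branch (direct tool call,
// zero LLM) and Plan branch (decompose via LLM, then DFS over children).
bool process_node(Node& n, const ToolFn& run_tool, const DecideFn& decide) {
    if (n.type == TypeAtom)
        return run_tool(n.tool);
    n.children = decide(n);                              // "decompose" / "done"
    bool ok = true;
    for (auto& c : n.children)
        ok = process_node(c, run_tool, decide) && ok;    // continue on failure
    return ok;                                           // re-plan omitted here
}
```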

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Decide prompt now explicitly instructs LLM to group related operations
  under Plan subtasks instead of flat Atom lists. Includes example showing
  correct decomposition pattern.
- Re-plan's new subtasks are wrapped under a visible "re-plan #N" Plan
  sub-node in the tree, making decision points visible in TUI.
- Re-plan "done" decision adds a visible marker node showing the summary.

Before: flat list of 14 Atom nodes at root level, re-plan invisible
After: hierarchical Plan groups + visible re-plan decision nodes

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Three fixes:
1. Available Tools section now shows inputSchema JSON so LLM knows
   exact parameter names (e.g. "target" not "name" for remove_package)
2. Decide prompt emphasizes checking sibling results before acting -
   if a sibling already shows a package is not installed, the Plan
   node should return "done" immediately instead of attempting removal
3. Re-plan prompt reframed: evaluate whether the PARENT TASK is
   complete, not whether to retry. "remove failed because package
   wasn't installed = task done" → prefer "done" over "decompose"

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…hecks

1. synthesize_children now includes Atom node results (tool: summary)
   instead of discarding them. This fixes sibling_results being useless
   "2/2 tasks completed" instead of actual tool output.
2. Decide prompt now says "Do NOT add check-if-installed steps before
   operations" and shows a direct Atom example with correct parameter
   names, encouraging the LLM to just act and let re-plan handle failures.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Comprehensive documentation covering:
- Two-type node system (Atom + Plan)
- process_node recursive DFS flow
- Re-plan mechanism with visible markers
- Context passing (conversation, sibling_results, ancestor_path)
- ABehaviorTree thread-safe API
- TUI rendering (icons, colors, tree structure)
- Full data flow diagram

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
1. Capture LLM reasoning text (before decide tool call) in Decision
   struct. Store it in Plan node's detail field so it's visible in
   the tree as context for what the LLM decided and why.
2. TUI: show Plan node detail as a dim secondary line when the node
   is terminal and has no children (leaf Plan with reasoning).
3. Final reply (◆): add a summary LLM call after tree completes to
   generate a human-readable response instead of mechanical
   "3/3 tasks completed" synthesis. Uses on_stream_chunk for
   real-time streaming to TUI.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
1. ask_decision now streams LLM output via on_stream_chunk, so the
   TUI shows real-time "thinking..." text in the ◆ reply area while
   the LLM reasons about how to decompose.
2. After decompose, a ◇ thinking node is inserted as the first child
   of each Plan node, showing the LLM's decision reasoning (truncated
   to ~80 chars, single line). Uses cyan color to distinguish from
   Atom (⚙) and Plan (○) nodes.
3. on_stream_chunk restored in cli.cppm TreeConfig for streaming.

TUI display:
  ├─ ◇ Need to search version info first, then install the second-newest…  ← cyan, thinking
  ├─ ⚙ search_packages("d2x")                                              ← green, Atom
  └─ ○ Install the second-newest version                                   ← amber, Plan

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Sunrisepeak and others added 10 commits March 15, 2026 00:50
LLMs often don't output text before tool calls, so response.text()
was empty and thinking nodes never appeared. Fix: add "thinking"
as a required field in the decide tool schema. LLM must provide its
reasoning inside the tool args, which is reliably parsed from JSON.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Critical context fix: execute_pending_children now inherits the
parent's sibling_results (ancestor context) so nested Plan nodes
can see results from upper levels. Previously each nesting level
lost all parent context, causing the LLM to re-search for information
that was already found at a higher level.

Also:
- Clear streaming text after each ask_decision to prevent accumulation
- Increase thinking node display from 80 to 120 chars
- Filter out __thinking__ nodes from sibling_results

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Code-level execution flow covering:
- cli.cppm → run_task_tree → process_node call chain
- Atom execution path (approval, execute_lua, bridge.execute)
- Plan decision path (ask_decision, scoped prompt assembly)
- Re-plan loop with visible marker nodes
- Context data flow (conversation injection, sibling inheritance)
- TUI update timing sequence (worker ↔ main thread)
- Disconnected components inventory (ContextManager not wired in)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Proposes reducing 19 modules (4499 lines) to 11 modules (~3100 lines):
- Delete 8 disconnected modules (context_manager, prompt_builder,
  resource_cache, resource_tools, package_tools, mcp_client,
  output_buffer, token_tracker) totaling 1370 lines
- Replace ContextManager 3-level cache with simple conversation
  auto-compact in run_task_tree
- Merge token tracking into TreeResult
- Compress Atom result_summary from raw JSON

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Remove 10 obsolete agent modules (lua_engine, mcp_client, output_buffer,
  package_tools, prompt_builder, resource_cache, resource_tools, llm_config,
  ftxui_tui, old tui) — replaced by llm, prompt, runtime, tui/screen, tui/state
- Add thinking nodes to behavior tree for LLM reasoning display
- Unify quiet mode: replace separate exec_quiet/log::set_quiet with single
  platform::set_tui_mode() flag checked by exec(), log, and download renderer
- Add agent architecture docs and plans

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Agent TUI: show real-time download progress on running tool nodes (↓ done/total pct%)
- Fix download progress bar jumping 0→100% when some packages have unknown sizes
  (fall back to count-based progress when byte sizes unavailable)
- Unify downloads: use tinyhttps streaming for all paths (remove curl subprocess)
- Fix CLI progress bar leaking into agent TUI on Ctrl+C (cleanup order race)
- Fix tuiThread writing ANSI cursor codes to stdout in TUI mode
- Guard dispatch_data_event against TUI mode for download_progress events
- Use tinyhttps download_to_file() for incremental progress callbacks
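The 0→100% fix in the second bullet reduces to this selection logic (struct and function names are illustrative): when any server omits a size, byte-based percentages are meaningless, so fall back to counting completed files:

```cpp
#include <cstdint>

// Hypothetical aggregate download state for one install operation.
struct DownloadSet {
    int files_done = 0, files_total = 0;
    std::uint64_t bytes_done = 0, bytes_total = 0;
    bool sizes_known = true;  // false if any file lacked a Content-Length
};

// Byte-based progress when sizes are known; count-based fallback otherwise.
int progress_pct(const DownloadSet& d) {
    if (d.sizes_known && d.bytes_total > 0)
        return static_cast<int>(d.bytes_done * 100 / d.bytes_total);
    if (d.files_total > 0)
        return d.files_done * 100 / d.files_total;
    return 0;
}
```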

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Show each downloading file as a child node with aligned progress bar
  (█ filled cyan, ░ unfilled dark) under the install tool node
- Install node shows total progress (↓ pct%)
- Pass isCancelled through tinyhttps for responsive ESC abort during download
- Clean up download_files on tool completion

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…vements

- Download files are real BehaviorNode (TypeDownload) with progress bars
  that persist in the tree after completion
- Fix node ID collision: use IdAllocator for download nodes
- ESC interrupts downloads (pause → cancel downloader via isCancelled)
- Show "interrupted by ESC" in red with hint text
- Ctrl+C × 3 exit: ForceHandleCtrlC(false), counter hint in status bar,
  auto-clear after 1.5s timeout, only character input resets count
- Tree connectors (├─ └─ │) styled with border_color
- Paste debounce: suppress submit when Return arrives <5ms after chars
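The paste-debounce rule in the last bullet can be sketched as (the 5 ms threshold is from the commit message; the time source is abstracted as a plain millisecond counter for testability): a Return arriving less than 5 ms after the previous character is treated as part of a paste, not a submit:

```cpp
#include <cstdint>

// Hypothetical debounce helper: track when the last character arrived
// and suppress submit for a Return that follows it too quickly.
struct PasteDebounce {
    std::int64_t last_char_ms = -1'000'000;  // long before any real input

    void on_char(std::int64_t now_ms) { last_char_ms = now_ms; }

    // Return key handling: submit only if >=5ms since the last character.
    bool should_submit(std::int64_t now_ms) const {
        return now_ms - last_char_ms >= 5;
    }
};
```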

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add platform::Icon with ASCII fallback for Windows (Unicode on Linux/macOS)
- Unify icon references: theme.cppm and screen.cppm use platform::Icon
- Optimize tool descriptions: clarify parameter formats (plain name vs
  namespace:name), add fuzzy match hint to search_packages
- Add download speed to agent TUI total progress (↓ 45% 3.2 MB/s)
- Fix download nodes: set Done state on creation for already-finished files
- Fix slash command completion: Enter auto-fills selected completion
- Paste debounce: prevent multi-line paste from triggering submit

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@Sunrisepeak Sunrisepeak merged commit cbbbedb into main Mar 15, 2026
3 checks passed
@Sunrisepeak Sunrisepeak deleted the feat/agent-tui-framebuffer-inputbox branch March 15, 2026 04:52