
feat(ui): add live status updates during agent execution#383

Open
0xhis wants to merge 2 commits into usestrix:main from 0xhis:feat/tui-live-status

Conversation


0xhis commented Mar 21, 2026

Summary

Add real-time status messages to the TUI showing what each agent is doing at any given moment. Previously, agents showed only an "Initializing" label or a generic sweep animation.

Changes

  • Add status messages during key lifecycle points: "Compressing memory...", "Waiting for LLM provider...", "Generating response...", "Executing {tools}...", "Setting up sandbox environment..."
  • Add update_agent_system_message() to Tracer for status propagation
  • Fix Text span out-of-bounds crash when merging Rich Text renderables
  • Render thinking blocks in chat history from metadata
  • Fix indented thought display in ThinkRenderer for multi-line thoughts

Files Changed

  • strix/agents/base_agent.py (+21)
  • strix/interface/tool_components/thinking_renderer.py (+2/-1)
  • strix/interface/tui.py (+63/-13)
  • strix/llm/llm.py (+19/-2)
  • strix/telemetry/tracer.py (+6)

Split from #328.
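The "Executing {tools}..." label described above truncates to two tool names plus an overflow suffix. A minimal sketch of that formatting, assuming a list of tool names is available; `format_tool_status` and the empty-list fallback text are illustrative, not the actual strix API:

```python
# Illustrative sketch of the "Executing {tool1}, {tool2} +N more..." status
# label. The helper name and the fallback string are assumptions for this
# example, not code from the PR.

def format_tool_status(tool_names: list[str], max_shown: int = 2) -> str:
    """Build a status label, truncating to the first two tool names."""
    if not tool_names:
        return "Executing tools..."  # safe fallback when no names are extracted
    shown = ", ".join(tool_names[:max_shown])
    overflow = len(tool_names) - max_shown
    if overflow > 0:
        return f"Executing {shown} +{overflow} more..."
    return f"Executing {shown}..."
```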

0xhis marked this pull request as ready for review March 21, 2026 08:03
Copilot AI review requested due to automatic review settings March 21, 2026 08:03

greptile-apps bot commented Mar 21, 2026

Greptile Summary

This PR adds real-time status messages to the TUI during agent execution, replacing the generic "Initializing" state with granular labels ("Compressing memory…", "Waiting for LLM provider…", "Generating response…", "Executing {tools}…", "Setting up sandbox environment…"). It also fixes two pre-existing bugs: a Rich Text span out-of-bounds crash when merging renderables, and a missing indentation fix for multi-line thoughts in ThinkRenderer. Additionally, thinking blocks are now rendered in chat history from message metadata.

  • Status propagation: Tracer.update_agent_system_message() is a clean, minimal addition that follows existing patterns; status is guarded against non-existent agents.
  • Span sanitization (_sanitize_text_spans): Correctly clamps span start/end to plain_len before re-applying styles, preventing the crash. Applied at both the per-item and final-merge levels.
  • Thinking blocks in chat history: _render_chat_content now extracts thinking_blocks from metadata and renders them via ThinkRenderer before the main message content.
  • Bug — interrupted path drops thinking blocks: If a message has thinking_blocks in metadata and metadata["interrupted"] == True, the early return at the interrupted branch bypasses the renderables list, silently dropping the thought content. The fix is to spread renderables into the _merge_renderables call there.
  • Defensive iteration fix: list(self.tracer.agents.values()) prevents a potential RuntimeError if the agents dict is mutated during the animation loop.
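The clamping idea behind `_sanitize_text_spans` can be sketched roughly as follows. The real code operates on `rich.text.Span` objects; plain `(start, end, style)` tuples stand in here so the example stays dependency-free, and the function name mirrors but is not the actual implementation:

```python
# Sketch of span clamping: every span is clipped to the plain-text length
# before styles are re-applied, so no span can index past the text.

def sanitize_spans(
    spans: list[tuple[int, int, str]], plain_len: int
) -> list[tuple[int, int, str]]:
    """Clamp each (start, end, style) span to [0, plain_len], dropping
    spans that end up empty or entirely out of bounds."""
    sanitized = []
    for start, end, style in spans:
        start = max(0, min(start, plain_len))
        end = max(start, min(end, plain_len))
        if end > start:
            sanitized.append((start, end, style))
    return sanitized
```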

Confidence Score: 4/5

  • Safe to merge after the one-line fix to include thinking blocks in the interrupted render path.
  • All five files are straightforward and well-contained. The only concrete bug is the thinking-block loss in the interrupted branch of _render_chat_content, which requires a single-line change. Everything else — span sanitization, status message propagation, ThinkRenderer indentation, and the defensive list() copy — is correct.
  • strix/interface/tui.py — specifically the _render_chat_content interrupted branch (line 1709).

Important Files Changed

Filename Overview
strix/agents/base_agent.py Adds status messages at key lifecycle points (sandbox setup, thinking, tool execution, processing). The get_global_tracer() calls are guarded, tool-name extraction has a safe fallback, and truncation to 2 names with an overflow suffix is a sensible UX choice.
strix/interface/tool_components/thinking_renderer.py Minimal fix to indent multi-line thoughts by joining on "\n ". Correct and safe.
strix/interface/tui.py Adds span sanitization to fix out-of-bounds crash, propagates system_message to the status bar, renders thinking blocks in chat history, and converts dict.values() to list() for safe iteration. One logic bug: thinking blocks collected in renderables are silently dropped when the interrupted early-return path fires.
strix/llm/llm.py Adds granular "Compressing memory…" → "Waiting for LLM provider…" → "Generating response…" status messages. The first_chunk_received flag correctly gates the update to the first streaming chunk, and tracer is passed as a plain argument rather than captured in a closure, which is clean.
strix/telemetry/tracer.py Adds system_message field initialised to "" and update_agent_system_message() method. Simple, well-guarded, and consistent with the existing update_agent_status pattern.
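A minimal sketch of the `Tracer` addition described above, assuming per-agent state is stored as a dict keyed by agent id; the real class carries more fields, and only the `system_message` pattern is mirrored here:

```python
# Sketch of the update_agent_system_message() pattern: a guarded setter
# that ignores unknown agents, consistent with update_agent_status.
# The dict-based agent state is an assumption for this example.

class Tracer:
    def __init__(self) -> None:
        self.agents: dict[str, dict] = {}

    def update_agent_system_message(self, agent_id: str, message: str) -> None:
        # Guard against non-existent agents rather than raising.
        agent = self.agents.get(agent_id)
        if agent is None:
            return
        agent["system_message"] = message
```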
Last reviewed commit: "feat(ui): add live s..."

Comment on lines 1709 to 1715:

```
@@ -1669,7 +1714,18 @@ def _render_chat_content(self, msg_data: dict[str, Any]) -> Any:
            interrupted_text.append("Interrupted by user", style="yellow dim")
            return self._merge_renderables([streaming_result, interrupted_text])
```

P1 — Thinking blocks silently dropped on interrupted messages

When a message has both `thinking_blocks` in its metadata and `metadata["interrupted"] == True`, the thinking block renderables collected in `renderables` are never included in the final output — the early `return` bypasses them entirely. This is a regression introduced by adding the thinking block logic above the `interrupted` check.

```suggestion
        if metadata.get("interrupted"):
            streaming_result = self._render_streaming_content(content)
            interrupted_text = Text()
            interrupted_text.append("\n")
            interrupted_text.append("⚠ ", style="yellow")
            interrupted_text.append("Interrupted by user", style="yellow dim")
            return self._merge_renderables([*renderables, streaming_result, interrupted_text])
```

Copilot AI left a comment


Pull request overview

Adds real-time agent “what’s happening now” status messages to the TUI during execution, and hardens Rich Text merging to avoid span range crashes while also improving display of model “thinking” blocks.

Changes:

  • Propagate per-agent live status/system messages via Tracer.update_agent_system_message() and display them in the TUI status line.
  • Fix Rich Text span out-of-bounds issues by sanitizing spans when merging/embedding renderables.
  • Improve chat rendering by supporting “thinking blocks” (and fixing multi-line thought indentation).

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 2 comments.

Show a summary per file
File Description
strix/agents/base_agent.py Emits additional lifecycle system messages (sandbox setup, thinking, tool execution, response processing).
strix/interface/tool_components/thinking_renderer.py Adjusts thought rendering to indent multi-line thoughts.
strix/interface/tui.py Displays live system messages, sanitizes Text spans to prevent crashes, and renders thinking blocks in history.
strix/llm/llm.py Emits LLM lifecycle system messages (memory compression / provider wait / first token).
strix/telemetry/tracer.py Stores system_message per agent and exposes an update method for it.



```python
if thought:
    text.append(thought, style="italic dim")
    indented_thought = "\n ".join(thought.split("\n"))
```
Copilot AI Mar 21, 2026

thought.split("\n") won’t handle Windows newlines (\r\n) cleanly and can leave stray \r characters in the output. Using thought.splitlines() would normalize newline handling and match patterns used elsewhere in the interface renderers.

Suggested change:

```diff
-indented_thought = "\n ".join(thought.split("\n"))
+indented_thought = "\n ".join(thought.splitlines())
```

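The difference the reviewer points out is easy to verify; the two-space indent in the join string is illustrative:

```python
# split("\n") leaves the \r from a Windows newline attached to the line;
# splitlines() strips the full \r\n sequence.
thought = "first line\r\nsecond line"

assert "\n  ".join(thought.split("\n")) == "first line\r\n  second line"  # stray \r
assert "\n  ".join(thought.splitlines()) == "first line\n  second line"   # normalized
```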
Comment on lines +1696 to +1699
if "thinking_blocks" in metadata and metadata["thinking_blocks"]:
from strix.interface.tool_components.thinking_renderer import ThinkRenderer

for block in metadata["thinking_blocks"]:
Copilot AI Mar 21, 2026

This renderer expects thinking_blocks under msg_data["metadata"], but Tracer.log_chat_message(...) is typically called without metadata for normal assistant responses (e.g., BaseAgent logs assistant messages without attaching thinking_blocks). That means these blocks likely won’t render in practice. Consider propagating thinking_blocks into tracer chat message metadata when logging assistant messages, or adjust the TUI to also read thinking_blocks from the message root if that’s where they’re stored elsewhere.

Suggested change:

```diff
-if "thinking_blocks" in metadata and metadata["thinking_blocks"]:
-    from strix.interface.tool_components.thinking_renderer import ThinkRenderer
-    for block in metadata["thinking_blocks"]:
+# Prefer thinking_blocks from metadata, but fall back to root-level key
+thinking_blocks = metadata.get("thinking_blocks") or msg_data.get("thinking_blocks")
+if thinking_blocks:
+    from strix.interface.tool_components.thinking_renderer import ThinkRenderer
+    for block in thinking_blocks:
```

Add real-time status messages to the TUI showing what each agent is
doing at any given moment.

Status messages shown:
- 'Compressing memory...' during conversation history preparation
- 'Waiting for LLM provider...' during API call setup
- 'Generating response...' after first chunk received
- 'Executing {tool1}, {tool2} +N more...' during tool execution
- 'Setting up sandbox environment...' during sandbox init

Also renders thinking blocks in chat history from metadata and fixes
indented thought display for multi-line thoughts in ThinkRenderer.
0xhis force-pushed the feat/tui-live-status branch from 62677ca to b9474d5, March 21, 2026 08:15
0xhis force-pushed the feat/tui-live-status branch from 2533d7b to 7a4c008, March 21, 2026 08:31