Merged

16 commits
dc1318d
Fix Agent UI docs: correct CLI commands, API method names, and missin…
kovtcharov Mar 18, 2026
d826a93
Agent UI polish: refined typography, glassmorphism styling, and eval …
kovtcharov Mar 18, 2026
0788c8b
Agent UI: terminal-style animations with pixelated red cursor
kovtcharov Mar 18, 2026
eeb0283
Fix Black formatting in file_tools.py
kovtcharov Mar 18, 2026
e3f7396
Agent UI polish: streaming transitions, design consistency, and final…
kovtcharov Mar 19, 2026
1306665
Fix broken hardware query: add Windows/Linux system info commands to …
kovtcharov Mar 19, 2026
8fd61d0
Advanced UI animations: modal exits, delete transitions, and session …
kovtcharov Mar 19, 2026
4062772
Update default model to Qwen3.5-35B-A3B and improve network query hints
kovtcharov Mar 19, 2026
42db71e
Fix session list disappearing from sidebar during backend glitches
kovtcharov Mar 19, 2026
0243069
Fix false positive LLM health check banner under heavy load
kovtcharov Mar 19, 2026
b203fa4
Agent UI: thinking display, Lemonade stats, model override, security …
kovtcharov Mar 19, 2026
e17bf72
Fix thinking display: single cursor, no flash, smoother animations
kovtcharov Mar 19, 2026
c994caf
Remove dead .msg-entering CSS, fix thinking indicator light theme
kovtcharov Mar 19, 2026
66c6628
Fix unit test: update default model assertion to Qwen3.5-35B-A3B-GGUF
kovtcharov Mar 19, 2026
94d6fda
Fix SSE handler tests: start_progress emits status, not thinking
kovtcharov Mar 19, 2026
37f9672
Stable thinking toolbar: no visual changes on state transitions
kovtcharov Mar 19, 2026
17 changes: 16 additions & 1 deletion docs/guides/agent-ui.mdx
@@ -10,6 +10,17 @@ GAIA Agent UI is a desktop interface for running AI agents **100% locally** on y
**Ready to install?** See the [Quickstart](/quickstart#agent-ui-fastest) for installation instructions.
</Info>

<Warning>
**Tested Configuration:** The Agent UI has been tested exclusively on **AMD Ryzen AI MAX+ 395** processors running the **Qwen3-Coder-30B-A3B-Instruct-GGUF** model via Lemonade Server. Other hardware or model combinations may work but are not officially verified.

If you encounter issues on a different configuration, please [open a GitHub issue](https://github.com/amd/gaia/issues/new) and include:
- Your processor model (e.g., Ryzen AI 9 HX 370, Ryzen AI MAX+ 395)
- RAM and available memory
- The LLM model you are using
- Operating system and version
- Steps to reproduce the issue
</Warning>
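Most of the details requested above can be collected from a terminal. A hedged sketch for Linux (the exact commands are illustrative; on Windows, `systeminfo` covers most of the same ground):

```shell
# Collect the details requested in a GAIA issue report (Linux shown).
uname -srm                                    # OS name, kernel, architecture
grep -m1 'model name' /proc/cpuinfo || true   # processor model (x86 Linux)
free -h | awk '/^Mem:/{print $2" total, "$7" available"}'   # RAM
```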

---

## What You Can Do
@@ -74,7 +85,11 @@ See the [Agent UI MCP Server guide](/guides/mcp/agent-ui) for setup instructions

<Accordion title="Port 4200 already in use">
```bash
-gaia --ui --ui-port 8080
+# npm CLI
+gaia-ui --port 8080
+
+# Python CLI
+gaia --ui-port 8080
```
</Accordion>

1,414 changes: 1,414 additions & 0 deletions docs/plans/agent-ui-eval-benchmark.md

Large diffs are not rendered by default.

22 changes: 14 additions & 8 deletions docs/sdk/sdks/agent-ui.mdx
@@ -19,6 +19,10 @@ from gaia.ui.models import SystemStatus, ChatRequest, SessionResponse, DocumentR

**See also:** [User Guide](/guides/agent-ui) | [Agent SDK](/sdk/sdks/chat) | [API Specification](/spec/agent-ui-server)

<Warning>
**Tested Configuration:** The Agent UI has been tested on **AMD Ryzen AI MAX+ 395** with **Qwen3-Coder-30B-A3B-Instruct-GGUF**. Other configurations are not officially verified. See the [User Guide](/guides/agent-ui) for full details and how to report issues on other hardware.
</Warning>

---

## Overview
@@ -346,6 +350,7 @@ class MessageResponse(BaseModel):
content: str
created_at: str
rag_sources: Optional[List[SourceInfo]] = None
+agent_steps: Optional[List[AgentStepResponse]] = None

class MessageListResponse(BaseModel):
messages: List[MessageResponse]
@@ -365,6 +370,7 @@ class DocumentResponse(BaseModel):
indexed_at: str
last_accessed_at: Optional[str] = None
sessions_using: int = 0
+indexing_status: str = "complete"  # pending | indexing | complete | failed | cancelled | missing

class DocumentListResponse(BaseModel):
documents: List[DocumentResponse]
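The new `indexing_status` field implies clients should poll until a terminal state. A minimal editorial sketch of that pattern — the helper name and the simulated backend are assumptions, not part of the GAIA SDK:

```python
# Terminal states mirror the enum comment on indexing_status:
# pending | indexing | complete | failed | cancelled | missing
TERMINAL_STATES = {"complete", "failed", "cancelled", "missing"}

def wait_for_indexing(fetch_status, max_polls=50):
    """Poll fetch_status() until the document reaches a terminal status."""
    for _ in range(max_polls):
        status = fetch_status()
        if status in TERMINAL_STATES:
            return status
    raise TimeoutError("document never reached a terminal indexing status")

# Simulated backend responses: pending -> indexing -> complete
states = iter(["pending", "indexing", "complete"])
print(wait_for_indexing(lambda: next(states)))  # complete
```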
@@ -859,8 +865,8 @@ from gaia.rag.sdk import RAGSDK, RAGConfig

config = RAGConfig()
rag = RAGSDK(config)
-result = rag.index_file(filepath)
-chunk_count = result.get("chunk_count", 0)
+result = rag.index_document(filepath)
+chunk_count = result.get("num_chunks", 0)
```

---
@@ -873,16 +879,16 @@ GAIA Agent UI is also available as an npm package for quick installation:
npm install -g @amd-gaia/agent-ui
```

-This provides the `gaia` CLI command:
+This provides the `gaia-ui` CLI command:

```bash
-gaia # Start Python backend + open browser
-gaia --serve # Serve frontend only (Node.js static server)
-gaia --port 8080 # Custom port
-gaia --version # Show version
+gaia-ui # Start Python backend + open browser
+gaia-ui --serve # Serve frontend only (Node.js static server)
+gaia-ui --port 8080 # Custom port
+gaia-ui --version # Show version
```

-On first run, `gaia` automatically installs the Python backend (uv, Python 3.12, amd-gaia) if not already present.
+On first run, `gaia-ui` automatically installs the Python backend (uv, Python 3.12, amd-gaia) if not already present. On subsequent runs, it auto-updates if the version doesn't match.
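The auto-update behavior described here boils down to one check. A hedged sketch of that decision — the function name and version strings are assumptions, not the npm package's actual implementation:

```python
# Reinstall the Python backend when its version doesn't match what the
# npm frontend expects (or when nothing is installed yet).
def needs_update(installed, expected):
    """True on first run (nothing installed) or on a version mismatch."""
    return installed is None or installed != expected

print(needs_update(None, "1.4.0"))     # True  -> first run: install backend
print(needs_update("1.3.2", "1.4.0"))  # True  -> mismatch: auto-update
print(needs_update("1.4.0", "1.4.0"))  # False -> up to date
```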

### Package Contents

18 changes: 14 additions & 4 deletions src/gaia/agents/base/agent.py
@@ -1933,7 +1933,9 @@ def process_query(

# Return error response
final_answer = (
-f"Unable to complete task due to LLM server error: {str(e)}"
+f"I'm having trouble reaching the language model right now. "
+f"Please make sure Lemonade Server is running.\n\n"
+f"*Technical details: {str(e)}*"
)
break
except Exception as e:
@@ -1950,7 +1952,9 @@ def process_query(

# Return error response
final_answer = (
-f"Unable to complete task due to streaming error: {str(e)}"
+f"Sorry, I ran into a problem while processing your request. "
+f"This might be a temporary issue — try again in a moment.\n\n"
+f"*Technical details: {str(e)}*"
)
break
else:
@@ -2004,7 +2008,9 @@ def process_query(

# Return error response
final_answer = (
-f"Unable to complete task due to LLM server error: {str(e)}"
+f"I'm having trouble reaching the language model right now. "
+f"Please make sure Lemonade Server is running.\n\n"
+f"*Technical details: {str(e)}*"
)
break
except Exception as e:
@@ -2019,7 +2025,11 @@
)

# Return error response
-final_answer = f"Unable to complete task due to error: {str(e)}"
+final_answer = (
+f"Sorry, I ran into an unexpected problem. "
+f"This might be a temporary issue — try again in a moment.\n\n"
+f"*Technical details: {str(e)}*"
+)
break

# Stop the progress indicator
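All four hunks in this file apply the same pattern: a plain-language first line for the user, with the raw exception relegated to an italicized Markdown footnote. A minimal sketch of that pattern — the helper is illustrative, not part of the actual agent code:

```python
def friendly_error(user_message, exc):
    """Combine a plain-language message with the raw exception,
    using the '*Technical details: ...*' format from the hunks above."""
    return f"{user_message}\n\n*Technical details: {exc}*"

msg = friendly_error(
    "I'm having trouble reaching the language model right now. "
    "Please make sure Lemonade Server is running.",
    ConnectionError("connection refused"),
)
print(msg.splitlines()[-1])  # *Technical details: connection refused*
```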