
fix(#980-bug8): chat/send warns when no AI persona exists to listen #994

Merged
joelteply merged 1 commit into canary from mac/980-bug8-no-listener-warning on May 2, 2026

Conversation

@joelteply
Contributor

Carl's #980 Bug 8: chat/send returned success even when zero AI personas exist → user typed a message, got nothing back, no signal. Now the success message warns clearly when there is no listener.

🤖 Generated with Claude Code

Carl's #980 Bug 8: chat/send accepted messages and returned success
even when zero AI personas exist in the system. This is a cascade from
the seed failure: no personas seeded → agent/list returns [] → user
types "hello", gets nothing back, no signal anywhere.

Cheap probe (limit 1) for persona-type users; warn in result message
when count is zero. Message is still stored (non-blocking on result),
but the user gets a clear "stored but no listener" hint with a
diagnostic command + re-seed pointer.
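
The probe-and-warn shape can be sketched as a small shell function. This is a sketch only: `build_send_result`, the message wording, and the `jtag agent/list` hint are assumptions, not the real implementation.

```shell
# Sketch: shape the chat/send result without blocking the store.
# build_send_result and its wording are hypothetical, not the real code.
build_send_result() {
  persona_count="$1"   # from the cheap probe (persona-type users, limit 1)
  msg_id="$2"          # the message is already stored at this point
  result="Message stored (id: $msg_id)"
  if [ "$persona_count" -eq 0 ]; then
    # Non-blocking: still a success result, but with a clear hint.
    result="$result
WARNING: no AI personas exist to listen; this message will get no reply.
Check with 'jtag agent/list' and re-seed personas if it returns []."
  fi
  printf '%s\n' "$result"
}
```

So `build_send_result 0 abc123` prints the stored line plus the warning, while a nonzero count prints only the stored line: success either way, but never silent.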

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@joelteply joelteply merged commit 768a53d into canary May 2, 2026
3 checks passed
@joelteply joelteply deleted the mac/980-bug8-no-listener-warning branch May 2, 2026 02:13
joelteply added a commit that referenced this pull request May 2, 2026
…#1000)

Per Joel's "100% free OOTB on MacBook Air on up, canary e2e working
from curl, Carl's case" — the existing smoke probe only validates that
the page renders, not that a chat actually gets an AI reply. That's the
true Carl-impact gate: if Carl types "hello" and gets nothing, the
install isn't shippable, regardless of whether /health returned 200.

This extends the smoke script with a 4th phase:

  4. End-to-end chat:
     - Locate jtag binary (3 search paths)
     - Send a unique probe message to #general
     - Detect #994's "no listener" warning → exit 6 (distinct failure)
     - Poll chat/export for an AI reply (default 90s timeout)
     - On reply: report latency in PASS banner
     - On timeout: list root-cause diagnostic commands per #964/#980 series
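
The phase-4 flow above decomposes into two testable helpers. A sketch under assumptions: the jtag flags, channel name, and exact warning string below are mine, not the real script's.

```shell
# classify_send: map chat/send output to the smoke exit-code shape.
# Returns 6 if the #994 "no listener" warning appears, 0 otherwise.
classify_send() {
  case "$1" in
    *"no listener"*) return 6 ;;
    *) return 0 ;;
  esac
}

# poll_for_reply PROBE TIMEOUT_SEC CMD...: run CMD repeatedly until its
# output mentions PROBE (return 0 and report latency) or TIMEOUT_SEC
# elapses (return 5: the "silent AI" shape).
poll_for_reply() {
  probe="$1"; timeout_s="$2"; shift 2
  start=$(date +%s)
  while :; do
    if "$@" | grep -q "$probe"; then
      echo "PASS: reply seen after $(( $(date +%s) - start ))s"
      return 0
    fi
    if [ $(( $(date +%s) - start )) -ge "$timeout_s" ]; then
      return 5
    fi
    sleep 2
  done
}

# Tying it together (hypothetical jtag invocation):
#   probe="smoke-probe-$(date +%s)-$$"
#   out="$(jtag chat/send --channel general --message "$probe" 2>&1)" || exit 4
#   classify_send "$out" || exit 6
#   poll_for_reply "$probe" "${CARL_CHAT_TIMEOUT_SEC:-90}" \
#     jtag chat/export --channel general || exit 5
```

Splitting the loop out of the jtag invocation keeps the exit-code contract (4 vs 5 vs 6) testable without a live install.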

Exit codes (extends 0-3 from existing):
  4 — chat/send command failed (system not ready for chat at all)
  5 — no AI reply within timeout (the main Carl-blocker shape — silent AI)
  6 — chat/send accepted but reported NO PERSONAS (#994 warning)
      — distinct from 5: "no AI" vs "AI didn't respond"

CARL_CHAT_TIMEOUT_SEC env override (default 90s) for slow first-runs
where DMR is cold-loading the persona model.
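
A CI job can branch on these exit codes; a sketch (the script name and labels below are illustrative, not the repo's):

```shell
# Map a carl-install-smoke exit code to a short label.
# The labels are mine; the code meanings come from the list above.
explain_smoke_rc() {
  case "$1" in
    0) echo "pass: end-to-end chat worked" ;;
    4) echo "chat/send failed: system not ready for chat" ;;
    5) echo "silent AI: no reply within timeout" ;;
    6) echo "no personas exist (#994 warning)" ;;
    *) echo "earlier smoke phase failed (code $1)" ;;
  esac
}

# Usage (script path is an assumption):
#   CARL_CHAT_TIMEOUT_SEC=180 ./carl-install-smoke.sh
#   explain_smoke_rc $?
```

Keeping 5 and 6 distinct in the report matters: 6 means re-seed personas, while 5 starts the DMR/routing checklist below.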

The diagnostic message on exit 5 lists the post-#980 fix points so a
future regression has an obvious starting checklist:
  - #997's 'local' default routing (cloud fallback dropped)
  - DMR running (Docker Desktop 4.62+ check from install.sh)
  - GPU EP cfg (#985/#991 fixed broken cfg gates)
  - Persona model pulled into DMR
  - NEW-A SIGABRT (tracked upstream as ggml-org/llama.cpp#22593)

Now CI's carl-install-smoke gate proves the OOTB chain works
end-to-end, not just up to the page render.

Co-authored-by: Test <test@test.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>