Conversation
📝 Walkthrough

The PR adds ACP (Agent Communication Protocol) backend support to the daemon. New CLI flags enable ACP configuration. A new acp module implements session management and JSON-RPC communication with external ACP processes. The daemon integrates ACP to conditionally prompt and publish AI-generated replies for incoming messages.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User as User/Network
    participant Daemon as Daemon
    participant ACP as ACP Backend<br/>(External Process)
    participant Relay as Relay Network
    User->>Daemon: Message arrives (MessageReceived)
    Daemon->>Daemon: should_prompt_acp_reply()?
    alt ACP Backend Present & Valid Message
        Daemon->>Daemon: build_acp_prompt(conversation_context)
        Daemon->>ACP: session/prompt (JSON-RPC)
        ACP->>ACP: Process prompt
        ACP-->>Daemon: session/update notifications<br/>(text chunks)
        Daemon->>Daemon: Accumulate text via sink
        ACP-->>Daemon: JSON-RPC response<br/>(final_text, session_id)
        Daemon->>Daemon: Prepare ACP reply
        Daemon->>Relay: Publish reply message
        Relay->>User: Deliver reply
    else No ACP or Skip Condition
        Daemon->>Relay: Process as normal
    end
```
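Based on the diagram, the JSON-RPC exchange with the backend presumably looks roughly like the following. The method names (`session/prompt`, `session/update`) and the result fields (`final_text`, `session_id`) appear in the diagram above; the exact param field names shown here are illustrative assumptions, not the confirmed wire format:

```json
{"jsonrpc": "2.0", "id": 1, "method": "session/prompt",
 "params": {"sessionId": "abc123", "prompt": "conversation context..."}}

{"jsonrpc": "2.0", "method": "session/update",
 "params": {"sessionId": "abc123", "text": "partial reply chunk"}}

{"jsonrpc": "2.0", "id": 1, "result": {"final_text": "full reply", "session_id": "abc123"}}
```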
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: 2 passed, 1 failed (1 warning)
🧹 Nitpick comments (3)
crates/pikachat-sidecar/src/daemon.rs (1)
3737-3803: ACP prompting runs synchronously in the message loop.

The ACP prompt/response cycle runs inline (awaited) within the notification handling branch. While a slow or unresponsive ACP backend could block processing of other incoming messages, the comment at lines 3745-3746 acknowledges this is "MVP" behavior.
Consider extracting ACP reply generation to a spawned task in a follow-up to prevent blocking the main message loop during long agent responses.
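A minimal sketch of the suggested decoupling, using std threads and channels for illustration (the daemon itself would use `tokio::spawn` with `Arc`-cloned handles; `AcpHandle`, `prompt_conversation`'s synchronous signature, and `generate_replies` are hypothetical stand-ins, not the daemon's actual API):

```rust
use std::sync::{mpsc, Arc};
use std::thread;

// Hypothetical stand-in for the daemon's ACP client; the real call is async
// and goes through the ACP child process over JSON-RPC.
struct AcpHandle;

impl AcpHandle {
    fn prompt_conversation(&self, prompt: &str) -> String {
        format!("reply to: {prompt}")
    }
}

// Hand each incoming message to a spawned worker instead of generating the
// reply inline, so the loop itself never blocks on a slow backend.
fn generate_replies(messages: &[&str]) -> Vec<String> {
    let acp = Arc::new(AcpHandle);
    let (tx, rx) = mpsc::channel::<String>();
    for incoming in messages {
        let acp = Arc::clone(&acp);
        let tx = tx.clone();
        let msg = incoming.to_string();
        thread::spawn(move || {
            let reply = acp.prompt_conversation(&msg);
            // Failures would be logged here, mirroring the inline path's warns.
            let _ = tx.send(reply);
        });
    }
    drop(tx); // close the sending side so the drain below terminates
    rx.into_iter().collect() // collect completed replies as workers finish
}

fn main() {
    for reply in generate_replies(&["hello", "how are you"]) {
        println!("{reply}");
    }
}
```

The key property is that the loop only pays the cost of a spawn per message; reply generation and publishing happen off the main path, arriving in whatever order the backend finishes them.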
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/pikachat-sidecar/src/daemon.rs` around lines 3737 - 3803, The ACP prompt/response cycle in the notification handler runs inline and can block the main message loop; refactor by spawning an asynchronous task to handle ACP reply generation and publishing instead of awaiting inside the loop: extract the block that builds prompt, calls acp.prompt_conversation(...).await, prepares the outbound action via host.prepare_outbound_action, and calls host.publish_prepared into a spawned task (e.g., tokio::spawn) so the message loop continues; ensure you clone or Arc/clone any captured values (acp, acp_nostr_group_id, acp_sender_hex, acp_content, host) and handle/emit the same warn logs for failures inside the spawned task, and adjust types (make host/acp Send + 'static or wrap in Arc/Mutex) as needed.

crates/pikachat-sidecar/src/acp.rs (2)
111-166: Child process crash leaves manager in broken state.

When the ACP backend process exits or crashes, `fail_all_pending` is called (lines 143-156), but subsequent calls to `prompt_conversation` will fail with "ACP backend stdout closed" or similar errors. There's no restart mechanism. This is likely acceptable for MVP (the daemon can be restarted), but consider:
- Surfacing a clear "ACP backend exited" error on subsequent prompts
- Adding a health-check or reconnect capability as a follow-up
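A minimal sketch of the suggested stopped-state, assuming a client struct roughly along these lines (the struct skeleton, `mark_backend_exited`, and the synchronous `prompt_conversation` signature are illustrative, not the crate's actual types):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Illustrative skeleton; the real client also holds stdin, the pending map, etc.
struct AcpClient {
    // Shared with the reader task, which sets it when the child exits.
    stopped: Arc<AtomicBool>,
}

impl AcpClient {
    fn new() -> Self {
        Self { stopped: Arc::new(AtomicBool::new(false)) }
    }

    // The reader task would call this after child.wait() / EOF, alongside
    // fail_all_pending, so the exit is recorded permanently.
    fn mark_backend_exited(&self) {
        self.stopped.store(true, Ordering::SeqCst);
    }

    fn prompt_conversation(&self, _prompt: &str) -> Result<String, String> {
        // Early check: fail deterministically instead of attempting I/O on a
        // dead pipe and surfacing a generic "stdout closed" error.
        if self.stopped.load(Ordering::SeqCst) {
            return Err("ACP backend exited".to_string());
        }
        Ok("ok".to_string()) // real code would write the JSON-RPC request here
    }
}

fn main() {
    let client = AcpClient::new();
    println!("{:?}", client.prompt_conversation("hi"));
    client.mark_backend_exited();
    println!("{:?}", client.prompt_conversation("hi"));
}
```

This keeps `fail_all_pending` semantics unchanged while giving later health-check or restart code a single state flag to consult.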
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/pikachat-sidecar/src/acp.rs` around lines 111 - 166, The spawn function currently calls fail_all_pending when the ACP child exits but does not record that the backend has terminated, so subsequent calls like prompt_conversation keep attempting I/O and surface generic "ACP backend stdout closed" errors; add a clear stopped-state and early-check in prompt_conversation. Concretely: add a flag/atomic (e.g., AtomicBool stopped or enum BackendState) as a field on the ACP struct, set it to true inside the spawned reader task before calling fail_all_pending (and after child.wait()), and update prompt_conversation to check that flag and immediately return a clear, deterministic error like "ACP backend exited" instead of proceeding with I/O; this keeps fail_all_pending semantics and enables later health-check/restart code to use the state.
103-108: Consider adding session eviction for long-running daemons.

The `sessions_by_conversation` HashMap grows without bound as new conversations are created. For long-running daemons with many ephemeral conversations, this could lead to unbounded memory growth. Consider adding an eviction policy (e.g., LRU with a cap, or TTL-based cleanup) as a follow-up improvement, not a blocker. Based on learnings from similar caches in this codebase: "ensure a clear eviction policy and a sane MAX_PROCESSED_IDS limit".

Also applies to: 289-300
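One way to cap the map, sketched with std-only types. The review suggests `lru::LruCache` or moka in the real fix; this hand-rolled `BoundedSessions` with insertion-order eviction is just an assumption-laden illustration of the shape:

```rust
use std::collections::{HashMap, VecDeque};

// Bounded conversation -> session map: evicts the oldest-inserted entry once
// capacity is reached, so memory stays flat for long-running daemons.
struct BoundedSessions {
    cap: usize,
    map: HashMap<String, String>,
    order: VecDeque<String>, // insertion order, oldest at the front
}

impl BoundedSessions {
    fn new(cap: usize) -> Self {
        Self { cap, map: HashMap::new(), order: VecDeque::new() }
    }

    fn insert(&mut self, conversation: String, session: String) {
        if !self.map.contains_key(&conversation) {
            if self.order.len() == self.cap {
                if let Some(oldest) = self.order.pop_front() {
                    // Evicted; the ACP backend can recreate the session later.
                    self.map.remove(&oldest);
                }
            }
            self.order.push_back(conversation.clone());
        }
        self.map.insert(conversation, session);
    }

    fn get(&self, conversation: &str) -> Option<&String> {
        self.map.get(conversation)
    }
}

fn main() {
    let mut sessions = BoundedSessions::new(2);
    sessions.insert("a".into(), "s1".into());
    sessions.insert("b".into(), "s2".into());
    sessions.insert("c".into(), "s3".into()); // evicts "a"
    println!("{:?}", sessions.get("a"));
}
```

A production version would need the same `Mutex`/atomic wrapping as the existing fields, and possibly TTL-based cleanup instead of pure capacity eviction.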
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/pikachat-sidecar/src/acp.rs` around lines 103 - 108, sessions_by_conversation in AcpJsonRpcClient is an unbounded HashMap that can grow without bound for long-running daemons; add an eviction policy by replacing the HashMap with a bounded cache (e.g., lru::LruCache or moka/ttl cache) or wrap it with a TTL cleanup background task: change the field holding sessions (refer to sessions_by_conversation and AcpJsonRpcClient) to a concurrent, capacity-limited cache, enforce a sane MAX_PROCESSED_IDS cap where processed IDs are tracked (adjust MAX_PROCESSED_IDS or processed_ids usage around the 289-300 area), and add a periodic cleanup or eviction-on-insert strategy to drop oldest/expired conversation sessions to prevent unbounded memory growth. Ensure thread-safety matches existing Mutex/Atomic usage and update code that accesses sessions_by_conversation/text_chunks to use the new cache API.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@crates/pikachat-sidecar/src/acp.rs`:
- Around line 111-166: The spawn function currently calls fail_all_pending when
the ACP child exits but does not record that the backend has terminated, so
subsequent calls like prompt_conversation keep attempting I/O and surface
generic "ACP backend stdout closed" errors; add a clear stopped-state and
early-check in prompt_conversation. Concretely: add a flag/atomic (e.g.,
AtomicBool stopped or enum BackendState) as a field on the ACP struct, set it to
true inside the spawned reader task before calling fail_all_pending (and after
child.wait()), and update prompt_conversation to check that flag and immediately
return a clear, deterministic error like "ACP backend exited" instead of
proceeding with I/O; this keeps fail_all_pending semantics and enables later
health-check/restart code to use the state.
- Around line 103-108: sessions_by_conversation in AcpJsonRpcClient is an
unbounded HashMap that can grow without bound for long-running daemons; add an
eviction policy by replacing the HashMap with a bounded cache (e.g.,
lru::LruCache or moka/ttl cache) or wrap it with a TTL cleanup background task:
change the field holding sessions (refer to sessions_by_conversation and
AcpJsonRpcClient) to a concurrent, capacity-limited cache, enforce a sane
MAX_PROCESSED_IDS cap where processed IDs are tracked (adjust MAX_PROCESSED_IDS
or processed_ids usage around the 289-300 area), and add a periodic cleanup or
eviction-on-insert strategy to drop oldest/expired conversation sessions to
prevent unbounded memory growth. Ensure thread-safety matches existing
Mutex/Atomic usage and update code that accesses
sessions_by_conversation/text_chunks to use the new cache API.
In `@crates/pikachat-sidecar/src/daemon.rs`:
- Around line 3737-3803: The ACP prompt/response cycle in the notification
handler runs inline and can block the main message loop; refactor by spawning an
asynchronous task to handle ACP reply generation and publishing instead of
awaiting inside the loop: extract the block that builds prompt, calls
acp.prompt_conversation(...).await, prepares the outbound action via
host.prepare_outbound_action, and calls host.publish_prepared into a spawned
task (e.g., tokio::spawn) so the message loop continues; ensure you clone or
Arc/clone any captured values (acp, acp_nostr_group_id, acp_sender_hex,
acp_content, host) and handle/emit the same warn logs for failures inside the
spawned task, and adjust types (make host/acp Send + 'static or wrap in
Arc/Mutex) as needed.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7506a9a7-ddef-48ed-a495-e5954eb5a60b
📒 Files selected for processing (4)
- cli/src/main.rs
- crates/pikachat-sidecar/src/acp.rs
- crates/pikachat-sidecar/src/daemon.rs
- crates/pikachat-sidecar/src/lib.rs
```rust
let write_result = {
    let mut stdin = self.stdin.lock().await;
    stdin
        .write_all(line.as_bytes())
        .await
        .context("write ACP request")?;
    stdin.write_all(b"\n").await.context("write ACP newline")?;
    stdin.flush().await.context("flush ACP request")
};
```
🟡 Early `?` in `request()` bypasses pending entry cleanup on write failures

The `?` operators on lines 202 and 203 inside the block expression cause an early return from the `request()` function when `write_all` fails, bypassing the cleanup logic at lines 206-209 that removes the pending oneshot entry from `self.pending`. This leaks the `oneshot::Sender` in the pending HashMap. The intent was clearly to capture all write errors in `write_result` so that the cleanup path at line 207 handles them uniformly, but only the flush error on line 204 actually flows through that path.
How the leak accumulates
If the child process stdin pipe breaks, every subsequent `request()` call will:
- Insert a new oneshot sender into `self.pending` (line 188)
- Fail at `write_all` with `?` (line 202)
- Return from `request()` without removing the entry
The entries remain until fail_all_pending() is called by the reader task, but between the write failure and the reader detecting the child exit, entries accumulate.
Suggested change (wrap the writes in an `async` block so the `?` early returns stay inside it and every error flows into `write_result`):

```diff
-let write_result = {
+let write_result = async {
     let mut stdin = self.stdin.lock().await;
     stdin
         .write_all(line.as_bytes())
         .await
         .context("write ACP request")?;
     stdin.write_all(b"\n").await.context("write ACP newline")?;
     stdin.flush().await.context("flush ACP request")
-};
+}.await;
```
Summary
- `acp` module in pikachat-sidecar for backend bridge implementation

Test plan
🤖 Generated with Claude Code
Summary by CodeRabbit
- `--acp-exec` and `--acp-cwd` CLI flags to configure an external ACP agent backend for the daemon.