
Fix cloudflared tunnel URL not detected (QUIC protocol bug) #825

Merged
pedramamini merged 2 commits into rc from fix/cloudflared-quic-url-bug on Apr 16, 2026

Conversation

@chr1syy (Contributor) commented Apr 13, 2026

Summary

  • cloudflared 2026.3.0 with default QUIC protocol hangs at "Requesting new quick Tunnel" and never outputs the tunnel URL to stderr, causing the Remote Control toggle to time out after 30s
  • Forces --protocol http2 which reliably prints the URL within seconds
  • Also listens on stdout in addition to stderr as a future-proofing fallback

Test plan

  • Existing 22 tunnel-manager tests still pass
  • Added test for --protocol http2 spawn arg
  • Added test for stdout URL extraction fallback
  • All 24 tests pass
  • Type check, prettier, eslint clean
  • Manual: Toggle Remote Control switch in Live overlay, verify tunnel URL appears

🤖 Generated with Claude Code

Summary by CodeRabbit

  • Tests

    • Added tests covering tunnel startup behavior and URL detection from different output streams.
  • Improvements

    • Improved tunnel URL detection to monitor multiple output channels for greater reliability.
    • Updated tunnel protocol configuration for more robust connections.

@coderabbitai

coderabbitai bot commented Apr 13, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: b2802268-ed41-4482-8059-2ae2d3130239

📥 Commits

Reviewing files that changed from the base of the PR and between 0c55f47 and 709f003.

📒 Files selected for processing (1)
  • src/main/tunnel-manager.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/main/tunnel-manager.ts

📝 Walkthrough

Walkthrough

Added HTTP/2 to the cloudflared spawn arguments and refactored tunnel URL detection to use a shared output handler that listens on both stdout and stderr. Tests were extended to assert the new protocol arg and URL extraction from stdout.

Changes

  • Tunnel Manager Implementation (src/main/tunnel-manager.ts): Added --protocol http2 to cloudflared spawn args; replaced stderr-only accumulation with a shared handleOutput that appends to an outputBuffer and scans both stdout and stderr for the tunnel URL, detaching listeners once detected.
  • Tunnel Manager Tests (src/__tests__/main/tunnel-manager.test.ts): Added tests asserting the spawn includes ['--protocol', 'http2'] and that start() resolves with the tunnel URL when emitted on stdout (in addition to the existing stderr-based path); tests emit mocked data and exit events to complete the flow.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 I hopped to the spawn and found something new,
Two-streams are listening, and HTTP/2 too,
Stdout and stderr both sing the clue,
I sniffed out the URL and danced — hop, woohoo! 🥕

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 72.00%, which is below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check: skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: the title 'Fix cloudflared tunnel URL not detected (QUIC protocol bug)' directly addresses the main issue and solution in the changeset.


@greptile-apps

greptile-apps bot commented Apr 13, 2026

Greptile Summary

This PR fixes cloudflared 2026.3.0's QUIC protocol regression where the tunnel URL was never printed to stderr, causing a 30-second timeout. The fix forces --protocol http2 on spawn and adds a stdout fallback listener. Bundled alongside the tunnel fix are several merged feature/fix branches for the 0.16.0 RC: SSH false-positive error detection fixes, Windows process tree cleanup via taskkill, a vim VIMINIT ergonomics default, web UI parity features, and Cue pipeline additions.

Confidence Score: 5/5

Safe to merge — the tunnel fix is well-targeted and tested; all remaining findings are style/cleanup suggestions.

The stated bug fix (--protocol http2 + stdout fallback) is correct, well-tested with two new targeted tests, and doesn't regress existing coverage. Both comments are P2: a variable naming nit and a minor listener-cleanup improvement. Neither affects correctness. The bundled changes (SSH false-positive fixes, Windows taskkill, VIMINIT default) all look sound.

src/main/tunnel-manager.ts — minor variable naming and unbounded listener cleanup opportunity.

Important Files Changed

  • src/main/tunnel-manager.ts: Core fix. Adds --protocol http2 to avoid the QUIC URL output bug, unifies stderr/stdout via a shared handler, and adds Windows taskkill support; minor variable naming and unbounded-listener concerns.
  • src/__tests__/main/tunnel-manager.test.ts: Adds two targeted tests: one verifies --protocol http2 in spawn args, one verifies stdout URL extraction — both correct and cover the stated bug fix.
  • src/main/process-manager/handlers/StdoutHandler.ts: Guards SSH error detection against parsed JSON lines, preventing false positives when agent output quotes shell commands like "command not found".
  • src/main/process-manager/handlers/ExitHandler.ts: Scopes SSH error detection to stderr-only at exit, removing stdout from the combined check that caused false positives on structured JSONL agent output.
  • src/main/process-manager/utils/envBuilder.ts: Injects `VIMINIT=set nocompatible

Sequence Diagram

sequenceDiagram
    participant UI as Renderer UI
    participant TM as TunnelManager
    participant CF as cloudflared process

    UI->>TM: start(port)
    TM->>CF: spawn(binary, ['tunnel','--url','http://localhost:port','--protocol','http2'])
    TM->>TM: set 30s timeout
    TM->>CF: listen stderr.on('data', handleOutput)
    TM->>CF: listen stdout.on('data', handleOutput)

    alt URL found on stderr (primary path)
        CF-->>TM: stderr data → trycloudflare.com URL
        TM->>TM: regex match → resolved=true, clearTimeout
        TM-->>UI: { success: true, url }
    else URL found on stdout (fallback)
        CF-->>TM: stdout data → trycloudflare.com URL
        TM->>TM: regex match → resolved=true, clearTimeout
        TM-->>UI: { success: true, url }
    else Timeout (30s, QUIC bug scenario)
        TM->>TM: resolved=true, stop()
        TM-->>UI: { success: false, error: 'Tunnel startup timed out (30s)' }
    end

    UI->>TM: stop()
    alt Windows
        TM->>TM: execFile('taskkill', ['/pid', pid, '/t', '/f'])
    else POSIX
        TM->>CF: SIGTERM → wait 3s → SIGKILL if needed
    end
    TM-->>UI: void

Reviews (1): Last reviewed commit: "Fix cloudflared tunnel not returning URL..."

cloudflared 2026.3.0 with the default QUIC protocol hangs at
"Requesting new quick Tunnel" and never outputs the tunnel URL to
stderr, causing the Remote Control toggle to time out after 30s.

Forces --protocol http2 which reliably prints the URL within seconds.
Also listens on stdout in addition to stderr as a future-proofing
measure.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@chr1syy chr1syy force-pushed the fix/cloudflared-quic-url-bug branch from b480a0c to 0c55f47 on April 13, 2026 at 06:18

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
src/renderer/components/TerminalView.tsx (1)

227-240: ⚠️ Potential issue | 🟡 Minor

Add the remaining captured session fields to this callback's deps.

spawnPtyForTab() also closes over session.projectRoot, session.toolType, and session.customEnvVars. If any of those change, newly created tabs keep spawning with stale values until some listed dependency happens to change.

💡 Suggested change
 			[
 				session.id,
 				session.cwd,
 				session.remoteCwd,
+				session.projectRoot,
+				session.toolType,
+				session.customEnvVars,
 				session.sessionSshRemoteConfig,
 				session.sshRemoteId,
 				defaultShell,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/renderer/components/TerminalView.tsx` around lines 227 - 240, The
dependency array for the callback that creates terminals (used by
spawnPtyForTab) is missing session.projectRoot, session.toolType, and
session.customEnvVars, so spawnPtyForTab closes over stale values; update the
dependency array used where spawnPtyForTab is defined to include
session.projectRoot, session.toolType, and session.customEnvVars (in addition to
the existing session.id, session.cwd, session.remoteCwd,
session.sessionSshRemoteConfig, session.sshRemoteId, defaultShell, shellArgs,
shellEnvVars, closeTerminalTab, notifySpawnFailure) so the callback rebinds
whenever those session fields change.
src/web/hooks/useAutoRun.ts (1)

73-110: ⚠️ Potential issue | 🟠 Major

The stale-response guard is still unsafe across session switches.

latestDocRef only tracks filename, so sessionA/todo.md and sessionB/todo.md can still race: if A resolves last, it will overwrite B because the guard sees the same filename. loadDocuments() has the same issue with no guard at all, so a late get_auto_run_docs response can replace the current session’s documents and folderPath.

Use a request token (or at least sessionId + filename) for document-content loads, and add the same protection to loadDocuments().

Suggested direction
-import { useState, useCallback, useRef } from 'react';
+import { useState, useCallback, useRef } from 'react';
 ...
-const latestDocRef = useRef<string | null>(null);
+const latestDocsRequestRef = useRef(0);
+const latestDocRequestRef = useRef(0);

 const loadDocuments = useCallback(
 	async (sessionId: string) => {
+		const requestId = ++latestDocsRequestRef.current;
 		setIsLoadingDocs(true);
 		try {
 			const response = await sendRequest<{ documents?: AutoRunDocument[]; folderPath?: string }>(
 				'get_auto_run_docs',
 				{ sessionId }
 			);
+			if (requestId !== latestDocsRequestRef.current) return;
 			setDocuments(response.documents ?? []);
 			setFolderPath(response.folderPath ?? null);
 		} catch {
+			if (requestId !== latestDocsRequestRef.current) return;
 			setDocuments([]);
 			setFolderPath(null);
 		} finally {
 			setIsLoadingDocs(false);
 		}
 	},
 	[sendRequest]
 );

 const loadDocumentContent = useCallback(
 	async (sessionId: string, filename: string) => {
-		latestDocRef.current = filename;
+		const requestId = ++latestDocRequestRef.current;
 		try {
 			const response = await sendRequest<{ content?: string }>('get_auto_run_document', {
 				sessionId,
 				filename,
 			});
-			if (latestDocRef.current !== filename) return;
+			if (requestId !== latestDocRequestRef.current) return;
 			setSelectedDoc({
 				filename,
 				content: response.content ?? '',
 			});
 		} catch {
-			if (latestDocRef.current !== filename) return;
+			if (requestId !== latestDocRequestRef.current) return;
 			setSelectedDoc({ filename, content: '' });
 		}
 	},
 	[sendRequest]
 );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/web/hooks/useAutoRun.ts` around lines 73 - 110, loadDocumentContent's
stale-response guard only checks filename via latestDocRef, which allows
cross-session races (e.g., sessionA/todo.md vs sessionB/todo.md), and
loadDocuments has no guard at all; fix by introducing a request token (or
combine sessionId+filename) stored in latestDocRef (or a new latestRequestRef)
for loadDocumentContent and verify token after the await before calling
setSelectedDoc, and add a similar token for loadDocuments to verify the response
corresponds to the current sessionId before calling setDocuments and
setFolderPath; update references to latestDocRef, loadDocumentContent,
loadDocuments, setSelectedDoc, setDocuments, and setFolderPath accordingly.
src/__tests__/main/cue/cue-executor.test.ts (1)

118-124: ⚠️ Potential issue | 🟡 Minor

Update the mock's termination metadata when kill() is called.

exitCode and signalCode are declared on the mock but never updated by kill(). The production code checks these fields to decide whether to escalate to SIGKILL (see cue-process-lifecycle.ts line 131-133). Without mutating them in the mock, code paths that branch on these fields remain untested, and the test instead works around this by manually resetting the killed flag.

Suggested fix
  kill(signal?: string) {
    this.killed = true;
+   this.exitCode = null;
+   this.signalCode = signal ?? null;
    return true;
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/__tests__/main/cue/cue-executor.test.ts` around lines 118 - 124, Update
the test mock's kill() so it mutates the termination metadata instead of only
setting killed; specifically, inside kill(signal?: string) set this.killed =
true, set this.signalCode = signal ?? 'SIGTERM' and set this.exitCode = null (so
the process appears terminated by a signal), then return true; this ensures
exitCode and signalCode on the mock are updated and branches in the production
logic that inspect exitCode/signalCode are exercised.
src/main/web-server/WebServer.ts (1)

709-810: ⚠️ Potential issue | 🟠 Major

Wire triggerCueSubscription into the WebSocket callback map.

The callback interface and registry both support triggerCueSubscription, but setupMessageHandlerCallbacks() does not forward a handler for it. Web clients sending triggerCueSubscription messages will fail to reach the backend implementation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main/web-server/WebServer.ts` around lines 709 - 810, The WebSocket
callback map in setupMessageHandlerCallbacks is missing the
triggerCueSubscription handler; add a mapping that forwards the incoming message
to callbackRegistry.triggerCueSubscription, e.g. include an entry like
triggerCueSubscription: async (subscriptionId: string) =>
this.callbackRegistry.triggerCueSubscription(subscriptionId) inside the object
passed to this.messageHandler.setCallbacks so web clients can invoke the backend
implementation.
🟠 Major comments (24)
src/renderer/hooks/keyboard/useMainKeyboardHandler.ts-100-153 (1)

100-153: ⚠️ Potential issue | 🟠 Major

Don't let terminal recovery swallow Windows/Linux Ctrl shortcuts.

This block runs before the normal shortcut dispatchers, but isExplicitAppShortcut only treats Meta/Alt as app shortcuts. On non-macOS, that means Ctrl-based shortcuts like Ctrl+J, Ctrl+K, Ctrl+T, and Ctrl+W get turned into PTY control bytes whenever xterm has lost focus, so the Maestro shortcut never fires. Please preserve Ctrl-based ctx.isShortcut(...) / ctx.isTabShortcut(...) matches before forwarding keys to the PTY, and add a regression test for at least one of those shortcuts in this path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/renderer/hooks/keyboard/useMainKeyboardHandler.ts` around lines 100 -
153, The terminal-recovery branch in useMainKeyboardHandler (the
isTerminalRecoveryContext block) currently treats only Meta/Alt as app shortcuts
via isExplicitAppShortcut, causing Ctrl-based app shortcuts to be forwarded to
the PTY; update that logic to also consult ctx.isShortcut(...) and
ctx.isTabShortcut(...) (or call ctx.isShortcut(e) / ctx.isTabShortcut(e) with
the current KeyboardEvent/context) and treat matches as explicit app shortcuts
so they are not converted to PTY control bytes; after changing the if that sets
isExplicitAppShortcut, add a regression test that simulates a Ctrl-based
shortcut (e.g., Ctrl+W or Ctrl+T) when a terminal tab is active but xterm is not
focused to assert the shortcut dispatcher runs instead of write(termSid, ...).
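
The comment's proposal can be sketched as a platform-aware guard (all names here are illustrative; the real hook consults ctx.isShortcut(...) / ctx.isTabShortcut(...) rather than a hard-coded key list):

```typescript
// Minimal event shape for this sketch (a subset of KeyboardEvent).
type KeyEvt = { key: string; metaKey: boolean; altKey: boolean; ctrlKey: boolean };

// Keys assumed (for this sketch) to be app shortcuts when combined with Ctrl
// on Windows/Linux; the real hook would derive these from its shortcut config.
const CTRL_APP_SHORTCUT_KEYS = new Set(['j', 'k', 't', 'w']);

function isExplicitAppShortcut(e: KeyEvt, isMac: boolean): boolean {
  // Meta/Alt combos are app shortcuts on every platform (existing behavior).
  if (e.metaKey || e.altKey) return true;
  // On non-macOS, matching Ctrl combos must also be kept out of the PTY so
  // shortcuts like Ctrl+T / Ctrl+W still reach the app's dispatcher.
  return !isMac && e.ctrlKey && CTRL_APP_SHORTCUT_KEYS.has(e.key.toLowerCase());
}
```

Ctrl+C deliberately falls through here, since it has terminal semantics (SIGINT) rather than being an app shortcut.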
src/main/tunnel-manager.ts-124-143 (1)

124-143: ⚠️ Potential issue | 🟠 Major

Don't drop shutdown failures in the new kill paths.

Line 127 ignores every taskkill error, and Lines 139-143 swallow every SIGKILL failure. If termination fails, stop() still clears this.process, so the manager can think the tunnel is gone while cloudflared is still running and the next start() can spawn a second tunnel. Please special-case the expected “already exited” error and log/capture anything else instead of silently continuing. As per coding guidelines, src/**/*.{ts,tsx}: "Do not silently swallow errors. Let unhandled exceptions bubble up to Sentry for error tracking in production."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main/tunnel-manager.ts` around lines 124 - 143, The stop() path currently
swallows errors from the taskkill execFile callback and from the SIGKILL catch,
which can hide real termination failures; update the execFile callback (the
taskkill call when isWindows() && proc.pid) to check the callback error argument
and only ignore the "process already exited" case but log and rethrow or surface
any other error so Sentry can capture it, and similarly in the timeout block
where proc.kill('SIGKILL') is wrapped, restrict the catch to ignore only the
expected "no such process" / "process already exited" error (e.g., ESRCH or
equivalent) and otherwise log/throw the error instead of silently continuing;
ensure this preserves clearing this.process only after successful termination or
after the error has been logged/surfaced.
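
One way to special-case the expected "already exited" errors is a small classifier (a sketch; ESRCH is Node's real code for signalling a missing PID, while treating taskkill exit code 128 as "process not found" is an assumption to verify against your Windows targets):

```typescript
// Triage kill/taskkill failures: "process already gone" vs. a real failure.
function isProcessGoneError(err: unknown): boolean {
  if (!err || typeof err !== 'object') return false;
  const e = err as { code?: string | number; message?: string };
  if (e.code === 'ESRCH') return true; // POSIX: no such process
  if (e.code === 128) return true; // taskkill: target not found (assumed)
  return /not found|already exited/i.test(e.message ?? '');
}

// Usage sketch inside stop(): ignore "gone" errors, surface everything else.
function handleKillError(err: unknown, report: (e: unknown) => void): void {
  if (isProcessGoneError(err)) return; // expected: process already exited
  report(err); // unexpected: log / capture for Sentry instead of swallowing
}
```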
src/renderer/components/MainPanel/MainPanel.tsx-258-301 (1)

258-301: ⚠️ Potential issue | 🟠 Major

Reset the previous agent’s pill state before starting the next fetch.

The stale flag prevents late promises from winning, but it does not clear the old models / efforts / defaults while the new agent’s requests are still in flight. MainPanelContent keeps receiving the previous agent’s options during that window, so a quick switch + click can persist a model/effort the new agent does not support.

Suggested fix
 useEffect(() => {
-	if (!activeSession?.toolType) return;
+	if (!activeSession?.toolType) {
+		setPillModels([]);
+		setPillEfforts([]);
+		setAgentDefaultModel('');
+		setAgentDefaultEffort('');
+		return;
+	}
+
+	setPillModels([]);
+	setPillEfforts([]);
+	setAgentDefaultModel('');
+	setAgentDefaultEffort('');
+
 	let stale = false;
 	const agentId = activeSession.toolType;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/renderer/components/MainPanel/MainPanel.tsx` around lines 258 - 301, When
activeSession.toolType changes, clear the previous agent's pill state
immediately before starting async fetches: call setPillModels([]),
setPillEfforts([]), setAgentDefaultModel(''), and setAgentDefaultEffort('') just
after reading const agentId = activeSession.toolType (and before invoking
window.maestro.agents.getModels/getConfigOptions/getConfig). Keep the existing
stale flag and cleanup return to ignore late promises, so the in-flight
responses will only update state if stale is false.
src/renderer/components/XTerminal.tsx-426-433 (1)

426-433: ⚠️ Potential issue | 🟠 Major

Ctrl+C interception can break terminal interrupts on Windows/Linux.

This block captures ctrl+c when text is selected, so SIGINT may not reach the PTY. Consider limiting this shortcut to metaKey (macOS Cmd+C) and leaving Ctrl+C for terminal semantics.

Proposed adjustment
-if (e.type === 'keydown' && (e.metaKey || e.ctrlKey) && !e.altKey && !e.shiftKey) {
+if (e.type === 'keydown' && e.metaKey && !e.altKey && !e.ctrlKey && !e.shiftKey) {
 	const key = e.key.toLowerCase();
 	if (key === 'c') {
 		const selection = term.getSelection();
 		if (selection) {
 			void safeClipboardWrite(selection);
 			return false;
 		}
 	}
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/renderer/components/XTerminal.tsx` around lines 426 - 433, In XTerminal's
keydown handler (the block that checks e.type === 'keydown' and handles copy),
stop intercepting Ctrl+C on non-macOS so SIGINT can reach the PTY: modify the
condition in the handler to only treat the key combo as a copy shortcut when the
Meta key is used (or when running on macOS via platform check), and let Ctrl+C
fall through on Windows/Linux; update the branch that currently checks
(e.metaKey || e.ctrlKey) to require e.metaKey (or to check process.platform ===
'darwin' && e.metaKey) before calling safeClipboardWrite and returning false.
src/renderer/components/TerminalOutput.tsx-920-930 (1)

920-930: ⚠️ Potential issue | 🟠 Major

Forking from collapsed AI bubbles can target the wrong point in history.

In AI mode, one rendered bubble can represent multiple underlying response logs, but this handler always sends the collapsed bubble's log.id. Because the collapsed entry is built from the first response chunk, forking from a streamed/merged assistant reply can stop before the full visible message has actually been included.

Please pass the last underlying log id for the collapsed response (or store a dedicated fork target id when building the collapsed entry) instead of reusing the rendered bubble's base id.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/renderer/components/TerminalOutput.tsx` around lines 920 - 930, The fork
button currently calls onForkConversation(log.id) which uses the collapsed
bubble's base id and can point before the full streamed assistant reply; change
the collapse-entry construction to record the true fork target (e.g., set a
forkTargetId or lastChunkId on the collapsed log object when you merge/stream
assistant responses) and update the click handler to call
onForkConversation(log.forkTargetId || log.id) (or equivalent), ensuring the
fork target is the last underlying log id for collapsed AI responses.
src/main/cue/cue-process-lifecycle.ts-117-122 (1)

117-122: ⚠️ Potential issue | 🟠 Major

Unref the SIGKILL escalation timer.

The fallback timer created here is never cleared or unref()’d. On POSIX, every stop/timeout leaves a live 5s timer behind, so stopAllProcesses() can keep the main process alive during shutdown even when the child exits immediately.

Suggested fix
-		setTimeout(() => {
+		const sigkillTimer = setTimeout(() => {
 			if (child.exitCode === null && child.signalCode === null) {
 				child.kill('SIGKILL');
 			}
 		}, SIGKILL_DELAY_MS);
+		sigkillTimer.unref?.();
 	}
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main/cue/cue-process-lifecycle.ts` around lines 117 - 122, The fallback
SIGKILL escalation timer created with setTimeout in the stop logic (referencing
child and SIGKILL_DELAY_MS) is left as a ref'd timer and can keep the process
alive; fix it by capturing the timer returned from setTimeout, call
timer.unref() immediately (or clear/unref when child exits) so the timeout won't
keep the Node process alive, and keep the existing check using child.exitCode
and child.signalCode before calling child.kill('SIGKILL').
src/web/components/DocumentGraph.tsx-58-109 (1)

58-109: ⚠️ Potential issue | 🟠 Major

Make document rows keyboard reachable.

These file-navigation rows are clickable divs, so keyboard users can't focus them or activate onFileSelect. Please render actionable rows as buttons, or add role="button", tabIndex={0}, and Enter/Space handlers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/web/components/DocumentGraph.tsx` around lines 58 - 109, The clickable
row rendered by renderItem currently uses a div so it's not keyboard-focusable
or activatable; update renderItem to make the row accessible by either rendering
it as a native button element or by adding accessibility attributes and keyboard
handlers: ensure the element wrapping the row (currently keyed by node.id) is
focusable (tabIndex={0} or a button), has role="button" if not a real button,
and implements keyDown handling for Enter/Space to call onFileSelect(node.path)
when !isCenter; preserve existing onClick, onMouseEnter/onMouseLeave visual
behavior and keep styling/disabled cursor when isCenter.
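
The keyboard activation this comment requests can be sketched as a small handler factory (GraphNode, onFileSelect, and isCenter mirror names from the comment, not the real component's API):

```typescript
// Minimal shapes for this sketch.
type GraphNode = { id: string; path: string };
type KeyLikeEvent = { key: string; preventDefault: () => void };

function makeRowKeyHandler(
  node: GraphNode,
  isCenter: boolean,
  onFileSelect: (path: string) => void
): (e: KeyLikeEvent) => void {
  return (e) => {
    if (isCenter) return; // the center node is not actionable
    if (e.key === 'Enter' || e.key === ' ') {
      e.preventDefault(); // keep Space from scrolling the page
      onFileSelect(node.path);
    }
  };
}
```

The row element itself would also need role="button" and tabIndex={0} (or be rendered as a real button) so it can receive focus in the first place.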
src/web/mobile/App.tsx-1448-1454 (1)

1448-1454: ⚠️ Potential issue | 🟠 Major

Keep worktree parents in the unread-only result set.

filteredSessions can return a matching worktree child without its parent. LeftPanel only renders top-level sessions, so unread-only can collapse to a blank list when the active/unread match is a child worktree. Include each matched child’s parent in this filtered array, or apply the unread filter after the worktree tree is built.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/web/mobile/App.tsx` around lines 1448 - 1454, filteredSessions currently
filters only matching sessions and can include a worktree child without its
top-level parent, causing LeftPanel (which renders only top-level sessions) to
appear empty; update the useMemo that defines filteredSessions so that when
showUnreadOnly is true you not only keep matches (s.id === activeSessionId ||
s.aiTabs?.some(tab => tab.hasUnread) || s.state === 'busy') but also include
each matched session's worktree parent(s) by looking up the parent session(s) in
the sessions array (e.g., via a parentId/worktreeParentId or parent relation on
the matched child) and adding those parents to the result set, deduplicating
before returning; reference the filteredSessions variable and the sessions,
showUnreadOnly, activeSessionId, aiTabs and LeftPanel rendering path when making
this change.
src/web/mobile/App.tsx-2028-2085 (1)

2028-2085: ⚠️ Potential issue | 🟠 Major

Don't swallow unexpected sendRequest failures here.

Both handlers catch transport/runtime failures and only log or set local UI state, so production diagnostics never reach Sentry. Keep the user-facing fallback, but also captureException(...) with request context for unexpected errors.

As per coding guidelines, src/**/*.{ts,tsx}: “Do not silently swallow errors. Let unhandled exceptions bubble up to Sentry for error tracking in production.”

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/web/mobile/App.tsx` around lines 2028 - 2085, The handlers
handleFileSelect and handleDocumentGraph currently catch sendRequest failures
but only log or set local state, which swallows runtime/transport errors; update
both catch blocks to call captureException(err, { extra: { action: 'read_file' |
'get_document_graph', sessionId: activeSessionId, filePath: filePath ||
previewFile?.path } }) (or similar contextual keys) in addition to the existing
webLogger.error / setDocumentGraph fallback so Sentry receives the exception and
diagnostics while preserving the user-facing behavior; ensure you import
captureException if not present and include the same contextual info used in the
request.
src/web/mobile/App.tsx-2138-2159 (1)

2138-2159: ⚠️ Potential issue | 🟠 Major

Only attach staged images to AI sends, and don't clear them on a failed send.

message.images is added regardless of currentMode, so images staged in AI mode can leak into the next terminal send. On top of that, sendResult is ignored and the staged attachments are cleared anyway, which loses data on a transient send failure.

Suggested guard in this block
 const message: Record<string, unknown> = {
 	type: 'send_command',
 	sessionId: activeSessionId,
 	command,
 	inputMode: currentMode,
 };

-// Include staged images if any are present
-if (stagedImages.length > 0) {
+// Include staged images only for AI sends
+if (currentMode === 'ai' && stagedImages.length > 0) {
 	message.images = stagedImages;
 }

 const sendResult = send(message);
 webLogger.info(
 	`[Web->Server] Command send result: ${sendResult}, command="${command.substring(0, 50)}" mode=${currentMode} session=${activeSessionId} images=${stagedImages.length}`,
 	'Mobile'
 );

-// Clear the input and staged images
-clearCommandDraft(currentMode);
-setStagedImages([]);
+if (sendResult) {
+	clearCommandDraft(currentMode);
+	setStagedImages([]);
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/web/mobile/App.tsx` around lines 2138 - 2159, The staged images are being
attached unconditionally and cleared regardless of send success; only attach
message.images when the currentMode supports AI/image sends (e.g., check
currentMode === 'ai' or use a helper like isImageMode(currentMode)) and include
stagedImages in the message only in that branch, then inspect sendResult and
call clearCommandDraft(currentMode) and setStagedImages([]) only when sendResult
indicates success (otherwise preserve stagedImages so transient failures don't
lose attachments). Ensure you update the block that builds the message and the
post-send cleanup around send(), referencing send(), stagedImages,
message.images, clearCommandDraft(), and setStagedImages().
src/web/mobile/RightDrawer.tsx-474-495 (1)

474-495: ⚠️ Potential issue | 🟠 Major

Normalize separators before matching git-status paths.

get_file_tree paths are absolute OS paths, while git status entries are repo-relative POSIX strings. On Windows, nodePath.endsWith('/' + relPath) never matches because the tree uses \, so change badges and folder indicators disappear.

💡 Suggested fix
 	const getFileChangeStatus = useCallback(
 		(nodePath: string): { label: string; color: string; bgColor: string } | null => {
 			if (gitFileMap.size === 0) return null;
+			const normalizedNodePath = nodePath.replaceAll('\\', '/');
 			// Git status paths are relative; tree paths are absolute.
 			// Match if the node path ends with the git relative path.
 			for (const [relPath, status] of gitFileMap) {
-				if (nodePath.endsWith('/' + relPath) || nodePath === relPath) {
+				const normalizedRelPath = relPath.replaceAll('\\', '/');
+				if (
+					normalizedNodePath.endsWith('/' + normalizedRelPath) ||
+					normalizedNodePath === normalizedRelPath
+				) {
 					const code = status.trim().charAt(0);
 					if (code === 'A' || code === '?') {
 						return { label: 'A', color: '#22c55e', bgColor: '#22c55e20' };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/web/mobile/RightDrawer.tsx` around lines 474 - 495, getFileChangeStatus
fails on Windows because nodePath uses OS separators while gitFileMap keys are
POSIX; normalize separators before matching by converting nodePath and relPath
to a common format (e.g., replace backslashes with forward slashes or use a
posix path normalization) inside getFileChangeStatus so the endsWith('/' +
relPath) or equality checks work cross-platform; update references to nodePath
and relPath used for matching in the function to their normalized variants (keep
the rest of the logic unchanged).
src/web/mobile/RightDrawer.tsx-1164-1176 (1)

1164-1176: ⚠️ Potential issue | 🟠 Major

Preview load failures are being masked as empty documents.

The catch branch clears the content and suppresses the exception, so a transport/server error is indistinguishable from a real empty file and nothing reaches Sentry. Keep an explicit preview error state for expected request failures, and rethrow or capture unexpected ones.

As per coding guidelines "Do not silently swallow errors. Let unhandled exceptions bubble up to Sentry for error tracking in production. Handle expected/recoverable errors explicitly (e.g., NETWORK_ERROR). For unexpected errors, re-throw them to allow Sentry to capture them."

src/main/web-server/web-server-factory.ts-1992-2009 (1)

1992-2009: ⚠️ Potential issue | 🟠 Major

Large images still bypass the 1 MB guard.

The size check happens after the image branch, so PNG/JPG/SVG requests read the whole file and return a base64 data URI even when they exceed the advertised 1 MB cap. Move the stat() guard before isImage so binary previews obey the same limit.

💡 Suggested fix
-				if (isImage) {
-					const buffer = await fs.readFile(filePath);
-					const base64 = buffer.toString('base64');
-					const mimeType = ext === 'svg' ? 'image/svg+xml' : `image/${ext}`;
-					return {
-						content: `data:${mimeType};base64,${base64}`,
-						language: 'image',
-					};
-				}
-
 				// Check file size before reading
 				const stats = await fs.stat(filePath);
 				if (stats.size > 1024 * 1024) {
 					return {
 						content: '',
@@
 						error: `File too large (${Math.round(stats.size / 1024)}KB). Maximum supported size is 1MB.`,
 					};
 				}
+
+				if (isImage) {
+					const buffer = await fs.readFile(filePath);
+					const base64 = buffer.toString('base64');
+					const mimeType = ext === 'svg' ? 'image/svg+xml' : `image/${ext}`;
+					return {
+						content: `data:${mimeType};base64,${base64}`,
+						language: 'image',
+					};
+				}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main/web-server/web-server-factory.ts` around lines 1992 - 2009, The
image preview branch (isImage) reads the file before the 1MB guard, allowing
large images to bypass the size limit; move the file-size check
(fs.stat(filePath) and the size > 1024*1024 condition) to run before the isImage
branch (and reuse the stats result to avoid a second stat), and if the file is
too large return the same error payload (content: '', language:
EXTENSION_TO_LANGUAGE[ext] || 'text', error: ...) instead of reading and
encoding the image in the isImage branch; preserve the existing mimeType
handling (ext === 'svg' ? 'image/svg+xml' : `image/${ext}`) for the valid-size
image path.
src/web/components/FilePreview.tsx-51-71 (1)

51-71: ⚠️ Potential issue | 🟠 Major

Don't turn markdown fragments into file-open requests.

#heading and docs/page.md#section currently resolve through onFileSelect, so in-page anchors break and hash-suffixed file links become file-not-found paths. Ignore fragment-only / non-file URLs and strip #... before building the file target.

💡 Suggested fix
 		const resolveRelativePath = useCallback(
 			(href: string) => {
-				if (!href || href.startsWith('http://') || href.startsWith('https://')) return null;
+				if (!href || href.startsWith('#')) return null;
+				if (/^[a-z][a-z0-9+.-]*:/i.test(href) || href.startsWith('//')) return null;
 				// Strip wiki-link brackets
 				const cleanHref = href.replace(/^\[\[|\]\]$/g, '');
+				const [pathOnly] = cleanHref.split(/[?#]/, 1);
+				if (!pathOnly) return null;
 				const dir = file.path.substring(0, file.path.lastIndexOf('/'));
 				// Simple relative resolution
-				if (cleanHref.startsWith('/')) return cleanHref;
-				return `${dir}/${cleanHref}`;
+				if (pathOnly.startsWith('/')) return pathOnly;
+				return `${dir}/${pathOnly}`;
 			},
 			[file.path]
 		);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/web/components/FilePreview.tsx` around lines 51 - 71, The link handling
currently treats fragment-only and hash-suffixed links as file paths; update
resolveRelativePath and handleLinkClick to first ignore fragment-only hrefs (if
href.startsWith('#') return null) and to strip any trailing fragment (remove
/#.*$/) before processing; ensure after stripping the cleanHref is non-empty and
still checked for external URLs, then resolve the path as before and only call
onFileSelect for true file targets so in-page anchors and hash-suffixed links
(e.g., docs/page.md#section) no longer become file-open requests.
src/web/mobile/RightDrawer.tsx-321-338 (1)

321-338: ⚠️ Potential issue | 🟠 Major

Outside taps won't dismiss this menu on touch devices.

This effect only watches mousedown, but the menu is opened via long-press in a mobile view. On touch screens, tapping outside can leave the menu stuck open until an action is chosen.

💡 Suggested fix
-		document.addEventListener('mousedown', handleClick);
+		document.addEventListener('pointerdown', handleClick);
 		document.addEventListener('keydown', handleKey);
 		return () => {
-			document.removeEventListener('mousedown', handleClick);
+			document.removeEventListener('pointerdown', handleClick);
 			document.removeEventListener('keydown', handleKey);
 		};
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/web/mobile/RightDrawer.tsx` around lines 321 - 338, The effect that
closes the context menu only listens for 'mousedown' which misses touch taps;
update the useEffect (the effect that references contextMenu, contextMenuRef,
setContextMenu) to also listen for touch/pointer events (e.g., 'touchstart' or
'pointerdown') and remove those listeners in the cleanup so outside taps on
mobile dismiss the menu; ensure the same handler used for 'mousedown'
(handleClick) is attached for the added event(s) and that the effect's
dependencies remain correct.
src/web/mobile/AllSessionsView.tsx-1224-1229 (1)

1224-1229: ⚠️ Potential issue | 🟠 Major

Unread-only mode can still open with every group collapsed.

When this view mounts with showUnreadOnly=true, the effect runs once while collapsedGroups is still null, then never reruns after the initialization effect fills it. The unread filter can therefore hide all matching sessions until the user expands groups manually.

💡 Suggested fix
 	useEffect(() => {
-		if (showUnreadOnly && collapsedGroups) {
-			setCollapsedGroups(new Set());
-		}
-	}, [showUnreadOnly]);
+		if (!showUnreadOnly || collapsedGroups === null || collapsedGroups.size === 0) {
+			return;
+		}
+		setCollapsedGroups(new Set());
+	}, [showUnreadOnly, collapsedGroups]);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/web/mobile/AllSessionsView.tsx` around lines 1224 - 1229, The effect that
auto-expands groups when showUnreadOnly is true runs before collapsedGroups is
initialized, so it never clears collapses; update the useEffect in
AllSessionsView.tsx that references showUnreadOnly/collapsedGroups to depend on
both showUnreadOnly and collapsedGroups and only call setCollapsedGroups(new
Set()) when showUnreadOnly is true and collapsedGroups is non-null and non-empty
(e.g., collapsedGroups && collapsedGroups.size > 0) to ensure it runs after
initialization and avoids redundant updates.
src/renderer/App.tsx-2322-2325 (1)

2322-2325: ⚠️ Potential issue | 🟠 Major

The fallback worktree path is POSIX-only.

Line 2324 strips the last path segment with /\/[^/]+$/ to get the parent directory, so a Windows cwd like C:\repo\app is unchanged and Line 2325 produces a mixed-separator path such as C:\repo\app/worktrees/<branch>. Build this fallback with platform-aware path handling instead of regex/string concatenation.
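
💡 Illustrative sketch (assuming Node's path module is reachable from this code path; the helper name is hypothetical):

```typescript
import path from 'node:path';

// path.dirname understands the platform's separators (backslashes on
// Windows, where path resolves to path.win32), and path.join emits the
// platform separator, so no mixed-separator paths can result.
function buildFallbackWorktreePath(cwd: string, branchName: string): string {
	return path.join(path.dirname(cwd), 'worktrees', branchName);
}
```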

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/renderer/App.tsx` around lines 2322 - 2325, The fallback worktreePath
construction uses a POSIX-only regex and string concat
(parentSession.cwd.replace(/\/[^/]+$/, '') + '/worktrees' and then
`${basePath}/${branchName}`) which produces mixed separators on Windows; replace
this with platform-aware path operations: use Node's path methods (e.g.,
path.dirname and path.join) to compute basePath from parentSession.cwd when
parentSession.worktreeConfig?.basePath is missing, then join with 'worktrees'
and branchName to form worktreePath so separators are correct across platforms.
src/main/web-server/handlers/messageHandlers.ts-2759-2836 (1)

2759-2836: ⚠️ Potential issue | 🟠 Major

Graph generation is unbounded per request.

Lines 2759-2836 rescan the repo and reread every markdown file on each get_document_graph call to discover backlinks. On larger projects, a single UI action turns into hundreds or thousands of disk reads on the WebSocket handler path. Cache/index this per session, or enforce hard file/byte limits before building the graph.
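
💡 Illustrative sketch of one possible cache shape (the names, TTL, and limits here are assumptions, not the handler's actual API):

```typescript
// Hypothetical per-root cache: reuse the scanned file list within a TTL
// and truncate at a hard cap so one UI action cannot trigger an
// unbounded rescan of a large repository.
interface GraphCacheEntry {
	builtAt: number;
	files: string[];
}

const MAX_FILES = 2000;
const CACHE_TTL_MS = 30_000;
const graphCache = new Map<string, GraphCacheEntry>();

function getCachedFileList(rootPath: string, scan: () => string[]): string[] {
	const hit = graphCache.get(rootPath);
	if (hit && Date.now() - hit.builtAt < CACHE_TTL_MS) return hit.files;
	const files = scan().slice(0, MAX_FILES); // enforce the hard file limit
	graphCache.set(rootPath, { builtAt: Date.now(), files });
	return files;
}
```

An mtime-based invalidation would be more precise than a TTL; the point is only that repeated get_document_graph calls should not each rescan the tree.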

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main/web-server/handlers/messageHandlers.ts` around lines 2759 - 2836,
The graph generation rescans and rereads every markdown file on each request
(using scanMarkdownFiles, parseLinks, resolveLink), causing unbounded disk
reads; fix by introducing a session-scoped or instance cache (e.g.,
this.markdownIndex or this.scanMarkdownCache) that stores the list of markdown
paths and optionally file contents and timestamps keyed by rootPath, and update
the code in the get_document_graph handler to read from the cache instead of
calling scanMarkdownFiles on every request (fall back to rescanning only if
cache miss or stale), and enforce hard limits (MAX_FILES, MAX_BYTES) before
building the graph so you bail out or truncate processing when allFiles.length
or total bytes exceeds limits. Ensure you update reads inside the backlink loop
to use cached content when available and add cache invalidation strategy (mtime
or TTL) for correctness.
src/renderer/App.tsx-2167-2192 (1)

2167-2192: ⚠️ Potential issue | 🟠 Major

Don't delete the session after opaque kill failures.

Lines 2167-2182 suppress every window.maestro.process.kill(...) rejection, then Lines 2186-2192 remove the session anyway. If one of those kills fails for anything other than “already exited”, the UI forgets the session while its agent/PTY keeps running in the background. Only ignore the expected not-found case; surface unexpected failures before removing the session.
As per coding guidelines "Do not silently swallow errors. Let unhandled exceptions bubble up to Sentry for error tracking in production." and "Use Sentry utilities (captureException, captureMessage) from src/utils/sentry.ts for explicit error reporting with context."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/renderer/App.tsx` around lines 2167 - 2192, When deleting a session,
don't silently swallow all errors from window.maestro.process.kill; instead
catch each rejection, check whether it represents the expected "not
found/already exited" condition and ignore only that case, and for any other
error call the Sentry utilities (captureException or captureMessage from
src/utils/sentry.ts) with context (sessionId, tab id, which kill failed) and
abort removing the session; update the block around
window.maestro.process.kill(...) calls (including the loop over
session.terminalTabs) to surface unexpected failures so setSessions(...) only
runs when all non-ignored kills succeeded (or propagate the error) and preserve
useSessionStore.getState().activeSessionId/setActiveSessionId behavior.
src/main/web-server/handlers/messageHandlers.ts-2576-2843 (1)

2576-2843: ⚠️ Potential issue | 🟠 Major

get_document_graph bypasses the remote file layer.

Lines 2596-2607 and the helpers below walk session.cwd with local fs.readdir/fs.readFile. That works for local sessions, but SSH-backed sessions keep their files on the remote host, so this route will fail or return empty graphs there while read_file can still succeed via its callback. This needs a remote-aware callback path or a clear unsupported response for remote sessions.
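
💡 Illustrative sketch of the selection logic (the FileOps shape and the remote-detection heuristic are assumptions; the real callbacks API may differ):

```typescript
// Hypothetical abstraction over local fs vs. remote (SSH) file access.
type FileOps = {
	readDir: (dir: string) => Promise<string[]>;
	readFile: (file: string) => Promise<string>;
};

function pickFileOps(isRemote: boolean, local: FileOps, remote?: FileOps): FileOps {
	if (!isRemote) return local;
	if (!remote) {
		// No remote-aware callbacks available: fail loudly rather than
		// silently returning an empty graph.
		throw new Error('get_document_graph is not supported for remote sessions');
	}
	return remote;
}
```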

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main/web-server/handlers/messageHandlers.ts` around lines 2576 - 2843,
handleGetDocumentGraph currently uses local fs calls (via scanMarkdownFiles and
buildDocumentGraph) which break for SSH-backed sessions; update the flow to
detect when a session is remote (using the session object from
this.callbacks.getSessions()) and either use the remote-aware callbacks (e.g.,
this.callbacks.readFile and this.callbacks.readDir / listFiles — or whatever
remote file API exists on callbacks) inside scanMarkdownFiles and
buildDocumentGraph to enumerate and read files, or return a clear unsupported
error for remote sessions if those callbacks are not available; ensure you
replace fs.readdir/fs.readFile usages in scanMarkdownFiles and
buildDocumentGraph with the callback-based equivalents and keep the same return
shapes (nodes/edges/centerNodeId) so handleGetDocumentGraph can send the same
response.
src/main/preload/process.ts-552-557 (1)

552-557: ⚠️ Potential issue | 🟠 Major

folderPath still won't reach the web client.

This new third IPC argument is not wired through by the current responders: src/renderer/App.tsx:1972-1985 and src/renderer/hooks/remote/useAppRemoteEventListeners.ts:244-275 still call sendRemoteGetAutoRunDocsResponse(responseChannel, docs) with only two arguments. As shipped, the new field will stay undefined end-to-end.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main/preload/process.ts` around lines 552 - 557, The new third argument
folderPath on sendRemoteGetAutoRunDocsResponse isn't propagated to the renderer
because callers still invoke sendRemoteGetAutoRunDocsResponse(responseChannel,
docs) without the third param; update the callers in the renderer responders
(the functions in App.tsx around the remote response handling and in
useAppRemoteEventListeners.ts where sendRemoteGetAutoRunDocsResponse is invoked)
to pass the folderPath value through, and confirm any IPC handlers that receive
the response (the ipcRenderer listeners and their upstream handlers) accept and
forward the third argument so ipcRenderer.send(responseChannel, documents,
folderPath) actually reaches the web client end-to-end.
src/main/preload/process.ts-805-815 (1)

805-815: ⚠️ Potential issue | 🟠 Major

Reply on worktree callback failures instead of timing out.

If the renderer handler throws or returns a rejected promise, this listener never sends responseChannel. The web caller in src/main/web-server/web-server-factory.ts:1871-1910 then sits until its 30s timeout instead of getting an immediate error.

Suggested fix
 		onRemoteCreateWorktree: (
 			callback: (parentSessionId: string, branchName: string, responseChannel: string) => void
 		): (() => void) => {
 			const handler = (
 				_: unknown,
 				parentSessionId: string,
 				branchName: string,
 				responseChannel: string
-			) => callback(parentSessionId, branchName, responseChannel);
+			) => {
+				try {
+					Promise.resolve(
+						callback(parentSessionId, branchName, responseChannel)
+					).catch((error) => {
+						ipcRenderer.invoke(
+							'logger:log',
+							'error',
+							'Error invoking remote create worktree callback',
+							'Preload',
+							{
+								error: error instanceof Error ? error.message : String(error),
+								parentSessionId,
+								branchName,
+							}
+						);
+						ipcRenderer.send(responseChannel, {
+							success: false,
+							error: error instanceof Error ? error.message : String(error),
+						});
+					});
+				} catch (error) {
+					ipcRenderer.send(responseChannel, {
+						success: false,
+						error: error instanceof Error ? error.message : String(error),
+					});
+				}
+			};
 			ipcRenderer.on('remote:createWorktree', handler);
 			return () => ipcRenderer.removeListener('remote:createWorktree', handler);
 		},

Also applies to: 821-825

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main/preload/process.ts` around lines 805 - 815, The
onRemoteCreateWorktree listener currently calls the renderer callback directly
so any thrown error or rejected promise prevents sending the responseChannel;
update the handler in onRemoteCreateWorktree (and the similar listener at lines
~821-825) to wrap callback(parentSessionId, branchName, responseChannel) in a
try/catch and handle async rejections (await if it returns a Promise). On
success, send the normal reply via ipcRenderer.send(responseChannel, { result:
... }), and on error send ipcRenderer.send(responseChannel, { error:
serializeError(err) }) so the web caller receives an immediate error instead of
timing out; reference the handler function,
ipcRenderer.on('remote:createWorktree', handler), and the responseChannel
parameter when implementing this change.
src/main/preload/process.ts-1127-1150 (1)

1127-1150: ⚠️ Potential issue | 🟠 Major

Don't hide unexpected Cue trigger failures behind bare false.

Both the sync and async failure paths drop the exception details, so production breakage in this new trigger flow becomes hard to diagnose. Please log/capture the error before sending the fallback response.

Suggested fix
 			const handler = (
 				_: unknown,
 				subscriptionName: string,
 				prompt: string | undefined,
 				responseChannel: string
 			) => {
 				try {
-					Promise.resolve(callback(subscriptionName, prompt, responseChannel)).catch(() => {
+					Promise.resolve(callback(subscriptionName, prompt, responseChannel)).catch((error) => {
+						ipcRenderer.invoke(
+							'logger:log',
+							'error',
+							'Error invoking remote trigger Cue subscription callback',
+							'Preload',
+							{
+								error: error instanceof Error ? error.message : String(error),
+								subscriptionName,
+							}
+						);
 						ipcRenderer.send(responseChannel, false);
 					});
-				} catch {
+				} catch (error) {
+					ipcRenderer.invoke(
+						'logger:log',
+						'error',
+						'Error invoking remote trigger Cue subscription callback',
+						'Preload',
+						{
+							error: error instanceof Error ? error.message : String(error),
+							subscriptionName,
+						}
+					);
 					ipcRenderer.send(responseChannel, false);
 				}
 			};
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main/preload/process.ts` around lines 1127 - 1150, The handler in
onRemoteTriggerCueSubscription currently swallows exceptions and always sends
false via ipcRenderer.send(responseChannel, false) without any diagnostic;
update the try/catch and the Promise.catch to capture the thrown error (e) and
log or record it before sending the fallback response—e.g., in handler inside
onRemoteTriggerCueSubscription, catch errors from both the synchronous callback
invocation and the Promise returned by callback, call console.error or an
existing logger with context including subscriptionName and responseChannel and
the error, then send the false fallback via ipcRenderer.send(responseChannel,
false).
src/web/mobile/WebTerminal.tsx-543-553 (1)

543-553: ⚠️ Potential issue | 🟠 Major

Add a guard check before accessing the Clipboard API.

At line 550, navigator.clipboard.writeText() is called without checking if the Clipboard API exists. In contexts where the API is unavailable (older browsers, non-secure contexts), this throws before .catch() can handle it, resulting in a silent failure. The call also lacks any error logging.

Use the guard pattern from SessionStatusBanner.tsx to check availability first:

Proposed fix
 			if (e.type === 'keydown' && (e.metaKey || e.ctrlKey) && !e.altKey && !e.shiftKey) {
 				const key = e.key.toLowerCase();
 				if (key === 'c') {
 					const selection = term.getSelection();
-					if (selection) {
+					if (selection && navigator.clipboard && navigator.clipboard.writeText) {
 						void navigator.clipboard.writeText(selection).catch(() => {
 							// Clipboard may be unavailable without focus/permission.
 						});
 						return false;
 					}
 				}
 			}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/web/mobile/WebTerminal.tsx` around lines 543 - 553, The clipboard write
call in the keydown handler inside WebTerminal (the block that checks e.type/key
and uses term.getSelection()) must guard that the Clipboard API exists before
calling navigator.clipboard.writeText; update the code around
term.getSelection() so you only call navigator.clipboard.writeText when
navigator.clipboard and navigator.clipboard.writeText are defined, and add a
minimal error log in the catch to surface failures (mirror the guard pattern
used in SessionStatusBanner.tsx); keep the existing early return false when
successful.
🟡 Minor comments (7)
src/main/process-manager/utils/envBuilder.ts-87-88 (1)

87-88: ⚠️ Potential issue | 🟡 Minor

Preserve explicitly empty VIMINIT values instead of forcing the default.

Line 87 and Line 88 use a falsy check (!env.VIMINIT) plus ||, so an explicit VIMINIT='' is treated as missing and gets overwritten.

💡 Proposed fix
-	if (!env.VIMINIT) {
-		env.VIMINIT = process.env.VIMINIT || 'set nocompatible | set esckeys';
-	}
+	if (env.VIMINIT === undefined) {
+		env.VIMINIT = process.env.VIMINIT ?? 'set nocompatible | set esckeys';
+	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main/process-manager/utils/envBuilder.ts` around lines 87 - 88, The
current falsy check on env.VIMINIT overwrites an explicitly empty string; change
the logic to only apply the default when env.VIMINIT is undefined (or null)
rather than falsy — e.g., check env.VIMINIT === undefined (or use nullish
coalescing) and then set env.VIMINIT from process.env.VIMINIT or the default
'set nocompatible | set esckeys'; update the assignment around env.VIMINIT and
process.env.VIMINIT to preserve an intentional empty value.
src/renderer/components/TerminalView.tsx-89-99 (1)

89-99: ⚠️ Potential issue | 🟡 Minor

Don't infer SSH failures from the message text.

The batching logic keys SSH precedence off message.startsWith('SSH '), but the .catch() path still forwards raw err.message. Rejected SSH launches like host-resolution/auth errors won't be treated as SSH failures and can be overwritten by a later local PTY error in the same 200ms batch window. Pass an explicit isSsh flag into notifySpawnFailure() instead of parsing the string.

💡 Suggested change
-		const spawnFailureLastMessageRef = useRef<string | null>(null);
+		const spawnFailureLastMessageRef = useRef<string | null>(null);
+		const spawnFailureWasSshRef = useRef(false);
...
-		const notifySpawnFailure = useCallback((message: string) => {
+		const notifySpawnFailure = useCallback((message: string, isSsh = false) => {
 			spawnFailureCountRef.current++;
-			if (
-				!spawnFailureLastMessageRef.current ||
-				message.startsWith('SSH ') ||
-				!spawnFailureLastMessageRef.current.startsWith('SSH ')
-			) {
+			if (
+				!spawnFailureLastMessageRef.current ||
+				isSsh ||
+				!spawnFailureWasSshRef.current
+			) {
 				spawnFailureLastMessageRef.current = message;
+				spawnFailureWasSshRef.current = isSsh;
 			}
...
 				const count = spawnFailureCountRef.current;
 				const lastMessage = spawnFailureLastMessageRef.current ?? message;
 				spawnFailureCountRef.current = 0;
 				spawnFailureLastMessageRef.current = null;
+				spawnFailureWasSshRef.current = false;
 				spawnFailureTimerRef.current = null;
...
-							notifySpawnFailure(
+							notifySpawnFailure(
 								effectiveSshConfig?.enabled
 									? 'SSH terminal could not be started. Check that the SSH remote is enabled and reachable.'
-									: 'The shell process could not be started. Check system PTY availability.'
+									: 'The shell process could not be started. Check system PTY availability.',
+								!!effectiveSshConfig?.enabled
 							);
...
-						notifySpawnFailure(
-							err instanceof Error ? err.message : 'An unexpected error occurred.'
-						);
+						notifySpawnFailure(
+							err instanceof Error ? err.message : 'An unexpected error occurred.',
+							!!effectiveSshConfig?.enabled
+						);

Also applies to: 203-205, 219-221

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/renderer/components/TerminalView.tsx` around lines 89 - 99,
notifySpawnFailure currently infers SSH failures by checking
message.startsWith('SSH ') which is brittle; change the signature of
notifySpawnFailure to accept an explicit isSsh boolean (e.g.,
notifySpawnFailure(message: string, isSsh: boolean)) and update its batching
logic to use isSsh instead of parsing the message when deciding SSH precedence
for spawnFailureLastMessageRef/current count. Then update all callers (including
the catch paths that forward err.message and the other usages noted around the
later call sites) to pass true for known SSH-related errors and false for local
PTY errors so SSH failures cannot be overwritten by non-SSH messages during the
batch window. Ensure spawnFailureLastMessageRef and spawnFailureCountRef
behavior remains unchanged apart from using the new flag.
src/renderer/hooks/remote/useRemoteHandlers.ts-378-395 (1)

378-395: ⚠️ Potential issue | 🟡 Minor

Report appendSystemPrompt enrichment failures instead of ignoring them.

Both new catch {} blocks drop unexpected gitService.getStatus() / history.getFilePath() failures completely. If these lookups are meant to be best-effort, please still captureException(...) and continue so prompt-context regressions stay visible.

Suggested fix
 					if (session.isGitRepo) {
 						try {
 							const status = await gitService.getStatus(session.cwd);
 							gitBranch = status.branch;
-						} catch {
-							// Ignore git errors
+						} catch (error) {
+							captureException(error, {
+								extra: {
+									cwd: session.cwd,
+									sessionId: session.id,
+									sessionName: session.name,
+									operation: 'git-status-for-remote-system-prompt',
+								},
+							});
 						}
 					}
@@
 					if (!isSshSession) {
 						try {
 							historyFilePath = (await window.maestro.history.getFilePath(session.id)) || undefined;
-						} catch {
-							// Ignore history errors
+						} catch (error) {
+							captureException(error, {
+								extra: {
+									sessionId: session.id,
+									sessionName: session.name,
+									operation: 'history-file-path-for-remote-system-prompt',
+								},
+							});
 						}
 					}

As per coding guidelines "Do not silently swallow errors. Let unhandled exceptions bubble up to Sentry for error tracking in production." and "Use Sentry utilities (captureException, captureMessage) from src/utils/sentry.ts for explicit error reporting with context."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/renderer/hooks/remote/useRemoteHandlers.ts` around lines 378 - 395, The
two empty catch blocks for gitService.getStatus(session.cwd) and
window.maestro.history.getFilePath(session.id) should report exceptions to
Sentry instead of silently swallowing them: import and call the project's Sentry
helper (captureException from src/utils/sentry.ts) inside each catch with the
caught error and contextual metadata (e.g., session.id, session.cwd,
session.isGitRepo, isSshSession, and which operation: "git.getStatus" or
"history.getFilePath"), then continue to treat these as best-effort lookups so
gitBranch and historyFilePath behavior remains unchanged.
src/main/cue/triggers/cue-trigger-source-registry.ts-47-48 (1)

47-48: ⚠️ Potential issue | 🟡 Minor

Fix mojibake in comment text.

The ��� sequence looks like an encoding artifact and should be replaced with an em dash for maintainability.

Proposed fix
-			// These are not timer/watcher-driven ��� the runtime handles them
+			// These are not timer/watcher-driven — the runtime handles them
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main/cue/triggers/cue-trigger-source-registry.ts` around lines 47 - 48,
Replace the mojibake "���" in the inline comment inside
cue-trigger-source-registry (near the top comment above the registry logic) with
a normal dash or em dash so the comment reads: "These are not
timer/watcher-driven — the runtime handles them directly via the completion
service / startup loop / CLI command." Locate the comment in
cue-trigger-source-registry.ts (the block describing non-timer/watcher triggers)
and update the characters to a standard ASCII or Unicode dash to remove the
encoding artifact.
src/main/cue/cue-engine.ts-361-377 (1)

361-377: ⚠️ Potential issue | 🟡 Minor

Normalize empty prompt overrides before dispatch.

promptOverride is treated as “absent” here via a truthy check, but Line 377 still forwards it to dispatchSubscription(), where the dispatch layer uses nullish semantics. That makes '' suppress the configured prompt while omitting cliPrompt from the synthetic event and log metadata. Normalize once and reuse the normalized value for the event, log line, and dispatch call.

Suggested fix
 	triggerSubscription(subscriptionName: string, promptOverride?: string): boolean {
+		const normalizedPromptOverride = promptOverride === '' ? undefined : promptOverride;
+
 		for (const [sessionId, state] of this.registry.snapshot()) {
 			for (const sub of state.config.subscriptions) {
 				if (sub.name !== subscriptionName) continue;
 				if (sub.agent_id && sub.agent_id !== sessionId) continue;
 
 				const event = createCueEvent(sub.event, sub.name, {
 					manual: true,
-					...(promptOverride ? { cliPrompt: promptOverride } : {}),
+					...(normalizedPromptOverride !== undefined
+						? { cliPrompt: normalizedPromptOverride }
+						: {}),
 				});
 
 				this.deps.onLog(
 					'cue',
-					`[CUE] "${sub.name}" manually triggered${promptOverride ? ' (with prompt override)' : ''}`
+					`[CUE] "${sub.name}" manually triggered${normalizedPromptOverride !== undefined ? ' (with prompt override)' : ''}`
 				);
 				state.lastTriggered = event.timestamp;
 				this.dispatchService.dispatchSubscription(
 					sessionId,
 					sub,
 					event,
 					'manual',
 					undefined,
-					promptOverride
+					normalizedPromptOverride
 				);
 				return true;
 			}
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main/cue/cue-engine.ts` around lines 361 - 377, The code uses
promptOverride with a truthy check but still forwards raw promptOverride to
createCueEvent, deps.onLog, and dispatchSubscription, causing an empty string to
incorrectly suppress configured prompts; normalize promptOverride once (e.g.
convert empty/whitespace-only string to undefined) into a single variable (e.g.
normalizedPrompt) and then use that variable when calling
createCueEvent(sub.event, sub.name, ...), when building the log message for
this.deps.onLog('cue', ...), and when passing the prompt argument to
this.dispatchService.dispatchSubscription(sessionId, sub, event, 'manual',
undefined, normalizedPrompt) so all three consumers see the same normalized
absence/presence semantics.
src/web/components/FilePreview.tsx-36-49 (1)

36-49: ⚠️ Potential issue | 🟡 Minor

Reset truncation state when the previewed file changes.

After one large file sets showFullContent to true, every later large file renders fully because that state survives prop changes. Each file should start truncated again.

💡 Suggested fix
-import React, { useMemo, useState, useCallback } from 'react';
+import React, { useEffect, useMemo, useState, useCallback } from 'react';
@@
 		const { theme, isLight } = useTheme();
 		const colors = theme.colors;
 		const [showFullContent, setShowFullContent] = useState(false);
+
+		useEffect(() => {
+			setShowFullContent(false);
+		}, [file.path]);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/web/components/FilePreview.tsx` around lines 36-49, FilePreview
currently preserves the showFullContent state across different files causing
subsequent large files to render fully; add an effect in the FilePreview
component that resets showFullContent to false when the previewed file changes
(e.g., watch a stable identifier like file.path or file.content) by calling
setShowFullContent(false) in a useEffect with that file dependency so each new
file starts truncated; reference the showFullContent/setShowFullContent state
pair and the displayContent useMemo to place the effect near those existing
hooks.
src/main/web-server/handlers/messageHandlers.ts-496-517 (1)

496-517: ⚠️ Potential issue | 🟡 Minor

The 10MB image limit is measured against the wrong quantity.

Line 511 compares img.length, but img is a base64 Data URL string, not decoded image bytes. Because base64 inflates payload size by roughly a third, this rejects files well below 10MB while reporting a 10MB cap. Strip the prefix and validate the decoded byte length instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main/web-server/handlers/messageHandlers.ts` around lines 496-517, the
image-size check in the message handler uses the base64 Data URL string length
(in messageHandlers.ts, inside the images validation block around
validatedImages/MAX_IMAGE_SIZE) which inflates size; instead strip the DataURL
prefix (verify it contains ";base64,"), extract the base64 payload, decode it
and measure the decoded byte length (e.g., Buffer.from(base64Payload,
'base64').length) and compare that to MAX_IMAGE_SIZE; update the error path (the
existing Image too large message) to trigger only when the decoded byte length
exceeds the limit and keep the other format checks (startsWith('data:image/')
and max count) intact.
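A minimal sketch of the decoded-length check, assuming Node's `Buffer` is available in the main process; the helper name is illustrative, and only `MAX_IMAGE_SIZE` comes from the review comment:

```typescript
const MAX_IMAGE_SIZE = 10 * 1024 * 1024; // 10MB cap on decoded bytes

// Accepts only data:image/* Data URLs whose decoded payload fits the cap.
function isImageWithinLimit(dataUrl: string): boolean {
	if (!dataUrl.startsWith('data:image/')) return false;
	const marker = ';base64,';
	const idx = dataUrl.indexOf(marker);
	if (idx === -1) return false;
	const payload = dataUrl.slice(idx + marker.length);
	// Buffer.from decodes the base64, so .length counts real image bytes
	// instead of the ~33%-larger Data URL string length.
	return Buffer.from(payload, 'base64').length <= MAX_IMAGE_SIZE;
}
```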

- Rename stderrBuffer → outputBuffer since it now captures both streams
- Detach stderr/stdout data handlers once URL is extracted to avoid
  unbounded buffer growth and regex re-evaluation on long-lived tunnels

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
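The detach-on-match pattern this commit describes can be sketched as follows; the function name, URL regex, and command are assumptions based on the PR description, not the project's actual tunnel-manager code:

```typescript
import { spawn } from 'node:child_process';

// Buffers stdout and stderr into one outputBuffer until the tunnel URL
// appears, then removes both data handlers so a long-lived process does
// not grow the buffer or re-run the regex on every chunk.
function waitForTunnelUrl(command: string, args: string[]): Promise<string> {
	return new Promise((resolve, reject) => {
		const child = spawn(command, args);
		let outputBuffer = '';
		const urlPattern = /https:\/\/[a-z0-9-]+\.trycloudflare\.com/;

		const onData = (chunk: Buffer) => {
			outputBuffer += chunk.toString();
			const match = outputBuffer.match(urlPattern);
			if (match) {
				// Detach once the URL is found: the buffer stops growing.
				child.stdout?.off('data', onData);
				child.stderr?.off('data', onData);
				resolve(match[0]);
			}
		};
		// Listen on both streams: cloudflared historically printed the URL
		// to stderr, with stdout watched as a fallback.
		child.stdout?.on('data', onData);
		child.stderr?.on('data', onData);
		child.on('error', reject);
	});
}
```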
@chr1syy chr1syy added the ready to merge This PR is ready to merge label Apr 13, 2026
@pedramamini pedramamini merged commit d8b8d72 into rc Apr 16, 2026
4 checks passed