fix: --from-prompt timeout handling with actionable error output (#4)
Merged
1a35e1 merged 2 commits into 1a35e1:main — Feb 23, 2026
Conversation
…rrors
The root problem: callOpenAI() and callAnthropic() in src/lib/ai.ts call
fetch() with no timeout. The OpenAI path uses the web_search_preview tool
which can take 30-60 s even on a healthy connection; any network hiccup or
provider slowdown causes the spinner to hang indefinitely.
Changes:
src/lib/ai.ts
- Added fetchWithTimeout() helper that wraps every AI fetch in an
AbortController. Deadlines are set per-vendor:
OpenAI 90 s (web_search_preview adds latency)
Anthropic 60 s
- AbortError is caught and rethrown as a structured message that names the
vendor, the elapsed timeout, three likely causes, and the suggestion to
retry or switch vendors with --vendor.
- Applied to all four call sites: callOpenAI, callAnthropic,
callOpenAIReply, callAnthropicReply.
src/commands/interests/create.tsx
src/commands/interests/update.tsx
- Spinner label for --from-prompt now includes the max expected wait time
so operators know the long wait is normal and not a hang:
'Generating interest via openai... (may take up to 90s with web search)'
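As a rough sketch of what the `fetchWithTimeout()` helper described above might look like — the exact signature, constant names, and message wording are assumptions, not the PR's actual code:

```typescript
// Hypothetical sketch of fetchWithTimeout(); names and message wording
// are assumptions based on the PR description, not the real diff.
export const OPENAI_TIMEOUT_MS = 90_000; // web_search_preview adds latency
export const ANTHROPIC_TIMEOUT_MS = 60_000;

export function timeoutMessage(vendorLabel: string, timeoutMs: number): string {
  return [
    `${vendorLabel} request timed out after ${timeoutMs / 1000}s.`,
    'Likely causes: network issues, provider slowdown, or rate limiting.',
    'Retry the command, or switch vendors with --vendor.',
  ].join('\n');
}

export async function fetchWithTimeout(
  url: string,
  init: RequestInit,
  timeoutMs: number,
  vendorLabel: string,
): Promise<Response> {
  // Abort the request once the per-vendor deadline elapses.
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(url, { ...init, signal: controller.signal });
    clearTimeout(timer);
    return res;
  } catch (err) {
    clearTimeout(timer);
    if ((err as { name?: string })?.name === 'AbortError') {
      // Rethrow the abort as a structured, actionable operator message.
      throw new Error(timeoutMessage(vendorLabel, timeoutMs));
    }
    throw err;
  }
}
```

Checking `err.name === 'AbortError'` rather than `instanceof` is deliberate: the abort error is a `DOMException`, which is not reliably an `Error` subclass across runtimes.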
Pull request overview
Adds request deadlines and clearer operator feedback for AI-backed --from-prompt flows so the CLI doesn’t hang indefinitely on slow/stalled vendor responses.
Changes:
- Introduces a `fetchWithTimeout` wrapper in `src/lib/ai.ts` and applies it to OpenAI/Anthropic interest generation and interactive reply generation calls.
- Adds per-vendor timeout constants (OpenAI 90 s, Anthropic 60 s).
- Updates the `interests create/update --from-prompt` spinner label to indicate the expected maximum wait time.
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| src/lib/ai.ts | Adds a timeout wrapper around AI fetch() calls and applies it to all vendor request sites. |
| src/commands/interests/create.tsx | Updates the --from-prompt spinner label to include an expected max wait time. |
| src/commands/interests/update.tsx | Updates the --from-prompt spinner label to include an expected max wait time. |
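A plausible sketch of the spinner-label computation in create.tsx/update.tsx — the constant names and helper are assumptions, not the files' actual contents:

```typescript
// Hypothetical spinner label; the timeout constants and Vendor type
// are assumed names, not copied from the PR.
const OPENAI_TIMEOUT_MS = 90_000;
const ANTHROPIC_TIMEOUT_MS = 60_000;

type Vendor = 'openai' | 'anthropic';

function spinnerLabel(vendor: Vendor): string {
  const timeoutMs = vendor === 'openai' ? OPENAI_TIMEOUT_MS : ANTHROPIC_TIMEOUT_MS;
  // Only the OpenAI path uses web_search_preview, so only its label
  // mentions web search.
  const qualifier = vendor === 'openai' ? ' with web search' : '';
  return `Generating interest via ${vendor}... (may take up to ${timeoutMs / 1000}s${qualifier})`;
}
```

Deriving the label from the exported constants keeps the operator-facing text in sync if a deadline is ever tuned.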
- `fetchWithTimeout` now accepts a `processResponse` callback that wraps both the `fetch()` call and the body consumption (`res.json()`). The AbortController timer stays active until `processResponse` resolves or rejects, ensuring a stalled body download is caught by the same deadline as a stalled connection. `clearTimeout` moved to `finally` so the timer is always cleaned up.
- Timeout error message is now vendor-aware: the OpenAI web_search bullet is only appended when `vendorLabel` includes 'openai', avoiding misleading output when the failing vendor is Anthropic.
- `OPENAI_TIMEOUT_MS` and `ANTHROPIC_TIMEOUT_MS` are now exported from `ai.ts`. Spinner labels in create.tsx and update.tsx import these constants instead of hard-coding '90'/'60', and compute the vendor via a single `getVendor()` call. The 'with web search' qualifier in the spinner is now conditional on `vendor === 'openai'`, so Anthropic labels no longer mention web search.
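The follow-up described above might look roughly like this — a sketch under assumed names, not the actual diff:

```typescript
// Hypothetical follow-up sketch: the deadline now covers both the
// connection and the body download, and the message is vendor-aware.
export async function fetchWithTimeout<T>(
  url: string,
  init: RequestInit,
  timeoutMs: number,
  vendorLabel: string,
  processResponse: (res: Response) => Promise<T>,
): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(url, { ...init, signal: controller.signal });
    // Body consumption (e.g. res.json()) runs under the same deadline:
    // if the download stalls, the abort rejects this promise too.
    return await processResponse(res);
  } catch (err) {
    if ((err as { name?: string })?.name === 'AbortError') {
      const causes = ['- network issues', '- provider slowdown', '- rate limiting'];
      if (vendorLabel.includes('openai')) {
        // Only mention web search when the failing vendor actually uses it.
        causes.push('- web_search_preview latency');
      }
      throw new Error(
        `${vendorLabel} request timed out after ${timeoutMs / 1000}s. Likely causes:\n` +
          causes.join('\n') +
          '\nRetry, or switch vendors with --vendor.',
      );
    }
    throw err;
  } finally {
    clearTimeout(timer); // timer is always cleaned up
  }
}
```

Aborting the shared `signal` rejects an in-flight `res.json()` as well as a pending `fetch()`, which is what lets one timer guard both phases.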
Contributor (Author)
@1a35e1 all threads resolved — ready for re-review. Summary of changes pushed:
Bug

`sonar interests create --from-prompt` and `sonar interests update --from-prompt` call OpenAI/Anthropic with no request timeout. The OpenAI path uses the `web_search_preview` tool, which can legitimately take 30–60 s on a healthy connection. Any network hiccup, provider slowdown, or rate-limit event causes the spinner to hang indefinitely with no feedback — the process must be killed manually. The same issue affects `callOpenAIReply`/`callAnthropicReply` (used in interactive mode).

Root cause

All four `fetch()` call sites in `src/lib/ai.ts` pass no `signal`, so requests have no deadline.

Fix

`src/lib/ai.ts`

Introduced `fetchWithTimeout(url, init, timeoutMs, vendorLabel)` — a small wrapper that attaches an `AbortController` to every AI request. On `AbortError` it throws a structured message that names the vendor, the elapsed timeout, the likely causes, and suggests retrying or switching vendors with `--vendor`.

Per-vendor deadlines are generous to avoid false positives:
- OpenAI 90 s (`web_search_preview` adds significant latency)
- Anthropic 60 s

Applied to all four call sites: `callOpenAI`, `callAnthropic`, `callOpenAIReply`, `callAnthropicReply`.

`src/commands/interests/create.tsx` + `update.tsx`

Spinner label for `--from-prompt` now includes the expected max wait time so operators can distinguish a slow-but-normal call from a genuine hang.

Verification

Reproduce by pointing the fetch at a server that stalls:

The timeout values (`OPENAI_TIMEOUT_MS`, `ANTHROPIC_TIMEOUT_MS`) are named constants in `ai.ts` for easy adjustment.
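The reproduction snippet itself is elided above; a minimal stand-in could look like the following. Everything here is assumed, including the `OPENAI_BASE_URL` override, which may not match how the real CLI configures its endpoint:

```typescript
// Hypothetical stall server for reproducing the hang: it accepts
// connections but never responds, so any fetch against it blocks
// until the client's own deadline fires.
import http from 'node:http';

export function startStallServer(port = 0): Promise<http.Server> {
  const server = http.createServer(() => {
    // Intentionally never call res.end(): the request hangs forever.
  });
  return new Promise(resolve => server.listen(port, () => resolve(server)));
}

// Usage (assumed env override; the real CLI may differ):
//   node stall-server.js   # listening on e.g. http://127.0.0.1:8080
//   OPENAI_BASE_URL=http://127.0.0.1:8080 sonar interests create --from-prompt "..."
// Before the fix the spinner hangs indefinitely; after it, the command
// fails once OPENAI_TIMEOUT_MS elapses, with the structured timeout error.
```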