fix(providers): unify OPENAI_TIMEOUT_MS + AGENTMEMORY_LLM_TIMEOUT_MS (#446)#453
Conversation
v0.9.17 shipped OPENAI_TIMEOUT_MS scoped to the OpenAI LLM provider (inline AbortController, default 60s). PR #379 then shipped AGENTMEMORY_LLM_TIMEOUT_MS in the shared src/providers/_fetch.ts helper used by every other raw-fetch provider (Gemini, OpenRouter, MiniMax, and the OpenAI/Cohere/Voyage/OpenRouter embedding providers). Two env vars, same value, different names: ops confusion.

Unify on the global name while keeping back-compat:

1. OPENAI_TIMEOUT_MS, the OpenAI-scoped alias, takes precedence
2. AGENTMEMORY_LLM_TIMEOUT_MS is the global fallback across providers
3. 60 000 ms default

The OpenAI LLM provider now routes through the shared fetchWithTimeout helper, dropping ~30 lines of duplicate AbortController + clearTimeout plumbing. Existing users with OPENAI_TIMEOUT_MS set keep the exact v0.9.17 behaviour; new users setting AGENTMEMORY_LLM_TIMEOUT_MS get the OpenAI LLM path covered too. README and .env.example now document AGENTMEMORY_LLM_TIMEOUT_MS as the canonical name and note OPENAI_TIMEOUT_MS as the OpenAI-scoped alias. 4 new precedence tests in test/fetch-timeout.test.ts cover all four env-var combinations.
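The precedence rule above can be sketched as follows. This is a minimal sketch: the helper and constant names (resolveTimeoutMs, DEFAULT_TIMEOUT_MS) are assumptions for illustration, not the PR's actual identifiers.

```typescript
// Sketch of the timeout precedence: OpenAI-scoped alias first, then the
// global fallback, then the 60s default. Names are illustrative; the real
// identifiers live in src/providers/openai.ts.
const DEFAULT_TIMEOUT_MS = 60_000;

function resolveTimeoutMs(
  env: Record<string, string | undefined> = process.env,
): number {
  // The OpenAI-scoped alias is checked before the global fallback.
  for (const name of ["OPENAI_TIMEOUT_MS", "AGENTMEMORY_LLM_TIMEOUT_MS"]) {
    const raw = env[name]?.trim();
    // Strict digits-only parse; malformed values fall through to the default.
    if (raw && /^\d+$/.test(raw) && Number(raw) > 0) return Number(raw);
  }
  return DEFAULT_TIMEOUT_MS;
}
```

With both vars set, the OpenAI-scoped name wins; a malformed value is skipped rather than partially parsed.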
No actionable comments were generated in the recent review. 🎉 Files selected for processing: 2; files skipped from review as similar to previous changes: 1.

Walkthrough
This PR unifies LLM provider timeout configuration.

Changes
OpenAI provider timeout unification with precedence
Sequence Diagram(s)

sequenceDiagram
participant Constructor
participant resolveTimeout
participant Environment
participant call
participant fetchWithTimeout
Constructor->>resolveTimeout: initialize timeoutMs
resolveTimeout->>Environment: read OPENAI_TIMEOUT_MS
Environment-->>resolveTimeout: value or unset
resolveTimeout->>Environment: read AGENTMEMORY_LLM_TIMEOUT_MS
Environment-->>resolveTimeout: value or unset
resolveTimeout-->>Constructor: return timeoutMs (or default 60000)
call->>fetchWithTimeout: POST request with timeoutMs
fetchWithTimeout-->>call: response or AbortError
call->>call: catch AbortError, rethrow timeout error mentioning env vars
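The call path on the right of the diagram can be sketched as a shared helper like the one below. This is an assumption-level sketch of what src/providers/_fetch.ts does, not its actual implementation; it assumes Node 18+ where fetch and AbortController are globals.

```typescript
// Sketch of a shared fetchWithTimeout: one AbortController + clearTimeout
// path owned by the helper, so callers see either the response or an
// AbortError. The real src/providers/_fetch.ts may differ in detail.
async function fetchWithTimeout(
  url: string,
  init: Record<string, unknown>,
  timeoutMs: number,
): Promise<any> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await (globalThis as any).fetch(url, {
      ...init,
      signal: controller.signal,
    });
  } finally {
    // Single cleanup path: this is the per-provider plumbing the PR removes.
    clearTimeout(timer);
  }
}
```

The caller catches the AbortError and rethrows a timeout error that names the env vars, as the diagram's last step shows.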
Estimated code review effort: 🎯 3 (Moderate), ⏱️ ~20 minutes

Pre-merge checks: ✅ 4 passed, ❌ 1 failed (1 warning)

ESLint skipped: no ESLint configuration detected in root package.json.
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/providers/openai.ts (1)
116-122: ⚡ Quick win: remove WHAT-style inline comments in the changed blocks.
These comments describe implementation mechanics rather than intent; prefer clear naming/docblocks and keep call-site comments minimal.
As per coding guidelines, "Do not use code comments explaining WHAT — use clear naming instead".
Also applies to: 167-171
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/providers/openai.ts` around lines 116 - 122, Remove the WHAT-style inline comments within the fetch block around the fetchWithTimeout call and the OPENAI timeout logic; instead add a short docblock at the top of the providers/openai.ts module (or rename local helpers) that documents the intent and precedence: that fetchWithTimeout owns AbortController/cleanup and that OPENAI_TIMEOUT_MS falls back to AGENTMEMORY_LLM_TIMEOUT_MS and then 60s. Update or rename any local helper variables/functions if needed to make the mechanics obvious (e.g., fetchWithTimeout, OPENAI_TIMEOUT_MS, AGENTMEMORY_LLM_TIMEOUT_MS) and delete the inline explanatory comment near the `let response: Response;` declaration (and the similar block at lines 167-171) so call-site comments are minimal and intent is captured by naming/docblock.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/providers/openai.ts`:
- Around line 184-188: The current parsePositiveInt uses parseInt which allows
trailing non-digits; change parsePositiveInt to validate the raw string strictly
(e.g., trim it and ensure it matches a digits-only pattern like /^\d+$/) before
converting, then parse with Number or parseInt and return the positive integer
or undefined; update the logic in parsePositiveInt to reject malformed values
(e.g., "30ms") so OPENAI_TIMEOUT_MS and similar envs only accept strictly
numeric positive integers.
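The strict parser the comment asks for could look like this. A sketch only: the real parsePositiveInt in src/providers/openai.ts may differ in detail.

```typescript
// Strict positive-integer env parser, per the review suggestion above.
// Returns undefined for anything that is not pure digits after trimming,
// so malformed values fall back to the next env var or the 60s default.
function parsePositiveInt(raw: string | undefined): number | undefined {
  if (raw === undefined) return undefined;
  const trimmed = raw.trim(); // " 30 " is still accepted
  if (!/^\d+$/.test(trimmed)) return undefined; // rejects "30ms", "-30", "1_000"
  const n = Number(trimmed);
  return n > 0 ? n : undefined; // rejects "0"
}
```

Under this version, "30", " 30 " parse to 30, while "30ms", "1_000", "60s", "-30", and "0" all return undefined instead of a misleading number.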
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 3b806b87-505f-41e3-827f-8458ad2706ef
📒 Files selected for processing (4)
.env.example
README.md
src/providers/openai.ts
test/fetch-timeout.test.ts
CodeRabbit caught parseInt("30ms", 10) silently returning 30 in the
timeout-resolve path. Real users hitting this would think they had bound
the call to 30 ms, when a strict parse should have rejected the value outright.
parsePositiveInt now rejects anything that isn't pure digits via
/^\d+$/ (after trim). parseInt's lenience on trailing units /
underscores / signs is gone — those fall back to the 60s default
instead of masquerading as an aggressive bound.
New regression test covers "30ms", "1_000", "60s", "30abc", "-30",
"0". Whitespace padding (e.g. " 30 ") is still accepted — that's
normal env-var handling.
992/992 tests pass on the worktree.
Closes #446.
Summary
v0.9.17 shipped OPENAI_TIMEOUT_MS scoped to the OpenAI LLM provider (inline AbortController, default 60s). PR #379 then shipped AGENTMEMORY_LLM_TIMEOUT_MS in the shared src/providers/_fetch.ts helper used by every other raw-fetch provider (Gemini, OpenRouter, MiniMax, OpenAI/Cohere/Voyage/OpenRouter embedding).

Two env vars with the same value but different names: ops confusion. This PR unifies them under the global name while preserving exact v0.9.17 behaviour for existing users.
Resolution
The OpenAI LLM provider now resolves its outbound-fetch timeout in this order:
1. OPENAI_TIMEOUT_MS: OpenAI-scoped alias, takes precedence (back-compat)
2. AGENTMEMORY_LLM_TIMEOUT_MS: global fallback across providers
3. 60_000 ms default

OpenAIProvider.call() now routes through the shared fetchWithTimeout helper from src/providers/_fetch.ts, dropping ~30 lines of duplicate AbortController + clearTimeout plumbing. The shared helper already owns the abort + cleanup path used by every other raw-fetch provider.

Behaviour
- Existing users with OPENAI_TIMEOUT_MS=... keep the exact v0.9.17 behaviour.
- New users setting AGENTMEMORY_LLM_TIMEOUT_MS=... get the OpenAI LLM path covered too.
- If both are set, OPENAI_TIMEOUT_MS wins (locked by an explicit precedence test).

Docs
- .env.example documents AGENTMEMORY_LLM_TIMEOUT_MS as the canonical name.
- README.md keeps a one-line note that OPENAI_TIMEOUT_MS is the OpenAI-scoped alias and recommends the global form for new configs.

Files
- src/providers/openai.ts: refactor to use fetchWithTimeout, add precedence resolver, update JSDoc.
- test/fetch-timeout.test.ts: 4 new precedence tests under "OpenAIProvider timeout env precedence (#446)".
- README.md: clarify both env vars and the precedence rule.
- .env.example: document AGENTMEMORY_LLM_TIMEOUT_MS as the canonical name.

Test plan
- npm test: 991 tests pass (was 987 + 4 new).
- npm run build: clean.
- The four new tests cover every env-var combination (OPENAI_TIMEOUT_MS alone, AGENTMEMORY_LLM_TIMEOUT_MS alone, both set, neither set).
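Under the documented precedence, a config that pins OpenAI to a tighter bound while everything else uses the global value could look like this hypothetical .env (the variable names are from this PR; the numbers are example values only):

```shell
# Canonical global timeout for all raw-fetch providers (ms)
AGENTMEMORY_LLM_TIMEOUT_MS=60000
# OpenAI-scoped alias; wins over the global value for OpenAI LLM calls
OPENAI_TIMEOUT_MS=30000
```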