fix(integrations): fall back to default backend when api_url points at local AI (#51, #80, #7Z) (#1630)
…solver
Introduce two helpers used by the integrations client to stop routing
backend-proxied requests at a user-set local-AI endpoint:
* `looks_like_local_ai_endpoint(url)` — tight heuristic. True when the
host is a loopback name (127.0.0.1 / localhost / ::1 / 0.0.0.0) or
when the path explicitly names the OpenAI-style chat-completions
endpoint (/v1/chat/completions or /v1/completions). Bare /v1 is NOT
matched on purpose — many real self-hosted backends use it as a
version prefix.
* `effective_integrations_api_url(api_url)` — same resolution chain as
`effective_api_url` but skips the user override when it looks like a
local-AI URL, falling through to env / default backend. Emits one
`warn!` per process via `std::sync::Once` so the diagnostic shows up
in core logs without spamming on every request.
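A minimal std-only sketch of the two helpers described above (the shipped code parses with the `url` crate and logs via `warn!`; the `OPENHUMAN_API_URL` env var name and the default backend URL below are placeholders, not the real configuration keys):

```rust
use std::sync::Once;

/// Sketch of the heuristic: loopback host, or an explicit OpenAI-style
/// chat-completions path. Bare `/v1` deliberately does not match.
fn looks_like_local_ai_endpoint(url: &str) -> bool {
    let trimmed = url.trim();
    if trimmed.ends_with("/v1/chat/completions") || trimmed.ends_with("/v1/completions") {
        return true;
    }
    // Crude host extraction; the real implementation uses `url::Url::parse`.
    let host_port = trimmed
        .trim_start_matches("http://")
        .trim_start_matches("https://")
        .split('/')
        .next()
        .unwrap_or("");
    let host = if let Some(rest) = host_port.strip_prefix('[') {
        rest.split(']').next().unwrap_or("") // bracketed IPv6, e.g. [::1]:8080
    } else {
        host_port.split(':').next().unwrap_or(host_port)
    };
    matches!(host, "127.0.0.1" | "localhost" | "0.0.0.0" | "::1")
}

/// Sketch of the resolver: skip a local-AI override, warn once, fall
/// through to env / default backend.
fn effective_integrations_api_url(api_url: Option<&str>) -> String {
    static WARN_ONCE: Once = Once::new();
    if let Some(url) = api_url {
        if !looks_like_local_ai_endpoint(url) {
            return url.to_string();
        }
        WARN_ONCE.call_once(|| {
            eprintln!("warn: api_url looks like a local-AI endpoint; integrations fall back to the default backend");
        });
    }
    std::env::var("OPENHUMAN_API_URL")
        .unwrap_or_else(|_| "https://api.backend.example".to_string())
}
```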
Why: `config.api_url` doubles as the chat-completions base AND the
backend base. Users pointing it at Ollama / vLLM had every
`/agent-integrations/composio/*` request 404 against their local LLM,
flooding Sentry (cluster OPENHUMAN-TAURI-51 / -80 / -7Z) and silently
breaking Composio for them. The integrations resolver lets chat keep
the override while integrations fall back to the hosted backend.
Tests cover positive/negative classification (loopback hosts incl. IPv6,
chat-completions paths, real backends, OpenAI public API, garbage input)
and fallback behaviour (local override skipped, env wins when set,
real backends respected, agrees with `effective_api_url` when there is
no override).
Refs: OPENHUMAN-TAURI-51, OPENHUMAN-TAURI-80, OPENHUMAN-TAURI-7Z
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…inyhumansai#51, tinyhumansai#80, #7Z)

Switch `IntegrationClient::build_client` from `effective_api_url` to `effective_integrations_api_url` so users who set `config.api_url` to a local Ollama / vLLM endpoint don't have every `/agent-integrations/*` request concatenated onto that local URL (which only knows about chat-completions and 404s every other path).

Sentry impact:
* OPENHUMAN-TAURI-51 — 11 events: `GET http://127.0.0.1:11434/v1/agent-integrations/composio/toolkits 404`
* OPENHUMAN-TAURI-80 — 1 event: `GET http://127.0.0.1:8080/v1/chat/completions/agent-integrations/composio/connections 404`
* OPENHUMAN-TAURI-7Z — 1 event: same shape via `llm_provider.api_error: OpenHuman API error (404 Not Found)` with the Ollama 404 body.

Beyond the Sentry flood, this also restores actual Composio / channels / teams functionality for any user pointing chat at a local LLM — they were silently broken before this change.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
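The failure shape in those Sentry events is plain base+path concatenation; a toy illustration (this `join` helper is hypothetical, not the client's actual URL builder):

```rust
// Hypothetical illustration of how a backend path lands on a local-AI base
// URL when the override is reused as the integrations base.
fn join(base: &str, path: &str) -> String {
    format!("{}/{}", base.trim_end_matches('/'), path.trim_start_matches('/'))
}
```

With an Ollama base this reproduces the OPENHUMAN-TAURI-51 URL exactly, and with a vLLM chat-completions base it reproduces the double-concatenated OPENHUMAN-TAURI-80 shape.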
Walkthrough: Adds a local-AI detection heuristic and an integrations-specific API URL resolver that ignores user overrides pointing at local endpoints (with a one-time warning). Updates the integrations client to use the new resolver and extends unit tests for both heuristic and resolver behaviors.
graycyrus
left a comment
Review — PR #1630
Walkthrough
This PR fixes a real and actively-reported user pain point: anyone who sets config.api_url to a local Ollama or vLLM endpoint for chat completions had every backend-proxied integration request (Composio toolkits, channels, billing, teams, credentials, ...) concatenated onto that URL, producing 404s from the local LLM and a Sentry flood. The fix introduces looks_like_local_ai_endpoint, a carefully-scoped heuristic that detects loopback hosts and explicit chat-completions paths, then effective_integrations_api_url, which skips the user override when it looks like a local AI endpoint and falls through to env/default instead. The implementation is clean, well-documented, and comes with solid test coverage.
The core mechanic is correct and the Sentry cluster will stop firing for IntegrationClient callers once this ships.
Changes
| File | Summary |
|---|---|
| `src/api/config.rs` | New `looks_like_local_ai_endpoint` heuristic, `effective_integrations_api_url` resolver, `warn_integrations_url_fallback_once` one-shot logger, and 8 unit tests |
| `src/openhuman/integrations/client.rs` | `build_client` switched from `effective_api_url` to `effective_integrations_api_url` |
Actionable comments
[major] src/api/config.rs:74-75 — `contains` vs `ends_with` inconsistency in path check

The check for `/v1/chat/completions` uses `path.contains(...)` but `/v1/completions` uses `path.ends_with(...)`. A URL like `https://real-backend.example/audit/v1/chat/completions-logs` would match `contains` and be misclassified. Both arms should use `ends_with` for consistency:

```rust
// before
path.contains("/v1/chat/completions") || path.ends_with("/v1/completions")
// after
path.ends_with("/v1/chat/completions") || path.ends_with("/v1/completions")
```

[major] src/openhuman/integrations/client.rs:241-246 — doc comment still references old resolver

The rustdoc says `effective_api_url` but the implementation now uses `effective_integrations_api_url`. Should be updated to match.
[minor] Incomplete fix scope — 38+ other effective_api_url callers remain broken for local-AI users
The same class of bug exists across most non-integrations domains: billing/ops.rs, team/ops.rs, referral/ops.rs, webhooks/ops.rs, credentials/ops.rs (9 call sites), channels/controllers/ops.rs (9 call sites), voice/, socket/, app_state/, core/jsonrpc.rs, etc. The cleanest near-term fix would be to rename effective_integrations_api_url to effective_backend_api_url and make it the default for all non-chat callers. This PR makes things no worse (these were already broken), but a follow-up issue should be filed explicitly tracking these callers.
Nitpick
src/api/config.rs:51 — `trimmed` is computed and `url::Url::parse` receives `trimmed` correctly. The `url::Url::parse(trimmed)` line is fine as-is — ignore if no concern.
Verified / looks good
- `url::Host` typed matching handles IPv4-mapped IPv6 and `::1` correctly
- `std::sync::Once` one-shot warning prevents log spam
- No panic on malformed URLs (garbage input test confirms)
- `ENV_LOCK` mutex used consistently with existing test suite
- CI failures are all in `composio::ops::tests` — pre-existing, unrelated to this PR
- No secrets or PII in new log output
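The typed-matching point can be sketched with std types alone (the real check goes through the `url` crate's `Host`; this only illustrates the idea):

```rust
use std::net::IpAddr;

// String comparison against "127.0.0.1" misses IPv4-mapped IPv6 loopback
// (::ffff:127.0.0.1); typed matching does not.
fn is_loopback_ip(ip: IpAddr) -> bool {
    match ip {
        IpAddr::V4(v4) => v4.is_loopback(),
        IpAddr::V6(v6) => {
            v6.is_loopback() || v6.to_ipv4_mapped().map_or(false, |v4| v4.is_loopback())
        }
    }
}
```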
…nsai#1630 CR) graycyrus majors: 1. `looks_like_local_ai_endpoint` used `path.contains("/v1/chat/completions")` asymmetric with `path.ends_with("/v1/completions")`. A real backend URL like `/audit/v1/chat/completions-logs` would have been misclassified as local-AI and silently dropped. Both arms now use `ends_with`. 2. `build_client` rustdoc still referenced the old `effective_api_url` resolver - updated to `effective_integrations_api_url`. 3. Filed follow-up issue tinyhumansai#1663 tracking the 38+ other `effective_api_url` callers across non-integrations domains (billing, team, referral, webhooks, credentials, channels, voice). TODO note added in the resolver pointing at the issue. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
graycyrus review Round 1 addressed in
Open question: do you want the typed-
Extends `looks_like_local_ai_endpoint` to include `addr.is_private()` on the IPv4 match arm so LAN-hosted Ollama (e.g. 192.168.x.x, 10.x.x.x, 172.16.x.x) is correctly classified as local-AI, preventing integration requests from being routed at the local LLM and 404-ing. Also clarifies the `warn_integrations_url_fallback_once` doc comment to explicitly state the Once guard suppresses subsequent calls even when a different local-AI URL is used.

Test coverage: replaced the ambiguous 10.0.0.5 non-loopback test with a public IP (203.0.113.5) and a .example TLD; added new `looks_like_local_ai_matches_private_lan_hosts` covering all three RFC 1918 ranges. 21/21 lib tests pass.

Addresses @coderabbitai review on src/api/config.rs.
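The extended IPv4 arm reduces, in std terms, to something like the following (the helper name is hypothetical; the real check sits inside `looks_like_local_ai_endpoint`'s host match):

```rust
use std::net::Ipv4Addr;

// Loopback, unspecified (0.0.0.0), and RFC 1918 private ranges
// (10/8, 172.16/12, 192.168/16) now all count as potentially local hosts.
fn ipv4_counts_as_local(addr: Ipv4Addr) -> bool {
    addr.is_loopback() || addr.is_unspecified() || addr.is_private()
}
```

`Ipv4Addr::is_private` covers exactly the three RFC 1918 ranges, which is why 203.0.113.5 (TEST-NET-3) works as the new negative case.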
Deferred items (context for reviewers — no action needed on this PR)

Two minor items were identified during the review pass but are intentionally out of scope:
Neither item affects correctness or CI for this PR.
Loopback alone is too aggressive a local-AI signal: integration tests (e.g. composio/ops_tests.rs) bind mock backends on `127.0.0.1:<random port>` with no path, and were being silently rerouted to the production backend by `effective_integrations_api_url`, producing 401s in CI.

Tighten the heuristic so a loopback / private RFC 1918 host classifies as local-AI only when paired with an additional LLM signal — a known port (11434 Ollama, 8000 vLLM, 8080, 1234, 8888) or a `/v1` path. All real-world Sentry cases (OPENHUMAN-TAURI-51/-80/-7Z) include one of these signals, and ad-hoc test mocks include neither.

Promotes the chat-completions path check above the host check so `/v1/chat/completions` on any host (LAN, tunnel, public IP) still matches — preserves the existing non-loopback test assertion. Adds a regression test asserting bare loopback + random port is NOT classified as local-AI.

cargo test --lib: 6608 pass, 0 fail (including all 39 composio ops tests that were failing under the previous heuristic).
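Under the tightened rules, the extra-signal check for a loopback/private host might look like this (port list taken from the commit message; the helper name and its split from the host check are assumptions):

```rust
/// Known local-LLM ports: Ollama (11434), vLLM (8000), plus common
/// self-host defaults (8080, 1234, 8888).
const KNOWN_LOCAL_AI_PORTS: [u16; 5] = [11434, 8000, 8080, 1234, 8888];

/// A loopback/private host classifies as local-AI only when the URL also
/// carries an LLM signal: a known port or a `/v1` path.
fn has_llm_signal(port: Option<u16>, path: &str) -> bool {
    port.map_or(false, |p| KNOWN_LOCAL_AI_PORTS.contains(&p))
        || path == "/v1"
        || path.starts_with("/v1/")
}
```

A mock backend on `127.0.0.1:<random port>` with path `/` carries neither signal, so it keeps its override; the Ollama and vLLM Sentry cases carry both.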
Summary
- Users who set `config.api_url` to a local-AI endpoint (Ollama `:11434/v1`, vLLM `:8080/...`) had every backend integration request concatenated onto that URL, flooding Sentry with 404s from Ollama/vLLM and silently breaking Composio for them.
- `IntegrationClient::build_client`: if the override looks like a local-AI endpoint (loopback host or `/v1/chat/completions` path), fall through to env/default for the integrations base. Chat completions still use the override.

Problem
`effective_api_url` returns whatever the user set in `config.api_url`. Local-AI users set it to their Ollama / vLLM URL so chat completions hit the local model. But `IntegrationClient::build_client` reused the same URL as the base for ALL backend-proxied paths (composio, channels, teams, billing, …). Result: requests like `/agent-integrations/composio/toolkits` got concatenated onto the local-AI URL → 404 from the local LLM → `report_error` to Sentry → flood, plus silent UX breakage where Composio just didn't work for these users.

Sentry IDs in scope (~13 events total in 3d):
- `OPENHUMAN-TAURI-51` — 11 events: `GET http://127.0.0.1:11434/v1/agent-integrations/composio/toolkits 404`
- `OPENHUMAN-TAURI-80` — 1 event: `GET http://127.0.0.1:8080/v1/chat/completions/agent-integrations/composio/connections 404` (double-concat onto vLLM)
- `OPENHUMAN-TAURI-7Z` — 1 event: same shape via `llm_provider.api_error: OpenHuman API error (404 Not Found)` with the Ollama 404 page body

Solution
- `looks_like_local_ai_endpoint(url) -> bool` in `src/api/config.rs`. Tight heuristic: loopback host (`127.0.0.1` / `localhost` / `::1` / `0.0.0.0`) OR path explicitly names the OpenAI chat-completions endpoint (`/v1/chat/completions` / `/v1/completions`). Bare `/v1` is not matched — many self-hosted backends use it as a legit version prefix and over-matching would silently break real users.
- `effective_integrations_api_url(api_url) -> String` — same resolution chain as `effective_api_url`, but the user override is skipped when it matches `looks_like_local_ai_endpoint`, falling through to env / default backend. One-shot `warn!` (via `std::sync::Once`) when the fallback fires so users see the diagnostic in core logs.
- `IntegrationClient::build_client` now uses the new helper. `effective_api_url` semantics are unchanged — local-AI chat needs the override to keep working.
config.api_urlinto a separatebackend_api_urlfield. That's the "correct" architectural fix but it's a schema migration touching the config UI, dotfile reader, and migration story. The current guard is narrow, reversible, and ships the user-facing fix today; the split can come later if anyone wants explicit control of the integrations base independently.Submission Checklist
- Tests for `looks_like_local_ai_endpoint` (loopback v4/v6, chat-completions path on non-loopback, real backends incl. `api.openai.com/v1`, garbage input)
- Tests for `effective_integrations_api_url` (local override → default, local override + env → env, real override respected, no-override agrees with `effective_api_url`)
- `cargo test --lib openhuman::integrations` passes (67/67)
- `cargo test --lib api` passes (124/124, includes new helper tests)
- `cargo fmt --check` clean
- Separate `backend_api_url` field deferred — see Solution
- `cargo clippy --workspace -- -D warnings` is currently failing on `upstream/main` itself (78 errors in `src/openhuman/wallet/execution.rs`, `webview_apis/client.rs`, `webview_notifications/types.rs`, `tools/impl/cron/add.rs` — all `manual_is_multiple_of` / `type_complexity` / `derivable_impls` from a clippy version bump). None of the errors touch the files in this PR. Confirmed via `grep -E "api/config\.rs|integrations/client\.rs"` over the clippy output — zero hits.

Impact
`effective_api_url` is untouched.

Related