feat(onboarding): rework desktop onboarding and welcome timing#915
senamakel merged 15 commits into tinyhumansai:main
Conversation
…d webview login integration
- Updated the onboarding process to use a routed flow with dedicated pages for each step.
- Removed the OnboardingOverlay component and replaced it with a more structured onboarding layout.
- Introduced webview login modals for account connections, starting with Gmail.
- Enhanced routing logic to redirect users based on onboarding completion status.
- Updated privacy information to reflect changes in data handling and user permissions.
- Incremented the version of the openhuman package in Cargo.lock to 0.52.28.
…a CDP
Replace the Composio Gmail OAuth flow in onboarding with an in-app
webview login + a CDP-driven LinkedIn-enrichment pipeline.
Tauri shell
- cdp::Snapshot::capture_with_rects + Rect accessor — parses
documents[*].layout.bounds so callers can resolve a click target's
CSS rect for Input.dispatchMouseEvent.
- cdp::input — new module with click(x,y), type_text(s), press_key(Key),
select_all_in_focused. Pure CDP, no JS injection.
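A click in this model reduces to three `Input.dispatchMouseEvent` calls in sequence. A minimal sketch of that ordering, with `MouseEvent` as a hypothetical stand-in for the params the module would serialize over the debugger socket (the real `CdpConn` plumbing is omitted):

```rust
// Stand-in for Input.dispatchMouseEvent params; the shipped module would
// serialize these into CDP calls over the remote-debugging websocket.
#[derive(Debug, PartialEq)]
pub struct MouseEvent {
    pub kind: &'static str, // "mouseMoved" | "mousePressed" | "mouseReleased"
    pub x: f64,
    pub y: f64,
    pub click_count: u32,
}

/// A synthetic click is move -> press -> release at the same CSS-pixel point.
pub fn click_events(x: f64, y: f64) -> Vec<MouseEvent> {
    vec![
        MouseEvent { kind: "mouseMoved", x, y, click_count: 0 },
        MouseEvent { kind: "mousePressed", x, y, click_count: 1 },
        MouseEvent { kind: "mouseReleased", x, y, click_count: 1 },
    ]
}
```

Dispatching the same move/press/release triple a real pointer produces is what lets this work without any JS injection.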
- gmail::reads::search — drives the live Gmail search input via mouse
+ keystrokes (clicks aria-label="Search mail", clears with Cmd-A
+ Backspace, types the query, presses Enter), polls the snapshot
for results, scrapes thread rows by data-legacy-thread-id (with
permthid=thread-f:<id> anchor-href fallback).
- gmail::cdp_find_linkedin_profile_url + Tauri command — search
from:linkedin.com, fetch up to 5 hits via the existing print_view
+ cdp_fetch path, regex-extract linkedin.com/comm/in/<u> (preferred)
or /in/<u>.
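The preference order can be sketched with a substring-based stand-in for the regex (hypothetical helper; assumes usernames are limited to alphanumerics plus `-` and `_`):

```rust
/// Prefer email deep links (`/comm/in/<u>`), fall back to plain `/in/<u>`,
/// and normalise both to a canonical profile URL. Illustrative only.
pub fn extract_linkedin_url(text: &str) -> Option<String> {
    for marker in ["linkedin.com/comm/in/", "linkedin.com/in/"] {
        if let Some(pos) = text.find(marker) {
            let tail = &text[pos + marker.len()..];
            let user: String = tail
                .chars()
                .take_while(|c| c.is_ascii_alphanumeric() || *c == '-' || *c == '_')
                .collect();
            if !user.is_empty() {
                return Some(format!("https://www.linkedin.com/in/{user}"));
            }
        }
    }
    None
}
```

Because the match is on a `linkedin.com/...` substring rather than an anchored host, regional subdomains such as `ae.linkedin.com` are covered without extra cases.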
- webview_accounts::on_page_load: skip about: scheme too (not just
data:), so the placeholder load doesn't claim the dedup slot and
black out the real Gmail load. Add popup_should_navigate_parent for
Gmail/google-meet — Sign-in popups are now denied at the new-window
hook and the parent webview is navigated to the URL instead, so
the auth flow lands in the existing webview's cookie jar (was
spawning unmanaged blank popups before). Emit
`webview-account:navigate` from on_navigation so the frontend can
observe post-login redirects (the existing `:load` event dedups
per cold open and missed the accounts.google.com → mail.google.com
redirect).
Core
- learning_linkedin_enrichment now accepts an optional preset
profile_url; when supplied, stage 1 (Composio Gmail search) is
skipped and we go straight to Apify scrape + PROFILE.md write. The
controller schema picks up an optional `profile_url` input.
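The stage-1 bypass amounts to one branch. A sketch with hypothetical names (`resolve_profile_url`; the closure stands in for the Composio-backed Gmail search):

```rust
/// Returns the resolved profile URL plus whether stage 1 was skipped.
/// When a preset URL is supplied, the search stage never runs.
pub fn resolve_profile_url<F>(preset: Option<String>, search_gmail: F) -> (Option<String>, bool)
where
    F: FnOnce() -> Option<String>,
{
    match preset {
        Some(url) => (Some(url), true), // straight to Apify scrape + PROFILE.md
        None => (search_gmail(), false),
    }
}
```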
Frontend
- WebviewLoginModal — new keepAliveOnConnected prop. On successful
sign-in the webview is left running off-screen so the next step can
drive it via CDP; cancel still purges. Dual-listener for
webview-account:load AND webview-account:navigate to auto-detect
Gmail's mail.google.com URL after sign-in. safeUnlisten swallows the
StrictMode double-cleanup "Cannot read properties of undefined
(reading 'handlerId')" rejection.
- SkillsStep — drops the Composio toolkit grid, ships a single Gmail
card driven by WebviewLoginModal. onNext now hands the parent
{ sources, gmailAccountId } so downstream steps can drive the
webview's CDP session.
- OnboardingLayout — draft holds gmailAccountId; completeAndExit
purges the kept-alive webview at end of flow.
- SkillsPage routes to /onboarding/context whenever webview:gmail or
any composio:* source is connected; otherwise straight to finish.
- ContextPage threads gmailAccountId through.
- ContextGatheringStep runs a 2-step pipeline: invoke
gmail_find_linkedin_profile_url (Tauri/CDP) for stage 1, then
callCoreRpc('learning_linkedin_enrichment', { profile_url }) for
Apify scrape + PROFILE.md.
- Conversations: hide the chat header + bottom quota row during
welcomeLocked so the onboarding chat is just the conversation.
- Accounts: comment out the RespondQueuePanel + its selectors.
- Onboarding: comment out the chat-provider stub step.
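The SkillsPage routing rule reduces to a single predicate. Sketched here in Rust as pseudocode for the TypeScript logic; the `/onboarding/finish` route name is illustrative:

```rust
/// Any webview:gmail or composio:* source routes to the context step;
/// otherwise onboarding goes straight to the finish step.
pub fn next_onboarding_route(sources: &[&str]) -> &'static str {
    let has_context_source = sources
        .iter()
        .any(|s| *s == "webview:gmail" || s.starts_with("composio:"));
    if has_context_source {
        "/onboarding/context"
    } else {
        "/onboarding/finish"
    }
}
```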
Misc
- Tauri shell: install rustls ring crypto provider in run() — the
vendored tauri-cef dev-server proxy builds a reqwest 0.13 client
that panics with "No provider set" otherwise.
Tests
- 5 new Rust unit tests (rect parsing, popup helpers, permthid + URL
extractors).
- Vitest: SkillsStep + ContextGatheringStep updated for the new
payload shape and 2-step pipeline; 4 new ContextGatheringStep
cases for the webview path.
Wires up the Gmail-side scaffolding for the LinkedIn-enrichment
pipeline. Search across the live Gmail UI, drive it via CDP, and
hand the results to the existing core enrichment RPC. URL-extraction
from rendered thread bodies is still rough (Gmail's URL-hash thread
ids are different from the row ids we scrape) and currently returns
None on most accounts — tracked as follow-up.
Tauri shell
- cdp::Snapshot now merges every frame document into a single flat
node tree, with parent + layout indices offset per document. Gmail
email bodies render inside an iframe; without this, scrapers
couldn't see the actual message HTML at all.
- cdp::input continues to expose click / type_text / press_key, but
gmail::reads::search no longer drives the input. Direct
Page.navigate to `#search/<query>` is more deterministic.
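The navigation target is plain URL construction, which is why it beats synthetic typing. A sketch with an illustrative base URL and a minimal percent-encoder (the shipped code may differ):

```rust
/// Build the Gmail URL-hash search target for Page.navigate.
/// Keeps `:` literal so operators like `from:linkedin.com` stay readable.
pub fn gmail_search_hash(query: &str) -> String {
    let mut encoded = String::new();
    for b in query.bytes() {
        match b {
            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9'
            | b'-' | b'_' | b'.' | b'~' | b':' | b'@' => encoded.push(b as char),
            _ => encoded.push_str(&format!("%{b:02X}")),
        }
    }
    format!("https://mail.google.com/mail/u/0/#search/{encoded}")
}
```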
- gmail::reads::search:
- Two-step navigate (#inbox → #search/...) + 1.2 s settle to dodge
the "previous-search-rows-still-rendered" race.
- Modern Gmail thread rows are `<tr class="zA">` and the thread-id
attributes (`data-legacy-thread-id`, `data-thread-id`,
`data-message-id`, `data-thread-perm-id`) live on descendants,
NOT the `<tr>` itself. New `scrape_row_thread_id` walks the
descendants in priority order.
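The priority walk can be sketched over a flattened attribute list; here `scrape_row_thread_id` takes per-descendant `(attr, value)` pairs instead of real DOM nodes, so it is a simplified stand-in for the snapshot-backed version:

```rust
/// Attribute priority for thread-id scraping, highest first.
const THREAD_ID_ATTRS: [&str; 4] = [
    "data-legacy-thread-id",
    "data-thread-id",
    "data-message-id",
    "data-thread-perm-id",
];

/// Walk every descendant once per attribute, so a lower-priority attribute
/// found earlier in DOM order never wins over a higher-priority one.
pub fn scrape_row_thread_id(descendants: &[Vec<(&str, &str)>]) -> Option<String> {
    for attr in THREAD_ID_ATTRS {
        for node in descendants {
            if let Some((_, v)) = node.iter().find(|(k, _)| *k == attr) {
                return Some((*v).to_string());
            }
        }
    }
    None
}
```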
- gmail::reads::find_linkedin_profile_url:
- Tries a sequence of progressively broader queries
(`from:invitations-noreply@linkedin.com` →
`from:notifications-noreply@linkedin.com` →
`from:linkedin.com`).
- Opens each candidate thread via Page.navigate to
`#inbox/<id>`, snapshots the rendered DOM (now iframe-aware),
and runs the URL extractor on every `<a href>`. Also unwraps
Google click-trackers (`?q=<encoded_dest>`).
- URL-extractor `extract_linkedin_url` already substring-matches
`linkedin.com/comm/in/` and `linkedin.com/in/`, so regional
subdomains (`ae.linkedin.com`, `in.linkedin.com`, ...) work
without code changes.
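Unwrapping a click-tracker is a `q` query-parameter extraction plus percent-decode. A stdlib-only sketch (the shipped code may use a URL crate; the decoder here handles only ASCII `%XX` and `+` for illustration):

```rust
/// Pull the `q=` destination out of a Google redirect href and decode it.
/// ASCII-only decode; multibyte UTF-8 escapes are out of scope for this sketch.
pub fn unwrap_click_tracker(href: &str) -> Option<String> {
    let (_, query) = href.split_once('?')?;
    let raw = query.split('&').find_map(|kv| kv.strip_prefix("q="))?;
    let bytes = raw.as_bytes();
    let mut out = String::new();
    let mut i = 0;
    while i < bytes.len() {
        match bytes[i] {
            b'%' if i + 2 < bytes.len() => {
                let hex = std::str::from_utf8(&bytes[i + 1..i + 3]).ok()?;
                out.push(u8::from_str_radix(hex, 16).ok()? as char);
                i += 3;
            }
            b'+' => {
                out.push(' ');
                i += 1;
            }
            b => {
                out.push(b as char);
                i += 1;
            }
        }
    }
    Some(out)
}
```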
- gmail::print_view_url auto-detects hex vs decimal ids and uses
`?th=<hex>` for inbox-UI ids, `?permthid=thread-f:<dec>` for
atom-feed ids.
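The detection can be sketched as a pure-digit test; `print_view_query` is a hypothetical name for the query-string half of the helper:

```rust
/// Pure-digit ids are treated as decimal atom-feed thread ids; anything
/// containing a hex letter is taken to be an inbox-UI id.
pub fn print_view_query(id: &str) -> String {
    let is_decimal = !id.is_empty() && id.chars().all(|c| c.is_ascii_digit());
    if is_decimal {
        format!("?permthid=thread-f:{id}")
    } else {
        format!("?th={id}")
    }
}
```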
- webview_accounts::on_page_load now skips the `about:blank#…`
placeholder load (in addition to `data:`) so the dedup slot stays
free for the real-page load. Diagnosed earlier: native fires
`Finished` for the placeholder synchronously on spawn, claiming
the load slot, suppressing the real-page event, and leaving the
user looking at an unloaded (black) child surface.
- webview_accounts on_navigation emits a new `webview-account:navigate`
event for every committed nav so the frontend can observe the
post-login redirect (`webview-account:load` is dedup'd per cold
open and missed the accounts.google.com → mail.google.com redirect).
- popup_should_navigate_parent for gmail/google-meet: Sign-in popups
no longer spawn unmanaged blank windows; the parent webview is
navigated to the popup target instead so the auth flow lands in
the existing webview's cookie jar.
- CEF debug port moved from 9222 to 19222. Some installs of ollama
squat on `127.0.0.1:9222`, silently breaking CDP attach (our WS
client hits ollama, the handshake fails, child webviews stay at
`about:blank`). Updated cdp::CDP_PORT, lib.rs --remote-debugging-port,
and whatsapp_scanner::CDP_PORT.
- New permission `gmail_*` commands added to the `allow-core-process`
capability so the frontend can `invoke('gmail_find_linkedin_profile_url',
...)` from ContextGatheringStep.
- CEF cold-start warmup at app boot was reverted (was suspected of
black-webview interference; turned out to be unrelated).
Frontend
- ContextGatheringStep mounts an offscreen WebviewHost
(left:-10000, top:-10000, 1280×900) while the pipeline runs. The
kept-alive gmail webview from SkillsStep is hidden by the modal's
unmount and `wv.hide()` — but a hidden CEF surface stops rendering
so CDP can't drive its UI. Mounting another WebviewHost triggers
the warm-reuse path which repositions the same webview to the
offscreen rect; Chromium considers it visible (NSView is in the
view tree) so DOM updates and Input dispatch keep working, but the
user never sees the automation.
- ContextGatheringStep.test mocks WebviewHost (it reads the redux
accounts slice that the test harness doesn't initialise).
Tests
- 7/7 gmail::reads unit tests pass (including new regional-subdomain
case covering `ae.linkedin.com/comm/in/<u>?<query>`).
- 18/18 onboarding vitest cases pass.
…issions
- Simplified the profile URL fetching logic in ContextGatheringStep for better readability.
- Updated the onboarding text to clarify the use of connected Gmail for profile building.
- Changed the label of the OnboardingNextButton for improved user experience.
- Enhanced permissions in allow-core-process.toml to include new Gmail-CDP commands.
- Adjusted the remote debugging port in multiple files to avoid conflicts with existing services.
…mments
- Added comprehensive comments to the clearAllAppData function to clarify the steps involved in clearing app data, including session logout, workspace folder deletion, storage purging, and hard reload.
- Improved error handling for the purging of redux-persist storage to ensure robustness during data clearing operations.
Caution: Review failed. The pull request is closed.
📝 Walkthrough
Adds CDP-driven Gmail interactions and DOM rect capture, LinkedIn enrichment tooling and learning controllers, router-based onboarding with webview login handling, a Tauri rustls install and new commands, a CDP port change to 19222, and multiple frontend/UI/test updates.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User
    participant Renderer as Renderer
    participant Core as Core RPC
    participant GmailUI as Gmail UI
    participant Apify as Apify
    User->>Renderer: Request LinkedIn enrichment (onboarding/context)
    Renderer->>Core: tools.composio_execute (GMAIL_FETCH_EMAILS)
    Core->>GmailUI: Drive Gmail via CDP (navigate, capture, click using rects)
    GmailUI-->>Core: DOM snapshots / message data
    Core-->>Renderer: extracted profile_url (or none)
    alt profile_url found
        Renderer->>Core: tools.apify_linkedin_scrape(profile_url)
        Core->>Apify: scrape request -> markdown
        Apify-->>Core: markdown
        Renderer->>Core: learning.save_profile(markdown, summarize:true)
        Core-->>Core: write PROFILE.md & summarize
    end
    Renderer->>Core: (optional) agent.spawn_welcome via Tauri command
```

```mermaid
sequenceDiagram
    participant Renderer as Renderer
    participant Tauri as Tauri
    participant Webview as Webview
    participant User as User
    Renderer->>Tauri: Open WebviewLoginModal (provider=gmail)
    Tauri->>Webview: Create webview & navigate to accounts.google.com
    User->>Webview: Sign in in webview
    Webview->>Tauri: Commit navigation (logged-in URL)
    Tauri-->>Renderer: Emit webview-account:navigate (redacted URL)
    Renderer->>Renderer: Detect logged-in pattern -> mark connected -> onConnected(accountId)
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~65 minutes
🚥 Pre-merge checks: 5 passed.
Actionable comments posted: 18
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
app/src/AppRoutes.tsx (1)
126-142: ⚠️ Potential issue | 🟡 Minor
Remove duplicate `/notifications` route definitions.
There are three identical `/notifications` route definitions (lines 99-106, 126-133, and 135-142). Only one is needed; the duplicates appear to be a copy-paste error.
🐛 Proposed fix

```diff
 </Route>
-<Route
-  path="/notifications"
-  element={
-    <ProtectedRoute requireAuth={true}>
-      <Notifications />
-    </ProtectedRoute>
-  }
-/>
-
-<Route
-  path="/notifications"
-  element={
-    <ProtectedRoute requireAuth={true}>
-      <Notifications />
-    </ProtectedRoute>
-  }
-/>
 <Route path="/settings/*"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/AppRoutes.tsx` around lines 126 - 142, in AppRoutes.tsx there are multiple duplicate Route entries for path "/notifications" rendering <ProtectedRoute requireAuth={true}><Notifications/></ProtectedRoute>; remove the extra copies and keep only a single Route for "/notifications" (search for Route with path="/notifications" and the Notifications component) so that only one Route declaration for this path remains; ensure no other logic or props are lost when deleting the duplicates.

app/src-tauri/src/whatsapp_scanner/mod.rs (1)
1-5: ⚠️ Potential issue | 🟡 Minor
Update stale port reference in module docstring.
Line 4 still references the old port 9222: `(set via --remote-debugging-port=9222 in lib.rs)`. This should be updated to 19222 to match the actual configuration.
📝 Proposed fix

```diff
 //! We talk to the embedded CEF instance through its remote-debugging port
-//! (set via `--remote-debugging-port=9222` in `lib.rs`). Per tracked
+//! (set via `--remote-debugging-port=19222` in `lib.rs`). Per tracked
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src-tauri/src/whatsapp_scanner/mod.rs` around lines 1 - 5, update the module docstring that mentions the remote debugging port: change the hardcoded port reference "(set via `--remote-debugging-port=9222` in `lib.rs`)" to use 19222 so it matches the actual configuration; locate this text in the top-level doc comment of whatsapp_scanner (the module-level comment) and replace 9222 with 19222.

src/openhuman/config/ops.rs (1)
564-600: ⚠️ Potential issue | 🟡 Minor
Add regression coverage for the new onboarding-transition semantics.
This PR changes `set_onboarding_completed(false -> true)` from "seed cron + maybe fire welcome" to "seed cron only," but the file still has no tests around that branch. A small unit test here would protect against silently reintroducing double welcomes or accidentally dropping the cron seeding path. As per coding guidelines, "Ship unit tests and coverage for behavior you are adding or changing before building additional features on top."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/config/ops.rs` around lines 564 - 600, Add a unit test for set_onboarding_completed that verifies the false→true path only seeds cron jobs and does NOT trigger the welcome agent: arrange a config with onboarding_completed = false, replace or spy on crate::openhuman::cron::seed::seed_proactive_agents (test-only shim or mock) to record invocation, also replace the RPC/handler used for agent.spawn_welcome (or add a spy that would panic if called), call set_onboarding_completed(true) (use tokio::test async), await completion, then assert seed_proactive_agents was invoked and spawn_welcome was not; use the existing functions load_config_with_timeout/save via a temporary test config or isolated test fixture and clean up state afterwards.
🧹 Nitpick comments (12)
app/src/features/privacy/whatLeavesItems.ts (1)
13-17: Remove commented-out item from shipped constants. Keeping disabled entries as comments (Line 13–Line 17) makes the privacy list harder to maintain and easier to desync from actual behavior.
Proposed cleanup

```diff
 export const WHAT_LEAVES_ITEMS: PrivacyLeaveItem[] = [
-  // {
-  //   id: 'model-downloads',
-  //   title: 'Model downloads & updates',
-  //   body: 'Local model weights are pulled from the model hub when you install or update them. After that, inference runs on your machine.',
-  // },
   {
     id: 'cloud-providers',
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/features/privacy/whatLeavesItems.ts` around lines 13 - 17, remove the commented-out privacy list entry for the now-disabled item (the object with id 'model-downloads' and title 'Model downloads & updates') from the shipped constants in whatLeavesItems.ts so the maintained list only contains active entries; locate the commented block referencing 'model-downloads' and delete it to avoid stale commented code and potential desync with actual behavior.

src/openhuman/agent/prompts/mod.rs (1)
666-864: Subagent prompts don't receive the global style suffix.
`SystemPromptBuilder::build` (line 161) appends `GLOBAL_STYLE_SUFFIX`, but `render_subagent_system_prompt_with_format` does not. This creates an inconsistency where main agents get the em-dash rule but subagents (code_executor, critic, planner, etc.) rendered via this path don't. If the no-em-dash rule should apply uniformly to all agent outputs, consider appending it here as well:
🔧 Suggested fix

```diff
     // 6. Runtime banner — model name only. Stable for the lifetime of
     //    this sub-agent's definition.
     let _ = writeln!(out, "## Runtime\n\nModel: {model_name}");
+    // 7. Global style rules (em-dash prohibition, etc.)
+    out.push('\n');
+    out.push_str(GLOBAL_STYLE_SUFFIX);
     out
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/agent/prompts/mod.rs` around lines 666 - 864, render_subagent_system_prompt_with_format is missing the GLOBAL_STYLE_SUFFIX that SystemPromptBuilder::build appends, causing inconsistent styling between main agents and subagents; update the end of render_subagent_system_prompt_with_format (just before returning out) to append GLOBAL_STYLE_SUFFIX (or the same mechanism SystemPromptBuilder::build uses) ensuring proper spacing/newline (e.g., push_str with a preceding newline if needed) so subagent prompts get the global style suffix too.

app/src/pages/onboarding/steps/__tests__/ContextGatheringStep.test.tsx (1)
55-135: Add one rejection-path test for the staged pipeline. These tests cover the happy path and skip branches, but the renderer now has three async failure points. A single test for an Apify or `learning_save_profile` rejection would lock in the stage-error UI instead of only the success terminal state. As per coding guidelines "Ship unit tests and coverage for behavior you are adding or changing before building additional features on top" and "When building a new feature, define E2E scenarios (core RPC + app) up front that cover happy paths, failure modes, auth/policy gates, and regressions."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/pages/onboarding/steps/__tests__/ContextGatheringStep.test.tsx` around lines 55 - 135, add a new unit test in ContextGatheringStep.test.tsx that simulates a rejection from one of the downstream RPC stages (either 'openhuman.tools_apify_linkedin_scrape' or 'openhuman.learning_save_profile') by having callCoreRpc.mockImplementation (or mockResolvedValueOnce / mockRejectedValueOnce) return an error for that method, then render <ContextGatheringStep connectedSources={['composio:gmail']} onNext={...} />, trigger the pipeline via fireEvent.click on the "Let's go!" button, and assert the UI shows the stage error state (e.g., the stage-error UI text or absence of 'Context Ready') and that subsequent RPCs are not called; reference callCoreRpc mock calls and the component ContextGatheringStep to locate where to add the test.

app/src-tauri/src/gmail/mod.rs (1)
270-274: Inconsistent log level: `log::info!` vs `log::debug!` used elsewhere. Other Tauri command wrappers in this file (lines 198, 208, 220, 232, 238, 250, 260) use `log::debug!`, but this one uses `log::info!`. While the docstring says "Debug command", the elevated log level may clutter production logs.
♻️ Align log level with other commands

```diff
-    log::info!("[gmail][tauri] gmail_find_linkedin_profile_url account_id={account_id}");
+    log::debug!("[gmail][tauri] gmail_find_linkedin_profile_url account_id={account_id}");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src-tauri/src/gmail/mod.rs` around lines 270 - 274, the log call in the Tauri command wrapper gmail_find_linkedin_profile_url is using log::info! while other wrappers use log::debug!; change the logging invocation in gmail_find_linkedin_profile_url to use log::debug! (keeping the same message and account_id interpolation) so it matches the existing pattern used by the other commands and aligns with the "Debug command" intent; the function cdp_find_linkedin_profile_url remains unchanged.

app/src/pages/onboarding/steps/SkillsStep.tsx (1)
126-135: Consider memoizing status computations to avoid redundant `deriveComposioState` calls.
`statusLabel(gmailConnection)` and `statusColor(gmailConnection)` each call `deriveComposioState` internally, resulting in multiple derivations of the same state per render. Since `gmailState` is already computed at line 74, you could pass it directly or create a small helper.
♻️ Optional: pass pre-computed state

```diff
-function statusLabel(connection: ComposioConnection | undefined): string {
-  switch (deriveComposioState(connection)) {
+function statusLabel(state: ReturnType<typeof deriveComposioState>): string {
+  switch (state) {
```

Then call `statusLabel(gmailState)` instead of `statusLabel(gmailConnection)`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/pages/onboarding/steps/SkillsStep.tsx` around lines 126 - 135, the JSX is redundantly calling deriveComposioState via statusLabel(gmailConnection) and statusColor(gmailConnection) causing duplicate work; update the render to use the already-computed gmailState (from deriveComposioState at line where gmailState is created) or memoize the derived state and pass that into statusLabel and statusColor (e.g., call statusLabel(gmailState) and statusColor(gmailState) or compute a local const derived = deriveComposioState(gmailConnection) and use statusLabel(derived)/statusColor(derived)) so the state is computed once per render.

src/openhuman/learning/schemas.rs (1)
212-217: Blocking filesystem I/O in async handler.
`std::fs::create_dir_all` and `std::fs::write` are blocking calls inside an async function. For a single small file write during onboarding, this is likely acceptable, but if this handler is called frequently or with large payloads, consider using `tokio::fs` equivalents.
♻️ Optional: Use async filesystem operations

```diff
+use tokio::fs;
+
 let path = config.workspace_dir.join("PROFILE.md");
 if let Some(parent) = path.parent() {
-    std::fs::create_dir_all(parent)
+    fs::create_dir_all(parent).await
         .map_err(|e| format!("create workspace dir failed: {e}"))?;
 }
-std::fs::write(&path, &body).map_err(|e| format!("write PROFILE.md failed: {e}"))?;
+fs::write(&path, &body).await.map_err(|e| format!("write PROFILE.md failed: {e}"))?;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/learning/schemas.rs` around lines 212 - 217, the code uses blocking std::fs calls (std::fs::create_dir_all and std::fs::write) inside an async handler when creating PROFILE.md; replace them with async Tokio equivalents: use tokio::fs::create_dir_all and tokio::fs::write (awaiting each), preserving the same error mapping messages, and ensure the enclosing async function remains async so you can await these calls; references to path, config.workspace_dir, and the PROFILE.md write logic should be updated accordingly to avoid blocking the reactor.

app/src/pages/onboarding/steps/ContextGatheringStep.tsx (1)
43-50: Stage ID `web-search` is misleading: it actually runs the Apify LinkedIn scrape.
The internal stage ID `'web-search'` (line 43) doesn't match the underlying operation (`apifyScrapeLinkedIn`). While the user-facing label "Researching you online" is fine, the code ID could confuse future maintainers expecting a web search operation. Consider renaming to `'linkedin-scrape'` for consistency.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/pages/onboarding/steps/ContextGatheringStep.tsx` around lines 43 - 50, the Stage id 'web-search' in the STAGES array should be renamed to 'linkedin-scrape' to match the actual operation (apifyScrapeLinkedIn) and avoid misleading future maintainers; update the STAGES entry (Stage type remains the same) and then search for and replace all usages/branches/conditionals/tests that match the old id 'web-search' (e.g., switch/case handlers, feature flags, analytics events, apify invocation logic) so they reference 'linkedin-scrape' instead, and run tests to ensure no remaining references to 'web-search' remain.

src/openhuman/tools/schemas.rs (1)
228-232: Unused `timeout_secs` parameter — intentionally no-op'd. The comment explains the validator strips unknown keys and Parallel governs its own timeout. Consider removing the parameter from the schema entirely if it's never used, or document in the schema comment that it's currently ignored.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tools/schemas.rs` around lines 228 - 232, the parameter timeout_secs in the parallelSearchSchema block is unused; either remove timeout_secs from the schema definition and all related structs/functions (and update any callers and validator mappings like parallelSearchSchema / agentIntegration.validator.ts) to stop expecting that field, or explicitly document in the schema comment that timeout_secs is intentionally ignored (e.g., replace the current throwaway let _ = timeout_secs; with a clear comment) so future readers know it's deliberate; locate occurrences by the symbol timeout_secs and the parallelSearchSchema reference and apply the chosen change consistently across schema, validators, and callers.

app/src-tauri/src/gmail/reads.rs (1)
649-659: Edge case: pure-digit hex IDs would use the wrong URL format.
The heuristic `message_id.chars().all(|c| c.is_ascii_digit())` assumes pure-digit IDs are decimal `thread-f:` format. A hex ID like `1234567890` (no a-f) would be misclassified.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src-tauri/src/gmail/reads.rs` around lines 649 - 659, the current heuristic in print_view_url (message_id.chars().all(|c| c.is_ascii_digit())) misclassifies pure-digit hex IDs; remove the fragile digit-only check and always generate the thread-style view URL (use the "&th={escaped}" format) instead of the thread-f/permmsgid variant so pure-digit hex IDs won't be misrouted—update the logic in print_view_url (and the escaped variable construction) to consistently return the th= URL.

app/src-tauri/src/cdp/input.rs (1)
64-69: Add debug-level logging for CDP input operations. Per coding guidelines, add "substantial development-oriented logging at entry/exit points, branch decisions, external calls." These functions make CDP calls but log nothing, making production debugging difficult.
Example logging additions

```diff
 pub async fn click(cdp: &mut CdpConn, session: &str, x: f64, y: f64) -> Result<(), String> {
+    log::debug!("[cdp::input] click at ({x:.1}, {y:.1})");
     let _ = mouse_event(cdp, session, "mouseMoved", x, y, 0).await?;
     let _ = mouse_event(cdp, session, "mousePressed", x, y, 1).await?;
     let _ = mouse_event(cdp, session, "mouseReleased", x, y, 1).await?;
     Ok(())
 }

 pub async fn type_text(cdp: &mut CdpConn, session: &str, text: &str) -> Result<(), String> {
+    log::debug!("[cdp::input] type_text len={}", text.len());
     for ch in text.chars() {

 pub async fn press_key(cdp: &mut CdpConn, session: &str, key: Key) -> Result<(), String> {
     let (key_name, code, vk) = key.cdp_fields();
+    log::debug!("[cdp::input] press_key {key_name}");
     cdp.call(
```

Also applies to: 101-144, 148-177
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src-tauri/src/cdp/input.rs` around lines 64 - 69, add debug-level logging to the CDP input functions (e.g., click) so entry, exit, and each CDP call are recorded: log function entry with parameters (session, x, y), log before each mouse_event call referencing mouse_event(cdp, session, "mouseMoved"/"mousePressed"/"mouseReleased"), log the success result or capture and log any error returned from those calls, and log function exit on success; apply the same pattern to the other input functions in this file (lines 101-144, 148-177) to ensure consistent development-oriented tracing of CDP interactions using the existing logging facility (e.g., debug! or processLogger).

src/openhuman/learning/linkedin_enrichment.rs (2)
1-759: Consider splitting this module to stay within the ~500-line guideline. This file has grown to 759 lines. Per coding guidelines, source files should be ≤ ~500 lines for maintainability. Consider extracting the profile rendering helpers (`render_profile_markdown`, `write_profile_md*`) and/or the Gmail search logic into sibling modules.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/learning/linkedin_enrichment.rs` around lines 1 - 759, The module exceeds the ~500-line guideline; split it by extracting related responsibilities into smaller sibling modules. Move profile generation and LLM summarisation into a new module (e.g. profile/markdown.rs) containing render_profile_markdown, summarise_profile_with_llm, write_profile_md, and write_profile_md_url_only; move Gmail search logic into its own module (e.g. gmail/search.rs) containing search_gmail_for_linkedin and the COMM_RE/comm-related decoding logic; keep orchestration in the existing file and update imports and visibility accordingly; also consider moving memory persistence helpers (persist_linkedin_profile, persist_linkedin_url_only, build_memory_client) into a memory/persist.rs if desired to further reduce size.
94-115: Add unit tests for the early-exit and preset-URL flows.
The new `preset_profile_url` branch (lines 121-131) and the early-exit when `PROFILE.md` exists (lines 99-115) are untested. Existing tests only cover regex extraction. Consider adding tests that mock `config.workspace_dir` to verify:
- Early return with `Skipped` stages when `PROFILE.md` is present.
- Stage recorded as `Success` with the preset URL when `preset_profile_url` is `Some`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/learning/linkedin_enrichment.rs` around lines 94 - 115, Add unit tests that exercise the early-exit and preset-URL code paths: create a test that sets config.workspace_dir to a temp dir containing a file named "PROFILE.md", call the enrichment entry function (the function that returns `result` using `config.workspace_dir`), and assert that result.log contains the skip message and that each EnrichmentStage (constructed with EnrichmentStage/StageStatus) is Skipped with detail "PROFILE.md already on disk"; create a second test that sets `config.preset_profile_url = Some(url)` and an empty workspace_dir, runs the same enrichment function, and assert that the stage for "build-profile" (or the stage that records the preset) is marked Success and that the preset URL is recorded in the result (or in the stage detail/log) as expected; use temporary directories and cleanup to isolate tests and mock only config.workspace_dir and preset_profile_url so existing regex tests remain unaffected.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/src-tauri/src/cdp/input.rs`:
- Around line 179-182: The doc comment incorrectly says "Triple-click to
select-all" while the implementation dispatches a Cmd+A keystroke; update that
comment to accurately describe the behavior (e.g., "Select-all via Cmd+A (or
Ctrl+A on non-mac)" or "Dispatch Cmd+A to select all the focused
contenteditable/input") so it matches the code that sends the Cmd+A key event in
this module (the comment block immediately above the Cmd+A dispatch in
input.rs).
- Around line 183-216: select_all_in_focused currently hardcodes modifiers = 4
(Meta/Cmd) which fails to trigger Ctrl+A on Windows/Linux; update the function
to compute a modifiers value based on platform (e.g., use cfg!(target_os =
"macos") to choose 4 for macOS and 2 for other OSes) and use that variable in
both Input.dispatchKeyEvent calls so the select-all shortcut works
cross-platform; reference the function select_all_in_focused and the two
cdp.call invocations for where to change the modifier value.
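The cross-platform modifier fix described above can be sketched like this (the helper name is illustrative, not the actual function in `input.rs`; in the CDP Input domain, `modifiers` is a bitmask with Alt=1, Ctrl=2, Meta/Cmd=4, Shift=8):

```rust
// Sketch of the suggested fix: pick the select-all modifier per platform
// instead of hardcoding 4 (Meta/Cmd), which never fires Ctrl+A off macOS.
fn select_all_modifiers() -> u64 {
    if cfg!(target_os = "macos") {
        4 // Meta/Cmd -> Cmd+A on macOS
    } else {
        2 // Ctrl -> Ctrl+A on Windows/Linux
    }
}

fn main() {
    // Both Input.dispatchKeyEvent calls would pass this same value.
    println!("modifiers = {}", select_all_modifiers());
}
```

The value would then be computed once and passed to both `cdp.call` invocations, so keyDown and keyUp stay consistent.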
In `@app/src-tauri/src/webview_accounts/mod.rs`:
- Around line 1217-1229: The emitted payload currently forwards the full
committed `url` (in the `nav_app.emit` call), which may leak sensitive query
parameters; change the emission to redact or omit query/fragment before sending
— e.g., construct a redacted value from `url` (use `url::Url` methods to produce
either origin + path or a copy with `.query_pairs_mut().clear()` / remove
fragment) and include that redacted string (or separate safe fields like
`"origin"` and `"path"`) instead of the full `url` in the serde_json payload
alongside `nav_account_id` and `nav_provider` so no raw callback params are
forwarded to the renderer.
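A minimal sketch of that redaction, assuming the URL is available as a plain string (the comment above suggests `url::Url`; this std-only approximation simply truncates at the first `?` or `#`):

```rust
// Illustrative helper, not the real implementation: keep scheme + host + path,
// drop everything from the first '?' or '#' so raw callback params (codes,
// tokens, state) never reach the renderer.
fn redact_url(url: &str) -> String {
    let end = url.find(|c| c == '?' || c == '#').unwrap_or(url.len());
    url[..end].to_string()
}

fn main() {
    let full = "https://accounts.google.com/o/oauth2/callback?code=SECRET&state=xyz#frag";
    println!("{}", redact_url(full));
}
```

The emitted payload would then carry this redacted string (or separate `origin`/`path` fields) next to `nav_account_id` and `nav_provider`.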
- Around line 268-279: popup_should_navigate_parent currently treats any
internal Google popup as allowed to navigate the parent; change it so for the
"gmail" and "google-meet" cases it only returns Some(url.clone()) when the Url
matches explicit Google auth/account-chooser patterns (e.g. hosts like
"accounts.google.com" or "accounts.googleusercontent.com" and path segments or
query params indicating signin/accountchooser/chooseaccount), otherwise return
None; keep the existing url_is_internal helper for other checks but replace the
broad call in popup_should_navigate_parent with an explicit host+path predicate
for auth flows so unrelated internal Google popups do not trigger parent
takeover.
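The explicit host+path predicate can be sketched as follows (the function name is hypothetical; hosts and path fragments are taken from the suggestion above):

```rust
// Hypothetical predicate: only Google auth / account-chooser popups may
// navigate the parent webview; any other internal Google popup returns false.
fn is_google_auth_popup(host: &str, path_and_query: &str) -> bool {
    let auth_host = matches!(
        host,
        "accounts.google.com" | "accounts.googleusercontent.com"
    );
    let lower = path_and_query.to_ascii_lowercase();
    let auth_path = lower.contains("signin")
        || lower.contains("accountchooser")
        || lower.contains("chooseaccount");
    auth_host && auth_path
}

fn main() {
    println!("{}", is_google_auth_popup("accounts.google.com", "/v3/signin/identifier"));
}
```

`popup_should_navigate_parent` would then return `Some(url.clone())` only when this predicate holds, keeping `url_is_internal` for the other checks.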
In `@app/src/App.tsx`:
- Around line 85-108: The E2E relogin spec still asserts the old
OnboardingOverlay and misses the new routed onboarding redirect implemented in
the useEffect in App.tsx (keys: onboardingPending, onOnboardingRoute, navigate);
update the E2E scenarios to verify the routed flow by waiting for the app to
navigate to "/onboarding" when onboardingPending is true and to "/home" once
onboardingPending is cleared, replacing any waits/asserts for OnboardingOverlay
with route/path checks and then proceed with the existing happy-path and
regression assertions (include auth/relogin steps, failure modes, and policy
gate checks in the same spec).
In `@app/src/components/settings/SettingsHome.tsx`:
- Around line 45-60: The current flow calls
window.localStorage.clear()/window.sessionStorage.clear() and
window.location.reload() regardless of whether persistor.purge() succeeded,
which can rehydrate old state if purge (e.g., IndexedDB/localForage) failed;
change the logic so that persistor.purge() is awaited inside the try and only on
success perform localStorage/sessionStorage clears and call
window.location.reload(); in the catch for persistor.purge() do not reload —
instead log the error (persistor.purge() catch) and surface a user-safe fallback
(e.g., show a notification or abort the reset) so you never reload the app after
a failed purge.
In `@app/src/features/privacy/whatLeavesItems.ts`:
- Around line 20-21: The body string in whatLeavesItems.ts currently contradicts
the "Local by default" headline by saying "Most conversations are routed to the
cloud"; update the copy in the body to align with the headline (e.g., state that
conversations are processed locally by default, with optional cloud routing for
access to more powerful models or for explicit features), and ensure the change
is made where the object fields title and body are defined so the messaging is
consistent across the UI.
- Around line 25-26: Update the copy for the object containing the title and
body fields: change title "3rd Party Integrations" to "Third-party integrations"
(consistent capitalization/hyphenation) and update body to start with
"Third-party integrations" and add terminal punctuation (a period) at the end of
the sentence; locate the fields by their keys title and body in the
whatLeavesItems export to make the edits.
In `@app/src/pages/Conversations.tsx`:
- Around line 1290-1293: The billing UI must be fully suppressed when
welcomeLocked is true; instead of only hiding the quota/usage pills div, add a
top-level conditional around the entire billing block (the JSX region that
includes the "near-limit upsell", the budget/rate-limit banner and the
quota/usage pills div marked by the "Quota / usage pills" comment) so that if
welcomeLocked is true the whole block returns null (or is not rendered). Use the
existing welcomeLocked boolean to short-circuit rendering (avoid relying on
individual CSS 'hidden' classes) and remove or skip rendering the near-limit
upsell and the banner components when welcomeLocked is true.
In `@app/src/pages/onboarding/components/BetaBanner.tsx`:
- Around line 34-35: The BetaBanner copy prompts users to "Join our Discord to
claim!" but there is no link; update the BetaBanner component to make "Join our
Discord" a clickable anchor that uses the app's Discord URL (preferably from a
central constant or env like DISCORD_URL), opens in a new tab with rel="noopener
noreferrer", and includes an accessible label (e.g., aria-label="Join OpenHuman
Discord"); ensure the JSX in BetaBanner replaces the plain text fragment with
the anchor while preserving styling and any analytics/click tracking if present.
In `@app/src/pages/onboarding/pages/SkillsPage.tsx`:
- Around line 10-24: Add unit tests for SkillsPage that exercise both branches
of handleNext: one test where sources includes an item starting with "composio:"
and asserts navigate was called with "/onboarding/context" (mock navigate), and
another where no composio source is present and asserts completeAndExit was
awaited/called (mock completeAndExit) and navigate was not called;
instantiate/render SkillsPage (or call handleNext via the SkillsStep onNext
prop) and verify setDraft was updated with connectedSources in both cases. Use
spies/mocks for navigate and completeAndExit, provide a sample SkillsConnections
payload, and ensure the test asserts the correct side-effect for each branch.
In `@app/src/pages/onboarding/steps/ContextGatheringStep.tsx`:
- Around line 4-14: The doc comment describing the orchestration is out of sync:
it claims step 2 uses a "Parallel web search (tools_web_search)" but the
implementation calls apifyScrapeLinkedIn which invokes
tools_apify_linkedin_scrape; update the comment to match the code (or vice
versa). Specifically, in the ContextGatheringStep file adjust the step 2
description to reference apifyScrapeLinkedIn/tools_apify_linkedin_scrape (or
change the implementation to call tools_web_search if that was intended) so the
comment and the implementation (apifyScrapeLinkedIn,
tools_apify_linkedin_scrape, and any mention of tools_web_search) are
consistent.
In `@src/openhuman/agent/prompts/mod.rs`:
- Around line 161-173: Add a unit test to assert that
SystemPromptBuilder::build() appends GLOBAL_STYLE_SUFFIX by checking the
returned string contains the "## Output style" heading and the em-dash rule
(e.g., the substring "Do **not** use em-dashes" or the full
GLOBAL_STYLE_SUFFIX). Modify or add the test (e.g., in
prompt_builder_assembles_sections or a new test) to construct a
SystemPromptBuilder, call build(), and assert the output contains those
substrings so the constant and its appending logic are covered.
In `@src/openhuman/agent/schemas.rs`:
- Around line 271-281: The RPC handler handle_spawn_welcome currently always
spawns the proactive welcome task; add an idempotency guard by loading config
and checking config.chat_onboarding_completed before calling
crate::openhuman::agent::welcome_proactive::spawn_proactive_welcome (or
alternatively implement a request-scoped once-guard) so that if
chat_onboarding_completed is true you skip spawning and return spawned: false
(and a clear log message); ensure any returned RpcOutcome reflects the skip and
keep the existing flow when the flag is false so the welcome agent and its
complete_onboarding tool continue to flip the flag after successful onboarding.
In `@src/openhuman/agent/welcome_proactive.rs`:
- Around line 221-225: Update the "Voice: long-lost friend" opener text so it
stops instructing the model to surface location-level profile data from `###
PROFILE.md`; specifically remove any examples that suggest revealing "where they
live" and instead instruct referencing only low-sensitivity traits (e.g.,
hobbies, work role, interests) or to skip personal details unless the user has
already brought them up — modify the block containing the "Voice: long-lost
friend" guidance and the `### PROFILE.md` reference to explicitly prohibit
location/PII disclosure and prefer neutral, low-sensitivity mentions.
In `@src/openhuman/integrations/parallel.rs`:
- Around line 38-56: The SearchResponse/SearchResultItem structs currently mask
missing backend fields by using serde defaults for cost_usd and title; remove
the #[serde(rename = "costUsd", default)] on cost_usd in SearchResponse and the
#[serde(default)] on title in SearchResultItem so these fields are required
(keep cost_usd as f64 and title as String) so deserialization will fail when the
backend omits them, then update any callers/tests (including the unit test that
expects missing costUsd to fail) to handle or assert the deserialization error
accordingly (symbols: SearchResponse::cost_usd, SearchResultItem::title).
In `@src/openhuman/learning/linkedin_enrichment.rs`:
- Around line 316-327: The openhuman_dir assignment using
config.config_path.parent().map(...) can be None for bare filenames; update the
ProviderRuntimeOptions construction so openhuman_dir falls back when parent() is
None (e.g., use config.workspace_dir or another sensible default) before calling
create_backend_inference_provider; ensure you reference
ProviderRuntimeOptions.openhuman_dir and config.config_path.parent() and prefer
config.workspace_dir (or config.config_path itself if workspace_dir is
unavailable) so the AuthService will locate the JWT.
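The fallback can be sketched like this (function and parameter names are illustrative): `Path::parent()` yields `None` for a root path and an empty path for a bare filename such as `config.json`, so both cases should fall through to the workspace dir:

```rust
use std::path::{Path, PathBuf};

// Illustrative resolver: prefer the config file's directory, but fall back to
// workspace_dir when parent() is None (root path) or empty (bare filename).
fn resolve_openhuman_dir(config_path: &Path, workspace_dir: &Path) -> PathBuf {
    config_path
        .parent()
        .filter(|p| !p.as_os_str().is_empty())
        .map(Path::to_path_buf)
        .unwrap_or_else(|| workspace_dir.to_path_buf())
}

fn main() {
    let dir = resolve_openhuman_dir(Path::new("config.json"), Path::new("/workspace"));
    println!("{}", dir.display());
}
```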
---
Outside diff comments:
In `@app/src-tauri/src/whatsapp_scanner/mod.rs`:
- Around line 1-5: Update the module docstring that mentions the remote
debugging port: change the hardcoded port reference "(set via
`--remote-debugging-port=9222` in `lib.rs`)" to use 19222 so it matches the
actual configuration; locate this text in the top-level doc comment of
whatsapp_scanner (the module-level comment) and replace 9222 with 19222.
In `@app/src/AppRoutes.tsx`:
- Around line 126-142: In AppRoutes.tsx there are multiple duplicate Route
entries for path "/notifications" rendering <ProtectedRoute
requireAuth={true}><Notifications/></ProtectedRoute>; remove the extra copies
and keep only a single Route for "/notifications" (search for Route with
path="/notifications" and the Notifications component) so that only one Route
declaration for this path remains; ensure no other logic or props are lost when
deleting the duplicates.
In `@src/openhuman/config/ops.rs`:
- Around line 564-600: Add a unit test for set_onboarding_completed that
verifies the false→true path only seeds cron jobs and does NOT trigger the
welcome agent: arrange a config with onboarding_completed = false, replace or
spy on crate::openhuman::cron::seed::seed_proactive_agents (test-only shim or
mock) to record invocation, also replace the RPC/handler used for
agent.spawn_welcome (or add a spy that would panic if called), call
set_onboarding_completed(true) (use tokio::test async), await completion, then
assert seed_proactive_agents was invoked and spawn_welcome was not; use the
existing functions load_config_with_timeout/save via a temporary test config or
isolated test fixture and clean up state afterwards.
---
Nitpick comments:
In `@app/src-tauri/src/cdp/input.rs`:
- Around line 64-69: Add debug-level logging to the CDP input functions (e.g.,
click) so entry, exit, and each CDP call are recorded: log function entry with
parameters (session, x, y), log before each mouse_event call referencing
mouse_event(cdp, session, "mouseMoved"/"mousePressed"/"mouseReleased"), log the
success result or capture and log any error returned from those calls, and log
function exit on success; apply the same pattern to the other input functions in
this file (lines 101-144, 148-177) to ensure consistent development-oriented
tracing of CDP interactions using the existing logging facility (e.g., debug! or
processLogger).
In `@app/src-tauri/src/gmail/mod.rs`:
- Around line 270-274: The log call in the Tauri command wrapper
gmail_find_linkedin_profile_url is using log::info! while other wrappers use
log::debug!; change the logging invocation in gmail_find_linkedin_profile_url to
use log::debug! (keeping the same message and account_id interpolation) so it
matches the existing pattern used by the other commands and aligns with the
"Debug command" intent; the function cdp_find_linkedin_profile_url remains
unchanged.
In `@app/src-tauri/src/gmail/reads.rs`:
- Around line 649-659: The current heuristic in print_view_url
(message_id.chars().all(|c| c.is_ascii_digit())) misclassifies pure-digit hex
IDs; remove the fragile digit-only check and always generate the thread-style
view URL (use the "&th={escaped}" format) instead of the thread-f/permmsgid
variant so pure-digit hex IDs won't be misrouted—update the logic in
print_view_url (and the escaped variable construction) to consistently return
the th= URL.
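The fragility is easy to demonstrate: a Gmail hex ID that happens to contain only digit characters passes the digit-only check and gets routed to the wrong URL variant. A small reconstruction of the flagged heuristic:

```rust
// The heuristic the comment flags (reconstructed from the quoted check):
// hex IDs are usually distinguishable ("18c2a4f..."), but a hex ID made up
// entirely of 0-9 characters is indistinguishable from a numeric legacy ID.
fn looks_like_numeric_id(message_id: &str) -> bool {
    message_id.chars().all(|c| c.is_ascii_digit())
}

fn main() {
    println!("{}", looks_like_numeric_id("18c2a4f09b3d")); // hex letters present
    println!("{}", looks_like_numeric_id("190234567890")); // all-digit hex ID, misclassified
}
```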
In `@app/src/features/privacy/whatLeavesItems.ts`:
- Around line 13-17: Remove the commented-out privacy list entry for the
now-disabled item (the object with id 'model-downloads' and title 'Model
downloads & updates') from the shipped constants in whatLeavesItems.ts so the
maintained list only contains active entries; locate the commented block
referencing 'model-downloads' and delete it to avoid stale commented code and
potential desync with actual behavior.
In `@app/src/pages/onboarding/steps/__tests__/ContextGatheringStep.test.tsx`:
- Around line 55-135: Add a new unit test in ContextGatheringStep.test.tsx that
simulates a rejection from one of the downstream RPC stages (either
'openhuman.tools_apify_linkedin_scrape' or 'openhuman.learning_save_profile') by
having callCoreRpc.mockImplementation (or mockResolvedValueOnce /
mockRejectedValueOnce) return an error for that method, then render
<ContextGatheringStep connectedSources={['composio:gmail']} onNext={...} />,
trigger the pipeline via fireEvent.click on the "Let's go!" button, and assert
the UI shows the stage error state (e.g., the stage-error UI text or absence of
'Context Ready') and that subsequent RPCs are not called; reference callCoreRpc
mock calls and the component ContextGatheringStep to locate where to add the
test.
In `@app/src/pages/onboarding/steps/ContextGatheringStep.tsx`:
- Around line 43-50: The Stage id 'web-search' in the STAGES array should be
renamed to 'linkedin-scrape' to match the actual operation (apifyScrapeLinkedIn)
and avoid misleading future maintainers; update the STAGES entry (Stage type
remains the same) and then search for and replace all
usages/branches/conditionals/tests that match the old id 'web-search' (e.g.,
switch/case handlers, feature flags, analytics events, apify invocation logic)
so they reference 'linkedin-scrape' instead, and run tests to ensure no
remaining references to 'web-search' remain.
In `@app/src/pages/onboarding/steps/SkillsStep.tsx`:
- Around line 126-135: The JSX is redundantly calling deriveComposioState via
statusLabel(gmailConnection) and statusColor(gmailConnection) causing duplicate
work; update the render to use the already-computed gmailState (from
deriveComposioState at line where gmailState is created) or memoize the derived
state and pass that into statusLabel and statusColor (e.g., call
statusLabel(gmailState) and statusColor(gmailState) or compute a local const
derived = deriveComposioState(gmailConnection) and use
statusLabel(derived)/statusColor(derived)) so the state is computed once per
render.
In `@src/openhuman/agent/prompts/mod.rs`:
- Around line 666-864: render_subagent_system_prompt_with_format is missing the
GLOBAL_STYLE_SUFFIX that SystemPromptBuilder::build appends, causing
inconsistent styling between main agents and subagents; update the end of
render_subagent_system_prompt_with_format (just before returning out) to append
GLOBAL_STYLE_SUFFIX (or the same mechanism SystemPromptBuilder::build uses)
ensuring proper spacing/newline (e.g., push_str with a preceding newline if
needed) so subagent prompts get the global style suffix too.
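A minimal sketch of the append-with-spacing step (the helper is hypothetical, and the constant's content here is only the two substrings the review elsewhere says it must contain):

```rust
// Hypothetical helper mirroring what SystemPromptBuilder::build is said to do:
// append the global style suffix separated by a blank line, adding a trailing
// newline first if the prompt body lacks one.
const GLOBAL_STYLE_SUFFIX: &str = "## Output style\nDo **not** use em-dashes.";

fn append_style_suffix(mut out: String) -> String {
    if !out.ends_with('\n') {
        out.push('\n');
    }
    out.push('\n');
    out.push_str(GLOBAL_STYLE_SUFFIX);
    out
}

fn main() {
    let prompt = append_style_suffix(String::from("## Task\nDo the thing."));
    println!("{prompt}");
}
```

Calling the same helper (or `push_str`) at the end of `render_subagent_system_prompt_with_format` would keep main-agent and subagent styling consistent.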
In `@src/openhuman/learning/linkedin_enrichment.rs`:
- Around line 1-759: The module exceeds the ~500-line guideline; split it by
extracting related responsibilities into smaller sibling modules. Move profile
generation and LLM summarisation into a new module (e.g. profile/markdown.rs)
containing render_profile_markdown, summarise_profile_with_llm,
write_profile_md, and write_profile_md_url_only; move Gmail search logic into
its own module (e.g. gmail/search.rs) containing search_gmail_for_linkedin and
the COMM_RE/comm-related decoding logic; keep orchestration in the existing file
and update imports and visibility accordingly; also consider moving memory
persistence helpers (persist_linkedin_profile, persist_linkedin_url_only,
build_memory_client) into a memory/persist.rs if desired to further reduce size.
- Around line 94-115: Add unit tests that exercise the early-exit and preset-URL
code paths: create a test that sets config.workspace_dir to a temp dir
containing a file named "PROFILE.md", call the enrichment entry function (the
function that returns `result` using `config.workspace_dir`), and assert that
result.log contains the skip message and that each EnrichmentStage (constructed
with EnrichmentStage/StageStatus) is Skipped with detail "PROFILE.md already on
disk"; create a second test that sets `config.preset_profile_url = Some(url)`
and an empty workspace_dir, runs the same enrichment function, and assert that
the stage for "build-profile" (or the stage that records the preset) is marked
Success and that the preset URL is recorded in the result (or in the stage
detail/log) as expected; use temporary directories and cleanup to isolate tests
and mock only config.workspace_dir and preset_profile_url so existing regex
tests remain unaffected.
In `@src/openhuman/learning/schemas.rs`:
- Around line 212-217: The code uses blocking std::fs calls
(std::fs::create_dir_all and std::fs::write) inside an async handler when
creating PROFILE.md; replace them with async Tokio equivalents: use
tokio::fs::create_dir_all and tokio::fs::write (awaiting each), preserving the
same error mapping messages, and ensure the enclosing async function remains
async so you can await these calls; references to path, config.workspace_dir,
and the PROFILE.md write logic should be updated accordingly to avoid blocking
the reactor.
In `@src/openhuman/tools/schemas.rs`:
- Around line 228-232: The parameter timeout_secs in the parallelSearchSchema
block is unused; either remove timeout_secs from the schema definition and all
related structs/functions (and update any callers and validator mappings like
parallelSearchSchema / agentIntegration.validator.ts) to stop expecting that
field, or explicitly document in the schema comment that timeout_secs is
intentionally ignored (e.g., replace the current throwaway let _ = timeout_secs;
with a clear comment) so future readers know it’s deliberate; locate occurrences
by the symbol timeout_secs and the parallelSearchSchema reference and apply the
chosen change consistently across schema, validators, and callers.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d679a96f-4093-4c65-8950-7bad7030387d
⛔ Files ignored due to path filters (1)
app/src-tauri/Cargo.lock is excluded by `!**/*.lock`
📒 Files selected for processing (47)
- app/src-tauri/Cargo.toml
- app/src-tauri/permissions/allow-core-process.toml
- app/src-tauri/src/cdp/input.rs
- app/src-tauri/src/cdp/mod.rs
- app/src-tauri/src/cdp/snapshot.rs
- app/src-tauri/src/gmail/mod.rs
- app/src-tauri/src/gmail/reads.rs
- app/src-tauri/src/lib.rs
- app/src-tauri/src/webview_accounts/mod.rs
- app/src-tauri/src/whatsapp_scanner/mod.rs
- app/src/App.tsx
- app/src/AppRoutes.tsx
- app/src/components/DefaultRedirect.tsx
- app/src/components/OnboardingOverlay.tsx
- app/src/components/ProtectedRoute.tsx
- app/src/components/__tests__/OnboardingOverlay.test.tsx
- app/src/components/settings/SettingsHome.tsx
- app/src/features/privacy/WhatLeavesMyComputerSheet.tsx
- app/src/features/privacy/whatLeavesItems.ts
- app/src/lib/coreState/__tests__/store.test.ts
- app/src/pages/Accounts.tsx
- app/src/pages/Conversations.tsx
- app/src/pages/onboarding/Onboarding.tsx
- app/src/pages/onboarding/OnboardingContext.tsx
- app/src/pages/onboarding/OnboardingLayout.tsx
- app/src/pages/onboarding/components/BetaBanner.tsx
- app/src/pages/onboarding/components/WebviewLoginModal.tsx
- app/src/pages/onboarding/pages/ChatProviderPage.tsx
- app/src/pages/onboarding/pages/ContextPage.tsx
- app/src/pages/onboarding/pages/SkillsPage.tsx
- app/src/pages/onboarding/pages/WelcomePage.tsx
- app/src/pages/onboarding/steps/ContextGatheringStep.tsx
- app/src/pages/onboarding/steps/SkillsStep.test.tsx
- app/src/pages/onboarding/steps/SkillsStep.tsx
- app/src/pages/onboarding/steps/WelcomeStep.tsx
- app/src/pages/onboarding/steps/__tests__/ContextGatheringStep.test.tsx
- app/src/pages/onboarding/steps/__tests__/WelcomeStep.test.tsx
- src/openhuman/agent/agents/welcome/prompt.md
- src/openhuman/agent/prompts/mod.rs
- src/openhuman/agent/schemas.rs
- src/openhuman/agent/welcome_proactive.rs
- src/openhuman/config/ops.rs
- src/openhuman/integrations/parallel.rs
- src/openhuman/learning/linkedin_enrichment.rs
- src/openhuman/learning/schemas.rs
- src/openhuman/tools/impl/network/web_search.rs
- src/openhuman/tools/schemas.rs
💤 Files with no reviewable changes (3)
- app/src/features/privacy/WhatLeavesMyComputerSheet.tsx
- app/src/components/OnboardingOverlay.tsx
- app/src/components/__tests__/OnboardingOverlay.test.tsx
```diff
 #[derive(Debug, Deserialize, serde::Serialize)]
 pub struct SearchResponse {
     #[serde(rename = "searchId")]
     #[allow(dead_code)]
-    pub(crate) search_id: String,
-    pub(crate) results: Vec<SearchResultItem>,
-    #[serde(rename = "costUsd")]
-    pub(crate) cost_usd: f64,
+    pub search_id: String,
+    pub results: Vec<SearchResultItem>,
+    #[serde(rename = "costUsd", default)]
+    pub cost_usd: f64,
 }

-#[derive(Debug, Deserialize)]
-pub(crate) struct SearchResultItem {
-    pub(crate) url: String,
-    pub(crate) title: String,
-    pub(crate) publish_date: Option<String>,
-    pub(crate) excerpts: Vec<String>,
+#[derive(Debug, Deserialize, serde::Serialize)]
+pub struct SearchResultItem {
+    pub url: String,
+    #[serde(default)]
+    pub title: String,
+    #[serde(default)]
+    pub publish_date: Option<String>,
+    #[serde(default)]
+    pub excerpts: Vec<String>,
```
Keep required search fields strict, or handle their absence end-to-end.
Defaulting costUsd to 0.0 and title to "" hides backend contract regressions. The search formatter will now emit blank titles / $0.0000, and the existing unit test at Lines 1001-1006 still expects missing costUsd to fail.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/integrations/parallel.rs` around lines 38 - 56, The
SearchResponse/SearchResultItem structs currently mask missing backend fields by
using serde defaults for cost_usd and title; remove the #[serde(rename =
"costUsd", default)] on cost_usd in SearchResponse and the #[serde(default)] on
title in SearchResultItem so these fields are required (keep cost_usd as f64 and
title as String) so deserialization will fail when the backend omits them, then
update any callers/tests (including the unit test that expects missing costUsd
to fail) to handle or assert the deserialization error accordingly (symbols:
SearchResponse::cost_usd, SearchResultItem::title).
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/openhuman/integrations/parallel.rs (1)
44-56: ⚠️ Potential issue | 🟠 Major

Keep `costUsd`/`title` strict to avoid masking backend contract regressions.

At Line 44 and Line 51, `#[serde(default)]` turns missing required backend fields into silent fallbacks (`cost_usd = 0.0`, empty `title`), which can hide integration breakage and emit misleading output downstream.

Suggested fix:

```diff
 #[derive(Debug, Deserialize, serde::Serialize)]
 pub struct SearchResponse {
     #[serde(rename = "searchId")]
     #[allow(dead_code)]
     pub search_id: String,
     pub results: Vec<SearchResultItem>,
-    #[serde(rename = "costUsd", default)]
+    #[serde(rename = "costUsd")]
     pub cost_usd: f64,
 }

 #[derive(Debug, Deserialize, serde::Serialize)]
 pub struct SearchResultItem {
     pub url: String,
-    #[serde(default)]
     pub title: String,
     #[serde(default)]
     pub publish_date: Option<String>,
     #[serde(default)]
     pub excerpts: Vec<String>,
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/integrations/parallel.rs` around lines 44 - 56, The serde defaults on cost_usd and title are masking missing backend fields; remove #[serde(default)] from the cost_usd field in the struct with serde(rename = "costUsd") and from SearchResultItem::title so deserialization fails on missing values (or change the types to Option<T> if they are truly optional), ensuring you keep the serde rename for cost_usd and update any callers/tests that relied on the previous fallback behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/openhuman/integrations/parallel.rs`:
- Around line 1001-1009: The test
search_response_defaults_missing_cost_usd_to_zero only asserts cost_usd; extend
it to cover the other newly-defaulted fields, which live on SearchResultItem:
deserialize a response whose result item omits them, then assert the item's
title equals the empty string, its publish_date is None (or the concrete
default used by the struct), and its excerpts collection is empty
(item.excerpts.is_empty()), in addition to the existing cost_usd and results
checks, so the test verifies the default behavior of title, publish_date,
excerpts, and cost_usd.
---
Duplicate comments:
In `@src/openhuman/integrations/parallel.rs`:
- Around line 44-56: The serde defaults on cost_usd and title are masking
missing backend fields; remove #[serde(default)] from the cost_usd field in the
struct with serde(rename = "costUsd") and from SearchResultItem::title so
deserialization fails on missing values (or change the types to Option<T> if
they are truly optional), ensuring you keep the serde rename for cost_usd and
update any callers/tests that relied on the previous fallback behavior.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9e6c2917-c4de-4be3-84f6-237e47810445
📒 Files selected for processing (2)
- app/src/features/privacy/WhatLeavesMyComputerSheet.test.tsx
- src/openhuman/integrations/parallel.rs
✅ Files skipped from review due to trivial changes (1)
- app/src/features/privacy/WhatLeavesMyComputerSheet.test.tsx
```diff
 fn search_response_defaults_missing_cost_usd_to_zero() {
     let json = r#"{
         "searchId": "s123",
         "results": []
     }"#;
-    assert!(serde_json::from_str::<SearchResponse>(json).is_err());
+    let resp: SearchResponse = serde_json::from_str(json).unwrap();
+    assert_eq!(resp.search_id, "s123");
+    assert!(resp.results.is_empty());
+    assert_eq!(resp.cost_usd, 0.0);
```
Expand tests for all newly-defaulted search fields, not just costUsd.
At Line 1001, coverage only validates missing costUsd. The same change set also defaulted title, publish_date, and excerpts (Line 51–56), but those behaviors are not tested yet.
As per coding guidelines: "Ship unit tests and coverage for behavior you are adding or changing before building additional features on top."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/integrations/parallel.rs` around lines 1001 - 1009, The test
search_response_defaults_missing_cost_usd_to_zero only asserts cost_usd; extend
it to cover the other newly-defaulted fields, which live on SearchResultItem:
after deserializing a response whose result item omits them, add assertions
that the item's title equals the empty string, its publish_date is None (or
the concrete default used by the struct), and its excerpts is an empty
collection (item.excerpts.is_empty()), in addition to the existing cost_usd
and results checks, so the test verifies the default behavior of title,
publish_date, excerpts, and cost_usd.
@coderabbitai comments fix. pls rereview

🧠 Learnings used
✅ Actions performed
Full review triggered.
Actionable comments posted: 11
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
app/src-tauri/src/whatsapp_scanner/mod.rs (1)
87-89: ⚠️ Potential issue | 🟡 Minor

Stale port reference in scanner startup comment.

Line 88 still says port 9222; this conflicts with the migrated 19222 setting and can mislead future debugging.

📝 Suggested comment fix:

```diff
-    // port 9222 means scan_once will fail on every tick anyway). Keep
+    // port 19222 means scan_once will fail on every tick anyway). Keep
```

Based on learnings: New/changed behavior ships with matching rustdoc / code comments; update `AGENTS.md` or architecture docs when rules or user-visible behavior change.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src-tauri/src/whatsapp_scanner/mod.rs` around lines 87 - 89, The comment in whatsapp_scanner::mod.rs referencing "port 9222" is stale; update that comment to reflect the migrated debugging port "19222" (or to reference the configurable port constant/setting if one exists) so it doesn't mislead future debugging of the cdp/scanner startup path, and ensure corresponding documentation (e.g., AGENTS.md or architecture docs) is updated to match the new port or note the config location.
🧹 Nitpick comments (4)
app/src-tauri/src/whatsapp_scanner/mod.rs (1)
37-39: Avoid duplicating CDP port constants across modules.

Lines 37-39 hardcode `CDP_PORT` again while `app/src-tauri/src/cdp/mod.rs` already defines the shared value. This can silently drift and break scanner attach when one side changes.

♻️ Proposed refactor:

```diff
-const CDP_HOST: &str = "127.0.0.1";
-// Must match `--remote-debugging-port=19222` in lib.rs and
-// `cdp::CDP_PORT`. Was 9222, moved to dodge ollama's listener.
-const CDP_PORT: u16 = 19222;
+#[cfg(feature = "cef")]
+const CDP_HOST: &str = crate::cdp::CDP_HOST;
+#[cfg(not(feature = "cef"))]
+const CDP_HOST: &str = "127.0.0.1";
+
+#[cfg(feature = "cef")]
+const CDP_PORT: u16 = crate::cdp::CDP_PORT;
+#[cfg(not(feature = "cef"))]
+const CDP_PORT: u16 = 19222;
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@app/src-tauri/src/whatsapp_scanner/mod.rs` around lines 37 - 39, The duplicated CDP port constant in whatsapp_scanner (const CDP_PORT) must be removed and the shared value from the cdp module used instead; delete the local const and import the canonical constant (e.g. add use crate::cdp::CDP_PORT; or cdp::CDP_PORT where referenced) so all code reads the single source of truth (CDP_PORT) from the cdp module and update any local references to use that imported symbol.

app/src/pages/onboarding/steps/SkillsStep.tsx (1)
77-87: Add namespaced debug logs around the onboarding transition path.
This block introduces a new async branch/external-call/error path (onNext) but currently emits no diagnostic logs. Please add lightweight namespaced debug logs for start, branch (gmailConnected), success, and error (without PII) to make onboarding failures traceable.
Proposed patch
  const handleContinue = async () => {
    setError(null);
    setSubmitting(true);
+   console.debug('[onboarding:skills-step] continue:start', {
+     gmailConnected,
+     gmailState,
+   });
    try {
      const sources = gmailConnected ? ['composio:gmail'] : [];
+     console.debug('[onboarding:skills-step] continue:dispatch', {
+       sourceCount: sources.length,
+     });
      await onNext({ sources });
+     console.debug('[onboarding:skills-step] continue:success');
    } catch (e) {
+     console.debug('[onboarding:skills-step] continue:error', {
+       message: e instanceof Error ? e.message : 'unknown',
+     });
      setError(e instanceof Error ? e.message : 'Something went wrong. Please try again.');
    } finally {
      setSubmitting(false);
    }
  };
As per coding guidelines
**/*.{rs,ts,tsx}: Add substantial development-oriented logging at entry/exit points, branch decisions, external calls, retries/timeouts, state transitions, and error paths.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/pages/onboarding/steps/SkillsStep.tsx` around lines 77 - 87, Add lightweight namespaced debug logs around the onboarding transition in handleContinue: log an entry message when the function starts, log the branch decision for gmailConnected (include the namespace e.g., "onboarding:skills:handleContinue"), log a success message after await onNext returns, and log an error message in the catch block that includes the error.message (without PII). Keep existing state updates (setSubmitting, setError) unchanged and ensure logs run before/after those calls so entry/exit and branch are traceable for handleContinue and the onNext call.
app/src/features/privacy/whatLeavesItems.ts (1)
31-31: Subhead wording can be tightened for clarity.
“what does, and when” reads a bit awkwardly; consider naming the subject explicitly (e.g., “what leaves your device and when”).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/features/privacy/whatLeavesItems.ts` at line 31, Update the constant WHAT_LEAVES_SUBHEAD to use a clearer subject—replace the current string "For full transparency, here's exactly what does, and when." with wording like "For full transparency, here's exactly what leaves your device and when." so the subject is explicit and the phrasing reads naturally.
src/openhuman/tools/schemas.rs (1)
165-197: Consider adding entry-point debug logging.
The handler correctly validates input, builds the client, and executes the tool. Per coding guidelines, adding debug! or trace! level logging at entry/exit points would improve observability.
♻️ Suggested logging enhancement
  fn handle_composio_execute(params: Map<String, Value>) -> ControllerFuture {
      Box::pin(async move {
+         tracing::debug!("[tools.composio_execute] entry");
          let action = params
              .get("action")
              .and_then(Value::as_str)
              .map(str::to_string)
              .ok_or_else(|| "missing required `action`".to_string())?;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tools/schemas.rs` around lines 165 - 197, The handler handle_composio_execute lacks entry/exit debug logging; add debug! (or trace!) calls—one at the start of handle_composio_execute after extracting action/action_args to log the incoming action and whether params were provided (avoid logging sensitive contents), and one after the client.execute_tool call to log outcome details like resp.successful and resp.cost_usd (and include resp.error when present) using the debug! macro from the crate's logging setup; place these logs near the existing let action = ... and right after let resp = ... to improve observability without changing behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/src-tauri/src/cdp/input.rs`:
- Around line 71-236: Add unit tests that mock or stub CdpConn to capture and
assert the sequence and payloads of CDP calls for click (mouseMoved →
mousePressed → mouseReleased via mouse_event), type_text (per-character keyDown
→ char → keyUp with correct "text"/"key"/"unmodifiedText"), press_key
(rawKeyDown → keyUp with cdp_fields values), and select_all_in_focused
(rawKeyDown/keyUp including SELECT_ALL_MODIFIER); ensure tests validate order,
"type"/"button"/"clickCount"/"x"/"y"/"key"/"code"/"windowsVirtualKeyCode"/"nativeVirtualKeyCode"/"modifiers"
fields and include platform-specific modifier behavior for SELECT_ALL_MODIFIER
by parameterizing or simulating platforms in the mock CdpConn used by click,
mouse_event, type_text, press_key, and select_all_in_focused.
In `@app/src-tauri/src/gmail/mod.rs`:
- Around line 165-176: The docstring for reads::find_linkedin_profile_url is
outdated: update it to state that the function now clicks Gmail search result
rows and scrapes the live thread DOM to extract LinkedIn profile URLs (instead
of fetching print-view bodies and regex-matching them), mention that no JS
injection is used and that the extraction happens from the live thread DOM via
the CDP-driven UI interaction, and note that it still returns None when no
parsable profile URL is found; ensure the docstring references this
click-and-scrape behavior and removes any mention of print-view body regex
matching.
In `@app/src-tauri/src/gmail/reads.rs`:
- Around line 553-561: The current code uses href.find("?q=") and a partial
uppercase-only replacement to extract the q param which fails for non-leading q
params and lowercase percent-escapes; update the parsing to properly parse the
query string (e.g., split on '?' then parse key=value pairs or use a query
parser) to locate the "q" parameter regardless of position, fully percent-decode
the value (handling both uppercase and lowercase hex like %2f and %2F) before
calling extract_linkedin_url, and replace the fragile dest_raw/dest logic in
this block that references href and dest_raw; add a regression unit test that
supplies a URL with a non-leading q parameter containing lowercase-escaped
LinkedIn URL (e.g., ...&q=https%3a%2f%2fwww.linkedin.com%2fin%2f...) and assert
extract_linkedin_url succeeds.
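The parsing fix described above can be sketched in std-only Rust. The helper names `query_param` and `percent_decode` are hypothetical; the real code would feed the decoded value into the existing `extract_linkedin_url`:

```rust
/// Hypothetical helper: find `name` anywhere in the query string instead
/// of assuming it is the first parameter (the `href.find("?q=")` bug).
fn query_param(href: &str, name: &str) -> Option<String> {
    let qs = href.split_once('?')?.1;
    let qs = qs.split('#').next().unwrap_or(qs);
    qs.split('&').find_map(|pair| {
        let (k, v) = pair.split_once('=').unwrap_or((pair, ""));
        (k == name).then(|| percent_decode(v))
    })
}

/// Percent-decode accepting both uppercase (%2F) and lowercase (%2f) hex.
fn percent_decode(s: &str) -> String {
    fn hex(b: u8) -> Option<u8> {
        match b {
            b'0'..=b'9' => Some(b - b'0'),
            b'a'..=b'f' => Some(b - b'a' + 10),
            b'A'..=b'F' => Some(b - b'A' + 10),
            _ => None,
        }
    }
    let bytes = s.as_bytes();
    let mut out = Vec::with_capacity(bytes.len());
    let mut i = 0;
    while i < bytes.len() {
        match (bytes.get(i), bytes.get(i + 1), bytes.get(i + 2)) {
            // Decode %XY only when both hex digits are valid.
            (Some(b'%'), Some(&a), Some(&b)) if hex(a).is_some() && hex(b).is_some() => {
                out.push(hex(a).unwrap() * 16 + hex(b).unwrap());
                i += 3;
            }
            _ => {
                out.push(bytes[i]);
                i += 1;
            }
        }
    }
    String::from_utf8_lossy(&out).into_owned()
}
```

The suggested regression case (a non-leading `q` with lowercase escapes) then reduces to one assertion on `query_param`.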
- Around line 427-441: Empty clickable rows from collect_row_click_targets
should not be treated as a benign miss; instead detect Gmail-specific states
(auth/interstitial or transient rerender) before continuing. Update the logic
around Snapshot::capture_with_rects/collect_row_click_targets: when
rows.is_empty() return, call a new helper (e.g.,
is_gmail_ready_or_no_results(&snap) or inspect snap DOM/text for known markers
like interstitial/login banners or "No conversations" / mailbox-empty elements)
and only treat it as a true no-results when that helper confirms the Gmail-ready
no-results marker; otherwise retry or surface an error (do not silently
continue). Use symbols Snapshot::capture_with_rects, collect_row_click_targets,
rows.is_empty(), and account_id to locate and change the branch.
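A minimal sketch of the suggested guard, assuming the snapshot can be flattened to text. `snapshot_text` stands in for whatever accessor `Snapshot` exposes, and the marker strings are assumptions that would need to match what Gmail actually renders:

```rust
/// Returns true only when an empty row list is a genuine "no results"
/// state rather than an auth interstitial or a transient rerender.
fn empty_rows_is_true_no_results(snapshot_text: &str) -> bool {
    let t = snapshot_text.to_ascii_lowercase();
    // A login/interstitial page also produces zero clickable rows, so it
    // must be ruled out before an empty table counts as a clean miss.
    let looks_signed_out = t.contains("sign in") || t.contains("choose an account");
    let no_results = t.contains("no conversations") || t.contains("no messages matched");
    no_results && !looks_signed_out
}
```

The `rows.is_empty()` branch would call this and retry or surface an error when it returns false, instead of silently continuing.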
- Around line 126-130: The log statement currently emitting the raw,
caller-controlled `query` must be changed to avoid logging PII: update the
log::info calls that include `account_id`, `query`, and `limit` (the occurrences
around the existing log::info at the top and the similar log at lines 197–201)
so they do not print the raw `query`; instead log a non-sensitive representation
such as `query.len()` or a redacted/hashed value (e.g., a SHA-256 hex or
truncated hash) together with `account_id` and `limit`. Make the change for both
occurrences (`log::info!(..., query, limit)` and the later similar call) and add
any needed helper use/import for hashing if you choose the hash approach.
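The redacted-logging idea can be sketched with std only; `query_fingerprint` is a hypothetical helper, and `DefaultHasher` stands in for the prompt's SHA-256 suggestion (which would need a crate):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Emit the query's length and a stable fingerprint instead of the raw
/// text, so logs stay correlatable without leaking PII.
fn query_fingerprint(query: &str) -> String {
    let mut h = DefaultHasher::new();
    query.hash(&mut h);
    format!("len={} hash={:016x}", query.chars().count(), h.finish())
}
```

The two `log::info!` call sites would then log `query_fingerprint(&query)` alongside `account_id` and `limit`.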
In `@app/src-tauri/src/webview_accounts/mod.rs`:
- Around line 294-305: The is_google_auth_popup logic currently checks path and
query parameter values for "signin", "accountchooser", and "chooseaccount" but
misses the "/ServiceLogin" flow; update the path.contains checks (variable path)
to also match "servicelogin" (case-insensitive) and extend the query_pairs
closure (k and v variables) so the matches! condition or the value checks
include "servicelogin" as a recognized auth flow, ensuring both the path and
value checks in is_google_auth_popup treat ServiceLogin popups as in-webview
auth.
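The widened matcher can be sketched as a small predicate; the function name is hypothetical, and the real `is_google_auth_popup` would apply it to both the URL path and the query-parameter values as described above:

```rust
/// ServiceLogin joins the recognized Google auth markers; the check is
/// case-insensitive so "/ServiceLogin" and "/servicelogin" both match.
fn looks_like_google_auth(path_or_value: &str) -> bool {
    let p = path_or_value.to_ascii_lowercase();
    ["signin", "accountchooser", "chooseaccount", "servicelogin"]
        .iter()
        .any(|m| p.contains(m))
}
```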
- Around line 1348-1352: The log statement in webview_accounts (the log::info!
call that prints "[webview-accounts] new-window ... navigate parent
(provider={})" with url and popup_provider) is emitting the raw popup URL which
may contain sensitive query params; change it to log a redacted URL instead by
computing a redacted_url (e.g., parse the url with Url::parse and remove or mask
query parameter values, or construct a string with only scheme://host/path and
no query/fragment) and use redacted_url in the log call; add or reuse a small
helper (e.g., redact_url or redact_sensitive_query_params) to perform the
masking so the log line in webview_accounts::mod.rs prints the safe redacted_url
rather than the raw url.
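A string-based sketch of the suggested `redact_url`, keeping scheme, host, and path diagnosable while dropping the query and fragment. A `Url::parse`-based version would be more robust but assumes the `url` crate:

```rust
/// Truncate at the first '?' or '#' so OAuth tokens and `continue=`
/// parameters never reach the log line.
fn redact_url(url: &str) -> String {
    let end = url.find(|c| c == '?' || c == '#').unwrap_or(url.len());
    url[..end].to_string()
}
```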
In `@app/src/components/settings/SettingsHome.tsx`:
- Around line 45-54: The persistor.purge() catch block returns early without
resetting loading state which can leave the modal stuck; modify the error path
for persistor.purge() so after calling setError(...) you also reset the loading
state (e.g., call the state setter that controls isLoading) and re-enable modal
actions, or re-throw the error so the caller's catch can clear isLoading; update
the catch around persistor.purge() (referencing persistor.purge(), setError and
the isLoading state setter) to ensure isLoading is cleared and the modal becomes
interactive on failure.
In `@src/openhuman/agent/schemas.rs`:
- Around line 271-291: Add unit and JSON-RPC integration tests that cover both
branches of handle_spawn_welcome: when config_rpc::load_config_with_timeout
yields chat_onboarding_completed = true the handler should return spawned:false
and the skip message, and when false it should call
crate::openhuman::agent::welcome_proactive::spawn_proactive_welcome and return
spawned:true with the dispatch message; write a unit test that injects/mocks
load_config_with_timeout to return the two config variants and asserts the
RpcOutcome JSON from handle_spawn_welcome, and add an e2e test in
tests/json_rpc_e2e.rs that calls the "agent.spawn_welcome" RPC and checks the
response for both cases (for the dispatched case, either stub or spy
spawn_proactive::spawn_proactive_welcome or assert the expected side-effect) so
the behavior is fully covered.
- Around line 273-285: The RPC currently only checks
config.chat_onboarding_completed before calling
crate::openhuman::agent::welcome_proactive::spawn_proactive_welcome; also
require config.onboarding_completed to be true (the UI onboarding flag) before
spawning. Modify the spawn_welcome handler to short-circuit and return
RpcOutcome::new({ "spawned": false }, vec![...]) and a separate tracing::info!
message when config.onboarding_completed is false (distinct log from the
chat_onboarding_completed case), and only call spawn_proactive_welcome(config)
when both config.onboarding_completed and config.chat_onboarding_completed are
true.
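The double gate can be pulled out as a pure, unit-testable function. Note the prompt's closing sentence reads as "both flags true", while the existing handler spawns only when chat onboarding is not yet completed; this sketch keeps the handler's polarity and adds the new UI-flag requirement. The struct is illustrative, with field names mirroring the config flags:

```rust
struct GateConfig {
    onboarding_completed: bool,
    chat_onboarding_completed: bool,
}

/// Ok(()) means "spawn the proactive welcome"; each Err carries a
/// distinct skip reason, as the prompt asks for distinct log lines.
fn welcome_gate(c: &GateConfig) -> Result<(), &'static str> {
    if c.chat_onboarding_completed {
        return Err("skipped: chat onboarding already completed");
    }
    if !c.onboarding_completed {
        return Err("skipped: UI onboarding not completed");
    }
    Ok(())
}
```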
In `@src/openhuman/agent/welcome_proactive.rs`:
- Around line 231-233: The proactive welcome flow currently relies on the
run_single prompt to suppress complete_onboarding, which is unsafe; modify the
proactive execution code in src/openhuman/agent/welcome_proactive.rs so it
enforces a proactive-only tool allowlist (e.g., create a PROACTIVE_ALLOWED_TOOLS
set) and use it in the function handling proactive runs (the
run_single/proactive dispatcher) to permit only the check_onboarding_status tool
and reject or strip any attempts to invoke complete_onboarding or other tools;
ensure the proactive path explicitly validates tool names against that allowlist
before calling tools and returns an error or no-op for disallowed tools.
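The allowlist itself is small; the real work is wiring the check into the proactive dispatcher so every requested tool name is validated before dispatch rather than trusting the prompt:

```rust
/// Proactive runs may only read onboarding state, never complete it.
const PROACTIVE_ALLOWED_TOOLS: &[&str] = &["check_onboarding_status"];

fn proactive_tool_allowed(name: &str) -> bool {
    PROACTIVE_ALLOWED_TOOLS.contains(&name)
}
```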
---
Outside diff comments:
In `@app/src-tauri/src/whatsapp_scanner/mod.rs`:
- Around line 87-89: The comment in whatsapp_scanner::mod.rs referencing "port
9222" is stale; update that comment to reflect the migrated debugging port
"19222" (or to reference the configurable port constant/setting if one exists)
so it doesn't mislead future debugging of the cdp/scanner startup path, and
ensure corresponding documentation (e.g., AGENTS.md or architecture docs) is
updated to match the new port or note the config location.
---
Nitpick comments:
In `@app/src-tauri/src/whatsapp_scanner/mod.rs`:
- Around line 37-39: The duplicated CDP port constant in whatsapp_scanner (const
CDP_PORT) must be removed and the shared value from the cdp module used instead;
delete the local const and import the canonical constant (e.g. add use
crate::cdp::CDP_PORT; or cdp::CDP_PORT where referenced) so all code reads the
single source of truth (CDP_PORT) from the cdp module and update any local
references to use that imported symbol.
In `@app/src/features/privacy/whatLeavesItems.ts`:
- Line 31: Update the constant WHAT_LEAVES_SUBHEAD to use a clearer
subject—replace the current string "For full transparency, here's exactly what
does, and when." with wording like "For full transparency, here's exactly what
leaves your device and when." so the subject is explicit and the phrasing reads
naturally.
In `@app/src/pages/onboarding/steps/SkillsStep.tsx`:
- Around line 77-87: Add lightweight namespaced debug logs around the onboarding
transition in handleContinue: log an entry message when the function starts, log
the branch decision for gmailConnected (include the namespace e.g.,
"onboarding:skills:handleContinue"), log a success message after await onNext
returns, and log an error message in the catch block that includes the
error.message (without PII). Keep existing state updates (setSubmitting,
setError) unchanged and ensure logs run before/after those calls so entry/exit
and branch are traceable for handleContinue and the onNext call.
In `@src/openhuman/tools/schemas.rs`:
- Around line 165-197: The handler handle_composio_execute lacks entry/exit
debug logging; add debug! (or trace!) calls—one at the start of
handle_composio_execute after extracting action/action_args to log the incoming
action and whether params were provided (avoid logging sensitive contents), and
one after the client.execute_tool call to log outcome details like
resp.successful and resp.cost_usd (and include resp.error when present) using
the debug! macro from the crate's logging setup; place these logs near the
existing let action = ... and right after let resp = ... to improve
observability without changing behavior.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a169b256-8337-4581-8322-e33205db0d28
📒 Files selected for processing (19)
app/src-tauri/src/cdp/input.rs
app/src-tauri/src/gmail/mod.rs
app/src-tauri/src/gmail/reads.rs
app/src-tauri/src/webview_accounts/mod.rs
app/src-tauri/src/whatsapp_scanner/mod.rs
app/src/AppRoutes.tsx
app/src/components/settings/SettingsHome.tsx
app/src/features/privacy/whatLeavesItems.ts
app/src/pages/onboarding/components/BetaBanner.tsx
app/src/pages/onboarding/pages/SkillsPage.test.tsx
app/src/pages/onboarding/steps/ContextGatheringStep.tsx
app/src/pages/onboarding/steps/SkillsStep.tsx
app/src/pages/onboarding/steps/__tests__/ContextGatheringStep.test.tsx
src/openhuman/agent/prompts/mod.rs
src/openhuman/agent/schemas.rs
src/openhuman/agent/welcome_proactive.rs
src/openhuman/learning/linkedin_enrichment.rs
src/openhuman/learning/schemas.rs
src/openhuman/tools/schemas.rs
✅ Files skipped from review due to trivial changes (1)
- app/src/pages/onboarding/components/BetaBanner.tsx
🚧 Files skipped from review as they are similar to previous changes (3)
- app/src/AppRoutes.tsx
- src/openhuman/agent/prompts/mod.rs
- app/src/pages/onboarding/steps/ContextGatheringStep.tsx
pub async fn click(cdp: &mut CdpConn, session: &str, x: f64, y: f64) -> Result<(), String> {
    log::debug!("[cdp::input] click session={session} x={x:.1} y={y:.1}");
    let _ = mouse_event(cdp, session, "mouseMoved", x, y, 0).await?;
    let _ = mouse_event(cdp, session, "mousePressed", x, y, 1).await?;
    let _ = mouse_event(cdp, session, "mouseReleased", x, y, 1).await?;
    log::debug!("[cdp::input] click complete session={session}");
    Ok(())
}

async fn mouse_event(
    cdp: &mut CdpConn,
    session: &str,
    kind: &str,
    x: f64,
    y: f64,
    click_count: u32,
) -> Result<Value, String> {
    log::debug!(
        "[cdp::input] mouse_event session={session} kind={kind} x={x:.1} y={y:.1} clicks={click_count}"
    );
    cdp.call(
        "Input.dispatchMouseEvent",
        json!({
            "type": kind,
            "x": x,
            "y": y,
            "button": "left",
            "buttons": if kind == "mousePressed" { 1 } else { 0 },
            "clickCount": click_count,
        }),
        Some(session),
    )
    .await
    .map_err(|e| format!("Input.dispatchMouseEvent {kind}: {e}"))
}

/// Type a literal string by dispatching one `keyDown`/`char`/`keyUp`
/// triple per character. CDP's `dispatchKeyEvent type=char` is what
/// actually inserts text into focused editable fields — `keyDown`
/// alone leaves the input empty for most letters. The `keyDown`
/// + `keyUp` pair is still needed so listeners (autocomplete,
/// keystroke counters) see a normal keystroke.
pub async fn type_text(cdp: &mut CdpConn, session: &str, text: &str) -> Result<(), String> {
    log::debug!(
        "[cdp::input] type_text session={session} chars={}",
        text.chars().count()
    );
    for ch in text.chars() {
        let s = ch.to_string();
        // keyDown — Gmail's command/keyboard router observes these.
        cdp.call(
            "Input.dispatchKeyEvent",
            json!({
                "type": "keyDown",
                "text": s,
                "unmodifiedText": s,
                "key": s,
            }),
            Some(session),
        )
        .await
        .map_err(|e| format!("Input.dispatchKeyEvent keyDown {ch:?}: {e}"))?;
        // char — actual text insertion into the focused editable.
        cdp.call(
            "Input.dispatchKeyEvent",
            json!({
                "type": "char",
                "text": s,
                "unmodifiedText": s,
                "key": s,
            }),
            Some(session),
        )
        .await
        .map_err(|e| format!("Input.dispatchKeyEvent char {ch:?}: {e}"))?;
        cdp.call(
            "Input.dispatchKeyEvent",
            json!({
                "type": "keyUp",
                "text": s,
                "unmodifiedText": s,
                "key": s,
            }),
            Some(session),
        )
        .await
        .map_err(|e| format!("Input.dispatchKeyEvent keyUp {ch:?}: {e}"))?;
    }
    log::debug!("[cdp::input] type_text complete session={session}");
    Ok(())
}

/// Press a non-character key (Enter, Esc, …). Sends `rawKeyDown` →
/// `keyUp`; no `char` because non-printables don't insert text.
pub async fn press_key(cdp: &mut CdpConn, session: &str, key: Key) -> Result<(), String> {
    let (key_name, code, vk) = key.cdp_fields();
    log::debug!("[cdp::input] press_key session={session} key={key_name}");
    cdp.call(
        "Input.dispatchKeyEvent",
        json!({
            "type": "rawKeyDown",
            "key": key_name,
            "code": code,
            "windowsVirtualKeyCode": vk,
            "nativeVirtualKeyCode": vk,
        }),
        Some(session),
    )
    .await
    .map_err(|e| format!("Input.dispatchKeyEvent rawKeyDown {key_name}: {e}"))?;
    cdp.call(
        "Input.dispatchKeyEvent",
        json!({
            "type": "keyUp",
            "key": key_name,
            "code": code,
            "windowsVirtualKeyCode": vk,
            "nativeVirtualKeyCode": vk,
        }),
        Some(session),
    )
    .await
    .map_err(|e| format!("Input.dispatchKeyEvent keyUp {key_name}: {e}"))?;
    log::debug!("[cdp::input] press_key complete session={session} key={key_name}");
    Ok(())
}

/// Dispatch Cmd/Ctrl+A to select-all in the focused contenteditable / input.
/// Useful when the search box already has a previous query in it that
/// we need to overwrite — Gmail keeps the last query rendered in the
/// search input so a fresh visit sees stale text.
pub async fn select_all_in_focused(cdp: &mut CdpConn, session: &str) -> Result<(), String> {
    log::debug!(
        "[cdp::input] select_all_in_focused session={session} modifier={SELECT_ALL_MODIFIER}"
    );
    cdp.call(
        "Input.dispatchKeyEvent",
        json!({
            "type": "rawKeyDown",
            "key": "a",
            "code": "KeyA",
            "windowsVirtualKeyCode": 65,
            "nativeVirtualKeyCode": 65,
            "modifiers": SELECT_ALL_MODIFIER,
        }),
        Some(session),
    )
    .await
    .map_err(|e| format!("Input.dispatchKeyEvent select-all keyDown: {e}"))?;
    cdp.call(
        "Input.dispatchKeyEvent",
        json!({
            "type": "keyUp",
            "key": "a",
            "code": "KeyA",
            "windowsVirtualKeyCode": 65,
            "nativeVirtualKeyCode": 65,
            "modifiers": SELECT_ALL_MODIFIER,
        }),
        Some(session),
    )
    .await
    .map_err(|e| format!("Input.dispatchKeyEvent select-all keyUp: {e}"))?;
    log::debug!("[cdp::input] select_all_in_focused complete session={session}");
    Ok(())
}
Add unit tests for new CDP input primitives before merge.
This introduces core input behavior but does not include matching Rust unit coverage in the changes shown. Please add tests that assert event order and payload fields for click, type_text, press_key, and select_all_in_focused (including platform modifier behavior).
As per coding guidelines, "Ship unit tests and coverage for behavior you are adding or changing before building additional features on top."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src-tauri/src/cdp/input.rs` around lines 71 - 236, Add unit tests that
mock or stub CdpConn to capture and assert the sequence and payloads of CDP
calls for click (mouseMoved → mousePressed → mouseReleased via mouse_event),
type_text (per-character keyDown → char → keyUp with correct
"text"/"key"/"unmodifiedText"), press_key (rawKeyDown → keyUp with cdp_fields
values), and select_all_in_focused (rawKeyDown/keyUp including
SELECT_ALL_MODIFIER); ensure tests validate order,
"type"/"button"/"clickCount"/"x"/"y"/"key"/"code"/"windowsVirtualKeyCode"/"nativeVirtualKeyCode"/"modifiers"
fields and include platform-specific modifier behavior for SELECT_ALL_MODIFIER
by parameterizing or simulating platforms in the mock CdpConn used by click,
mouse_event, type_text, press_key, and select_all_in_focused.
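The recording-fake idea behind the requested tests can be sketched synchronously. The real `CdpConn` is async and speaks JSON; this strips both details, and `RecordingConn`/`click_fake` are illustrative names only:

```rust
/// A fake connection that captures (method, payload) pairs so the event
/// order and fields can be asserted after a call.
#[derive(Default)]
struct RecordingConn {
    calls: Vec<(String, String)>,
}

impl RecordingConn {
    fn call(&mut self, method: &str, params: &str) {
        self.calls.push((method.to_string(), params.to_string()));
    }
}

/// Mirror of `click`'s moved → pressed → released sequence against the fake.
fn click_fake(conn: &mut RecordingConn, x: f64, y: f64) {
    for kind in ["mouseMoved", "mousePressed", "mouseReleased"] {
        conn.call(
            "Input.dispatchMouseEvent",
            &format!("type={kind} x={x:.1} y={y:.1} button=left"),
        );
    }
}
```

A real test would make `CdpConn` a trait (or feature-gated mock) so `click` itself can run against the recorder.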
let snap = match crate::cdp::Snapshot::capture_with_rects(&mut cdp, &session).await {
    Ok(s) => s,
    Err(e) => {
        log::warn!("[gmail][{account_id}] result snapshot failed: {}", e);
        continue;
    }
};
let rows = collect_row_click_targets(&snap, ROWS_PER_QUERY);
log::info!(
    "[gmail][{account_id}] find: query {:?} -> {} clickable rows",
    q,
    rows.len()
);
if rows.is_empty() {
    continue;
Don't treat an empty row list as a clean miss.
An auth/interstitial page or a slow Gmail rerender also lands here with rows.is_empty(). This branch just continues, so onboarding collapses "Gmail not ready" into Ok(None) and silently skips LinkedIn enrichment. Validate a Gmail-specific ready/no-results marker before treating an empty result table as a benign miss.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src-tauri/src/gmail/reads.rs` around lines 427 - 441, Empty clickable
rows from collect_row_click_targets should not be treated as a benign miss;
instead detect Gmail-specific states (auth/interstitial or transient rerender)
before continuing. Update the logic around
Snapshot::capture_with_rects/collect_row_click_targets: when rows.is_empty()
return, call a new helper (e.g., is_gmail_ready_or_no_results(&snap) or inspect
snap DOM/text for known markers like interstitial/login banners or "No
conversations" / mailbox-empty elements) and only treat it as a true no-results
when that helper confirms the Gmail-ready no-results marker; otherwise retry or
surface an error (do not silently continue). Use symbols
Snapshot::capture_with_rects, collect_row_click_targets, rows.is_empty(), and
account_id to locate and change the branch.
// 3. Purge redux-persist storage + browser storage. `persistor.purge()`
// wipes the persisted backend; localStorage/sessionStorage clears
// everything else (auth flags, theme, etc.).
try {
  await persistor.purge();
} catch (err) {
  console.warn('[Settings] persistor.purge failed:', err);
  setError('Failed to clear persisted app state. Please try again.');
  return;
}
persistor.purge() failure path can leave the modal stuck in loading state
On Line 53, the function returns without throwing. That bypasses the caller catch, so isLoading can remain true and both modal actions stay disabled after a purge failure.
Proposed fix
} catch (err) {
console.warn('[Settings] persistor.purge failed:', err);
setError('Failed to clear persisted app state. Please try again.');
+ setIsLoading(false);
return;
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/components/settings/SettingsHome.tsx` around lines 45 - 54, The
persistor.purge() catch block returns early without resetting loading state
which can leave the modal stuck; modify the error path for persistor.purge() so
after calling setError(...) you also reset the loading state (e.g., call the
state setter that controls isLoading) and re-enable modal actions, or re-throw
the error so the caller's catch can clear isLoading; update the catch around
persistor.purge() (referencing persistor.purge(), setError and the isLoading
state setter) to ensure isLoading is cleared and the modal becomes interactive
on failure.
fn handle_spawn_welcome(_params: Map<String, Value>) -> ControllerFuture {
    Box::pin(async move {
        let config = config_rpc::load_config_with_timeout().await?;
        if config.chat_onboarding_completed {
            tracing::info!(
                "[rpc][agent] spawn_welcome skipped — chat_onboarding_completed already true"
            );
            return RpcOutcome::new(
                serde_json::json!({ "spawned": false }),
                vec!["proactive welcome skipped: chat onboarding already completed".into()],
            )
            .into_cli_compatible_json();
        }
        tracing::info!("[rpc][agent] spawn_welcome — firing proactive welcome on detached task");
        crate::openhuman::agent::welcome_proactive::spawn_proactive_welcome(config);
        RpcOutcome::new(
            serde_json::json!({ "spawned": true }),
            vec!["proactive welcome agent dispatched".into()],
        )
        .into_cli_compatible_json()
    })
Add behavior tests for spawn_welcome, not just inventory coverage.
The only new coverage here is the schema ordering assertion. There’s still no test for the two important branches in handle_spawn_welcome: skip when spawning should be suppressed, and dispatch when it should proceed. Please add at least a handler-level unit test and the matching JSON-RPC integration coverage for agent.spawn_welcome. As per coding guidelines, "Ship unit tests and coverage for behavior you are adding or changing before building additional features on top" and "Rust code must be unit-tested for new/changed behavior before stacking features; run via cargo test and extend tests/json_rpc_e2e.rs for RPC integration tests".
Also applies to: 437-451
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/agent/schemas.rs` around lines 271 - 291, Add unit and JSON-RPC
integration tests that cover both branches of handle_spawn_welcome: when
config_rpc::load_config_with_timeout yields chat_onboarding_completed = true the
handler should return spawned:false and the skip message, and when false it
should call crate::openhuman::agent::welcome_proactive::spawn_proactive_welcome
and return spawned:true with the dispatch message; write a unit test that
injects/mocks load_config_with_timeout to return the two config variants and
asserts the RpcOutcome JSON from handle_spawn_welcome, and add an e2e test in
tests/json_rpc_e2e.rs that calls the "agent.spawn_welcome" RPC and checks the
response for both cases (for the dispatched case, either stub or spy
spawn_proactive::spawn_proactive_welcome or assert the expected side-effect) so
the behavior is fully covered.
Actionable comments posted: 16
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
app/src/pages/onboarding/pages/ChatProviderPage.tsx (1)
14-58: ⚠️ Potential issue | 🟠 Major — Add co-located unit test for this new onboarding page component.
This introduces new completion and error-handling behavior (async handleFinish, error state rendering, loading state management) that must be covered by tests before shipping. Place the test at app/src/pages/onboarding/pages/ChatProviderPage.test.tsx using Vitest, following the patterns in nearby onboarding tests.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/pages/onboarding/pages/ChatProviderPage.tsx` around lines 14 - 58, Add a Vitest/React Testing Library unit test file ChatProviderPage.test.tsx that mounts the ChatProviderPage component and mocks useOnboardingContext to cover (1) successful completion: stub completeAndExit to resolve and assert OnboardingNextButton shows loading state while awaiting and that completeAndExit was called and no error text is rendered, and (2) failure path: stub completeAndExit to reject with an Error and assert the error message from the component's error state is rendered and loading is cleared; ensure you mock or stub OnboardingNextButton click behavior (or trigger its onClick prop) and follow existing onboarding test patterns for setup/teardown and async waits.
♻️ Duplicate comments (11)
src/openhuman/integrations/parallel.rs (1)
1001-1009: ⚠️ Potential issue | 🟡 Minor — Add assertions for the other newly-defaulted search fields.
This test only verifies cost_usd. The same change also defaulted title, publish_date, and excerpts, but those defaults are not pinned yet.
Suggested test expansion
  #[test]
  fn search_response_defaults_missing_cost_usd_to_zero() {
      let json = r#"{
          "searchId": "s123",
-         "results": []
+         "results": [
+             { "url": "https://example.com" }
+         ]
      }"#;
      let resp: SearchResponse = serde_json::from_str(json).unwrap();
      assert_eq!(resp.search_id, "s123");
-     assert!(resp.results.is_empty());
+     assert_eq!(resp.results.len(), 1);
      assert_eq!(resp.cost_usd, 0.0);
+     assert_eq!(resp.results[0].title, "");
+     assert_eq!(resp.results[0].publish_date, None);
+     assert!(resp.results[0].excerpts.is_empty());
  }
As per coding guidelines "Ship unit tests and coverage for behavior you are adding or changing before building additional features on top."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/integrations/parallel.rs` around lines 1001 - 1009, The test function search_response_defaults_missing_cost_usd_to_zero only asserts cost_usd; update it to also assert the newly-defaulted fields on SearchResponse: verify resp.title equals the expected default (e.g., empty string ""), resp.publish_date equals the expected default (e.g., None or an empty string depending on the SearchResponse type), and resp.excerpts is the default empty collection (e.g., an empty vec). Locate these fields on the SearchResponse struct and add the corresponding assert_eq! / assert! checks alongside the existing assertions so the test pins title, publish_date, excerpts, and cost_usd defaults.

app/src/components/settings/SettingsHome.tsx (1)
50-54: ⚠️ Potential issue | 🟠 Major

`persistor.purge()` failure path can still leave the modal stuck in loading state.

On Line 53, `return` exits `clearAllAppData()` without throwing, so `handleLogoutAndClearData()` does not enter its `catch` and `isLoading` may stay `true`.

Suggested fix

```diff
 const clearAllAppData = async () => {
@@
   try {
     await persistor.purge();
   } catch (err) {
     console.warn('[Settings] persistor.purge failed:', err);
-    setError('Failed to clear persisted app state. Please try again.');
-    return;
+    throw new Error('Failed to clear persisted app state. Please try again.');
   }
@@
 const handleLogoutAndClearData = async () => {
   try {
     setIsLoading(true);
     setError(null);
     await clearAllAppData(); // This will redirect to login
-  } catch (_error) {
-    setError('Failed to clear data and logout. Please try again.');
-    setIsLoading(false);
+  } catch (error) {
+    setError(error instanceof Error ? error.message : 'Failed to clear data and logout. Please try again.');
+  } finally {
+    setIsLoading(false);
   }
 };
```
Verify each finding against the current code and only fix it if needed. In `@app/src/components/settings/SettingsHome.tsx` around lines 50 - 54, The catch in clearAllAppData swallowing persistor.purge() errors causes handleLogoutAndClearData() not to hit its outer catch and can leave isLoading true; change the error path in clearAllAppData() to either rethrow the error after calling setError(...) or explicitly clear the loading state (e.g. call setIsLoading(false)) before returning so handleLogoutAndClearData() can properly finish and isLoading is reset; locate the persistor.purge() catch inside clearAllAppData() and update it to rethrow the caught err (or call setIsLoading(false)) so the outer handler in handleLogoutAndClearData() can run.

src/openhuman/agent/schemas.rs (2)
437-451: ⚠️ Potential issue | 🟡 Minor

Add behavior tests for `handle_spawn_welcome` branches.

The current test update only asserts schema inventory order. Please add handler-level and JSON-RPC coverage for both outcomes: skip (`spawned: false`) and dispatch (`spawned: true`).

As per coding guidelines, "Ship unit tests and coverage for behavior you are adding or changing before building additional features on top" and "Rust code must be unit-tested for new/changed behavior before stacking features; run via `cargo test` and extend `tests/json_rpc_e2e.rs` for RPC integration tests".
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/agent/schemas.rs` around lines 437 - 451, Add unit and JSON-RPC tests to cover both branches of handle_spawn_welcome: a unit test that calls handle_spawn_welcome (or the internal function it delegates to) to assert returned payloads for spawned: false and spawned: true, and update tests/json_rpc_e2e.rs to exercise the "spawn_welcome" JSON-RPC method end-to-end for both outcomes, asserting the RPC response body contains spawned: false in the skip path and spawned: true plus the dispatched message in the dispatch path; locate the handler via handle_spawn_welcome and the RPC entrypoint that maps the "spawn_welcome" controller schema (see all_controller_schemas / controller_schema_inventory_is_stable) and reuse existing test harness patterns for invoking JSON-RPC requests and inspecting responses.
273-285: ⚠️ Potential issue | 🟠 Major

Gate `spawn_welcome` on `onboarding_completed` before dispatching.

Lines 273-285 currently allow proactive spawn whenever `chat_onboarding_completed` is false, even if the UI wizard is not completed. That breaks the renderer-ready contract and can trigger premature proactive messages.

🛠️ Suggested guard

```diff
 fn handle_spawn_welcome(_params: Map<String, Value>) -> ControllerFuture {
     Box::pin(async move {
         let config = config_rpc::load_config_with_timeout().await?;
+        if !config.onboarding_completed {
+            tracing::info!(
+                "[rpc][agent] spawn_welcome skipped — onboarding_completed is false"
+            );
+            return RpcOutcome::new(
+                serde_json::json!({ "spawned": false }),
+                vec!["proactive welcome skipped: onboarding wizard not completed".into()],
+            )
+            .into_cli_compatible_json();
+        }
         if config.chat_onboarding_completed {
             tracing::info!(
                 "[rpc][agent] spawn_welcome skipped — chat_onboarding_completed already true"
             );
```
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/agent/schemas.rs` around lines 273 - 285, The current logic only skips proactive welcome when config.chat_onboarding_completed is true, allowing spawn even if the UI wizard (config.onboarding_completed) is incomplete; update the guard so you only call crate::openhuman::agent::welcome_proactive::spawn_proactive_welcome(config) when both chat_onboarding_completed is false AND onboarding_completed is true. Concretely, change the conditional around config to check config.chat_onboarding_completed || !config.onboarding_completed and return the same RpcOutcome (with an explanatory message like "proactive welcome skipped: onboarding not completed") when either is true, otherwise proceed to spawn_proactive_welcome.

src/openhuman/agent/welcome_proactive.rs (1)
231-233: ⚠️ Potential issue | 🟠 Major

Don’t rely on prompt text alone to block disallowed tool calls.

Lines 231-233 still rely on instruction-only policy (`check_onboarding_status` only). The proactive runner should enforce an allowlist in code, otherwise `complete_onboarding` (or others) can still execute if the model drifts.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/agent/welcome_proactive.rs` around lines 231 - 233, The proactive runner currently trusts the model's instruction-only prompt to limit tool usage; update the tool-dispatch logic in the proactive runner (the code that invokes tools from welcome_proactive.rs) to enforce an explicit allowlist: accept only the exact tool name "check_onboarding_status" with no args and reject any other tool call (including "complete_onboarding") before sending or executing it; implement a guard in the function that parses/dispatches model tool requests (e.g., wherever tool names are matched/invoked) to validate the tool name and arguments, log and return an error if the request is not exactly the allowed call, and ensure no downstream code path can bypass this check.

src/openhuman/agent/agents/welcome/prompt.md (1)
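The in-code allowlist the `welcome_proactive.rs` comment above asks for can be sketched in a few lines. This is a hedged sketch, not the runner's real dispatch code: `validate_tool_call` and its (name, args-as-string) signature are hypothetical stand-ins, but they show the shape of a guard that runs before any tool executes, independent of prompt wording.

```rust
/// Hypothetical allowlist guard for the proactive welcome run. The real
/// dispatch types in welcome_proactive.rs aren't shown in this review, so
/// the (name, args-as-JSON-string) signature here is illustrative only.
const ALLOWED_TOOLS: &[&str] = &["check_onboarding_status"];

fn validate_tool_call(name: &str, args: &str) -> Result<(), String> {
    if !ALLOWED_TOOLS.contains(&name) {
        return Err(format!(
            "tool '{name}' is not allowed during the proactive welcome run"
        ));
    }
    // check_onboarding_status takes no arguments; reject anything else.
    if !(args.is_empty() || args == "{}") {
        return Err(format!("tool '{name}' must be called with no arguments"));
    }
    Ok(())
}

fn main() {
    assert!(validate_tool_call("check_onboarding_status", "{}").is_ok());
    assert!(validate_tool_call("complete_onboarding", "{}").is_err());
    println!("allowlist guard ok");
}
```

Wired in front of the actual dispatch, this makes `complete_onboarding` unreachable even if the model requests it.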
7-8: ⚠️ Potential issue | 🟠 Major

Avoid location-level profile disclosure in opener guidance.

Lines 7 and 46 currently nudge the model to mention where the user is/lives. That can surface imported location PII in unsolicited onboarding messages. Please constrain opener personalization to low-sensitivity traits unless the user already brought up location.

🔧 Suggested prompt tweak

```diff
-If a `### PROFILE.md` block is present in this prompt, **use it**. That's a real summary of who they are: name, role, location, LinkedIn. Open by referencing one specific thing (their name, what they do, where they are). Don't list facts back at them; just sound like you've read it.
+If a `### PROFILE.md` block is present in this prompt, **use it**. That's a real summary of who they are: name, role, interests, LinkedIn. Open by referencing one specific low-sensitivity thing (their name, what they do, or an interest). Don't list facts back at them; just sound like you've read it. Avoid location/personal details unless they bring them up first.
@@
-**Voice for this opener: long-lost friend.** Warm, familiar, like you're picking up a thread you'd left off, not meeting them. Not formal. Sound a little excited to see them. Reference something specific from PROFILE.md (their work, where they live, something they're into) the way a friend would mention it casually, not the way a CRM would log it.
+**Voice for this opener: long-lost friend.** Warm, familiar, like you're picking up a thread you'd left off, not meeting them. Not formal. Sound a little excited to see them. Reference something specific from PROFILE.md (their work or something they're into) the way a friend would mention it casually, not the way a CRM would log it. Do not surface location or sensitive details unless the user already brought them up.
```

Also applies to: 46-47
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/agent/agents/welcome/prompt.md` around lines 7 - 8, The opener guidance in the prompt that instructs "Open by referencing one specific thing (their name, what they do, where they are)" and the phrase about using a `### PROFILE.md` block should be updated to avoid suggesting location-level references; change the examples/guide to only recommend low-sensitivity traits (e.g., name, role, professional headline, or LinkedIn if user provided) and remove or strike the clause "where they are" so the model does not surface location PII in unsolicited messages; ensure the `### PROFILE.md` instruction still encourages referencing a single specific item but explicitly prohibits mentioning location unless the user has already disclosed it.

app/src/pages/Conversations.tsx (1)
1290-1293: ⚠️ Potential issue | 🟡 Minor

Hide the entire billing affordance block during welcome lock, not just pills.

Lines 1290-1293 hide only the quota pills row. The near-limit upsell and budget/rate-limit banner above still render when `welcomeLocked` is true, so onboarding can still surface billing CTAs.

🧩 Suggested structure

```diff
-      {isNearLimit &&
-        !isAtLimit &&
-        isFreeTier &&
-        shouldShowBanner('conversations-warning', 24 * 60 * 60 * 1000) && (
-          <div className="mb-3">
-            <UpsellBanner ... />
-          </div>
-        )}
-      {teamUsage && (shouldShowBudgetCompletedMessage || isRateLimited) && (
-        <div className="mb-3 p-3 rounded-xl bg-coral-50 border border-coral-200 ...">
-          ...
-        </div>
-      )}
-      <div className={`flex items-center justify-end gap-2 mb-2 ${welcomeLocked ? 'hidden' : ''}`}>
-        ...
-      </div>
+      {!welcomeLocked && (
+        <>
+          {isNearLimit &&
+            !isAtLimit &&
+            isFreeTier &&
+            shouldShowBanner('conversations-warning', 24 * 60 * 60 * 1000) && (
+              <div className="mb-3">
+                <UpsellBanner ... />
+              </div>
+            )}
+          {teamUsage && (shouldShowBudgetCompletedMessage || isRateLimited) && (
+            <div className="mb-3 p-3 rounded-xl bg-coral-50 border border-coral-200 ...">
+              ...
+            </div>
+          )}
+          <div className="flex items-center justify-end gap-2 mb-2">
+            ...
+          </div>
+        </>
+      )}
```
Verify each finding against the current code and only fix it if needed. In `@app/src/pages/Conversations.tsx` around lines 1290 - 1293, The billing affordance group (quota/usage pills, near-limit upsell, and budget/rate-limit banner) should be fully hidden when welcomeLocked is true; currently only the quota pills div uses welcomeLocked. Locate the JSX block that renders the quota/usage pills div plus the nearby near-limit upsell and budget/rate-limit banner (the elements referencing welcomeLocked and the quota/UI affordances) and either wrap that entire group in a conditional like {!welcomeLocked && ( ... )} or move the className `${welcomeLocked ? 'hidden' : ''}` onto the parent container that encloses all those billing affordances so the whole block is hidden during welcome lock.

app/src-tauri/src/gmail/reads.rs (3)
126-130: ⚠️ Potential issue | 🟠 Major

Stop logging the raw Gmail search query.

`query` is still emitted verbatim at info level in both places. It is caller-controlled and can contain names, emails, or other mailbox search terms.

As per coding guidelines, "Never log secrets, raw JWTs, API keys, credentials, or full PII; redact or omit sensitive fields".

Also applies to: 197-201
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src-tauri/src/gmail/reads.rs` around lines 126 - 130, The log statement is emitting caller-controlled raw Gmail search `query` (variable `query`) at info level; replace it with a non-sensitive placeholder or redacted form (e.g., log that a query was provided or log a hashed/length-limited summary) in the `log::info!` call that currently uses `"[gmail][{account_id}] search query={:?} limit={}"`, and make the same change for the other `log::info!` occurrence around lines 197-201 so no full PII or raw search terms are written to logs; keep `account_id` and `limit` as before but do not include the verbatim `query`.
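As an illustration of the redaction the comment asks for (the helper name and log shape here are assumptions, not the project's actual logging API), a summary of the query's length plus a short hash lets operators correlate repeated searches without ever seeing the terms:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Summarize a caller-controlled search query for logs without
/// emitting the raw text (which may contain names or emails).
fn redact_query(query: &str) -> String {
    let mut h = DefaultHasher::new();
    query.hash(&mut h);
    // Length + truncated hash is enough to correlate repeats in logs.
    format!("<redacted len={} hash={:08x}>", query.len(), h.finish() as u32)
}

fn main() {
    let q = "from:alice@example.com";
    let redacted = redact_query(q);
    assert!(!redacted.contains("alice"));
    println!("{redacted}");
}
```

The existing `log::info!` call could then log `redact_query(&query)` in place of `query` while keeping `account_id` and `limit` unchanged.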
427-441: ⚠️ Potential issue | 🟠 Major

Don't collapse an empty clickable-row list into a clean miss.

`rows.is_empty()` still covers auth/interstitial pages and transient Gmail rerenders. This `continue` path silently downgrades "Gmail not ready" into `Ok(None)`, so onboarding can skip LinkedIn enrichment without surfacing the real failure.
Verify each finding against the current code and only fix it if needed. In `@app/src-tauri/src/gmail/reads.rs` around lines 427 - 441, The empty clickable-rows case currently does a silent continue which collapses auth/interstitial or transient Gmail states into a clean miss; instead, in the block handling rows.is_empty() (after collect_row_click_targets and Snapshot::capture_with_rects) do not just continue—detect and surface a "Gmail not ready" condition by returning or propagating an explicit error/enum (e.g., GmailNotReady or TransientRender) from the surrounding function (preserving account_id and q context in the log), or trigger a retry path, so onboarding and callers can distinguish a real miss from an auth/interstitial/transient failure.
553-561: ⚠️ Potential issue | 🟠 Major

Parse Google redirect URLs robustly.

This still only works when `q` is the first query param and the value uses uppercase escapes. Wrapped URLs like `...?hl=en&q=https%3a%2f%2fwww.linkedin.com%2fin%2f...` will still be missed.
qis the first query param and the value uses uppercase escapes. Wrapped URLs like...?hl=en&q=https%3a%2f%2fwww.linkedin.com%2fin%2f...will still be missed.As per coding guidelines, "Ship unit tests and coverage for behavior you are adding or changing before building additional features on top".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src-tauri/src/gmail/reads.rs` around lines 553 - 561, The current logic naively slices href at "?q=" and only decodes uppercase % escapes, which misses q when it's not first or when percent-escapes are lowercase; update the parsing to robustly extract the "q" query parameter from href (not assuming position) by parsing the query string (e.g., using a form-urlencoded parser like url::form_urlencoded::parse or equivalent) and then percent-decode the returned value using a proper percent-decoder (e.g., percent_encoding::percent_decode_str or urlencoding::decode) before calling extract_linkedin_url; replace the dest_raw/dest manual replace logic with this query-parse + decode flow so extract_linkedin_url(&decoded_q) reliably sees LinkedIn URLs regardless of param order or case.

app/src-tauri/src/cdp/input.rs (1)
71-236: ⚠️ Potential issue | 🟠 Major

Please land the missing CDP input tests before relying on these helpers.

There’s still no matching Rust unit coverage here for event ordering and payload fields across `click`, `type_text`, `press_key`, and `select_all_in_focused`. For code that drives Gmail through raw CDP events, that leaves subtle regressions detectable only in live flows. As per coding guidelines, "Ship unit tests and coverage for behavior you are adding or changing before building additional features on top."
Verify each finding against the current code and only fix it if needed. In `@app/src-tauri/src/cdp/input.rs` around lines 71 - 236, Add unit tests covering CDP input helpers to assert correct event ordering and payloads: write tests for click (mouseMoved → mousePressed → mouseReleased) by mocking CdpConn to record calls to Input.dispatchMouseEvent and verify "type", "x", "y", "button", "buttons", and "clickCount"; tests for type_text to assert that for each character the sequence is keyDown → char → keyUp with "text"/"unmodifiedText"/"key" fields set; tests for press_key to assert rawKeyDown then keyUp with fields from Key::cdp_fields() ("key", "code", "windowsVirtualKeyCode", "nativeVirtualKeyCode"); and tests for select_all_in_focused to assert rawKeyDown then keyUp with "key" "a"/"code" "KeyA" and modifiers == SELECT_ALL_MODIFIER. Mock or stub CdpConn.call to capture method names and JSON params, and add assertions on ordering and exact payload values for each helper (functions: click, mouse_event, type_text, press_key, select_all_in_focused).
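The recording mock the prompt describes can be sketched in plain Rust. `Recorder` is a hypothetical stand-in for a mocked `CdpConn` that captures each dispatched event so a test can assert the mouseMoved → mousePressed → mouseReleased ordering:

```rust
/// Stand-in for a mocked CDP connection: records each dispatched
/// input event so a test can assert ordering and payload fields.
struct Recorder {
    events: Vec<String>,
}

impl Recorder {
    fn dispatch_mouse(&mut self, kind: &str, x: u32, y: u32) {
        self.events.push(format!("{kind}@{x},{y}"));
    }

    /// Mirrors the expected click sequence: moved, pressed, released.
    fn click(&mut self, x: u32, y: u32) {
        self.dispatch_mouse("mouseMoved", x, y);
        self.dispatch_mouse("mousePressed", x, y);
        self.dispatch_mouse("mouseReleased", x, y);
    }
}

fn main() {
    let mut rec = Recorder { events: Vec::new() };
    rec.click(10, 20);
    assert_eq!(
        rec.events,
        ["mouseMoved@10,20", "mousePressed@10,20", "mouseReleased@10,20"]
    );
    println!("click order: {:?}", rec.events);
}
```

A real test would capture the full JSON params passed to `Input.dispatchMouseEvent` rather than a formatted string, but the recording-and-asserting shape is the same.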
🧹 Nitpick comments (8)
app/src/pages/onboarding/components/WebviewLoginModal.tsx (2)
149-170: Consider redacting URL in load event logging.

The `webview-account:load` event contains the full URL (not redacted by the Rust side), which may include OAuth callback params. While `console.debug` isn't persisted, the guideline prefers not logging potentially sensitive data.

Optional: redact URL in debug log

```diff
 const onLoad = await listen<LoadPayload>('webview-account:load', evt => {
   const payload = evt.payload;
   if (!payload || payload.account_id !== accountId) return;
+  // Strip query params for logging (may contain auth tokens)
+  const safeUrl = payload.url?.split('?')[0];
   console.debug('[onboarding:webview-login] load event', {
     accountId,
-    url: payload.url,
+    url: safeUrl,
     state: payload.state,
   });
   if (matchesLoggedIn(payload.url)) setAutoDetected(true);
 });
```

As per coding guidelines "never log secrets or full PII — redact".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/pages/onboarding/components/WebviewLoginModal.tsx` around lines 149 - 170, The debug log in the onLoad handler prints the full payload.url which may contain sensitive OAuth params; update the onLoad console.debug (and optionally the onNavigate one) to avoid logging full URLs by redacting query/fragment or replacing payload.url with a redactedUrl (e.g., strip search/hash or log only origin and pathname) before passing it to console.debug; locate the listeners created with listen<'webview-account:load'|'webview-account:navigate'>, the handlers that reference payload.url, and use a sanitized value instead of payload.url while keeping matchesLoggedIn(payload.url) unchanged and still calling setAutoDetected(true) when appropriate.
61-248: Missing unit tests for `WebviewLoginModal` component.

This new component handles login flow orchestration, account lifecycle, and Tauri event subscriptions. Per coding guidelines, unit tests should be shipped with new behavior. Key test scenarios:

- Account creation on mount, cleanup on unmount (purge vs keep-alive paths)
- `onConnected` callback fired when `autoDetected` becomes true
- Manual "I'm already signed in" button triggers `onConnected`
- Listener cleanup handles StrictMode double-mount without errors

Consider adding a test file at `app/src/pages/onboarding/components/WebviewLoginModal.test.tsx`. Given the Tauri/Redux dependencies, you may need to mock:

- `@tauri-apps/api/event` (mock `listen` to return controllable event emitters)
- `purgeWebviewAccount` service
- Redux store with test providers
As per coding guidelines "Ship unit tests and coverage for behavior you are adding or changing before building additional features on top".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/pages/onboarding/components/WebviewLoginModal.tsx` around lines 61 - 248, Add unit tests for the WebviewLoginModal component: create app/src/pages/onboarding/components/WebviewLoginModal.test.tsx that mounts/unmounts WebviewLoginModal and verifies (1) dispatch of addAccount on mount and removeAccount/purgeWebviewAccount behavior on unmount for both purge and keepAliveOnConnected paths, (2) onConnected is called when the component sets autoDetected (simulate listen events that yield URLs matching LOGGED_IN_URL_PREFIX for the provider), (3) clicking the "I'm already signed in" button sets connectedRef and calls onConnected with the generated accountId, and (4) listener cleanup is safe under React StrictMode double-mount by mocking `@tauri-apps/api/event.listen` to return controllable unlisten functions that may return a Promise or throw; mock purgeWebviewAccount and the Redux store (or use a test provider) and assert safeUnlisten behavior and that no unhandled promise rejects occur.

app/src/pages/Accounts.tsx (1)
5-5: Prefer removing disabled feature code instead of commenting it out.

This leaves three disconnected commented blocks for one temporary toggle. It’s cleaner and safer to remove them and rely on git history (or a small feature flag) to reintroduce later.
Suggested cleanup

```diff
-// import RespondQueuePanel from '../components/accounts/RespondQueuePanel';
@@
-  // Respond-queue selectors disabled while RespondQueuePanel is hidden.
-  // const respondQueue = useAppSelector(state => state.providerSurfaces.queue);
-  // const respondQueueCount = useAppSelector(state => state.providerSurfaces.count);
-  // const respondQueueStatus = useAppSelector(state => state.providerSurfaces.status);
-  // const respondQueueError = useAppSelector(state => state.providerSurfaces.error);
@@
-      {/* Respond queue side panel hidden for now — bring back when
-          the cross-provider surface is ready to ship. */}
-      {/* <RespondQueuePanel
-        items={respondQueue}
-        count={respondQueueCount}
-        status={respondQueueStatus}
-        error={respondQueueError}
-        onRefresh={() => {
-          void dispatch(fetchRespondQueue());
-        }}
-      /> */}
```

Also applies to: 84-88, 252-262
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/pages/Accounts.tsx` at line 5, Remove the commented-out RespondQueuePanel import and the other commented blocks in Accounts.tsx (the temporary-disabled UI code) instead of leaving them in place; locate the commented import for RespondQueuePanel and the commented JSX blocks around the accounts component (the sections you noted previously) and delete those comment blocks, or replace them with a small feature flag/prop (e.g., renderRespondQueue boolean) if you need a runtime toggle, relying on git history to restore the code when needed.

app/src/pages/Conversations.tsx (1)
886-907: Remove redundant `!welcomeLocked` checks inside an already-guarded header.

The header block is already gated by `!welcomeLocked` at Line 886, so nested checks at Line 888 and Line 907 are dead conditions.
Verify each finding against the current code and only fix it if needed. In `@app/src/pages/Conversations.tsx` around lines 886 - 907, The header rendering is already guarded by the outer condition (!isSidebar && !welcomeLocked), so remove the redundant inner checks that re-evaluate !welcomeLocked around the button and the trailing block; locate the conditional wrapper that renders the header (uses isSidebar and welcomeLocked) and delete the inner duplicated "!welcomeLocked && (...)" wrappers around the menu button (onClick => setShowSidebar) and the trailing block that currently repeats the guard, leaving the button, h3 (threads.find(...).title), TokenUsagePill, and other children directly rendered inside the outer block.

app/src-tauri/src/whatsapp_scanner/mod.rs (1)
37-39: Prefer a single source of truth for the CDP port.

Hardcoding `19222` here and separately in other CDP modules is easy to desynchronize on future changes. Please route this through one shared constant (or config accessor) to avoid another silent scanner break.
Verify each finding against the current code and only fix it if needed. In `@app/src-tauri/src/whatsapp_scanner/mod.rs` around lines 37 - 39, Replace the local CDP_PORT constant in whatsapp_scanner (const CDP_PORT: u16 = 19222) with a reference to the single shared constant/config used by other CDP modules (e.g., use the existing cdp::CDP_PORT or the shared constant in lib.rs) so there is one source of truth; remove the duplicate hardcoded 19222 and update any uses in whatsapp_scanner::... to import and use the shared symbol instead.

app/src/pages/onboarding/OnboardingLayout.tsx (2)
31-39: Consider wrapping `setOnboardingTasks` for consistent error handling.

Unlike `setOnboardingCompletedFlag` (lines 47-52), `setOnboardingTasks` has no try/catch. If it fails, callers receive an unadorned rejection. This is acceptable if intentional, but the inconsistency makes the error contract unclear.

♻️ Optional: Add consistent error handling

```diff
+    try {
       await setOnboardingTasks({
         accessibilityPermissionGranted:
           snapshot.localState.onboardingTasks?.accessibilityPermissionGranted ?? false,
         localModelConsentGiven: false,
         localModelDownloadStarted: false,
         enabledTools: getDefaultEnabledTools(),
         connectedSources: draft.connectedSources,
         updatedAtMs: Date.now(),
       });
+    } catch (e) {
+      console.error('[onboarding] Failed to persist onboarding tasks', e);
+      throw e;
+    }
```
Verify each finding against the current code and only fix it if needed. In `@app/src/pages/onboarding/OnboardingLayout.tsx` around lines 31 - 39, Wrap the call to setOnboardingTasks in a try/catch to match the error-handling pattern used by setOnboardingCompletedFlag: catch any thrown error when calling setOnboardingTasks, log a descriptive error message including the caught error (so debugging info is preserved), and either rethrow or return a controlled failure value consistent with the surrounding flow; update the block around setOnboardingTasks to mirror the error handling approach used in setOnboardingCompletedFlag so callers see a consistent contract.
67-67: Dependency array includes `snapshot` — verify for stale closure risk.

`snapshot` is accessed for `snapshot.localState.onboardingTasks?.accessibilityPermissionGranted`. If `snapshot` changes mid-flow, `completeAndExit` may capture a stale reference. This is likely benign since permission grants don't typically change during onboarding, but worth noting.
Verify each finding against the current code and only fix it if needed. In `@app/src/pages/onboarding/OnboardingLayout.tsx` at line 67, The dependency array for the effect that defines completeAndExit currently includes the whole snapshot object which can cause stale-closure or unnecessary re-runs; change it to depend only on the specific value used (snapshot.localState.onboardingTasks?.accessibilityPermissionGranted) or read that boolean inside completeAndExit from a stable ref at call time. Concretely, in OnboardingLayout.tsx locate completeAndExit and replace snapshot in the dependency array with a derived constant like accessibilityPermissionGranted = snapshot.localState.onboardingTasks?.accessibilityPermissionGranted (and use that in the deps) or maintain a snapshotRef that you update and read from inside completeAndExit so the function always sees the latest onboardingTasks.accessibilityPermissionGranted without depending on the entire snapshot object.

app/src/pages/onboarding/steps/__tests__/ContextGatheringStep.test.tsx (1)
10-53: Consider adding a test for `tools_composio_execute` failure.

The tests cover missing LinkedIn URL and save-profile errors, but not the case where the initial Gmail lookup (`tools_composio_execute`) itself fails. This would validate the first-stage error handling.

🧪 Suggested test case

```tsx
it('surfaces error when Gmail lookup fails', async () => {
  callCoreRpc.mockRejectedValueOnce(new Error('Gmail session expired'));

  renderWithProviders(
    <ContextGatheringStep
      connectedSources={['composio:gmail']}
      onNext={() => Promise.resolve()}
    />
  );

  fireEvent.click(screen.getByRole('button', { name: "Let's go!" }));

  await waitFor(() => {
    expect(screen.getByText('Gmail session expired')).toBeInTheDocument();
  });

  // Should still allow user to continue or retry
  expect(screen.getByRole('button', { name: 'Continue' })).toBeInTheDocument();
});
```
Verify each finding against the current code and only fix it if needed. In `@app/src/pages/onboarding/steps/__tests__/ContextGatheringStep.test.tsx` around lines 10 - 53, Add a test in ContextGatheringStep.test.tsx that verifies a failing tools_composio_execute (mocked via callCoreRpc.mockRejectedValueOnce(new Error('Gmail session expired'))) surfaces the error and keeps the UI actionable: render <ContextGatheringStep connectedSources={['composio:gmail']} ...>, click the "Let's go!" button, await the error text 'Gmail session expired' to appear, and assert the 'Continue' button is still present so the user can retry or proceed; use the existing callCoreRpc mock and the test helpers (renderWithProviders, fireEvent, waitFor) to implement this.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/src-tauri/src/cdp/mod.rs`:
- Around line 29-35: The whatsapp_scanner module defines its own CDP_PORT
constant which can drift from the canonical cdp::CDP_PORT; remove the local
CDP_PORT declaration in whatsapp_scanner/mod.rs and import the canonical
constant from the cdp module (e.g., use crate::cdp::CDP_PORT or equivalent)
wherever the local constant was referenced so the scanner and session opener
share the same single source of truth.
In `@app/src-tauri/src/gmail/reads.rs`:
- Around line 482-485: The log statement in find_linkedin_profile_url currently
prints the full PII value in the variable found; change the log to avoid raw
URLs by replacing found in the log::info! call with either found.is_some() or a
redacted/hashed representation (e.g., map found to a placeholder like
"<redacted>" or a truncated/hash value) so that account_id remains but the
actual LinkedIn URL is not emitted; update the log::info! invocation accordingly
to reference account_id and the safe form of found.
In `@app/src-tauri/src/lib.rs`:
- Around line 354-366: The current post request handling only checks HTTP status
(resp.status()) but must also parse the JSON-RPC body and treat any JSON-RPC
"error" as a failure; after receiving resp from client.post(...).send().await
(the block that produces resp and status), parse the body into JSON (e.g.,
serde_json::Value via resp.json().await or resp.text().await then parse) and
inspect the JSON-RPC fields: if there is an "error" field (or no valid "result"
when expected) return Err with a message including that error payload; only
return Ok(()) when the HTTP status is success and the parsed JSON-RPC indicates
no error. Ensure to preserve existing map_err formatting for the request failure
path.
In `@app/src/pages/Accounts.tsx`:
- Around line 252-262: Add unit tests for the new RespondQueuePanel component by
creating app/src/components/accounts/RespondQueuePanel.test.tsx using Vitest +
`@testing-library/react`: write tests that render RespondQueuePanel with props
(items, count, status, error) and assert the queue title and pending count
appear, that each item in the items prop is displayed, that clicking the refresh
button calls the onRefresh prop (use a jest/vitest mock), and that different
status/error prop values render their respective UI states; reference the
RespondQueuePanel component and its onRefresh prop to locate the component and
ensure tests cover render, interaction, and status/error branches.
In `@app/src/pages/onboarding/pages/ChatProviderPage.tsx`:
- Around line 19-27: The handleFinish function can leave loading stuck true when
completeAndExit resolves but navigation delays; update handleFinish (the async
function using setLoading, setError, and completeAndExit) to ensure
setLoading(false) runs in a finally block so loading is always cleared
regardless of success or error, and remove the standalone setLoading(false) from
the catch so the finally is the single place that resets loading.
In `@app/src/pages/onboarding/pages/ContextPage.tsx`:
- Line 15: The onNext handler currently invokes completeAndExit() without
awaiting or handling rejections; update the onNext in ContextPage.tsx to call
completeAndExit with proper async error handling (e.g., make the arrow function
async, await completeAndExit(), and wrap it in try/catch), and inside the catch
surface an error to the user or call the existing error handling path used for
setOnboardingCompletedFlag (referencing the failure source in
OnboardingLayout.tsx where setOnboardingCompletedFlag may throw) so navigation
failures are not silent.
In `@app/src/pages/onboarding/pages/SkillsPage.test.tsx`:
- Around line 16-18: The test mock is using the wrong hook name: replace the
mocked export useOnboardingContext with the real exported hook useOnboarding so
SkillsPage receives the mocked implementation; update the mock in
SkillsPage.test.tsx to return { setDraft: setDraftMock, completeAndExit:
completeAndExitMock } for the useOnboarding export (keep the module path the
same) so the component uses the mocked hook during tests.
- Around line 31-64: Reset the hoisted mocks between tests so call history
doesn't leak across the two it blocks: add a beforeEach/afterEach that clears or
resets navigateMock, setDraftMock, and completeAndExitMock (e.g., call
.mockClear() or .mockReset() on each) in the SkillsPage.test.tsx test suite so
assertions like toHaveBeenCalledTimes(1) and
not.toHaveBeenCalledWith('/onboarding/context') only reflect the current test.
In `@app/src/pages/onboarding/steps/ContextGatheringStep.tsx`:
- Around line 273-277: The privacy text in ContextGatheringStep currently
over-promises by stating “Everything happens in your device itself” while the
step invokes openhuman.tools_composio_execute and
openhuman.tools_apify_linkedin_scrape; update the copy (where hasGmail renders
the paragraph) to say that orchestration and profile-building run locally on the
device but external lookups (e.g., Gmail/LinkedIn scraping via
openhuman.tools_composio_execute and openhuman.tools_apify_linkedin_scrape) may
call network-backed services, so user data is processed locally where possible
but some lookups use remote services.
- Around line 129-137: The debug statement in apifyScrapeLinkedIn currently logs
the full PII profileUrl; change the logging to avoid raw PII by logging a
redacted or hashed form (or a boolean indicating presence) instead. Locate
apifyScrapeLinkedIn and replace the console.debug call that includes profileUrl
with a log that either logs { hasProfileUrl: !!profileUrl } or a deterministic
redaction (e.g., hash or mask) of profileUrl so no full URL is written to logs
while keeping context for debugging.
In `@src/openhuman/agent/welcome_proactive.rs`:
- Around line 237-240: The proactive prompt contains contradictory message-count
constraints: the JSON example shows two messages ({"messages":["<m1>","<m2>"]})
while the next line states "Maximum 3."; update the prompt in
welcome_proactive.rs so the constraints match—either make the JSON and text both
require exactly 2 messages (change "Maximum 3." to "Exactly 2.") or allow 2–3
messages by editing the JSON example to include a 2–3 message description (e.g.,
indicate the array can contain 2–3 entries) and change "- 2 messages." to "- 2–3
messages."; ensure the final assistant instruction and the JSON example are
consistent.
In `@src/openhuman/learning/linkedin_enrichment.rs`:
- Around line 94-115: The current guard treats any existing PROFILE.md at
config.workspace_dir as a successful enrichment and skips stages, which prevents
retries when a fallback wrote a URL-only placeholder; change the check in the
profile_path short-circuit to verify that PROFILE.md is a full/successful output
(e.g., by validating file contents/size or checking for a persisted success
marker) before skipping: update the logic around profile_path.is_file() where
PROFILE.md presence is inspected and only push Skipped EnrichmentStage entries
and return Ok(result) if the file passes the “full profile” test (otherwise
allow pipeline stages "gmail-search", "apify-scrape", "build-profile" to run);
alternatively write and check a separate marker alongside PROFILE.md to
distinguish fallback placeholders from completed enrichments so retries are not
suppressed.
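Under the marker-file alternative, the short-circuit could look roughly like this (the `PROFILE.done` marker name is an assumption for illustration; the real pipeline would write it only after a successful "build-profile" stage):

```rust
use std::path::Path;

/// Treat enrichment as complete only when PROFILE.md exists AND a
/// success marker written by the final pipeline stage is present, so a
/// URL-only fallback file no longer suppresses retries.
fn enrichment_already_complete(workspace_dir: &Path) -> bool {
    workspace_dir.join("PROFILE.md").is_file()
        && workspace_dir.join("PROFILE.done").is_file()
}

fn main() {
    let done = enrichment_already_complete(Path::new("/tmp/does-not-exist"));
    println!("complete: {done}");
}
```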
In `@src/openhuman/learning/schemas.rs`:
- Around line 169-172: The code currently maps an empty "profile_url" into
Some("") which bypasses Gmail discovery and can pass an invalid URL into
run_linkedin_enrichment; change the extraction for preset_profile_url (the
params.get("profile_url") chain) to trim the string and return None for
empty/whitespace-only values (same approach as handle_apify_linkedin_scrape) so
only non-empty trimmed URLs become Some(String) and empties become None.
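A sketch of the trimmed extraction (the param container is simplified to a `HashMap<String, String>` here; the real code reads from JSON params):

```rust
use std::collections::HashMap;

/// Return Some(url) only for non-empty trimmed values, mirroring
/// handle_apify_linkedin_scrape, so "" and "   " fall back to Gmail
/// discovery instead of being passed into run_linkedin_enrichment.
fn preset_profile_url(params: &HashMap<String, String>) -> Option<String> {
    params
        .get("profile_url")
        .map(|s| s.trim())
        .filter(|s| !s.is_empty())
        .map(str::to_string)
}

fn main() {
    let mut params = HashMap::new();
    params.insert("profile_url".to_string(), "  ".to_string());
    // whitespace-only input maps to None
    println!("{:?}", preset_profile_url(&params));
}
```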
In `@src/openhuman/tools/schemas.rs`:
- Around line 297-343: The new tests only validate schema metadata; add
handler-level unit tests and JSON-RPC e2e coverage for the three new controllers
(tools_apify_linkedin_scrape, tools_composio_execute, tools_web_search): write
unit tests that call each controller's handler functions (the request/handler
entrypoints that perform validation and auth) to assert validation failure cases
and auth failure paths, and extend tests/json_rpc_e2e.rs to send real JSON-RPC
requests for each controller verifying (a) validation error responses for
malformed/missing inputs, (b) auth/permission denial responses, and (c)
successful response payload shapes; run cargo test to ensure all new tests pass.
- Around line 16-295: The current src/openhuman/tools/schemas.rs mixes schema
metadata with full handler implementations and lacks the canonical schemas()
entrypoint; extract the async handler bodies from handle_composio_execute,
handle_web_search, and handle_apify_linkedin_scrape into a new
src/openhuman/tools/rpc.rs as functions (same names), keep schemas.rs containing
only tools_schemas, all_controller_schemas, all_registered_controllers and add a
public schemas() entrypoint that returns the domain ControllerSchema list, and
modify the handle_* stubs in schemas.rs to delegate to the implementations in
tools::rpc (i.e., call the rpc::handle_composio_execute, rpc::handle_web_search,
rpc::handle_apify_linkedin_scrape) so the controller-registry pattern and
exports remain consistent.
---
Outside diff comments:
In `@app/src/pages/onboarding/pages/ChatProviderPage.tsx`:
- Around line 14-58: Add a Vitest/React Testing Library unit test file
ChatProviderPage.test.tsx that mounts the ChatProviderPage component and mocks
useOnboardingContext to cover (1) successful completion: stub completeAndExit to
resolve and assert OnboardingNextButton shows loading state while awaiting and
that completeAndExit was called and no error text is rendered, and (2) failure
path: stub completeAndExit to reject with an Error and assert the error message
from the component's error state is rendered and loading is cleared; ensure you
mock or stub OnboardingNextButton click behavior (or trigger its onClick prop)
and follow existing onboarding test patterns for setup/teardown and async waits.
---
Duplicate comments:
In `@app/src-tauri/src/cdp/input.rs`:
- Around line 71-236: Add unit tests covering CDP input helpers to assert
correct event ordering and payloads: write tests for click (mouseMoved →
mousePressed → mouseReleased) by mocking CdpConn to record calls to
Input.dispatchMouseEvent and verify "type", "x", "y", "button", "buttons", and
"clickCount"; tests for type_text to assert that for each character the sequence
is keyDown → char → keyUp with "text"/"unmodifiedText"/"key" fields set; tests
for press_key to assert rawKeyDown then keyUp with fields from Key::cdp_fields()
("key", "code", "windowsVirtualKeyCode", "nativeVirtualKeyCode"); and tests for
select_all_in_focused to assert rawKeyDown then keyUp with "key" "a"/"code"
"KeyA" and modifiers == SELECT_ALL_MODIFIER. Mock or stub CdpConn.call to
capture method names and JSON params, and add assertions on ordering and exact
payload values for each helper (functions: click, mouse_event, type_text,
press_key, select_all_in_focused).
In `@app/src-tauri/src/gmail/reads.rs`:
- Around line 126-130: The log statement is emitting caller-controlled raw Gmail
search `query` (variable `query`) at info level; replace it with a non-sensitive
placeholder or redacted form (e.g., log that a query was provided or log a
hashed/length-limited summary) in the `log::info!` call that currently uses
`"[gmail][{account_id}] search query={:?} limit={}"`, and make the same change
for the other `log::info!` occurrence around lines 197-201 so no full PII or raw
search terms are written to logs; keep `account_id` and `limit` as before but do
not include the verbatim `query`.
- Around line 427-441: The empty clickable-rows case currently does a silent
continue which collapses auth/interstitial or transient Gmail states into a
clean miss; instead, in the block handling rows.is_empty() (after
collect_row_click_targets and Snapshot::capture_with_rects) do not just
continue—detect and surface a "Gmail not ready" condition by returning or
propagating an explicit error/enum (e.g., GmailNotReady or TransientRender) from
the surrounding function (preserving account_id and q context in the log), or
trigger a retry path, so onboarding and callers can distinguish a real miss from
an auth/interstitial/transient failure.
- Around line 553-561: The current logic naively slices href at "?q=" and only
decodes uppercase % escapes, which misses q when it's not first or when
percent-escapes are lowercase; update the parsing to robustly extract the "q"
query parameter from href (not assuming position) by parsing the query string
(e.g., using a form-urlencoded parser like url::form_urlencoded::parse or
equivalent) and then percent-decode the returned value using a proper
percent-decoder (e.g., percent_encoding::percent_decode_str or
urlencoding::decode) before calling extract_linkedin_url; replace the
dest_raw/dest manual replace logic with this query-parse + decode flow so
extract_linkedin_url(&decoded_q) reliably sees LinkedIn URLs regardless of param
order or case.
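A stdlib-only sketch of that flow (the real fix would likely use `url::form_urlencoded::parse` plus `percent_encoding::percent_decode_str`; this hand-rolled version only illustrates the intended behavior: `q` found in any position, lowercase escapes decoded):

```rust
/// Extract and percent-decode the `q` query parameter from an href,
/// regardless of parameter order or escape case.
fn query_param_q(href: &str) -> Option<String> {
    let qs = href.split_once('?')?.1;
    let qs = qs.split('#').next().unwrap_or(qs);
    for pair in qs.split('&') {
        let (k, v) = pair.split_once('=').unwrap_or((pair, ""));
        if k == "q" {
            return Some(percent_decode(&v.replace('+', " ")));
        }
    }
    None
}

/// Minimal case-insensitive percent-decoder for the sketch above.
fn percent_decode(s: &str) -> String {
    let bytes = s.as_bytes();
    let mut out = Vec::with_capacity(bytes.len());
    let mut i = 0;
    while i < bytes.len() {
        if bytes[i] == b'%'
            && i + 2 < bytes.len()
            && bytes[i + 1].is_ascii_hexdigit()
            && bytes[i + 2].is_ascii_hexdigit()
        {
            let hi = (bytes[i + 1] as char).to_digit(16).unwrap() as u8;
            let lo = (bytes[i + 2] as char).to_digit(16).unwrap() as u8;
            out.push(hi * 16 + lo);
            i += 3;
        } else {
            out.push(bytes[i]);
            i += 1;
        }
    }
    String::from_utf8_lossy(&out).into_owned()
}

fn main() {
    let href = "https://mail.google.com/a?x=1&q=https%3a%2f%2fwww.linkedin.com%2fin%2fjane";
    println!("{:?}", query_param_q(href));
}
```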
In `@app/src/components/settings/SettingsHome.tsx`:
- Around line 50-54: The catch in clearAllAppData swallowing persistor.purge()
errors causes handleLogoutAndClearData() not to hit its outer catch and can
leave isLoading true; change the error path in clearAllAppData() to either
rethrow the error after calling setError(...) or explicitly clear the loading
state (e.g. call setIsLoading(false)) before returning so
handleLogoutAndClearData() can properly finish and isLoading is reset; locate
the persistor.purge() catch inside clearAllAppData() and update it to rethrow
the caught err (or call setIsLoading(false)) so the outer handler in
handleLogoutAndClearData() can run.
In `@app/src/pages/Conversations.tsx`:
- Around line 1290-1293: The billing affordance group (quota/usage pills,
near-limit upsell, and budget/rate-limit banner) should be fully hidden when
welcomeLocked is true; currently only the quota pills div uses welcomeLocked.
Locate the JSX block that renders the quota/usage pills div plus the nearby
near-limit upsell and budget/rate-limit banner (the elements referencing
welcomeLocked and the quota/UI affordances) and either wrap that entire group in
a conditional like {!welcomeLocked && ( ... )} or move the className
`${welcomeLocked ? 'hidden' : ''}` onto the parent container that encloses all
those billing affordances so the whole block is hidden during welcome lock.
In `@src/openhuman/agent/agents/welcome/prompt.md`:
- Around line 7-8: The opener guidance in the prompt that instructs "Open by
referencing one specific thing (their name, what they do, where they are)" and
the phrase about using a `### PROFILE.md` block should be updated to avoid
suggesting location-level references; change the examples/guide to only
recommend low-sensitivity traits (e.g., name, role, professional headline, or
LinkedIn if user provided) and remove or strike the clause "where they are" so
the model does not surface location PII in unsolicited messages; ensure the `###
PROFILE.md` instruction still encourages referencing a single specific item but
explicitly prohibits mentioning location unless the user has already disclosed
it.
In `@src/openhuman/agent/schemas.rs`:
- Around line 437-451: Add unit and JSON-RPC tests to cover both branches of
handle_spawn_welcome: a unit test that calls handle_spawn_welcome (or the
internal function it delegates to) to assert returned payloads for spawned:
false and spawned: true, and update tests/json_rpc_e2e.rs to exercise the
"spawn_welcome" JSON-RPC method end-to-end for both outcomes, asserting the RPC
response body contains spawned: false in the skip path and spawned: true plus
the dispatched message in the dispatch path; locate the handler via
handle_spawn_welcome and the RPC entrypoint that maps the "spawn_welcome"
controller schema (see all_controller_schemas /
controller_schema_inventory_is_stable) and reuse existing test harness patterns
for invoking JSON-RPC requests and inspecting responses.
- Around line 273-285: The current logic only skips proactive welcome when
config.chat_onboarding_completed is true, allowing spawn even if the UI wizard
(config.onboarding_completed) is incomplete; update the guard so you only call
crate::openhuman::agent::welcome_proactive::spawn_proactive_welcome(config) when
both chat_onboarding_completed is false AND onboarding_completed is true.
Concretely, change the conditional around config to check
config.chat_onboarding_completed || !config.onboarding_completed and return the
same RpcOutcome (with an explanatory message like "proactive welcome skipped:
onboarding not completed") when either is true, otherwise proceed to
spawn_proactive_welcome.
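The corrected guard reduces to a single predicate; a sketch (the `Config` struct here is a stand-in for the real config type, using the field names from the comment):

```rust
/// Stand-in for the real config fields named in the review comment.
struct Config {
    chat_onboarding_completed: bool,
    onboarding_completed: bool,
}

/// Spawn the proactive welcome only when chat onboarding has not
/// happened yet AND the UI wizard has been completed.
fn should_spawn_welcome(config: &Config) -> bool {
    !config.chat_onboarding_completed && config.onboarding_completed
}

fn main() {
    let c = Config { chat_onboarding_completed: false, onboarding_completed: true };
    println!("spawn: {}", should_spawn_welcome(&c));
}
```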
In `@src/openhuman/agent/welcome_proactive.rs`:
- Around line 231-233: The proactive runner currently trusts the model's
instruction-only prompt to limit tool usage; update the tool-dispatch logic in
the proactive runner (the code that invokes tools from welcome_proactive.rs) to
enforce an explicit allowlist: accept only the exact tool name
"check_onboarding_status" with no args and reject any other tool call (including
"complete_onboarding") before sending or executing it; implement a guard in the
function that parses/dispatches model tool requests (e.g., wherever tool names
are matched/invoked) to validate the tool name and arguments, log and return an
error if the request is not exactly the allowed call, and ensure no downstream
code path can bypass this check.
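A minimal sketch of such a guard (the tool-call representation is simplified to name plus optional raw args; the real dispatcher would validate the model's parsed tool request before execution):

```rust
/// Accept only the exact call `check_onboarding_status` with no
/// arguments; every other tool request, including complete_onboarding,
/// is rejected before anything is executed.
fn allow_tool_call(name: &str, args: Option<&str>) -> Result<(), String> {
    match (name, args) {
        ("check_onboarding_status", None) => Ok(()),
        _ => Err(format!("proactive-welcome allowlist rejected tool call: {name}")),
    }
}

fn main() {
    // A disallowed call is refused even though the prompt forbids it too.
    println!("{:?}", allow_tool_call("complete_onboarding", None));
}
```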
In `@src/openhuman/integrations/parallel.rs`:
- Around line 1001-1009: The test function
search_response_defaults_missing_cost_usd_to_zero only asserts cost_usd; update
it to also assert the newly-defaulted fields on SearchResponse: verify
resp.title equals the expected default (e.g., empty string ""),
resp.publish_date equals the expected default (e.g., None or an empty string
depending on the SearchResponse type), and resp.excerpts is the default empty
collection (e.g., an empty vec). Locate these fields on the SearchResponse
struct and add the corresponding assert_eq! / assert! checks alongside the
existing assertions so the test pins title, publish_date, excerpts, and cost_usd
defaults.
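The extra assertions could look like this (field names and default values are assumptions about SearchResponse, shown on a stand-in struct with `Default`; in the actual test the fields come from serde defaults when the JSON omits them):

```rust
/// Stand-in for the real SearchResponse; assumed field shapes only.
#[derive(Default, Debug)]
struct SearchResponse {
    title: String,
    publish_date: Option<String>,
    excerpts: Vec<String>,
    cost_usd: f64,
}

fn main() {
    let resp = SearchResponse::default();
    // Pin every defaulted field, not just cost_usd.
    assert_eq!(resp.title, "");
    assert_eq!(resp.publish_date, None);
    assert!(resp.excerpts.is_empty());
    assert_eq!(resp.cost_usd, 0.0);
    println!("all defaults pinned");
}
```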
---
Nitpick comments:
In `@app/src-tauri/src/whatsapp_scanner/mod.rs`:
- Around line 37-39: Replace the local CDP_PORT constant in whatsapp_scanner
(const CDP_PORT: u16 = 19222) with a reference to the single shared
constant/config used by other CDP modules (e.g., use the existing cdp::CDP_PORT
or the shared constant in lib.rs) so there is one source of truth; remove the
duplicate hardcoded 19222 and update any uses in whatsapp_scanner::... to import
and use the shared symbol instead.
In `@app/src/pages/Accounts.tsx`:
- Line 5: Remove the commented-out RespondQueuePanel import and the other
commented blocks in Accounts.tsx (the temporary-disabled UI code) instead of
leaving them in place; locate the commented import for RespondQueuePanel and the
commented JSX blocks around the accounts component (the sections you noted
previously) and delete those comment blocks, or replace them with a small
feature flag/prop (e.g., renderRespondQueue boolean) if you need a runtime
toggle, relying on git history to restore the code when needed.
In `@app/src/pages/Conversations.tsx`:
- Around line 886-907: The header rendering is already guarded by the outer
condition (!isSidebar && !welcomeLocked), so remove the redundant inner checks
that re-evaluate !welcomeLocked around the button and the trailing block; locate
the conditional wrapper that renders the header (uses isSidebar and
welcomeLocked) and delete the inner duplicated "!welcomeLocked && (...)"
wrappers around the menu button (onClick => setShowSidebar) and the trailing
block that currently repeats the guard, leaving the button, h3
(threads.find(...).title), TokenUsagePill, and other children directly rendered
inside the outer block.
In `@app/src/pages/onboarding/components/WebviewLoginModal.tsx`:
- Around line 149-170: The debug log in the onLoad handler prints the full
payload.url which may contain sensitive OAuth params; update the onLoad
console.debug (and optionally the onNavigate one) to avoid logging full URLs by
redacting query/fragment or replacing payload.url with a redactedUrl (e.g.,
strip search/hash or log only origin and pathname) before passing it to
console.debug; locate the listeners created with
listen<'webview-account:load'|'webview-account:navigate'>, the handlers that
reference payload.url, and use a sanitized value instead of payload.url while
keeping matchesLoggedIn(payload.url) unchanged and still calling
setAutoDetected(true) when appropriate.
- Around line 61-248: Add unit tests for the WebviewLoginModal component: create
app/src/pages/onboarding/components/WebviewLoginModal.test.tsx that
mounts/unmounts WebviewLoginModal and verifies (1) dispatch of addAccount on
mount and removeAccount/purgeWebviewAccount behavior on unmount for both purge
and keepAliveOnConnected paths, (2) onConnected is called when the component
sets autoDetected (simulate listen events that yield URLs matching
LOGGED_IN_URL_PREFIX for the provider), (3) clicking the "I'm already signed in"
button sets connectedRef and calls onConnected with the generated accountId, and
(4) listener cleanup is safe under React StrictMode double-mount by mocking
`@tauri-apps/api/event.listen` to return controllable unlisten functions that may
return a Promise or throw; mock purgeWebviewAccount and the Redux store (or use
a test provider) and assert safeUnlisten behavior and that no unhandled promise
rejects occur.
In `@app/src/pages/onboarding/OnboardingLayout.tsx`:
- Around line 31-39: Wrap the call to setOnboardingTasks in a try/catch to match
the error-handling pattern used by setOnboardingCompletedFlag: catch any thrown
error when calling setOnboardingTasks, log a descriptive error message including
the caught error (so debugging info is preserved), and either rethrow or return
a controlled failure value consistent with the surrounding flow; update the
block around setOnboardingTasks to mirror the error handling approach used in
setOnboardingCompletedFlag so callers see a consistent contract.
- Line 67: The dependency array for the effect that defines completeAndExit
currently includes the whole snapshot object which can cause stale-closure or
unnecessary re-runs; change it to depend only on the specific value used
(snapshot.localState.onboardingTasks?.accessibilityPermissionGranted) or read
that boolean inside completeAndExit from a stable ref at call time. Concretely,
in OnboardingLayout.tsx locate completeAndExit and replace snapshot in the
dependency array with a derived constant like accessibilityPermissionGranted =
snapshot.localState.onboardingTasks?.accessibilityPermissionGranted (and use
that in the deps) or maintain a snapshotRef that you update and read from inside
completeAndExit so the function always sees the latest
onboardingTasks.accessibilityPermissionGranted without depending on the entire
snapshot object.
In `@app/src/pages/onboarding/steps/__tests__/ContextGatheringStep.test.tsx`:
- Around line 10-53: Add a test in ContextGatheringStep.test.tsx that verifies a
failing tools_composio_execute (mocked via callCoreRpc.mockRejectedValueOnce(new
Error('Gmail session expired'))) surfaces the error and keeps the UI actionable:
render <ContextGatheringStep connectedSources={['composio:gmail']} ...>, click
the "Let's go!" button, await the error text 'Gmail session expired' to appear,
and assert the 'Continue' button is still present so the user can retry or
proceed; use the existing callCoreRpc mock and the test helpers
(renderWithProviders, fireEvent, waitFor) to implement this.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b8573dcd-91a4-4e61-abdf-eb192d282e2a
⛔ Files ignored due to path filters (1)
app/src-tauri/Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (49)
- app/src-tauri/Cargo.toml
- app/src-tauri/permissions/allow-core-process.toml
- app/src-tauri/src/cdp/input.rs
- app/src-tauri/src/cdp/mod.rs
- app/src-tauri/src/cdp/snapshot.rs
- app/src-tauri/src/gmail/mod.rs
- app/src-tauri/src/gmail/reads.rs
- app/src-tauri/src/lib.rs
- app/src-tauri/src/webview_accounts/mod.rs
- app/src-tauri/src/whatsapp_scanner/mod.rs
- app/src/App.tsx
- app/src/AppRoutes.tsx
- app/src/components/DefaultRedirect.tsx
- app/src/components/OnboardingOverlay.tsx
- app/src/components/ProtectedRoute.tsx
- app/src/components/__tests__/OnboardingOverlay.test.tsx
- app/src/components/settings/SettingsHome.tsx
- app/src/features/privacy/WhatLeavesMyComputerSheet.test.tsx
- app/src/features/privacy/WhatLeavesMyComputerSheet.tsx
- app/src/features/privacy/whatLeavesItems.ts
- app/src/lib/coreState/__tests__/store.test.ts
- app/src/pages/Accounts.tsx
- app/src/pages/Conversations.tsx
- app/src/pages/onboarding/Onboarding.tsx
- app/src/pages/onboarding/OnboardingContext.tsx
- app/src/pages/onboarding/OnboardingLayout.tsx
- app/src/pages/onboarding/components/BetaBanner.tsx
- app/src/pages/onboarding/components/WebviewLoginModal.tsx
- app/src/pages/onboarding/pages/ChatProviderPage.tsx
- app/src/pages/onboarding/pages/ContextPage.tsx
- app/src/pages/onboarding/pages/SkillsPage.test.tsx
- app/src/pages/onboarding/pages/SkillsPage.tsx
- app/src/pages/onboarding/pages/WelcomePage.tsx
- app/src/pages/onboarding/steps/ContextGatheringStep.tsx
- app/src/pages/onboarding/steps/SkillsStep.test.tsx
- app/src/pages/onboarding/steps/SkillsStep.tsx
- app/src/pages/onboarding/steps/WelcomeStep.tsx
- app/src/pages/onboarding/steps/__tests__/ContextGatheringStep.test.tsx
- app/src/pages/onboarding/steps/__tests__/WelcomeStep.test.tsx
- src/openhuman/agent/agents/welcome/prompt.md
- src/openhuman/agent/prompts/mod.rs
- src/openhuman/agent/schemas.rs
- src/openhuman/agent/welcome_proactive.rs
- src/openhuman/config/ops.rs
- src/openhuman/integrations/parallel.rs
- src/openhuman/learning/linkedin_enrichment.rs
- src/openhuman/learning/schemas.rs
- src/openhuman/tools/impl/network/web_search.rs
- src/openhuman/tools/schemas.rs
💤 Files with no reviewable changes (3)
- app/src/features/privacy/WhatLeavesMyComputerSheet.tsx
- app/src/components/OnboardingOverlay.tsx
- app/src/components/__tests__/OnboardingOverlay.test.tsx
/// Remote debugging host — matches `--remote-debugging-port=19222` in
/// `lib.rs`. Kept as constants so scanners and the session opener
/// agree. Port was 9222 originally but collided with ollama's
/// `127.0.0.1:9222` listener (silent CDP-attach failure → blank
/// child webviews). If you change either constant, update both.
pub const CDP_HOST: &str = "127.0.0.1";
-pub const CDP_PORT: u16 = 9222;
+pub const CDP_PORT: u16 = 19222;
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check that whatsapp_scanner uses CDP_PORT from the cdp module, not a hardcoded value
rg -n 'CDP_PORT|19222|9222' --type rust app/src-tauri/src/whatsapp_scanner/
Repository: tinyhumansai/openhuman
Length of output: 703
Consolidate CDP_PORT to avoid duplicate constant definitions.
The whatsapp_scanner/mod.rs file defines its own CDP_PORT constant (line 39) rather than importing from cdp::CDP_PORT. While the comment acknowledges it should match cdp::CDP_PORT, the code doesn't enforce this. The cdp/mod.rs doc comment explicitly states constants are "kept as constants so scanners and the session opener agree" and warns to "update both" if changed, but duplicate definitions create a desync risk.
Import CDP_PORT from the cdp module in whatsapp_scanner/mod.rs instead of defining a local duplicate.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src-tauri/src/cdp/mod.rs` around lines 29 - 35, The whatsapp_scanner
module defines its own CDP_PORT constant which can drift from the canonical
cdp::CDP_PORT; remove the local CDP_PORT declaration in whatsapp_scanner/mod.rs
and import the canonical constant from the cdp module (e.g., use
crate::cdp::CDP_PORT or equivalent) wherever the local constant was referenced
so the scanner and session opener share the same single source of truth.
| let resp = client | ||
| .post(&url) | ||
| .header("Content-Type", "application/json") | ||
| .json(&body) | ||
| .send() | ||
| .await | ||
| .map_err(|e| format!("POST {url} failed: {e}"))?; | ||
| let status = resp.status(); | ||
| if !status.is_success() { | ||
| let txt = resp.text().await.unwrap_or_default(); | ||
| return Err(format!("core returned {status}: {txt}")); | ||
| } | ||
| Ok(()) |
Parse the JSON-RPC body before treating this call as success.
This only checks resp.status(). If the core returns HTTP 200 with a JSON-RPC error payload, onboarding will think the welcome agent was spawned even though the RPC failed.
Suggested fix
+ #[derive(serde::Deserialize)]
+ struct JsonRpcError {
+ code: i64,
+ message: String,
+ }
+
+ #[derive(serde::Deserialize)]
+ struct JsonRpcResponse {
+ error: Option<JsonRpcError>,
+ }
+
let resp = client
.post(&url)
.header("Content-Type", "application/json")
.json(&body)
.send()
@@
if !status.is_success() {
let txt = resp.text().await.unwrap_or_default();
return Err(format!("core returned {status}: {txt}"));
}
+ let rpc: JsonRpcResponse = resp
+ .json()
+ .await
+ .map_err(|e| format!("decode JSON-RPC response: {e}"))?;
+ if let Some(error) = rpc.error {
+ return Err(format!("openhuman.agent_spawn_welcome failed ({}): {}", error.code, error.message));
+ }
Ok(())
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src-tauri/src/lib.rs` around lines 354 - 366, The current post request
handling only checks HTTP status (resp.status()) but must also parse the
JSON-RPC body and treat any JSON-RPC "error" as a failure; after receiving resp
from client.post(...).send().await (the block that produces resp and status),
parse the body into JSON (e.g., serde_json::Value via resp.json().await or
resp.text().await then parse) and inspect the JSON-RPC fields: if there is an
"error" field (or no valid "result" when expected) return Err with a message
including that error payload; only return Ok(()) when the HTTP status is success
and the parsed JSON-RPC indicates no error. Ensure to preserve existing map_err
formatting for the request failure path.
{/* Respond queue side panel hidden for now — bring back when
    the cross-provider surface is ready to ship. */}
{/* <RespondQueuePanel
      items={respondQueue}
      count={respondQueueCount}
      status={respondQueueStatus}
      error={respondQueueError}
      onRefresh={() => {
        void dispatch(fetchRespondQueue());
      }}
-   />
+   /> */}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Accounts page tests:"
fd -i 'Accounts*.test.tsx' app/src || true
echo
echo "Queue-related assertions in Accounts tests:"
rg -n --iglob '*Accounts*.test.tsx' 'RespondQueuePanel|providerSurfaces|fetchRespondQueue|queue' app/src || true
Repository: tinyhumansai/openhuman
Length of output: 132
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Searching for ANY test files with 'Accounts' in name ==="
find app/src -type f -name '*[Aa]ccounts*test*' 2>/dev/null || echo "No matches"
find app/src -type f -name '*test*[Aa]ccounts*' 2>/dev/null || echo "No matches"
echo
echo "=== Checking Accounts.tsx existence and line count ==="
if [ -f "app/src/pages/Accounts.tsx" ]; then
wc -l app/src/pages/Accounts.tsx
echo
echo "=== Lines 240-270 of Accounts.tsx ==="
sed -n '240,270p' app/src/pages/Accounts.tsx
else
echo "Accounts.tsx not found"
fi
echo
echo "=== Checking for all test files in app/src/pages ==="
ls -la app/src/pages/*.test.tsx 2>/dev/null || echo "No .test.tsx files in pages/"
Repository: tinyhumansai/openhuman
Length of output: 1548
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Checking accountsSlice tests for queue coverage ==="
echo "accountsSlice.core.test.ts:"
grep -n 'respondQueue\|RespondQueue\|queue' app/src/store/__tests__/accountsSlice.core.test.ts || echo " (no matches)"
echo
echo "accountsSlice.webviewNotifications.test.ts:"
grep -n 'respondQueue\|RespondQueue\|queue' app/src/store/__tests__/accountsSlice.webviewNotifications.test.ts || echo " (no matches)"
echo
echo "=== Checking for RespondQueuePanel component and tests ==="
find app/src -type f -name '*RespondQueue*' 2>/dev/null | head -20
echo
echo "=== Checking what fetchRespondQueue is (from accountsSlice) ==="
rg -n 'export.*fetchRespondQueue|const.*fetchRespondQueue' app/src/store/ -A 3 | head -40
echo
echo "=== Git blame for Accounts.tsx line 252 (RespondQueuePanel disabled) ==="
git blame -L 252,252 app/src/pages/Accounts.tsx 2>/dev/null || echo "(git blame not available)"
Repository: tinyhumansai/openhuman
Length of output: 1959
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Checking for RespondQueuePanel test file ==="
find app/src -type f -name '*RespondQueuePanel*.test*'
echo
echo "=== RespondQueuePanel.tsx structure (first 50 lines) ==="
head -50 app/src/components/accounts/RespondQueuePanel.tsx
echo
echo "=== Checking if RespondQueuePanel is tested elsewhere ==="
rg -l 'RespondQueuePanel' app/src/**/*.test.tsx app/src/**/*.test.ts 2>/dev/null || echo "(not found in any test)"
echo
echo "=== Git log for RespondQueuePanel.tsx (recent commits) ==="
git log --oneline app/src/components/accounts/RespondQueuePanel.tsx 2>/dev/null | head -10
echo
echo "=== Checking related imports in Accounts.tsx around line 252 ==="
head -50 app/src/pages/Accounts.tsx | grep -E 'import|RespondQueue'
Length of output: 3067
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Checking all uses of respondQueue state in Accounts.tsx ==="
rg -n 'respondQueue' app/src/pages/Accounts.tsx
echo
echo "=== Checking if these are used in the commented section or elsewhere ==="
sed -n '1,310p' app/src/pages/Accounts.tsx | grep -n 'respondQueue' || true
echo
echo "=== Checking Accounts.tsx for full context of disabled panel ==="
sed -n '245,275p' app/src/pages/Accounts.tsx
Length of output: 2415
RespondQueuePanel component needs unit test coverage.
The RespondQueuePanel component (app/src/components/accounts/RespondQueuePanel.tsx) is new but has no corresponding unit test. Add app/src/components/accounts/RespondQueuePanel.test.tsx with Vitest + testing-library/react covering: the component renders the queue title and pending count, the refresh button triggers onRefresh, items are displayed, and status/error states render correctly.
Note: The queue selectors (respondQueue, respondQueueCount, etc.) and the panel rendering itself are commented out in Accounts, so the disabled behavior does not need an explicit rendering test. However, the new component must have unit test coverage before shipping per the coding guidelines. The Redux slice tests in providerSurfaceSlice.test.ts already cover fetchRespondQueue state transitions, so no additional slice test is needed. A page-level integration test for the full queue flow can be deferred to a follow-up with documented rationale (as in PR #833 and #888).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/pages/Accounts.tsx` around lines 252-262: add unit tests for the
new RespondQueuePanel component by creating
app/src/components/accounts/RespondQueuePanel.test.tsx using Vitest +
`@testing-library/react`: write tests that render RespondQueuePanel with props
(items, count, status, error) and assert the queue title and pending count
appear, that each item in the items prop is displayed, that clicking the refresh
button calls the onRefresh prop (use a jest/vitest mock), and that different
status/error prop values render their respective UI states; reference the
RespondQueuePanel component and its onRefresh prop to locate the component and
ensure tests cover render, interaction, and status/error branches.
| const handleFinish = async () => { | ||
| setError(null); | ||
| setLoading(true); | ||
| try { | ||
| await completeAndExit(); | ||
| } catch (e) { | ||
| setError(e instanceof Error ? e.message : 'Could not finish onboarding.'); | ||
| setLoading(false); | ||
| } |
Reset loading in a finally block to prevent a sticky finish state.
If completeAndExit() resolves but navigation does not happen immediately, loading can remain true and block retries.
Suggested fix
const handleFinish = async () => {
+ if (loading) return;
setError(null);
setLoading(true);
try {
await completeAndExit();
} catch (e) {
setError(e instanceof Error ? e.message : 'Could not finish onboarding.');
- setLoading(false);
+ } finally {
+ setLoading(false);
}
};
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/pages/onboarding/pages/ChatProviderPage.tsx` around lines 19-27: the
handleFinish function can leave loading stuck at true when completeAndExit
resolves but navigation delays; update handleFinish (the async function using
setLoading, setError, and completeAndExit) to ensure setLoading(false) runs in a
finally block so loading is always cleared regardless of success or error, and
remove the standalone setLoading(false) from the catch so the finally is the
single place that resets loading.
| // Short-circuit: if PROFILE.md is already on disk from a previous | ||
| // enrichment run, skip the entire pipeline. The welcome agent reads | ||
| // PROFILE.md straight from the workspace, so re-running stages 1-3 | ||
| // would just churn quota for the same output. | ||
| let profile_path = config.workspace_dir.join("PROFILE.md"); | ||
| if profile_path.is_file() { | ||
| tracing::info!( | ||
| path = %profile_path.display(), | ||
| "[linkedin_enrichment] PROFILE.md already exists — skipping pipeline" | ||
| ); | ||
| result | ||
| .log | ||
| .push("PROFILE.md already exists — skipping enrichment.".into()); | ||
| for id in ["gmail-search", "apify-scrape", "build-profile"] { | ||
| result.stages.push(EnrichmentStage { | ||
| id: "gmail-search".into(), | ||
| id: id.into(), | ||
| status: StageStatus::Skipped, | ||
| detail: Some("No LinkedIn profile URL found in emails".into()), | ||
| detail: Some("PROFILE.md already on disk".into()), | ||
| }); | ||
| None | ||
| } | ||
| Err(e) => { | ||
| tracing::warn!(error = %e, "[linkedin_enrichment] Gmail search failed"); | ||
| result.log.push(format!("Gmail search failed: {e}")); | ||
| result.stages.push(EnrichmentStage { | ||
| id: "gmail-search".into(), | ||
| status: StageStatus::Failed, | ||
| detail: Some(format!("Gmail search failed: {e}")), | ||
| }); | ||
| None | ||
| return Ok(result); | ||
| } |
Don’t treat any existing PROFILE.md as a completed enrichment.
This guard also suppresses retries after the fallback path writes a URL-only PROFILE.md on scrape failure. A transient Apify outage now leaves the user permanently stuck with the minimal placeholder, because every later run exits here before attempting stages 1-3 again.
Only short-circuit when the previous run actually produced a full profile, or persist a marker that distinguishes the fallback file from a successful enrichment.
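A minimal sketch of the stricter check, assuming an illustrative marker filename and content heuristic (neither is the repo's actual contract):

```rust
use std::fs;
use std::path::Path;

/// Sketch of a stricter short-circuit for the enrichment pipeline.
/// Both signals below are assumptions for illustration: the repo would
/// pick its own marker filename and its own "full profile" heuristic.
fn is_full_profile(profile_path: &Path) -> bool {
    // Preferred: a sibling marker written only after a successful run,
    // so a URL-only fallback file can never suppress retries.
    let marker = profile_path.with_extension("md.complete");
    if marker.is_file() {
        return true;
    }
    // Fallback heuristic: the scrape-failure placeholder is a single
    // short line, while a real scraped profile has headings and
    // multiple sections.
    match fs::read_to_string(profile_path) {
        Ok(body) => body.lines().count() > 3 && body.contains('#'),
        Err(_) => false,
    }
}
```

The guard would then read `if profile_path.is_file() && is_full_profile(&profile_path)`, letting stages 1-3 re-run whenever only the placeholder is on disk.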
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/learning/linkedin_enrichment.rs` around lines 94-115: the
current guard treats any existing PROFILE.md at config.workspace_dir as a
successful enrichment and skips stages, which prevents retries when a fallback
wrote a URL-only placeholder; change the check in the profile_path short-circuit
to verify that PROFILE.md is a full/successful output (e.g., by validating file
contents/size or checking for a persisted success marker) before skipping:
update the logic around profile_path.is_file() where PROFILE.md presence is
inspected and only push Skipped EnrichmentStage entries and return Ok(result) if
the file passes the “full profile” test (otherwise allow pipeline stages
"gmail-search", "apify-scrape", "build-profile" to run); alternatively write and
check a separate marker alongside PROFILE.md to distinguish fallback
placeholders from completed enrichments so retries are not suppressed.
| let preset_profile_url = params | ||
| .get("profile_url") | ||
| .and_then(Value::as_str) | ||
| .map(str::to_string); |
Treat an empty profile_url as None.
This new optional input currently turns "" into Some(String::new()). That can skip the Gmail-discovery path and feed an invalid URL into run_linkedin_enrichment. Trim/filter empties here the same way handle_apify_linkedin_scrape already does.
Suggested fix
let preset_profile_url = params
.get("profile_url")
.and_then(Value::as_str)
- .map(str::to_string);
+ .map(str::trim)
+ .filter(|s| !s.is_empty())
+ .map(str::to_string);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/learning/schemas.rs` around lines 169-172: the code currently
maps an empty "profile_url" into Some("") which bypasses Gmail discovery and can
pass an invalid URL into run_linkedin_enrichment; change the extraction for
preset_profile_url (the params.get("profile_url") chain) to trim the string and
return None for empty/whitespace-only values (same approach as
handle_apify_linkedin_scrape) so only non-empty trimmed URLs become Some(String)
and empties become None.
| pub fn all_controller_schemas() -> Vec<ControllerSchema> { | ||
| Vec::new() | ||
| vec![ | ||
| tools_schemas("tools_composio_execute"), | ||
| tools_schemas("tools_web_search"), | ||
| tools_schemas("tools_apify_linkedin_scrape"), | ||
| ] | ||
| } | ||
|
|
||
| pub fn all_registered_controllers() -> Vec<RegisteredController> { | ||
| Vec::new() | ||
| vec![ | ||
| RegisteredController { | ||
| schema: tools_schemas("tools_composio_execute"), | ||
| handler: handle_composio_execute, | ||
| }, | ||
| RegisteredController { | ||
| schema: tools_schemas("tools_web_search"), | ||
| handler: handle_web_search, | ||
| }, | ||
| RegisteredController { | ||
| schema: tools_schemas("tools_apify_linkedin_scrape"), | ||
| handler: handle_apify_linkedin_scrape, | ||
| }, | ||
| ] | ||
| } | ||
|
|
||
| pub fn tools_schemas(function: &str) -> ControllerSchema { | ||
| match function { | ||
| "tools_composio_execute" => ControllerSchema { | ||
| namespace: "tools", | ||
| function: "composio_execute", | ||
| description: "Execute a Composio action via the backend proxy. Thin wrapper \ | ||
| around `ComposioClient::execute_tool` exposed for Tauri-driven \ | ||
| flows (e.g. onboarding) that orchestrate tool calls themselves.", | ||
| inputs: vec![ | ||
| FieldSchema { | ||
| name: "action", | ||
| ty: TypeSchema::String, | ||
| comment: "Composio action slug (e.g. `GMAIL_FETCH_EMAILS`).", | ||
| required: true, | ||
| }, | ||
| FieldSchema { | ||
| name: "params", | ||
| ty: TypeSchema::Json, | ||
| comment: "Action parameters object passed straight through to Composio.", | ||
| required: false, | ||
| }, | ||
| ], | ||
| outputs: vec![ | ||
| FieldSchema { | ||
| name: "successful", | ||
| ty: TypeSchema::Bool, | ||
| comment: "Whether the upstream provider reported success.", | ||
| required: true, | ||
| }, | ||
| FieldSchema { | ||
| name: "data", | ||
| ty: TypeSchema::Json, | ||
| comment: "Raw provider response.", | ||
| required: true, | ||
| }, | ||
| FieldSchema { | ||
| name: "error", | ||
| ty: TypeSchema::Option(Box::new(TypeSchema::String)), | ||
| comment: "Provider error message if `successful` is false.", | ||
| required: false, | ||
| }, | ||
| ], | ||
| }, | ||
| "tools_web_search" => ControllerSchema { | ||
| namespace: "tools", | ||
| function: "web_search", | ||
| description: "Web search via the backend Parallel proxy. Returns structured \ | ||
| results so callers can inspect titles, URLs, and excerpts \ | ||
| without parsing the agent-facing pretty text.", | ||
| inputs: vec![ | ||
| FieldSchema { | ||
| name: "query", | ||
| ty: TypeSchema::String, | ||
| comment: "Search query string.", | ||
| required: true, | ||
| }, | ||
| FieldSchema { | ||
| name: "objective", | ||
| ty: TypeSchema::Option(Box::new(TypeSchema::String)), | ||
| comment: "Optional objective sent to Parallel (defaults to `query`).", | ||
| required: false, | ||
| }, | ||
| FieldSchema { | ||
| name: "max_results", | ||
| ty: TypeSchema::Option(Box::new(TypeSchema::U64)), | ||
| comment: "Max results (1-10, default 5).", | ||
| required: false, | ||
| }, | ||
| FieldSchema { | ||
| name: "timeout_secs", | ||
| ty: TypeSchema::Option(Box::new(TypeSchema::U64)), | ||
| comment: "Request timeout in seconds (default 15).", | ||
| required: false, | ||
| }, | ||
| ], | ||
| outputs: vec![FieldSchema { | ||
| name: "results", | ||
| ty: TypeSchema::Array(Box::new(TypeSchema::Json)), | ||
| comment: "Each item: {url, title, publish_date?, excerpts[]}.", | ||
| required: true, | ||
| }], | ||
| }, | ||
| "tools_apify_linkedin_scrape" => ControllerSchema { | ||
| namespace: "tools", | ||
| function: "apify_linkedin_scrape", | ||
| description: "Run the Apify LinkedIn profile scraper actor on a single profile \ | ||
| URL and return both the raw scraped item and a pre-rendered \ | ||
| markdown view of it (same layout as the legacy enrichment pipeline).", | ||
| inputs: vec![FieldSchema { | ||
| name: "profile_url", | ||
| ty: TypeSchema::String, | ||
| comment: "Canonical LinkedIn profile URL (`https://www.linkedin.com/in/<slug>`).", | ||
| required: true, | ||
| }], | ||
| outputs: vec![ | ||
| FieldSchema { | ||
| name: "data", | ||
| ty: TypeSchema::Json, | ||
| comment: "Raw scraped profile JSON from Apify.", | ||
| required: true, | ||
| }, | ||
| FieldSchema { | ||
| name: "markdown", | ||
| ty: TypeSchema::String, | ||
| comment: "Markdown rendering of the scraped profile (full, pre-summary).", | ||
| required: true, | ||
| }, | ||
| ], | ||
| }, | ||
| _ => ControllerSchema { | ||
| namespace: "tools", | ||
| function: "unknown", | ||
| description: "Unknown tools controller.", | ||
| inputs: vec![], | ||
| outputs: vec![FieldSchema { | ||
| name: "error", | ||
| ty: TypeSchema::String, | ||
| comment: "Lookup error details.", | ||
| required: true, | ||
| }], | ||
| }, | ||
| } | ||
| } | ||
|
|
||
| fn handle_composio_execute(params: Map<String, Value>) -> ControllerFuture { | ||
| Box::pin(async move { | ||
| let action = params | ||
| .get("action") | ||
| .and_then(Value::as_str) | ||
| .map(str::to_string) | ||
| .ok_or_else(|| "missing required `action`".to_string())?; | ||
| let action_args = params.get("params").cloned(); | ||
|
|
||
| let config = config_rpc::load_config_with_timeout().await?; | ||
| let client = crate::openhuman::composio::client::build_composio_client(&config) | ||
| .ok_or_else(|| { | ||
| "composio client unavailable — user not signed in to backend".to_string() | ||
| })?; | ||
|
|
||
| let resp = client | ||
| .execute_tool(&action, action_args) | ||
| .await | ||
| .map_err(|e| format!("composio execute_tool failed: {e:#}"))?; | ||
|
|
||
| let payload = json!({ | ||
| "successful": resp.successful, | ||
| "data": resp.data, | ||
| "error": resp.error, | ||
| "cost_usd": resp.cost_usd, | ||
| }); | ||
| let log = vec![format!( | ||
| "tools.composio_execute: action={action} successful={}", | ||
| resp.successful | ||
| )]; | ||
| RpcOutcome::new(payload, log).into_cli_compatible_json() | ||
| }) | ||
| } | ||
|
|
||
| fn handle_web_search(params: Map<String, Value>) -> ControllerFuture { | ||
| Box::pin(async move { | ||
| let query = params | ||
| .get("query") | ||
| .and_then(Value::as_str) | ||
| .map(str::trim) | ||
| .filter(|s| !s.is_empty()) | ||
| .map(str::to_string) | ||
| .ok_or_else(|| "missing or empty `query`".to_string())?; | ||
| let objective = params | ||
| .get("objective") | ||
| .and_then(Value::as_str) | ||
| .map(str::to_string) | ||
| .unwrap_or_else(|| query.clone()); | ||
| let max_results = params | ||
| .get("max_results") | ||
| .and_then(Value::as_u64) | ||
| .map(|n| n.clamp(1, 10) as usize) | ||
| .unwrap_or(5); | ||
| let timeout_secs = params | ||
| .get("timeout_secs") | ||
| .and_then(Value::as_u64) | ||
| .map(|n| n.max(1)) | ||
| .unwrap_or(15); | ||
|
|
||
| let config = config_rpc::load_config_with_timeout().await?; | ||
| let client = crate::openhuman::integrations::build_client(&config).ok_or_else(|| { | ||
| "web search unavailable — no backend session token. Sign in first.".to_string() | ||
| })?; | ||
|
|
||
| // Body matches `parallelSearchSchema` (backend-2/.../validators/agentIntegration.validator.ts). | ||
| // `timeout_secs` remains accepted in our RPC schema for compatibility | ||
| // with existing callers, but the upstream validator currently strips | ||
| // unknown keys and Parallel governs its own per-mode deadline. | ||
| let _ = timeout_secs; | ||
| let body = json!({ | ||
| "objective": objective, | ||
| "searchQueries": [query], | ||
| "mode": "fast", | ||
| "excerpts": { | ||
| "maxResults": max_results, | ||
| "maxCharsPerResult": 500 | ||
| } | ||
| }); | ||
|
|
||
| let resp = client | ||
| .post::<crate::openhuman::integrations::parallel::SearchResponse>( | ||
| "/agent-integrations/parallel/search", | ||
| &body, | ||
| ) | ||
| .await | ||
| .map_err(|e| format!("parallel search failed: {e:#}"))?; | ||
|
|
||
| let count = resp.results.len(); | ||
| let payload = json!({ "results": resp.results }); | ||
| let log = vec![format!( | ||
| "tools.web_search: query=\"{query}\" results={count}" | ||
| )]; | ||
| RpcOutcome::new(payload, log).into_cli_compatible_json() | ||
| }) | ||
| } | ||
|
|
||
| fn handle_apify_linkedin_scrape(params: Map<String, Value>) -> ControllerFuture { | ||
| Box::pin(async move { | ||
| let profile_url = params | ||
| .get("profile_url") | ||
| .and_then(Value::as_str) | ||
| .map(str::trim) | ||
| .filter(|s| !s.is_empty()) | ||
| .map(str::to_string) | ||
| .ok_or_else(|| "missing or empty `profile_url`".to_string())?; | ||
|
|
||
| let config = config_rpc::load_config_with_timeout().await?; | ||
| let client = crate::openhuman::integrations::build_client(&config).ok_or_else(|| { | ||
| "Apify scrape unavailable — no backend session token. Sign in first.".to_string() | ||
| })?; | ||
|
|
||
| let data = crate::openhuman::learning::linkedin_enrichment::scrape_linkedin_profile( | ||
| &client, | ||
| &profile_url, | ||
| ) | ||
| .await | ||
| .map_err(|e| format!("Apify LinkedIn scrape failed: {e:#}"))?; | ||
|
|
||
| let markdown = crate::openhuman::learning::linkedin_enrichment::render_profile_markdown( | ||
| &profile_url, | ||
| &data, | ||
| ); | ||
|
|
||
| let payload = json!({ "data": data, "markdown": markdown }); | ||
| let log = vec![format!( | ||
| "tools.apify_linkedin_scrape: url={profile_url} markdown_chars={}", | ||
| markdown.chars().count() | ||
| )]; | ||
| RpcOutcome::new(payload, log).into_cli_compatible_json() | ||
| }) | ||
| } |
🛠️ Refactor suggestion | 🟠 Major
Keep the tools domain on the standard schemas.rs/rpc.rs contract.
This module now contains the handler implementations directly and still doesn’t export the standard schemas() entrypoint. That makes tools the odd one out from the repo’s controller-registry pattern and makes future discovery/refactoring harder.
Please move the handler bodies into src/openhuman/tools/rpc.rs and keep schemas.rs as schema metadata plus delegation, with the standard schemas(), all_controller_schemas(), and all_registered_controllers() exports. Based on learnings, src/openhuman/**/schemas.rs should define schemas, all_controller_schemas, all_registered_controllers, and handle_* functions delegating to domain rpc.rs.
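For illustration, the delegation shape looks roughly like this; the module split mirrors the repo pattern described above, but the signature and body are placeholders, not the real async `ControllerFuture` handlers:

```rust
// Sketch of the schemas.rs / rpc.rs split (names assumed from the PR).
mod rpc {
    // rpc.rs owns behavior: validation, client construction, external calls.
    pub fn handle_web_search(query: &str) -> Result<String, String> {
        let q = query.trim();
        if q.is_empty() {
            return Err("missing or empty `query`".into());
        }
        Ok(format!("searched: {q}"))
    }
}

// schemas.rs keeps only schema metadata plus thin stubs that delegate,
// so the controller registry stays uniform across domains.
fn handle_web_search(query: &str) -> Result<String, String> {
    rpc::handle_web_search(query)
}
```

The registry entries in `all_registered_controllers()` would keep pointing at the `schemas.rs` stubs, so callers see no change.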
🧰 Tools
🪛 Betterleaks (1.1.2)
[high] 123-123: Discovered a LinkedIn Client secret, potentially compromising LinkedIn application integrations and user data.
(linkedin-client-secret)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/tools/schemas.rs` around lines 16-295: the current
src/openhuman/tools/schemas.rs mixes schema metadata with full handler
implementations and lacks the canonical schemas() entrypoint; extract the async
handler bodies from handle_composio_execute, handle_web_search, and
handle_apify_linkedin_scrape into a new src/openhuman/tools/rpc.rs as functions
(same names), keep schemas.rs containing only tools_schemas,
all_controller_schemas, all_registered_controllers and add a public schemas()
entrypoint that returns the domain ControllerSchema list, and modify the
handle_* stubs in schemas.rs to delegate to the implementations in tools::rpc
(i.e., call the rpc::handle_composio_execute, rpc::handle_web_search,
rpc::handle_apify_linkedin_scrape) so the controller-registry pattern and
exports remain consistent.
| FieldSchema { | ||
| name: "objective", | ||
| ty: TypeSchema::Option(Box::new(TypeSchema::String)), | ||
| comment: "Optional objective sent to Parallel (defaults to `query`).", | ||
| required: false, | ||
| }, | ||
| FieldSchema { | ||
| name: "max_results", | ||
| ty: TypeSchema::Option(Box::new(TypeSchema::U64)), | ||
| comment: "Max results (1-10, default 5).", | ||
| required: false, | ||
| }, | ||
| FieldSchema { | ||
| name: "timeout_secs", | ||
| ty: TypeSchema::Option(Box::new(TypeSchema::U64)), | ||
| comment: "Request timeout in seconds (default 15).", | ||
| required: false, | ||
| }, |
timeout_secs is documented but never enforced.
The schema says callers can bound the request time, but the handler throws the value away before making the Parallel call. That makes the new RPC contract misleading and can leave onboarding blocked longer than the renderer asked for.
Either enforce the timeout locally or remove the field until it’s real.
Suggested fix
let timeout_secs = params
.get("timeout_secs")
.and_then(Value::as_u64)
.map(|n| n.max(1))
.unwrap_or(15);
@@
- // Body matches `parallelSearchSchema` (backend-2/.../validators/agentIntegration.validator.ts).
- // `timeout_secs` remains accepted in our RPC schema for compatibility
- // with existing callers, but the upstream validator currently strips
- // unknown keys and Parallel governs its own per-mode deadline.
- let _ = timeout_secs;
+ // Body matches `parallelSearchSchema` (backend-2/.../validators/agentIntegration.validator.ts).
+ // Enforce our own outer timeout because the upstream validator strips
+ // unknown keys from the request body.
let body = json!({
"objective": objective,
"searchQueries": [query],
"mode": "fast",
"excerpts": {
"maxResults": max_results,
"maxCharsPerResult": 500
}
});
- let resp = client
- .post::<crate::openhuman::integrations::parallel::SearchResponse>(
- "/agent-integrations/parallel/search",
- &body,
- )
- .await
- .map_err(|e| format!("parallel search failed: {e:#}"))?;
+ let resp = tokio::time::timeout(
+ std::time::Duration::from_secs(timeout_secs),
+ client.post::<crate::openhuman::integrations::parallel::SearchResponse>(
+ "/agent-integrations/parallel/search",
+ &body,
+ ),
+ )
+ .await
+ .map_err(|_| format!("parallel search timed out after {timeout_secs}s"))?
+ .map_err(|e| format!("parallel search failed: {e:#}"))?;

Also applies to: 218-250
| #[cfg(test)] | ||
| mod tests { | ||
| use super::*; | ||
|
|
||
| #[test] | ||
| fn all_schemas_returns_three() { | ||
| assert_eq!(all_controller_schemas().len(), 3); | ||
| } | ||
|
|
||
| #[test] | ||
| fn all_controllers_returns_three() { | ||
| assert_eq!(all_registered_controllers().len(), 3); | ||
| } | ||
|
|
||
| #[test] | ||
| fn apify_linkedin_scrape_schema_shape() { | ||
| let s = tools_schemas("tools_apify_linkedin_scrape"); | ||
| assert_eq!(s.namespace, "tools"); | ||
| assert_eq!(s.function, "apify_linkedin_scrape"); | ||
| assert!(s | ||
| .inputs | ||
| .iter() | ||
| .any(|f| f.name == "profile_url" && f.required)); | ||
| } | ||
|
|
||
| #[test] | ||
| fn composio_execute_schema_shape() { | ||
| let s = tools_schemas("tools_composio_execute"); | ||
| assert_eq!(s.namespace, "tools"); | ||
| assert_eq!(s.function, "composio_execute"); | ||
| assert!(s.inputs.iter().any(|f| f.name == "action" && f.required)); | ||
| } | ||
|
|
||
| #[test] | ||
| fn web_search_schema_shape() { | ||
| let s = tools_schemas("tools_web_search"); | ||
| assert_eq!(s.namespace, "tools"); | ||
| assert_eq!(s.function, "web_search"); | ||
| assert!(s.inputs.iter().any(|f| f.name == "query" && f.required)); | ||
| } | ||
|
|
||
| #[test] | ||
| fn unknown_function_returns_unknown() { | ||
| let s = tools_schemas("nonexistent"); | ||
| assert_eq!(s.function, "unknown"); | ||
| } | ||
| } |
These tests don’t prove the new RPC surface yet.
The added coverage only checks schema metadata. The three new public controllers still need handler-level tests and tests/json_rpc_e2e.rs coverage for validation failures, auth failures, and success payload shapes before the renderer depends on them. As per coding guidelines, "Rust code must be unit-tested for new/changed behavior before stacking features; run via cargo test and extend tests/json_rpc_e2e.rs for RPC integration tests."
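The validation branch is the easiest place to start. A sketch of the extraction logic pulled into a testable helper, using a plain `HashMap` as a hypothetical stand-in for the handlers' `serde_json::Map` params (the real handlers go through `Value::as_str`):

```rust
use std::collections::HashMap;

// Illustrative helper mirroring the trim/filter/ok_or_else chain the
// handlers use, so the validation-failure path can be asserted without
// a backend session token or a live config load.
fn require_nonempty_str(
    params: &HashMap<&str, &str>,
    key: &str,
) -> Result<String, String> {
    params
        .get(key)
        .map(|s| s.trim())
        .filter(|s| !s.is_empty())
        .map(str::to_string)
        .ok_or_else(|| format!("missing or empty `{key}`"))
}
```

Auth-failure and success-payload cases still need the `tests/json_rpc_e2e.rs` harness, since they depend on config loading and the backend client.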
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/tools/schemas.rs` around lines 297-343: the new tests only
validate schema metadata; add handler-level unit tests and JSON-RPC e2e coverage
for the three new controllers (tools_apify_linkedin_scrape,
tools_composio_execute, tools_web_search): write unit tests that call each
controller's handler functions (the request/handler entrypoints that perform
validation and auth) to assert validation failure cases and auth failure paths,
and extend tests/json_rpc_e2e.rs to send real JSON-RPC requests for each
controller verifying (a) validation error responses for malformed/missing
inputs, (b) auth/permission denial responses, and (c) successful response
payload shapes; run cargo test to ensure all new tests pass.
- Added a finally block to ensure loading state is reset after attempting to clear app data, enhancing user experience during logout operations.
…vironment mapping - Replaced the existing `resolve_runtime_config_dirs` function with `resolve_runtime_config_dirs_with`, incorporating an environment mapping for improved configuration handling in tests.
Summary
`/home` is ready.
`PROFILE.md` pipeline stage by stage from the app.

Problem
Solution
- `OnboardingLayout` so the renderer owns when the final transition happens.
- `spawn_welcome_agent` command plus `agent.spawn_welcome` core controller for fire-and-forget welcome dispatch once the app is ready.
- `tools.composio_execute`, `tools.web_search`, `tools.apify_linkedin_scrape`, and `learning.save_profile` controllers so the app can drive the enrichment flow while keeping external calls and persistence in core.
- `ContextGatheringStep` to execute the Gmail lookup, LinkedIn scrape, and save/summarize stages explicitly with clearer stage state and coverage.

Submission Checklist
`pnpm exec vitest run --config test/vitest.config.ts src/pages/onboarding/steps/__tests__/ContextGatheringStep.test.tsx` in `app/` and `cargo test --manifest-path Cargo.toml tools::schemas -- --nocapture`

Impact
`feat/friday-apr-24`; no mobile targets are affected.

Related
Summary by CodeRabbit
New Features
Bug Fixes
Improvements